
Learning notes on Keras, a Theano-based deep learning framework - 02 - Examples

The original post is at http://blog.csdn.net/niuwei22007/article/details/49053771, where more articles can be found. The examples below were written for an early version of Keras; Keras has since been upgraded to 0.3.0, so this code needs some changes before it will run. The current API is more concise and easier to use: except for the first layer, you no longer have to work out each layer's input shape yourself.
Let's walk through a few examples to get a feel for how convenient Keras is. There is no need to study what every line of code means; just follow the overall flow. Each component is modularized, decorator-style, so you can assemble them however you like. We start with a handwritten-digit (MNIST) recognition example in Keras; for the rest, skimming the implementation is enough.
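For example, under Keras 0.3.0 a stack of Dense layers looks roughly like the sketch below (assuming the 0.3-style Dense(output_dim, input_dim=...) signature); compare it with the older Dense(input_dim, output_dim) form used throughout the examples in this post.

[python]

from keras.models import Sequential
from keras.layers.core import Dense, Activation

model = Sequential()
# only the first layer declares its input size...
model.add(Dense(500, input_dim=784, init='glorot_uniform'))
model.add(Activation('tanh'))
# ...later layers infer their input shape from the previous layer
model.add(Dense(10, init='glorot_uniform'))
model.add(Activation('softmax'))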

No.0 MNIST recognition with Keras.

[python]

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.datasets import mnist
import numpy

model = Sequential()
model.add(Dense(784, 500, init='glorot_uniform')) # input layer, 28*28=784
model.add(Activation('tanh')) # tanh activation
model.add(Dropout(0.5)) # 50% dropout
model.add(Dense(500, 500, init='glorot_uniform')) # hidden layer with 500 units
model.add(Activation('tanh'))
model.add(Dropout(0.5))
# there are 10 output classes, so the output dimension is 10
model.add(Dense(500, 10, init='glorot_uniform'))
model.add(Activation('softmax')) # softmax on the last layer

# set the learning rate (lr) and other hyper-parameters
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# use cross-entropy as the loss function, i.e. the familiar log loss
model.compile(loss='categorical_crossentropy', optimizer=sgd, class_mode='categorical')

# load the data with Keras's built-in mnist helper (needs a network connection the first time)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# the input has shape (num, 28, 28); flatten the trailing dimensions into a 784-dim vector
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1] * X_train.shape[2])
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1] * X_test.shape[2])
# convert the class indices into one-hot matrices
Y_train = (numpy.arange(10) == y_train[:, None]).astype(int)
Y_test = (numpy.arange(10) == y_test[:, None]).astype(int)

# Start training. There are quite a few parameters here. batch_size is the mini-batch size,
# nb_epoch is the maximum number of epochs, and shuffle controls whether the data is
# shuffled before training.
# verbose is the logging mode. The docs say: verbose: 0 for no logging to stdout,
# 1 for progress bar logging, 2 for one log line per epoch.
# In other words, 0 is silent, 1 shows a progress bar, and 2 prints one line per epoch.
# show_accuracy prints the accuracy after each epoch.
# validation_split is the fraction of the data held out for validation.
model.fit(X_train, Y_train, batch_size=200, nb_epoch=100, shuffle=True, verbose=1, show_accuracy=True, validation_split=0.3)

print('test set')
model.evaluate(X_test, Y_test, batch_size=200, show_accuracy=True, verbose=1)
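As an aside, the one-hot conversion above can also be done with Keras's bundled helper; a small sketch (assuming the np_utils module shipped with this generation of Keras):

[python]

from keras.utils import np_utils

# equivalent one-hot encoding of the class indices
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)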

No.1 An MLP in Keras (1)

[python]

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()
# Dense(input_dim, output_dim, init='weight initialization method')
model.add(Dense(20, 64, init='uniform'))
model.add(Activation('tanh')) # activation function
model.add(Dropout(0.5)) # 50% dropout
model.add(Dense(64, 64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, 2, init='uniform'))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True) # set the learning rate, decay, etc.
model.compile(loss='mean_squared_error', optimizer=sgd) # mean squared error as the loss

# ... the code that loads the training data goes here.

# Start training. nb_epoch is the number of epochs, batch_size is the mini-batch size.
model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
score = model.evaluate(X_test, y_test, batch_size=16)
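The network expects 20 input features and 2 one-hot output classes, so any arrays with those shapes will do in place of the omitted loading code. A throwaway sketch with random stand-in data (the shapes are the only thing that matters here):

[python]

import numpy as np

# random stand-in data: 1000 training samples, 20 features, 2 one-hot classes
X_train = np.random.random((1000, 20))
y_train = np.zeros((1000, 2))
y_train[np.arange(1000), np.random.randint(0, 2, 1000)] = 1

X_test = np.random.random((100, 20))
y_test = np.zeros((100, 2))
y_test[np.arange(100), np.random.randint(0, 2, 100)] = 1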

No.2 An MLP in Keras (2): more concise than (1)

[python]

model = Sequential()
model.add(Dense(20, 64, init='uniform', activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, 64, init='uniform', activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, 2, init='uniform', activation='softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
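The only difference from (1) is that each activation is passed straight to Dense through the activation keyword, folding the separate Activation layers into the layers themselves; loading data, fitting, and evaluating work exactly as before.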

No.3 A VGG-like convolutional network.

[python]

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD

model = Sequential()
model.add(Convolution2D(32, 3, 3, 3, border_mode='full'))
model.add(Activation('relu'))
model.add(Convolution2D(32, 32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(poolsize=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 32, 3, 3, border_mode='full'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(poolsize=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64*8*8, 256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(256, 10))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)

model.fit(X_train, Y_train, batch_size=32, nb_epoch=1)
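The Dense(64*8*8, 256) line implies 32x32 inputs (CIFAR-10 sized): each 'full' convolution grows the feature maps by 2 pixels per side, the following 'valid' convolution shrinks them back, and each 2x2 pooling halves them, so 32x32 becomes 8x8 after the two pooling stages, over 64 feature maps. A sketch of random stand-in arrays with those shapes, just to exercise the graph:

[python]

import numpy as np

# stand-in images shaped like CIFAR-10: (nb_samples, channels=3, 32, 32)
X_train = np.random.random((100, 3, 32, 32))
# one-hot labels for 10 classes
Y_train = np.zeros((100, 10))
Y_train[np.arange(100), np.random.randint(0, 10, 100)] = 1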

No.4 An LSTM (long short-term memory) network for sequence classification.

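A minimal sketch of this example, following the sequence-classification LSTM from the Keras documentation of the same generation (same early-style API, where recurrent layers take input and output sizes explicitly). Here max_features (the vocabulary size) is an assumed value, and the padded integer sequences X_train/X_test with binary labels Y_train/Y_test are assumed to be prepared beforehand.

[python]

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM

max_features = 20000 # assumed vocabulary size

model = Sequential()
# map each word index to a 256-dim embedding
model.add(Embedding(max_features, 256))
# old-style signature: LSTM(input_dim, output_dim)
model.add(LSTM(256, 128, activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(128, 1))
model.add(Activation('sigmoid')) # one binary label per sequence

model.compile(loss='binary_crossentropy', optimizer='rmsprop')

model.fit(X_train, Y_train, batch_size=16, nb_epoch=10)
score = model.evaluate(X_test, Y_test, batch_size=16)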

No.5 Image captioning.

[python]

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten, RepeatVector
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.recurrent import GRU

max_caption_len = 16

model = Sequential()
model.add(Convolution2D(32, 3, 3, 3, border_mode='full'))
model.add(Activation('relu'))
model.add(Convolution2D(32, 32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(poolsize=(2, 2)))
model.add(Convolution2D(64, 32, 3, 3, border_mode='full'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(poolsize=(2, 2)))
model.add(Convolution2D(128, 64, 3, 3, border_mode='full'))
model.add(Activation('relu'))
model.add(Convolution2D(128, 128, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(poolsize=(2, 2)))
model.add(Flatten())
model.add(Dense(128*4*4, 256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(RepeatVector(max_caption_len))
# the GRU below returns sequences of max_caption_len vectors of size 256 (our word embedding size)
model.add(GRU(256, 256, return_sequences=True))

model.compile(loss='mean_squared_error', optimizer='rmsprop')

# "images" is a numpy array of shape (nb_samples, nb_channels=3, width, height)
# "captions" is a numpy array of shape (nb_samples, max_caption_len=16, embedding_dim=256)
# captions are supposed to be already embedded (dense vectors).
model.fit(images, captions, batch_size=16, nb_epoch=100)
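The comments above pin down the expected array shapes; a quick sketch of random stand-in arrays that satisfy them (32x32 inputs are assumed here, since three rounds of 'full' conv, 'valid' conv, and 2x2 pooling then yield the 4x4 maps that Dense(128*4*4, 256) expects):

[python]

import numpy as np

nb_samples = 8
# stand-in images: (nb_samples, nb_channels=3, 32, 32)
images = np.random.random((nb_samples, 3, 32, 32))
# stand-in captions: already-embedded dense vectors of shape (nb_samples, max_caption_len, 256)
captions = np.random.random((nb_samples, 16, 256))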

References:

Official Keras tutorial

http://ju.outofmemory.cn/entry/188683

Reposted from: http://blog.csdn.net/niuwei22007/article/details/49053771