
【Keras】Learning Notes 12: Handwritten Digit Recognition (Convolutional Neural Network)

To show the advantages of a convolutional neural network, a multilayer perceptron is used as a baseline for comparison.

1. Multilayer Perceptron Model

  To make sure each run of the code builds the same model, a random seed is set right after the data is imported. The first four handwritten digit images are also displayed; every image is 28*28 pixels.
  Input layer (784 inputs) -> hidden layer (784 neurons) -> output layer (10 neurons)
  The raw dataset is a 3-D array (samples, rows, columns); for the MLP it has to be flattened into a 2-D array, as shown below.
  For an explanation of the shape attribute, see:
    (1) the difference between shape[0], shape[1] and shape[2] in deep learning code

    (2) notes on np.shape()
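
  The following snippet is not from the original notes; it is just a minimal sketch of what those shape indices refer to for the raw MNIST arrays:

# Quick illustration (added): the meaning of shape[0], shape[1], shape[2]
from keras.datasets import mnist
(X_train, y_train), (X_validation, y_validation) = mnist.load_data()
print(X_train.shape)     # (60000, 28, 28): number of images, rows, columns
print(X_train.shape[0])  # 60000 training images
print(X_train.shape[1])  # 28 pixel rows per image
print(X_train.shape[2])  # 28 pixel columns per image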

num_pixels = X_train.shape[1] * X_train.shape[2]

  Since the MLP expects a 2-D input, the dataset is reshaped here: each 28*28 image is flattened into an array of length 784, as shown below. Reference: usage of reshape in Python.

X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_validation = X_validation.reshape(X_validation.shape[0], num_pixels).astype('float32')
from keras.datasets import mnist
from matplotlib import pyplot as plt
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Load the MNIST dataset from Keras
(X_train, y_train), (X_validation, y_validation) = mnist.load_data()

# Display 4 of the handwritten digit images
plt.subplot(221)
plt.imshow(X_train[0], cmap=plt.get_cmap('gray'))

plt.subplot(222)
plt.imshow(X_train[1], cmap=plt.get_cmap('gray'))

plt.subplot(223)
plt.imshow(X_train[2], cmap=plt.get_cmap('gray'))

plt.subplot(224)
plt.imshow(X_train[3], cmap=plt.get_cmap('gray'))

plt.show()

# Set the random seed
seed = 7
np.random.seed(seed)

num_pixels = X_train.shape[1] * X_train.shape[2]
print(num_pixels)
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype('float32')
X_validation = X_validation.reshape(X_validation.shape[0], num_pixels).astype('float32')
# Normalize the pixel values to the range 0-1
X_train = X_train / 255
X_validation = X_validation / 255

# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_validation = np_utils.to_categorical(y_validation)
num_classes = y_validation.shape[1]
print(num_classes)

# Define the baseline MLP model
def create_model():
    # Build the model
    model = Sequential()
    model.add(Dense(units=num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu'))
    model.add(Dense(units=num_classes, kernel_initializer='normal', activation='softmax'))

    # Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = create_model()
model.fit(X_train, y_train, epochs=10, batch_size=200)

score = model.evaluate(X_validation, y_validation)
print('MLP: %.2f%%' % (score[1] * 100))
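
  As a quick check that is not part of the original notes, the size of this MLP can be inspected with model.summary(): the hidden Dense layer alone holds 784*784 + 784 = 615,440 weights, the output layer adds 784*10 + 10 = 7,850, giving roughly 623,290 trainable parameters in total.

# Optional sanity check (added, not in the original script)
model.summary()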

Final accuracy: 98.17%

2. Simple Convolutional Neural Network

  Keras provides an API that makes it very easy to build convolutional neural networks. This section shows how to implement a CNN in Keras, including convolutional, pooling and fully connected layers.
  (1) The first hidden layer is a convolutional layer, Conv2D. It uses a 5×5 receptive field, outputs 32 feature maps, takes input whose shape is given by the input_shape argument, and uses ReLU as the activation function.
  (2) Next is a max pooling layer, MaxPooling2D, with its sampling factor (pool_size) set to 2×2 in both the vertical and horizontal directions, which halves the image along each dimension.
  (3) The next layer is a Dropout regularization layer, configured to randomly exclude 20% of the neurons in the layer to reduce overfitting.
  (4) Then a Flatten layer converts the multi-dimensional data into a one-dimensional vector, so that its output can be fed into a standard fully connected layer.
  (5) This is followed by a fully connected layer with 128 neurons and ReLU activation.
  (6) The output layer has 10 neurons, since MNIST has 10 classes, and uses the softmax function to output a score for each class for every image.
The topology is as follows:
  Input (1x28x28) -> Conv2D (32 maps, 5x5) -> MaxPooling2D (2x2) -> Dropout (20%) -> Flatten -> Dense (128) -> Output (10)
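
  As a rough size check that is not in the original notes (and assumes the Keras default 'valid' padding), the number of values entering the Flatten layer can be worked out by hand:

conv_out = 28 - 5 + 1          # 24: a 5x5 kernel with no padding on a 28x28 image
pool_out = conv_out // 2       # 12: 2x2 max pooling halves each dimension
flatten_units = 32 * pool_out * pool_out
print(flatten_units)           # 4608 values are fed into the Dense(128) layer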

  During training, verbose is set to 2 so that only the final result of each epoch is printed, omitting the detailed per-batch output within each epoch.

from keras.datasets import mnist
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend
backend.set_image_data_format('channels_first')

# Set the random seed
seed = 7
np.random.seed(seed)

# Load the MNIST dataset from Keras
(X_train, y_train), (X_validation, y_validation) = mnist.load_data()

X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_validation = X_validation.reshape(X_validation.shape[0], 1, 28, 28).astype('float32')

# Normalize the pixel values to the range 0-1
X_train = X_train / 255
X_validation = X_validation / 255

# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_validation = np_utils.to_categorical(y_validation)

# Define the model
def create_model():
    model = Sequential()
    model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(units=128, activation='relu'))
    model.add(Dense(units=10, activation='softmax'))

    # Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = create_model()
model.fit(X_train, y_train, epochs=10, batch_size=200, verbose=2)

score = model.evaluate(X_validation, y_validation, verbose=0)
print('CNN_Small: %.2f%%' % (score[1] * 100))

Final accuracy: 98.88% (the run took 33 minutes)

Epoch 1/10
- 200s - loss: 0.2228 - acc: 0.9364
Epoch 2/10
- 184s - loss: 0.0713 - acc: 0.9787
Epoch 3/10
- 194s - loss: 0.0511 - acc: 0.9841
Epoch 4/10
- 196s - loss: 0.0392 - acc: 0.9879
Epoch 5/10
- 190s - loss: 0.0326 - acc: 0.9897
Epoch 6/10
- 186s - loss: 0.0265 - acc: 0.9916
Epoch 7/10
- 192s - loss: 0.0223 - acc: 0.9927
Epoch 8/10
- 197s - loss: 0.0190 - acc: 0.9940
Epoch 9/10
- 190s - loss: 0.0155 - acc: 0.9951
Epoch 10/10
- 196s - loss: 0.0143 - acc: 0.9960
CNN_Small: 98.88%

3. Complex Convolutional Neural Network

  A convolutional neural network can contain more than one convolutional layer. The network topology is as follows (the resulting feature-map sizes are worked through in the sketch after this list):
  (1) A convolutional layer with 30 feature maps and a 5×5 receptive field
  (2) A pooling layer with a sampling factor (pool_size) of 2×2
  (3) A convolutional layer with 15 feature maps and a 3×3 receptive field
  (4) A pooling layer with a sampling factor (pool_size) of 2×2
  (5) A Dropout layer with a dropout rate of 20%
  (6) A Flatten layer
  (7) A fully connected layer with 128 neurons and ReLU activation
  (8) A fully connected layer with 50 neurons and ReLU activation
  (9) The output layer
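
  As before, this size check is my own addition (not in the original notes) and assumes the Keras default 'valid' padding:

conv1_out = 28 - 5 + 1          # 24: Conv2D(30, (5, 5)) on the 28x28 input
pool1_out = conv1_out // 2      # 12: first 2x2 max pooling
conv2_out = pool1_out - 3 + 1   # 10: Conv2D(15, (3, 3)) on the 12x12 maps
pool2_out = conv2_out // 2      # 5:  second 2x2 max pooling
flatten_units = 15 * pool2_out * pool2_out
print(flatten_units)            # 375 values are fed into the Dense(128) layer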

from keras.datasets import mnist
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend
backend.set_image_data_format('channels_first')

# Set the random seed
seed = 7
np.random.seed(seed)

# Load the MNIST dataset from Keras
(X_train, y_train), (X_validation, y_validation) = mnist.load_data()

X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')
X_validation = X_validation.reshape(X_validation.shape[0], 1, 28, 28).astype('float32')

# Normalize the pixel values to the range 0-1
X_train = X_train / 255
X_validation = X_validation / 255

# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_validation = np_utils.to_categorical(y_validation)

# Define the model
def create_model():
    model = Sequential()
    model.add(Conv2D(30, (5, 5), input_shape=(1, 28, 28), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(15, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(units=128, activation='relu'))
    model.add(Dense(units=50, activation='relu'))
    model.add(Dense(units=10, activation='softmax'))

    # Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = create_model()
model.fit(X_train, y_train, epochs=10, batch_size=200, verbose=2)

score = model.evaluate(X_validation, y_validation, verbose=0)
print('CNN_Large: %.2f%%' % (score[1] * 100))

Final accuracy: 99.16%

Epoch 1/10
- 176s - loss: 0.3866 - acc: 0.8816
Epoch 2/10
- 219s - loss: 0.0990 - acc: 0.9699
Epoch 3/10
- 239s - loss: 0.0733 - acc: 0.9775
Epoch 4/10
- 222s - loss: 0.0602 - acc: 0.9813
Epoch 5/10
- 228s - loss: 0.0517 - acc: 0.9839
Epoch 6/10
- 211s - loss: 0.0438 - acc: 0.9862
Epoch 7/10
- 183s - loss: 0.0383 - acc: 0.9882
Epoch 8/10
- 180s - loss: 0.0348 - acc: 0.9892
Epoch 9/10
- 181s - loss: 0.0319 - acc: 0.9901
Epoch 10/10
- 196s - loss: 0.0298 - acc: 0.9902

Parameter explanation:

X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32')

Here X_train.shape[0] is the number of samples, the 1 is the single channel of a grayscale image, and (28, 28) are the 28x28 pixel rows and columns.
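
  Note that this (samples, channels, rows, columns) ordering matches the backend.set_image_data_format('channels_first') call made earlier. A minimal sketch of the equivalent preprocessing under the Keras default 'channels_last' ordering (my addition, not in the original notes):

# With channels_last the single grayscale channel moves to the last axis
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32')
X_validation = X_validation.reshape(X_validation.shape[0], 28, 28, 1).astype('float32')
# ...and the first Conv2D layer would take input_shape=(28, 28, 1) instead of (1, 28, 28)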
