
PyTorch: Training a Classifier in Practice (Classifying CIFAR10)

2018-02-11 17:44

The Neural Network Training Process

From the earlier examples, we can summarize the typical training procedure for a neural network as follows:

Step 1: Define a neural network with some learnable parameters (or weights)

Step 2: Iterate over a dataset of inputs

Step 3: Process the inputs through the network

Step 4: Compute the loss

Step 5: Propagate the gradients back into the network's parameters

Step 6: Update the weights of the network, typically with a simple update rule: weight = weight - learning_rate * gradient (a minimal sketch of this step follows the list)
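The update rule in step 6 can be written out by hand. Below is a minimal, purely illustrative sketch using the current tensor API (requires_grad instead of the older Variable wrapper used later in this article); the toy loss and learning rate are arbitrary choices for demonstration:

import torch

# a single learnable parameter and a toy squared-error loss
weight = torch.randn(3, requires_grad=True)
target = torch.zeros(3)
learning_rate = 0.01

loss = ((weight - target) ** 2).sum()   # step 4: compute a loss
loss.backward()                         # step 5: gradients land in weight.grad

with torch.no_grad():                   # step 6: apply the update rule by hand
    weight -= learning_rate * weight.grad
    weight.grad.zero_()

In practice this bookkeeping is handled by an optimizer such as torch.optim.SGD, as shown later in this article.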

Training a Classifier with PyTorch

This article works through a more concrete example from the official tutorial to better understand the full process of training a classification model with PyTorch.

Data Preprocessing

Generally, when dealing with image, text, audio, or video data, you can use standard Python packages to load the data into a numpy array and then convert that array into a torch.* Tensor (a minimal conversion sketch appears after the list below).

1. For image data, packages such as Pillow and OpenCV are useful.

2. For audio data, packages such as scipy and librosa are useful.

3. For text data, packages such as NLTK and SpaCy are useful.
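Whichever package does the loading, the result is typically a numpy array, which converts directly to a PyTorch tensor. A minimal sketch (the random array here is just a stand-in for real image data):

import numpy as np
import torch

arr = np.random.rand(3, 32, 32).astype(np.float32)   # stand-in for an image loaded elsewhere
t = torch.from_numpy(arr)    # shares memory with the numpy array
print(t.size())              # torch.Size([3, 32, 32])
back = t.numpy()             # and back to numpy if needed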

For computer vision, PyTorch provides a package called torchvision, which contains data loaders for common datasets such as ImageNet, CIFAR10, and MNIST, as well as image transformation utilities; the key components are torchvision.datasets and torch.utils.data.DataLoader.

In this article, we will use the CIFAR10 dataset. It has ten classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels.
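Assuming the dataset has been (or will be) downloaded to ./data as in the code below, a quick way to confirm the 3x32x32 shape and the integer labels:

import torchvision
import torchvision.transforms as transforms

ds = torchvision.datasets.CIFAR10(root='./data', train=True, download=True,
                                  transform=transforms.ToTensor())
img, label = ds[0]           # datasets are indexable: (image, label) pairs
print(img.size())            # torch.Size([3, 32, 32]): 3 channels, 32x32 pixels
print(label)                 # an integer class index between 0 and 9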



Training an Image Classifier

As mentioned above, we will perform the following steps in order:

Step 1: Load and normalize the CIFAR10 training and test datasets using torchvision

Step 2: Define the model: a convolutional neural network

Step 3: Define a loss function

Step 4: Train the network on the training data

Step 5: Test the network on the test data

Loading the CIFAR10 Dataset

Loading CIFAR10 with torchvision is very convenient. The outputs of the torchvision datasets are PILImage images with values in the range [0, 1]. We transform them into tensors normalized to the range [-1, 1]. The code below also visualizes one batch of images.

import torch
import torchvision
import torchvision.transforms as transforms

# ToTensor scales images to [0, 1]; Normalize then maps them to [-1, 1]
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

import matplotlib.pyplot as plt
import numpy as np

# function to show an image
def imgshow(img):
    img = img / 2 + 0.5     # unnormalize back to [0, 1]
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

if __name__ == '__main__':
    # get some random training images
    dataiter = iter(trainloader)
    images, labels = next(dataiter)
    # show images
    imgshow(torchvision.utils.make_grid(images))
    # print labels
    print(' '.join('%5s' % classes[labels[j]] for j in range(4)))


Output: a grid of four random training images, with their class labels printed to the console.
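For reference, ToTensor scales pixel values into [0, 1], and Normalize with mean 0.5 and std 0.5 then applies (x - 0.5) / 0.5 per channel, which maps [0, 1] onto [-1, 1]. A minimal numeric check (the 3x1x1 tensor is just a toy stand-in for an image):

import torch
import torchvision.transforms as transforms

normalize = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
x = torch.tensor([[[0.0]], [[0.5]], [[1.0]]])   # a 3x1x1 "image" with values 0, 0.5, 1
print(normalize(x).view(-1))                    # tensor([-1., 0., 1.])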



Defining a Convolutional Neural Network

This time the input is a 3-channel image (rather than a single-channel one):

from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

# input images are 3*32*32
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        # flatten before the fully connected layers
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# our model
net = Net()
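The 16 * 5 * 5 that feeds fc1 comes from the feature-map size after the two conv/pool stages: 32 -> 28 (conv1, 5x5 kernel) -> 14 (pool) -> 10 (conv2) -> 5 (pool), with 16 channels. A quick sanity check with a dummy batch, assuming the Net class and the net instance defined above:

import torch
from torch.autograd import Variable

dummy = Variable(torch.randn(1, 3, 32, 32))   # one fake 3x32x32 image
out = net(dummy)
print(out.size())                             # torch.Size([1, 10]): one raw score per class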


Defining the Loss Function and Optimizer

# Define loss (Cross-Entropy)
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
# SGD with momentum
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
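nn.CrossEntropyLoss combines LogSoftmax and NLLLoss, so it expects the raw, unnormalized scores produced by fc3 together with integer class labels. A minimal sketch on dummy data (written with the 0.4+ tensor API; under older versions the tensors would be wrapped in Variable):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 10)             # a batch of 4 samples, 10 class scores each
labels = torch.tensor([3, 0, 7, 1])     # ground-truth class indices
loss = criterion(logits, labels)
print(loss.item())                      # a single scalar loss value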


Training the Network

Loop over the data iterator, feed the inputs to the network, and optimize.

# Train the network
for epoch in range(5):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward
        outputs = net(inputs)
        # loss
        loss = criterion(outputs, labels)
        # backward
        loss.backward()
        # update weights
        optimizer.step()
        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print("Finished Training")


Testing the Trained Model on the Test Data

correct = 0
total = 0
for data in testloader:
    images, labels = data
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()
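The Variable wrapper and outputs.data above reflect the pre-0.4 PyTorch API this article was written against. In PyTorch 0.4 and later, tensors are fed to the network directly and evaluation is usually wrapped in torch.no_grad() to skip gradient tracking; a minimal sketch of the same loop in that style:

correct = 0
total = 0
with torch.no_grad():                       # no gradients needed for evaluation
    for images, labels in testloader:
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))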


Complete Code

#author: yuquanle
#date: 2018.2.5
#Classifier use PyTorch (CIFAR10 dataset)

import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=1)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=1)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# 3*32*32
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        # fully connect
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# our model
net = Net()

# Define loss (Cross-Entropy)
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
# SGD with momentum
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

if __name__ == '__main__':
    # Train the network
    for epoch in range(5):
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            # get the inputs
            inputs, labels = data

            # wrap them in Variable
            inputs, labels = Variable(inputs), Variable(labels)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward
            outputs = net(inputs)
            # loss
            loss = criterion(outputs, labels)
            # backward
            loss.backward()
            # update weights
            optimizer.step()

            # print statistics
            running_loss += loss.data[0]
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0
    print("Finished Training")

print("Beginning Testing")
correct = 0 total = 0 for data in testloader: images, labels = data outputs = net(Variable(images)) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum()

print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))


Output:

Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
[1,  2000] loss: 2.190
[1,  4000] loss: 1.860
[1,  6000] loss: 1.687
[1,  8000] loss: 1.591
[1, 10000] loss: 1.524
[1, 12000] loss: 1.471
Files already downloaded and verified
Files already downloaded and verified
[2,  2000] loss: 1.385
[2,  4000] loss: 1.367
[2,  6000] loss: 1.343
[2,  8000] loss: 1.311
[2, 10000] loss: 1.285
[2, 12000] loss: 1.281
Files already downloaded and verified
Files already downloaded and verified
[3,  2000] loss: 1.198
[3,  4000] loss: 1.194
[3,  6000] loss: 1.193
[3,  8000] loss: 1.172
[3, 10000] loss: 1.168
[3, 12000] loss: 1.142
Files already downloaded and verified
Files already downloaded and verified
[4,  2000] loss: 1.063
[4,  4000] loss: 1.096
[4,  6000] loss: 1.070
[4,  8000] loss: 1.074
[4, 10000] loss: 1.086
[4, 12000] loss: 1.066
Files already downloaded and verified
Files already downloaded and verified
[5,  2000] loss: 0.988
[5,  4000] loss: 0.996
[5,  6000] loss: 1.010
[5,  8000] loss: 0.999
[5, 10000] loss: 1.012
[5, 12000] loss: 1.016
Finished Training
Beginning Testing
Files already downloaded and verified
Files already downloaded and verified
Accuracy of the network on the 10000 test images: 62 %

Process finished with exit code 0


Reference: http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html