A Basic Neural Network Forward Pass Implemented in PyTorch
2020-02-04 04:02
A note from my process of learning PyTorch: below is a simple fully connected neural network I implemented with PyTorch, trained and evaluated on MNIST.
```python
import torch
from torch import nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
input_size = 784
hidden_size = 500
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST dataset: train=True selects the 60000-image training split
train_dataset = torchvision.datasets.MNIST(root='../../data',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
                                          train=False,
                                          transform=transforms.ToTensor())

# Data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

# Fully connected neural network with one hidden layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device, flattening 28x28 images to 784
        images = images.reshape(-1, 28 * 28).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))

# Test the model; no gradients are needed during evaluation
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28 * 28).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Accuracy of the network on the 10000 test images: {} %'
          .format(100 * correct / total))
```
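To see what the fc1 → ReLU → fc2 forward pass actually computes, here is a minimal NumPy sketch of the same two matrix multiplications, using the article's sizes (784 → 500 → 10) on one batch of 100 flattened images. The random weights and the 0.01 scaling are illustrative assumptions, not values from the trained model; `nn.CrossEntropyLoss` applies the softmax internally, so the network outputs raw logits.

```python
import numpy as np

# Hypothetical weights; in the article nn.Linear initializes these for you
rng = np.random.default_rng(0)
batch_size, input_size, hidden_size, num_classes = 100, 784, 500, 10

x = rng.standard_normal((batch_size, input_size))          # a batch of flattened 28x28 images
W1 = rng.standard_normal((input_size, hidden_size)) * 0.01  # fc1 weight (assumed scale)
b1 = np.zeros(hidden_size)                                  # fc1 bias
W2 = rng.standard_normal((hidden_size, num_classes)) * 0.01 # fc2 weight
b2 = np.zeros(num_classes)                                  # fc2 bias

h = np.maximum(x @ W1 + b1, 0)   # fc1 followed by ReLU: negatives clipped to zero
logits = h @ W2 + b2             # fc2: one raw score (logit) per class

print(logits.shape)              # (100, 10): one row of 10 class scores per image
```

The predicted class for each image is then `logits.argmax(axis=1)`, which is exactly what `torch.max(outputs, 1)` extracts in the test loop above.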