
PyTorch Notes 4: Quickly Building a Neural Network (NN)

2017-11-04 20:37
This series of notes follows Morvan (莫烦)'s PyTorch video tutorials. GitHub source code

Overview

Torch provides many convenient shortcuts. Since we want to build networks as quickly as possible, let's see how to construct the same regression neural network in a simpler way.

import torch
import torch.nn.functional as F    # activation function


快速搭建

First, recall the steps we used earlier to build a neural network (NN):

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)      # hidden layer
        self.prediction = torch.nn.Linear(n_hidden, n_output)   # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))    # activation on the hidden layer
        x = self.prediction(x)        # linear output
        return x

net1 = Net(1, 10, 1)
print(net1)


Net (
(hidden): Linear (1 -> 10)
(prediction): Linear (10 -> 1)
)


The code above defines a neural network (NN) with a Python class that inherits from torch's neural-network base class (torch.nn.Module), customizes its hidden layer and output layer, and then connects the layers in the forward pass (forward()).

However, the same network can be built quickly with PyTorch's Sequential, demonstrated below.

- Note 1: Sequential is a sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.

net2 = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1)
)
print(net2)


Sequential (
(0): Linear (1 -> 10)
(1): ReLU ()
(2): Linear (10 -> 1)
)
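The note above mentions that Sequential also accepts an ordered dict of modules, which recovers the descriptive layer names that net1 has. A minimal sketch (the names `hidden`, `relu`, and `prediction` are illustrative choices, not from the tutorial):

```python
from collections import OrderedDict

import torch

# Same architecture as net2, but with named layers via an OrderedDict
net3 = torch.nn.Sequential(OrderedDict([
    ('hidden', torch.nn.Linear(1, 10)),
    ('relu', torch.nn.ReLU()),
    ('prediction', torch.nn.Linear(10, 1)),
]))
print(net3)
```

The submodules are then registered under those names instead of '0', '1', '2', so they can be accessed as attributes, e.g. `net3.hidden`.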


Comparing net1 and net2, net2 additionally displays the activation function, whereas in net1 the activation is only applied inside forward(). The advantage of net1 over net2 is therefore that you can customize the forward pass to fit your own needs.
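As a sketch of that flexibility (this variant is a hypothetical illustration, not part of the tutorial), a hand-written forward() can express computations that a plain Sequential chain cannot, such as a skip connection:

```python
import torch
import torch.nn.functional as F

class SkipNet(torch.nn.Module):
    """Hypothetical variant: the input is projected and added back to the hidden output."""
    def __init__(self, n_feature, n_hidden, n_output):
        super(SkipNet, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.skip = torch.nn.Linear(n_feature, n_hidden)   # projects the input for the skip path
        self.prediction = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        h = F.relu(self.hidden(x)) + self.skip(x)   # two branches merge: not a plain chain
        return self.prediction(h)

net = SkipNet(1, 10, 1)
print(net(torch.zeros(5, 1)).shape)   # torch.Size([5, 1])
```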

Now inspect the class attributes of net1 and net2; they contain the same set of attributes:

print('net1 dict: \n', net1.__dict__)
print('\n net2 dict: \n', net2.__dict__)


net1 dict:
{'_backend': <torch.nn.backends.thnn.THNNFunctionBackend object at 0x110333828>, '_parameters': OrderedDict(), '_buffers': OrderedDict(), '_backward_hooks': OrderedDict(), '_forward_hooks': OrderedDict(), '_forward_pre_hooks': OrderedDict(), '_modules': OrderedDict([('hidden', Linear (1 -> 10)), ('prediction', Linear (10 -> 1))]), 'training': True}

net2 dict:
{'_backend': <torch.nn.backends.thnn.THNNFunctionBackend object at 0x110333828>, '_parameters': OrderedDict(), '_buffers': OrderedDict(), '_backward_hooks': OrderedDict(), '_forward_hooks': OrderedDict(), '_forward_pre_hooks': OrderedDict(), '_modules': OrderedDict([('0', Linear (1 -> 10)), ('1', ReLU ()), ('2', Linear (10 -> 1))]), 'training': True}
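Beyond the attribute dicts, we can confirm the two builds behave the same way in use by pushing a small batch through each net; the output shapes match even though the submodules are registered under different names. A quick sketch, assuming a modern PyTorch where tensors can be fed to a module directly (older versions wrapped inputs in Variable):

```python
import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)
        self.prediction = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        return self.prediction(F.relu(self.hidden(x)))

net1 = Net(1, 10, 1)
net2 = torch.nn.Sequential(
    torch.nn.Linear(1, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1),
)

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)  # (100, 1) input batch
y1, y2 = net1(x), net2(x)
print(y1.shape, y2.shape)   # both torch.Size([100, 1])
```

The predictions themselves differ because each net starts from its own random weights; only the architecture is identical.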