Brief overview of backward and forward
2015-09-03 21:04
Let’s say we feed in only one data point, xi, with true value yi.
- out = model:forward(xi) computes fw(xi), where fw is our model with its current parameters w, and stores the result in out.
- loss = criterion:forward(out, yi) computes the loss ℓ(fw(xi), yi) with respect to the true value yi.
- dl_dout = criterion:backward(out, yi) computes ∂ℓ(fw(xi), yi)/∂fw(xi), the gradient of the loss with respect to the model's output.
- model:backward(xi, dl_dout) backpropagates that gradient through the model to compute ∂ℓ(fw(xi), yi)/∂w, accumulating it in a tensor we hold a reference to, usually called gradParameters in our code.
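The four calls above can be sketched as a single training step. This is a minimal, hypothetical example: the model (a tiny nn.Linear), the MSE criterion, the input shapes, and the learning rate are all made up for illustration; only the forward/backward call sequence reflects the text.

```lua
require 'nn'

-- hypothetical tiny model and loss, just to make the calls concrete
local model = nn.Linear(3, 1)
local criterion = nn.MSECriterion()

-- flattened views of the parameters w and their gradient buffer
local params, gradParameters = model:getParameters()

local xi = torch.randn(3)  -- one data point
local yi = torch.randn(1)  -- its true value

gradParameters:zero()                        -- gradients accumulate, so clear them first
local out = model:forward(xi)                -- f_w(x_i)
local loss = criterion:forward(out, yi)      -- l(f_w(x_i), y_i)
local dl_dout = criterion:backward(out, yi)  -- dl/df_w(x_i)
model:backward(xi, dl_dout)                  -- accumulates dl/dw into gradParameters

-- a plain gradient-descent step with a made-up learning rate of 0.01
params:add(-0.01, gradParameters)
```

Note that gradParameters:zero() matters: model:backward adds into the gradient buffer rather than overwriting it, which is why the text says the gradient is "stored in a place we have a reference to".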