
Matrix Derivatives and the Derivation of Backpropagation in Neural Networks

2018-11-04 10:56

Below is an implementation of a two-layer fully connected neural network, including the network itself, the backpropagation of gradients, and the weight update:

[code]# -*- coding: utf-8 -*-
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
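
As a sanity check on the hand-derived gradients, one can compare them against numerical gradients obtained by finite differences. The sketch below is my own addition, not part of the original code; the helper numerical_grad, the tiny array sizes, and the random seed are all assumptions chosen for illustration.

[code]import numpy as np

# Hypothetical sanity check (not from the original post): compare the
# analytic gradients against central finite differences on a tiny network.

def loss_fn(x, y, w1, w2):
    h_relu = np.maximum(x.dot(w1), 0)
    return np.square(h_relu.dot(w2) - y).sum()

def numerical_grad(f, w, eps=1e-5):
    # Perturb one entry of w at a time and take central differences.
    grad = np.zeros_like(w)
    it = np.nditer(w, flags=['multi_index'])
    while not it.finished:
        i = it.multi_index
        old = w[i]
        w[i] = old + eps
        f_plus = f()
        w[i] = old - eps
        f_minus = f()
        w[i] = old
        grad[i] = (f_plus - f_minus) / (2 * eps)
        it.iternext()
    return grad

np.random.seed(0)
x, y = np.random.randn(4, 5), np.random.randn(4, 3)
w1, w2 = np.random.randn(5, 6), np.random.randn(6, 3)

# Analytic gradients, same formulas as in the training loop above.
h = x.dot(w1)
h_relu = np.maximum(h, 0)
grad_y_pred = 2.0 * (h_relu.dot(w2) - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h = grad_y_pred.dot(w2.T)
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)

print(np.max(np.abs(grad_w1 - numerical_grad(lambda: loss_fn(x, y, w1, w2), w1))))
print(np.max(np.abs(grad_w2 - numerical_grad(lambda: loss_fn(x, y, w1, w2), w2))))
# Both differences should be very small (e.g. below 1e-6).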

This cleared up a misconception of mine: I used to think that steepest descent was something separate from the derivatives computed for the individual variables, when in fact having each variable descend along its own partial derivative is exactly what makes the function descend along its steepest slope; geometrically, the per-variable components can be pictured as combining into a single direction.
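
A tiny illustration of this point (my own toy example, not from the original post): for f(x, y) = x^2 + 10*y^2, letting each variable take a step along its own partial derivative is the same thing as taking one step along the negative gradient, i.e. the direction of steepest descent.

[code]import numpy as np

# Toy example (assumed for illustration): f(x, y) = x**2 + 10 * y**2
w = np.array([3.0, 2.0])                    # w = [x, y]
learning_rate = 0.05
for t in range(100):
    grad = np.array([2 * w[0], 20 * w[1]])  # [df/dx, df/dy]
    w -= learning_rate * grad               # each variable steps by its own derivative
print(w, w[0] ** 2 + 10 * w[1] ** 2)        # w -> [0, 0], f -> minimum 0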

The core code of the backpropagation step is as follows (the shape of each array is noted in the comments):

[code]h = x.dot(w1)
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
loss = np.square(y_pred - y).sum()

grad_y_pred = 2.0 * (y_pred - y)    # 64 x 10
grad_w2 = h_relu.T.dot(grad_y_pred) # 100 x 10
grad_h_relu = grad_y_pred.dot(w2.T) # 64 x 100
grad_h = grad_h_relu.copy()         # 64 x 100
grad_h[h < 0] = 0                   # 64 x 100
grad_w1 = x.T.dot(grad_h)           # 1000 x 100
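
For reference, here is a brief sketch of where these formulas come from (my own restatement of the standard matrix-calculus derivation, using the same names as the code). Writing $L = \sum (y_{pred} - y)^2$ with $y_{pred} = h_{relu}\, w_2$ and $h_{relu} = \max(x\, w_1,\, 0)$, the chain rule gives

$$
\frac{\partial L}{\partial y_{pred}} = 2\,(y_{pred} - y),\qquad
\frac{\partial L}{\partial w_2} = h_{relu}^{\top}\,\frac{\partial L}{\partial y_{pred}},\qquad
\frac{\partial L}{\partial h_{relu}} = \frac{\partial L}{\partial y_{pred}}\,w_2^{\top},
$$

$$
\frac{\partial L}{\partial h} = \frac{\partial L}{\partial h_{relu}} \odot \mathbf{1}[h \ge 0],\qquad
\frac{\partial L}{\partial w_1} = x^{\top}\,\frac{\partial L}{\partial h}.
$$

The transposes can be read off from the shapes: grad_w2 must have the same shape as w2 (100 x 10), and h_relu.T (100 x 64) times grad_y_pred (64 x 10) is the only product of the available matrices that yields that shape while summing over the batch dimension; the same shape argument gives grad_h_relu and grad_w1.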
Question: how is the derivative of ReLU implemented?
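
The answer is already contained in the code above: ReLU(z) = max(z, 0) has derivative 1 for z > 0 and 0 for z < 0 (at z = 0 it is not differentiable, and in practice a subgradient of 0 or 1 is used). During backprop this means the upstream gradient is passed through wherever the input was positive and zeroed elsewhere, which is exactly what grad_h[h < 0] = 0 does. A minimal standalone sketch (my own, with made-up example inputs):

[code]import numpy as np

def relu(z):
    return np.maximum(z, 0)

def relu_backward(grad_out, z):
    # The derivative of ReLU is an elementwise 0/1 mask on the input:
    # pass the upstream gradient through where z > 0, zero it elsewhere.
    # (The in-place form used above, grad[z < 0] = 0, is equivalent
    # except at exactly z == 0.)
    return grad_out * (z > 0)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
grad_out = np.ones_like(z)
print(relu(z))                     # [0.  0.  0.  0.5 2. ]
print(relu_backward(grad_out, z))  # [0. 0. 0. 1. 1.]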
