
CS231n Assignment Notes 2.5: Implementing and Using Dropout

2017-01-04

About CS231n

See CS231n Course Notes 1: Introduction.

Everything in this post is the author's own thinking; its correctness has not been verified. Corrections and suggestions are welcome.

Assignment Notes

The only subtlety in dropout is keeping train and test behavior consistent: during training, divide the masked activations by the keep probability p (i.e. by the expected value of the mask), which is the inverted-dropout trick. A numerical sanity check of this scaling follows the forward-pass code below.

1. Forward pass

if mode == 'train':
    # Keep each unit with probability p, then rescale by 1/p (inverted dropout)
    mask = (np.random.rand(*x.shape) < p)
    out = x * mask / p
elif mode == 'test':
    # At test time the input passes through unchanged
    out = x
    mask = np.ones_like(x)
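
To see why dividing by p balances train and test, here is a minimal sanity check of the scaling, in the spirit of the code above (the helper name dropout_forward_train is made up for illustration): the mean of the scaled training output should roughly match the untouched test-time output.

import numpy as np

def dropout_forward_train(x, p):
    # Inverted dropout: keep each unit with probability p, then rescale by 1/p
    mask = (np.random.rand(*x.shape) < p)
    return x * mask / p

np.random.seed(0)
x = 10.0 * np.ones((500, 500))
p = 0.5
out = dropout_forward_train(x, p)
# Train-time expectation matches the test-time output (which is just x):
print(x.mean(), out.mean())   # both approximately 10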


2. Backward pass

if mode == 'train':
    # mask and dropout_param are recovered from the forward-pass cache
    dx = dout * mask / dropout_param['p']
elif mode == 'test':
    dx = dout
return dx
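
As a quick check that dx = dout * mask / p is the right gradient, one can compare it against a numerical gradient while holding the cached mask fixed. The standalone wrapper below is only for illustration and is not the assignment's API.

import numpy as np

def forward(x, p, mask):
    # Same rule as the train branch above, with the mask supplied explicitly
    return x * mask / p

np.random.seed(1)
x = np.random.randn(4, 5)
p = 0.7
mask = (np.random.rand(*x.shape) < p)
dout = np.random.randn(*x.shape)

# Analytic gradient of sum(dout * forward(x)) with respect to x
dx = dout * mask / p

# Centered-difference numerical gradient
h = 1e-6
dx_num = np.zeros_like(x)
it = np.nditer(x, flags=['multi_index'])
while not it.finished:
    idx = it.multi_index
    old = x[idx]
    x[idx] = old + h
    fp = np.sum(dout * forward(x, p, mask))
    x[idx] = old - h
    fm = np.sum(dout * forward(x, p, mask))
    x[idx] = old
    dx_num[idx] = (fp - fm) / (2 * h)
    it.iternext()

print(np.max(np.abs(dx - dx_num)))   # should be on the order of 1e-9 or smaller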


3. Application: a multi-layer neural network with dropout

Simply append a dropout layer after each ReLU. For the multi-layer network implementation itself, see CS231n Assignment Notes 2.4: Implementing and Using Batchnorm. (A compact sketch of the per-layer sandwich appears after the listing.)

cache = {}
hidden_value = None
# First hidden layer: affine -> (batchnorm) -> ReLU -> (dropout)
hidden_value, cache['fc1'] = affine_forward(X, self.params['W1'], self.params['b1'])
if self.use_batchnorm:
    hidden_value, cache['bn1'] = batchnorm_forward(hidden_value, self.params['gamma1'], self.params['beta1'], self.bn_params[0])
hidden_value, cache['relu1'] = relu_forward(hidden_value)
if self.use_dropout:
    hidden_value, cache['drop1'] = dropout_forward(hidden_value, self.dropout_param)
# Remaining hidden layers follow the same pattern
for index in range(2, self.num_layers):
    hidden_value, cache['fc'+str(index)] = affine_forward(hidden_value, self.params['W'+str(index)], self.params['b'+str(index)])
    if self.use_batchnorm:
        hidden_value, cache['bn'+str(index)] = batchnorm_forward(hidden_value, self.params['gamma'+str(index)], self.params['beta'+str(index)], self.bn_params[index-1])
    hidden_value, cache['relu'+str(index)] = relu_forward(hidden_value)
    if self.use_dropout:
        hidden_value, cache['drop'+str(index)] = dropout_forward(hidden_value, self.dropout_param)

# Final affine layer produces the class scores (no ReLU/dropout here)
scores, cache['score'] = affine_forward(hidden_value, self.params['W'+str(self.num_layers)], self.params['b'+str(self.num_layers)])

# If test mode return early
if mode == 'test':
    return scores

loss, grads = 0.0, {}
loss, dscores = softmax_loss(scores, y)
# L2 regularization on every weight matrix
for index in range(1, self.num_layers+1):
    loss += 0.5 * self.reg * np.sum(self.params['W'+str(index)]**2)

# Backward pass: last affine layer first
dhidden_value, grads['W'+str(self.num_layers)], grads['b'+str(self.num_layers)] = affine_backward(dscores, cache['score'])
for index in range(self.num_layers-1, 1, -1):
    if self.use_dropout:
        dhidden_value = dropout_backward(dhidden_value, cache['drop'+str(index)])
    dhidden_value = relu_backward(dhidden_value, cache['relu'+str(index)])
    if self.use_batchnorm:
        dhidden_value, grads['gamma'+str(index)], grads['beta'+str(index)] = batchnorm_backward(dhidden_value, cache['bn'+str(index)])
    dhidden_value, grads['W'+str(index)], grads['b'+str(index)] = affine_backward(dhidden_value, cache['fc'+str(index)])
# First hidden layer is handled outside the loop
if self.use_dropout:
    dhidden_value = dropout_backward(dhidden_value, cache['drop1'])
dhidden_value = relu_backward(dhidden_value, cache['relu1'])
if self.use_batchnorm:
    dhidden_value, grads['gamma1'], grads['beta1'] = batchnorm_backward(dhidden_value, cache['bn1'])
dhidden_value, grads['W1'], grads['b1'] = affine_backward(dhidden_value, cache['fc1'])

# Add the regularization gradient to every weight matrix
for index in range(1, self.num_layers+1):
    grads['W'+str(index)] += self.reg * self.params['W'+str(index)]
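
The repeating affine -> (batchnorm) -> ReLU -> dropout pattern above can also be factored into a pair of sandwich helpers. This is only a sketch; it assumes the assignment's layer functions (affine_forward, relu_forward, dropout_forward and their backward counterparts) are importable as usual, and it omits batchnorm for brevity.

# from cs231n.layers import (affine_forward, affine_backward, relu_forward,
#                            relu_backward, dropout_forward, dropout_backward)

def affine_relu_dropout_forward(x, w, b, dropout_param):
    # One hidden block: affine -> ReLU -> dropout
    a, fc_cache = affine_forward(x, w, b)
    r, relu_cache = relu_forward(a)
    out, drop_cache = dropout_forward(r, dropout_param)
    return out, (fc_cache, relu_cache, drop_cache)

def affine_relu_dropout_backward(dout, cache):
    # Unwind the block in reverse order
    fc_cache, relu_cache, drop_cache = cache
    dr = dropout_backward(dout, drop_cache)
    da = relu_backward(dr, relu_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db

Helpers like these (similar in spirit to the affine_relu_forward provided in layer_utils.py) keep loss() short, while the explicit loop above makes the caching structure easier to follow; either style works.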