title: DeepLearning.ai Homework (1-3) -- Shallow Neural Networks
tags:
- dl.ai
- homework
categories:
- AI
- Deep Learning
date: 2018-09-12 15:49:22
id: 2018091216
First published on my personal blog fangzh.top; you are welcome to visit.

- Don't copy the homework!
- I'm just organizing my approach here, for personal study.
- Don't copy the homework!
Dataset

The dataset is a flower-shaped planar dataset: 2D points with binary (red/blue) labels.
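In the notebook, the data comes from the `planar_utils` helper that ships with the assignment; roughly, loading and visualizing it look like this (reproduced from memory, so treat the exact lines as approximate):

```python
import numpy as np
import matplotlib.pyplot as plt
from planar_utils import load_planar_dataset  # helper provided with the assignment

np.random.seed(1)

# X holds the 2D coordinates, Y the 0/1 labels (red/blue)
X, Y = load_planar_dataset()
print(X.shape, Y.shape)  # (2, 400) and (1, 400): 400 examples, 2 features each

# Visualize the "flower"
plt.scatter(X[0, :], X[1, :], c=Y.ravel(), s=40, cmap=plt.cm.Spectral)
plt.show()
```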
If you treat this as a plain binary classification problem and use traditional logistic regression, all you get is a crude straight-line decision boundary.
As you can see, the accuracy is only 47%.
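The baseline in the notebook uses scikit-learn's LogisticRegressionCV; a rough equivalent (not the notebook's exact code) is:

```python
import numpy as np
import sklearn.linear_model

# Fit a linear logistic regression classifier on the same data.
# X has shape (2, m) and Y has shape (1, m), so both are transposed/flattened for sklearn.
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X.T, Y.ravel())

# Accuracy of the linear baseline (around 47% on this flower dataset)
LR_predictions = clf.predict(X.T)
print("Accuracy of logistic regression: %.0f %%" % (np.mean(LR_predictions == Y.ravel()) * 100))
```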
So we need to build a neural network model.

Neural network model
Reminder: The general methodology to build a Neural Network is to:
1. Define the neural network structure (# of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
   - Implement forward propagation
   - Compute loss
   - Implement backward propagation to get the gradients
   - Update parameters (gradient descent)
The outline is already given:

- Define the neural network structure
- Initialize the model's parameters
- Loop:
  - Compute forward propagation
  - Compute the cost
  - Compute backward propagation to get the gradients
  - Update the parameters
1. Define the neural network structure
```python
# GRADED FUNCTION: layer_sizes

def layer_sizes(X, Y):
    """
    Arguments:
    X -- input dataset of shape (input size, number of examples)
    Y -- labels of shape (output size, number of examples)

    Returns:
    n_x -- the size of the input layer
    n_h -- the size of the hidden layer
    n_y -- the size of the output layer
    """
    ### START CODE HERE ### (≈ 3 lines of code)
    n_x = X.shape[0]  # size of input layer
    n_h = 4
    n_y = Y.shape[0]  # size of output layer
    ### END CODE HERE ###
    return (n_x, n_h, n_y)
```
2. Initialize the parameters
Initialize the parameters W and b:

- W: `np.random.randn(a, b) * 0.01` (note it is `randn`, not `rand`: small values drawn from a standard normal distribution)
- b: `np.zeros((a, b))`
```python
# GRADED FUNCTION: initialize_parameters

def initialize_parameters(n_x, n_h, n_y):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer

    Returns:
    params -- python dictionary containing your parameters:
              W1 -- weight matrix of shape (n_h, n_x)
              b1 -- bias vector of shape (n_h, 1)
              W2 -- weight matrix of shape (n_y, n_h)
              b2 -- bias vector of shape (n_y, 1)
    """
    np.random.seed(2)  # we set up a seed so that your output matches ours although the initialization is random.

    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))
    ### END CODE HERE ###

    assert (W1.shape == (n_h, n_x))
    assert (b1.shape == (n_h, 1))
    assert (W2.shape == (n_y, n_h))
    assert (b2.shape == (n_y, 1))

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
```
3. Loop

Here we use sigmoid() as the output-layer activation and np.tanh() as the hidden-layer activation (a minimal sigmoid is sketched below).
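sigmoid() itself is provided by the assignment's helper file; if you want the snippets here to run standalone, a minimal version is just the logistic function:

```python
import numpy as np

def sigmoid(z):
    """Logistic function, applied element-wise to a scalar or numpy array."""
    return 1.0 / (1.0 + np.exp(-z))
```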
3.1 forward propagation
The inputs to this function are X and `parameters`. Then, using
$$z^{[1](i)} = W^{[1]} x^{(i)} + b^{[1]} \tag{1}$$

$$a^{[1](i)} = \tanh\left(z^{[1](i)}\right) \tag{2}$$

$$z^{[2](i)} = W^{[2]} a^{[1](i)} + b^{[2]} \tag{3}$$

$$\hat{y}^{(i)} = a^{[2](i)} = \sigma\left(z^{[2](i)}\right) \tag{4}$$
we obtain Z and A for each layer.
```python
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Argument:
    X -- input data of size (n_x, m)
    parameters -- python dictionary containing your parameters (output of initialization function)

    Returns:
    A2 -- The sigmoid output of the second activation
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
    """
    # Retrieve each parameter from the dictionary "parameters"
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    ### END CODE HERE ###

    # Implement Forward Propagation to calculate A2 (probabilities)
    ### START CODE HERE ### (≈ 4 lines of code)
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)
    ### END CODE HERE ###

    assert(A2.shape == (1, X.shape[1]))

    cache = {"Z1": Z1,
             "A1": A1,
             "Z2": Z2,
             "A2": A2}

    return A2, cache
```
3.2 cost
Next, once A2 has been computed, the cost can be calculated from the formula:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log\left(a^{[2](i)}\right) + \left(1 - y^{(i)}\right) \log\left(1 - a^{[2](i)}\right) \right)$$
The thing to watch here is how the cross-entropy is computed: use np.multiply() for the element-wise products and then np.sum() to add them up.

Computing only `logprobs = np.multiply(np.log(A2), Y)` is not enough, because that is just the first half of the formula: for examples with Y = 0 the element-wise product simply vanishes. You therefore also have to add the second term, `np.multiply(np.log(1-A2), 1-Y)` (a small toy check of this is sketched below).
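As a quick sanity check of that point (my own toy example, not part of the assignment): for entries where Y = 0 the first product is zero, so only the second term penalizes them.

```python
import numpy as np

A2 = np.array([[0.8, 0.9, 0.4]])   # predicted probabilities
Y  = np.array([[1,   0,   1]])     # true labels

first_term  = np.multiply(np.log(A2), Y)          # non-zero only where Y == 1
second_term = np.multiply(np.log(1 - A2), 1 - Y)  # non-zero only where Y == 0
cost = -1 / Y.shape[1] * np.sum(first_term + second_term)
print(cost)  # full cross-entropy; dropping second_term would ignore the Y == 0 example
```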
```python
# GRADED FUNCTION: compute_cost

def compute_cost(A2, Y, parameters):
    """
    Computes the cross-entropy cost given in equation (13)

    Arguments:
    A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)
    parameters -- python dictionary containing your parameters W1, b1, W2 and b2

    Returns:
    cost -- cross-entropy cost given equation (13)
    """
    m = Y.shape[1]  # number of examples

    # Compute the cross-entropy cost
    ### START CODE HERE ### (≈ 2 lines of code)
    logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
    cost = -1 / m * np.sum(logprobs)
    ### END CODE HERE ###

    cost = np.squeeze(cost)  # makes sure cost is the dimension we expect.
                             # E.g., turns [[17]] into 17
    assert(isinstance(cost, float))

    return cost
```
3.3 backward propagation
Andrew Ng says that backpropagation is the hardest part of a neural network to understand, but the formulas have already been derived for us.

In particular, since the hidden layer uses $a = \tanh(z)$, its derivative is $g^{[1]\prime}(Z^{[1]}) = 1 - (A^{[1]})^2$, which in code is `(1 - np.power(A1, 2))`.

As you can see, the formulas need X, Y, A and W as inputs, and the function returns the gradients in a dictionary `grads`.
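For reference, these are the six gradient formulas from the slide (they match what the code below computes; $*$ denotes element-wise multiplication):

$$dZ^{[2]} = A^{[2]} - Y$$

$$dW^{[2]} = \frac{1}{m}\, dZ^{[2]} A^{[1]T} \qquad db^{[2]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[2](i)}$$

$$dZ^{[1]} = W^{[2]T} dZ^{[2]} * \left(1 - (A^{[1]})^2\right)$$

$$dW^{[1]} = \frac{1}{m}\, dZ^{[1]} X^{T} \qquad db^{[1]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[1](i)}$$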
```python
def backward_propagation(parameters, cache, X, Y):
    """
    Implement the backward propagation using the instructions above.

    Arguments:
    parameters -- python dictionary containing our parameters
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
    X -- input data of shape (2, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)

    Returns:
    grads -- python dictionary containing your gradients with respect to different parameters
    """
    m = X.shape[1]

    # First, retrieve W1 and W2 from the dictionary "parameters".
    ### START CODE HERE ### (≈ 2 lines of code)
    W1 = parameters['W1']
    W2 = parameters['W2']
    ### END CODE HERE ###

    # Retrieve also A1 and A2 from dictionary "cache".
    ### START CODE HERE ### (≈ 2 lines of code)
    A1 = cache['A1']
    A2 = cache['A2']
    ### END CODE HERE ###

    # Backward propagation: calculate dW1, db1, dW2, db2.
    ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
    dZ2 = A2 - Y
    dW2 = 1 / m * np.dot(dZ2, A1.T)
    db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
    dW1 = 1 / m * np.dot(dZ1, X.T)
    db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True)
    ### END CODE HERE ###

    grads = {"dW1": dW1,
             "db1": db1,
             "dW2": dW2,
             "db2": db2}

    return grads
```
3.4 update parameters
Finally, using the gradients in `grads`, multiplied by the learning rate, the parameters can be updated.
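This is the standard gradient descent rule, applied to each parameter $\theta \in \{W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}\}$, where $\alpha$ is the learning rate (1.2 in the code below):

$$\theta := \theta - \alpha \frac{\partial J}{\partial \theta}$$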
```python
# GRADED FUNCTION: update_parameters

def update_parameters(parameters, grads, learning_rate=1.2):
    """
    Updates parameters using the gradient descent update rule given above

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    # Retrieve each parameter from the dictionary "parameters"
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    ### END CODE HERE ###

    # Retrieve each gradient from the dictionary "grads"
    ### START CODE HERE ### (≈ 4 lines of code)
    dW1 = grads['dW1']
    db1 = grads['db1']
    dW2 = grads['dW2']
    db2 = grads['db2']
    ### END CODE HERE ###

    # Update rule for each parameter
    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = W1 - learning_rate * dW1
    b1 = b1 - learning_rate * db1
    W2 = W2 - learning_rate * dW2
    b2 = b2 - learning_rate * db2
    ### END CODE HERE ###

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
```
The updated parameters are then fed back into the loop, iteration after iteration, until the chosen number of iterations is reached.
nn_model
Now call all of the functions written above.

The model's inputs are X, Y, the hidden-layer size n_h, and the number of iterations:

- First determine the network structure by calling `layer_sizes()`, which gives n_x and n_y, i.e. the sizes of the input and output layers.
- Initialize the parameters with `initialize_parameters(n_x, n_h, n_y)` to get W1, b1, W2, b2.
- Then loop:
  - `forward_propagation(X, parameters)` to compute the values of all the units;
  - `compute_cost(A2, Y, parameters)` to get the cost;
  - `backward_propagation(parameters, cache, X, Y)` to compute the gradients at each step;
  - `update_parameters(parameters, grads)` to update the parameters.
```python
# GRADED FUNCTION: nn_model

def nn_model(X, Y, n_h, num_iterations=10000, print_cost=False):
    """
    Arguments:
    X -- dataset of shape (2, number of examples)
    Y -- labels of shape (1, number of examples)
    n_h -- size of the hidden layer
    num_iterations -- Number of iterations in gradient descent loop
    print_cost -- if True, print the cost every 1000 iterations

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(3)
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[2]

    # Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
    ### START CODE HERE ### (≈ 5 lines of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    ### END CODE HERE ###

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        ### START CODE HERE ### (≈ 4 lines of code)
        # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
        A2, cache = forward_propagation(X, parameters)

        # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
        cost = compute_cost(A2, Y, parameters)

        # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
        grads = backward_propagation(parameters, cache, X, Y)

        # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
        parameters = update_parameters(parameters, grads)
        ### END CODE HERE ###

        # Print the cost every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration %i: %f" % (i, cost))

    return parameters
```
Prediction

With the trained parameters, call `forward_propagation(X, parameters)` again to compute the final output A2, and threshold it at 0.5 to split the predictions into 0 and 1.
```python
# GRADED FUNCTION: predict

def predict(parameters, X):
    """
    Using the learned parameters, predicts a class for each example in X

    Arguments:
    parameters -- python dictionary containing your parameters
    X -- input data of size (n_x, m)

    Returns
    predictions -- vector of predictions of our model (red: 0 / blue: 1)
    """
    # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
    ### START CODE HERE ### (≈ 2 lines of code)
    A2, cache = forward_propagation(X, parameters)
    predictions = (A2 > 0.5)
    ### END CODE HERE ###

    return predictions
```
```python
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)

# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
As you can see, the decision boundary learned by the trained neural network is much more reasonable.
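`plot_decision_boundary` is another helper from `planar_utils`; it is not defined in this post, but it works roughly like this (an approximation, not the exact helper code):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_decision_boundary(model, X, y):
    """Evaluate `model` on a dense grid and draw the filled regions plus the data points."""
    # Grid covering the data with some padding
    x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
    y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
    h = 0.01
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

    # model() receives points of shape (num_points, 2), which matches the lambda above
    Z = model(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)

    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.scatter(X[0, :], X[1, :], c=y.ravel(), cmap=plt.cm.Spectral)
```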
```python
# Print accuracy
predictions = predict(parameters, X)
print('Accuracy: %d' % float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100) + '%')
```
The accuracy reaches 90%. (In the print statement, `np.dot(Y, predictions.T)` counts the correctly predicted 1s and `np.dot(1-Y, 1-predictions.T)` the correctly predicted 0s.)
Tuning the parameters

Now we can try different hidden-layer sizes: [1, 2, 3, 4, 5, 20, 50].
```python
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
    plt.subplot(5, 2, i + 1)
    plt.title('Hidden Layer of size %d' % n_h)
    parameters = nn_model(X, Y, n_h, num_iterations = 5000)
    plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
    predictions = predict(parameters, X)
    accuracy = float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100)
    print("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
```
Accuracy for 1 hidden units: 67.5 %
Accuracy for 2 hidden units: 67.25 %
Accuracy for 3 hidden units: 90.75 %
Accuracy for 4 hidden units: 90.5 %
Accuracy for 5 hidden units: 91.25 %
Accuracy for 20 hidden units: 90.0 %
Accuracy for 50 hidden units: 90.25 %
```
The accuracy peaks at n_h = 5 (91.25%).