cs231n: assignment1 - Q4: Two-Layer Neural Network
2016-12-05 15:24
This is my own write-up of the cs231n assignment. I would appreciate any feedback pointing out errors and shortcomings. Thanks.
I spent a long time tuning the hyperparameters; at one point I reached 57+% validation accuracy, but I did not record the settings at the time and could not reproduce them afterwards.
two_layer_net.ipynb contents:
Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.

# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params, where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.

input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5

def init_toy_model():
    np.random.seed(0)
    return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)

def init_toy_data():
    np.random.seed(1)
    X = 10 * np.random.randn(num_inputs, input_size)
    y = np.array([0, 1, 2, 2, 1])
    return X, y

net = init_toy_model()
X, y = init_toy_data()
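If you want to confirm how the toy model is laid out, a quick optional check (not part of the original notebook) of self.params shows the shapes described above:

# Optional sanity check on the toy model created above.
print 'W1', net.params['W1'].shape   # (4, 10)
print 'b1', net.params['b1'].shape   # (10,)
print 'W2', net.params['W2'].shape   # (10, 3)
print 'b2', net.params['b2'].shape   # (3,)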
Forward pass: compute scores
Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: it takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
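For reference, the scores for this architecture are an affine layer, a ReLU, and a second affine layer; the cell after this one checks the result against precomputed values. A minimal numpy sketch of what the forward pass should produce, assuming X of shape (N, D) and parameters W1 (D, H), b1 (H,), W2 (H, C), b2 (C,) (variable names here are illustrative, not taken from the starter code):

# Hedged sketch of the forward scores for this two-layer net.
hidden = np.maximum(0, X.dot(W1) + b1)   # ReLU(X W1 + b1), shape (N, H)
scores = hidden.dot(W2) + b2             # class scores, shape (N, C)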
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
    [-0.81233741, -1.27654624, -0.70335995],
    [-0.17129677, -1.18803311, -0.47310444],
    [-0.51590475, -1.01354314, -0.8504215 ],
    [-0.15419291, -0.48629638, -0.52901952],
    [-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print

# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
Your scores:
[[-0.81233741 -1.27654624 -0.70335995]
 [-0.17129677 -1.18803311 -0.47310444]
 [-0.51590475 -1.01354314 -0.8504215 ]
 [-0.15419291 -0.48629638 -0.52901952]
 [-0.00618733 -0.12435261 -0.15226949]]

correct scores:
[[-0.81233741 -1.27654624 -0.70335995]
 [-0.17129677 -1.18803311 -0.47310444]
 [-0.51590475 -1.01354314 -0.8504215 ]
 [-0.15419291 -0.48629638 -0.52901952]
 [-0.00618733 -0.12435261 -0.15226949]]

Difference between your scores and correct scores:
3.68027209324e-08
Forward pass: compute loss
In the same function, implement the second part that computes the data and regularization loss.

loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133

# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
Difference between your loss and correct loss: 1.79412040779e-13
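For reference, the quantity being checked here is the averaged softmax data loss plus the 0.5-scaled L2 regularization term, written out below for clarity (s_{i,j} is the score of class j for sample i, and lambda is the regularization strength reg):

$$L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s_{i,y_i}}}{\sum_{j}e^{s_{i,j}}} + \frac{\lambda}{2}\left(\lVert W_1\rVert_2^2 + \lVert W_2\rVert_2^2\right)$$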
Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
from cs231n.gradient_check import eval_numerical_gradient

# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.

loss, grads = net.loss(X, y, reg=0.1)

# these should all be less than 1e-8 or so
for param_name in grads:
    f = lambda W: net.loss(X, y, reg=0.1)[0]
    param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
    print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
W1 max relative error: 3.669857e-09
W2 max relative error: 3.440708e-09
b2 max relative error: 3.865028e-11
b1 max relative error: 1.125423e-09
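As a debugging reference (derived from the loss above, not taken from the starter code), the analytic gradients being checked can be written as follows, with p_i = softmax(s_i), z_1 = XW_1 + b_1 and a = max(0, z_1):

$$\frac{\partial L}{\partial s_{ij}} = \frac{1}{N}\bigl(p_{ij} - \mathbf{1}[j = y_i]\bigr)$$
$$\frac{\partial L}{\partial W_2} = a^\top\frac{\partial L}{\partial s} + \lambda W_2,\qquad \frac{\partial L}{\partial b_2} = \sum_i\frac{\partial L}{\partial s_i}$$
$$\frac{\partial L}{\partial z_1} = \left(\frac{\partial L}{\partial s}\,W_2^\top\right)\odot\mathbf{1}[z_1 > 0]$$
$$\frac{\partial L}{\partial W_1} = X^\top\frac{\partial L}{\partial z_1} + \lambda W_1,\qquad \frac{\partial L}{\partial b_1} = \sum_i\frac{\partial L}{\partial z_{1,i}}$$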
Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
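The two missing pieces boil down to a vanilla SGD update on every parameter and an argmax over the class scores; in symbols (eta is the learning rate):

$$\theta \leftarrow \theta - \eta\,\frac{\partial L}{\partial \theta}\quad\text{for }\theta\in\{W_1, b_1, W_2, b_2\},\qquad \hat{y}_i = \arg\max_{c}\, s_{i,c}$$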
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
net = init_toy_model()
stats = net.train(X, y, X, y,
                  learning_rate=1e-1, reg=1e-5,
                  num_iters=100, verbose=False)

print 'Final training loss: ', stats['loss_history'][-1]

# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Final training loss: 0.0171496079387
Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.

from cs231n.data_utils import load_CIFAR10

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # Subsample the data
    mask = range(num_training, num_training + num_validation)
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = range(num_training)
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = range(num_test)
    X_test = X_test[mask]
    y_test = y_test[mask]

    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image

    # Reshape data to rows
    X_train = X_train.reshape(num_training, -1)
    X_val = X_val.reshape(num_validation, -1)
    X_test = X_test.reshape(num_test, -1)

    return X_train, y_train, X_val, y_val, X_test, y_test

# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
Train data shape:  (49000, 3072)
Train labels shape:  (49000,)
Validation data shape:  (1000, 3072)
Validation labels shape:  (1000,)
Test data shape:  (1000, 3072)
Test labels shape:  (1000,)
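The 3072 in these shapes is just each 32x32 RGB image flattened into one row:

print 32 * 32 * 3   # 3072 features per flattened CIFAR-10 image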
Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.

input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)

# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
                  num_iters=1000, batch_size=200,
                  learning_rate=1e-4, learning_rate_decay=0.95,
                  reg=0.5, verbose=True)

# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
iteration 0 / 1000: loss 2.302954
iteration 100 / 1000: loss 2.302550
iteration 200 / 1000: loss 2.297648
iteration 300 / 1000: loss 2.259602
iteration 400 / 1000: loss 2.204170
iteration 500 / 1000: loss 2.118565
iteration 600 / 1000: loss 2.051535
iteration 700 / 1000: loss 1.988466
iteration 800 / 1000: loss 2.006591
iteration 900 / 1000: loss 1.951473
Validation accuracy:  0.287
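A quick sanity check on the decay schedule described above, using only the default settings from this cell (the numbers below are plain arithmetic, not measured results): with 49,000 training examples and batch_size=200 an epoch is 245 iterations, so 1,000 iterations is only about 4 epochs and the decayed learning rate barely moves.

# How far the exponential decay gets with these defaults (Python 2 integer division).
num_train, batch_size, num_iters = 49000, 200, 1000
iterations_per_epoch = num_train / batch_size   # 245 iterations per epoch
num_epochs = num_iters / iterations_per_epoch   # about 4 epochs in this run
print 1e-4 * 0.95 ** num_epochs                 # final learning rate, roughly 8.1e-05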
Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.

One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')

plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid

# Visualize the weights of the network
def show_net_weights(net):
    W1 = net.params['W1']
    W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
    plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
    plt.gca().axis('off')
    plt.show()

show_net_weights(net)
Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.

Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can with a fully-connected Neural Network. For every 1% above 52% on the test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, or adding features to the solver, etc.).
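One alternative to the hand-picked grids used in the next cell is random search on a log scale over learning rate and regularization strength. Below is a minimal sketch of that idea; it reuses TwoLayerNet and the data arrays from above, and the trial count, iteration budget, hidden size, and sampling ranges are illustrative choices rather than values from my actual runs.

# Hedged sketch: random hyperparameter search on a log scale.
num_classes = 10
best_val, best_model = -1, None
for trial in xrange(5):
    lr = 10 ** np.random.uniform(-4, -2)   # learning rate sampled log-uniformly in [1e-4, 1e-2]
    rg = 10 ** np.random.uniform(-2, 0)    # regularization sampled log-uniformly in [1e-2, 1]
    model = TwoLayerNet(32 * 32 * 3, 100, num_classes)
    model.train(X_train, y_train, X_val, y_val,
                num_iters=400, batch_size=200,
                learning_rate=lr, learning_rate_decay=0.95, reg=rg)
    val_acc = (model.predict(X_val) == y_val).mean()
    if val_acc > best_val:
        best_val, best_model = val_acc, model
    print 'trial %d: lr %.2e, reg %.2e, val_acc %.3f' % (trial, lr, rg, val_acc)
print 'best random-search val_acc:', best_val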
best_net = None  # store the best model into this

#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net.                                                            #
#                                                                               #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative   #
# differences from the ones we saw above for the poorly tuned network.         #
#                                                                               #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters         #
# automatically like we did on the previous exercises.                         #
#################################################################################
best_acc = -1
input_size = 32 * 32 * 3
best_stats = None

#hidden_size_choice = [x*100+50 for x in xrange(11)]
#reg_choice = [0.1, 0.5, 5, 15, 50, 100, 1000]
#learning_rate_choice = [1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 1e-1, 1]
#batch_size_choice = [8, 40, 80, 160, 500, 1000]
hidden_size_choice = [400]
learning_rate_choice = [3e-3]
reg_choice = [0.02, 0.05, 0.1]
batch_size_choice = [500]
num_iters_choice = [1200]

for batch_size_curr in batch_size_choice:
    for reg_cur in reg_choice:
        for learning_rate_curr in learning_rate_choice:
            for hidden_size_curr in hidden_size_choice:
                for num_iters_curr in num_iters_choice:
                    print
                    print "current training hidden_size:", hidden_size_curr
                    print "current training learning_rate:", learning_rate_curr
                    print "current training reg:", reg_cur
                    print "current training batch_size:", batch_size_curr
                    net = TwoLayerNet(input_size, hidden_size_curr, num_classes)
                    stats_curr = net.train(X_train, y_train, X_val, y_val,
                                           num_iters=num_iters_curr, batch_size=batch_size_curr,
                                           learning_rate=learning_rate_curr, learning_rate_decay=0.95,
                                           reg=reg_cur, verbose=True)
                    val_acc = (net.predict(X_val) == y_val).mean()
                    print "current val_acc:", val_acc
                    if val_acc > best_acc:
                        best_acc = val_acc
                        best_net = net
                        best_stats = stats_curr  # keep the stats of the best run for plotting later
                        print
                        print "best_acc:", best_acc
                        print "best hidden_size:", best_net.params['W1'].shape[1]
                        print "best learning_rate:", best_net.hyper_params['learning_rate']
                        print "best reg:", best_net.hyper_params['reg']
                        print "best batch_size:", best_net.hyper_params['batch_size']
                        print
#################################################################################
#                               END OF YOUR CODE                               #
#################################################################################
current training hidden_size: 400
current training learning_rate: 0.003
current training reg: 0.02
current training batch_size: 500
iteration 0 / 1200: loss 2.302679
iteration 100 / 1200: loss 1.651489
iteration 200 / 1200: loss 1.500087
iteration 300 / 1200: loss 1.391165
iteration 400 / 1200: loss 1.515288
iteration 500 / 1200: loss 1.409726
iteration 600 / 1200: loss 1.450177
iteration 700 / 1200: loss 1.439996
iteration 800 / 1200: loss 1.286857
iteration 900 / 1200: loss 1.289027
iteration 1000 / 1200: loss 1.310876
iteration 1100 / 1200: loss 1.150956
current val_acc: 0.54

best_acc: 0.54
best hidden_size: 400
best learning_rate: 0.003
best reg: 0.02
best batch_size: 500

current training hidden_size: 400
current training learning_rate: 0.003
current training reg: 0.05
current training batch_size: 500
iteration 0 / 1200: loss 2.302859
iteration 100 / 1200: loss 1.761263
iteration 200 / 1200: loss 1.579761
iteration 300 / 1200: loss 1.472029
iteration 400 / 1200: loss 1.458600
iteration 500 / 1200: loss 1.414810
iteration 600 / 1200: loss 1.425350
iteration 700 / 1200: loss 1.366904
iteration 800 / 1200: loss 1.374242
iteration 900 / 1200: loss 1.415730
iteration 1000 / 1200: loss 1.152137
iteration 1100 / 1200: loss 1.198664
current val_acc: 0.514

current training hidden_size: 400
current training learning_rate: 0.003
current training reg: 0.1
current training batch_size: 500
iteration 0 / 1200: loss 2.303143
iteration 100 / 1200: loss 1.722455
iteration 200 / 1200: loss 1.530982
iteration 300 / 1200: loss 1.543712
iteration 400 / 1200: loss 1.400823
iteration 500 / 1200: loss 1.451125
iteration 600 / 1200: loss 1.402639
iteration 700 / 1200: loss 1.476569
iteration 800 / 1200: loss 1.349223
iteration 900 / 1200: loss 1.191459
iteration 1000 / 1200: loss 1.279797
iteration 1100 / 1200: loss 1.268143
current val_acc: 0.509
# Added by me (not part of the original notebook):
# fine-tune within the range found by the sweep above.
test_net = TwoLayerNet(input_size, 450, num_classes)
test_stats = test_net.train(X_train, y_train, X_val, y_val,
                            num_iters=1800, batch_size=500,
                            learning_rate=2e-3, learning_rate_decay=0.95,
                            reg=0.02, verbose=True)
test_val_acc = (test_net.predict(X_val) == y_val).mean()
print
print "test_acc:", test_val_acc
print "test hidden_size:", test_net.hyper_params['hidden_size']
print "test learning_rate:", test_net.hyper_params['learning_rate']
print "test reg:", test_net.hyper_params['reg']
print "test batch_size:", test_net.hyper_params['batch_size']
print "test num_iter:", test_net.hyper_params['num_iter']
iteration 0 / 1800: loss 2.302743
iteration 100 / 1800: loss 1.635457
iteration 200 / 1800: loss 1.517586
iteration 300 / 1800: loss 1.529778
iteration 400 / 1800: loss 1.442434
iteration 500 / 1800: loss 1.374035
iteration 600 / 1800: loss 1.355994
iteration 700 / 1800: loss 1.322699
iteration 800 / 1800: loss 1.254596
iteration 900 / 1800: loss 1.260026
iteration 1000 / 1800: loss 1.164887
iteration 1100 / 1800: loss 1.162341
iteration 1200 / 1800: loss 1.170499
iteration 1300 / 1800: loss 1.165954
iteration 1400 / 1800: loss 1.129984
iteration 1500 / 1800: loss 1.118211
iteration 1600 / 1800: loss 1.088840
iteration 1700 / 1800: loss 1.041198

test_acc: 0.551
test hidden_size: 450
test learning_rate: 0.002
test reg: 0.02
test batch_size: 500
test num_iter: 1800
# Added by me (not part of the original notebook).
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(test_stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')

plt.subplot(2, 1, 2)
plt.plot(test_stats['train_acc_history'], label='train')
plt.plot(test_stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
# visualize the weights of the best network
show_net_weights(best_net)
print "best hidden_size:", best_net.hyper_params['hidden_size']
print "learning_rate", best_net.hyper_params['learning_rate']
print "reg", best_net.hyper_params['reg']
print "batch_size", best_net.hyper_params['batch_size']
best hidden_size: 400
learning_rate 0.003
reg 0.02
batch_size 500
Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%. We will give you an extra bonus point for every 1% of accuracy above 52%.
final_test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', final_test_acc

test_acc = (test_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
Test accuracy:  0.533
Test accuracy:  0.546
neural_net.py contents:
import numpy as np
import matplotlib.pyplot as plt


class TwoLayerNet(object):
    """
    A two-layer fully-connected neural network. The net has an input dimension
    of N, a hidden layer dimension of H, and performs classification over C
    classes. We train the network with a softmax loss function and L2
    regularization on the weight matrices. The network uses a ReLU nonlinearity
    after the first fully connected layer.

    In other words, the network has the following architecture:

    input - fully connected layer - ReLU - fully connected layer - softmax

    The outputs of the second fully-connected layer are the scores for each class.
    """

    def __init__(self, input_size, hidden_size, output_size, std=1e-4):
        """
        Initialize the model. Weights are initialized to small random values and
        biases are initialized to zero. Weights and biases are stored in the
        variable self.params, which is a dictionary with the following keys:

        W1: First layer weights; has shape (D, H)
        b1: First layer biases; has shape (H,)
        W2: Second layer weights; has shape (H, C)
        b2: Second layer biases; has shape (C,)

        Inputs:
        - input_size: The dimension D of the input data.
        - hidden_size: The number of neurons H in the hidden layer.
        - output_size: The number of classes C.
        """
        self.params = {}
        self.params['W1'] = std * np.random.randn(input_size, hidden_size)
        self.params['b1'] = np.zeros(hidden_size)
        self.params['W2'] = std * np.random.randn(hidden_size, output_size)
        self.params['b2'] = np.zeros(output_size)

    def loss(self, X, y=None, reg=0.0):
        """
        Compute the loss and gradients for a two layer fully connected neural
        network.

        Inputs:
        - X: Input data of shape (N, D). Each X[i] is a training sample.
        - y: Vector of training labels. y[i] is the label for X[i], and each
          y[i] is an integer in the range 0 <= y[i] < C. This parameter is
          optional; if it is not passed then we only return scores, and if it is
          passed then we instead return the loss and gradients.
        - reg: Regularization strength.

        Returns:
        If y is None, return a matrix scores of shape (N, C) where scores[i, c]
        is the score for class c on input X[i].

        If y is not None, instead return a tuple of:
        - loss: Loss (data loss and regularization loss) for this batch of
          training samples.
        - grads: Dictionary mapping parameter names to gradients of those
          parameters with respect to the loss function; has the same keys as
          self.params.
        """
        # Unpack variables from the params dictionary
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        N, D = X.shape

        # Compute the forward pass
        scores = None
        #######################################################################
        # TODO: Perform the forward pass, computing the class scores for the  #
        # input. Store the result in the scores variable, which should be an  #
        # array of shape (N, C).                                              #
        #######################################################################
        z2 = X.dot(W1) + b1          # hidden pre-activation, shape (N, H)
        a2 = np.maximum(z2, 0)       # ReLU activation
        scores = a2.dot(W2) + b2     # class scores, shape (N, C)
        #######################################################################
        #                           END OF YOUR CODE                          #
        #######################################################################

        # If the targets are not given then jump out, we're done
        if y is None:
            return scores

        # Compute the loss
        loss = None
        #######################################################################
        # TODO: Finish the forward pass, and compute the loss. This should    #
        # include both the data loss and L2 regularization for W1 and W2.     #
        # Store the result in the variable loss, which should be a scalar.    #
        # Use the Softmax classifier loss. So that your results match ours,   #
        # multiply the regularization loss by 0.5                             #
        #######################################################################
        exp_scores = np.exp(scores)
        row_sum = exp_scores.sum(axis=1).reshape((N, 1))
        norm_scores = exp_scores / row_sum
        data_loss = -1.0 / N * np.log(norm_scores[np.arange(N), y]).sum()
        reg_loss = 0.5 * reg * (np.sum(W1 * W1) + np.sum(W2 * W2))
        loss = data_loss + reg_loss
        #######################################################################
        #                           END OF YOUR CODE                          #
        #######################################################################

        # Backward pass: compute gradients
        grads = {}
        #######################################################################
        # TODO: Compute the backward pass, computing the derivatives of the   #
        # weights and biases. Store the results in the grads dictionary. For  #
        # example, grads['W1'] should store the gradient on W1, and be a      #
        # matrix of same size                                                 #
        #######################################################################
        # delta3 = dloss / dscores (softmax gradient)
        delta3 = np.zeros_like(norm_scores)
        delta3[np.arange(N), y] -= 1
        delta3 += norm_scores
        grads['W2'] = a2.T.dot(delta3) / N + reg * W2
        grads['b2'] = np.ones(N).dot(delta3) / N
        # backprop through the ReLU
        da2_dz2 = np.zeros_like(z2)
        da2_dz2[z2 > 0] = 1
        delta2 = delta3.dot(W2.T) * da2_dz2
        grads['W1'] = X.T.dot(delta2) / N + reg * W1
        grads['b1'] = np.ones(N).dot(delta2) / N
        #######################################################################
        #                           END OF YOUR CODE                          #
        #######################################################################

        return loss, grads

    def train(self, X, y, X_val, y_val,
              learning_rate=1e-3, learning_rate_decay=0.95,
              reg=1e-5, num_iters=100,
              batch_size=200, verbose=False):
        """
        Train this neural network using stochastic gradient descent.

        Inputs:
        - X: A numpy array of shape (N, D) giving training data.
        - y: A numpy array of shape (N,) giving training labels; y[i] = c means
          that X[i] has label c, where 0 <= c < C.
        - X_val: A numpy array of shape (N_val, D) giving validation data.
        - y_val: A numpy array of shape (N_val,) giving validation labels.
        - learning_rate: Scalar giving learning rate for optimization.
        - learning_rate_decay: Scalar giving factor used to decay the learning
          rate after each epoch.
        - reg: Scalar giving regularization strength.
        - num_iters: Number of steps to take when optimizing.
        - batch_size: Number of training examples to use per step.
        - verbose: boolean; if true print progress during optimization.
        """
        self.hyper_params = {}
        self.hyper_params['learning_rate'] = learning_rate
        self.hyper_params['reg'] = reg
        self.hyper_params['batch_size'] = batch_size
        self.hyper_params['hidden_size'] = self.params['W1'].shape[1]
        self.hyper_params['num_iter'] = num_iters

        num_train = X.shape[0]
        iterations_per_epoch = max(num_train / batch_size, 1)

        # Use SGD to optimize the parameters in self.model
        loss_history = []
        train_acc_history = []
        val_acc_history = []

        for it in xrange(num_iters):
            X_batch = None
            y_batch = None

            ###################################################################
            # TODO: Create a random minibatch of training data and labels,    #
            # storing them in X_batch and y_batch respectively.               #
            ###################################################################
            batch_inx = np.random.choice(num_train, batch_size)
            X_batch = X[batch_inx, :]
            y_batch = y[batch_inx]
            ###################################################################
            #                         END OF YOUR CODE                        #
            ###################################################################

            # Compute loss and gradients using the current minibatch
            loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
            loss_history.append(loss)

            ###################################################################
            # TODO: Use the gradients in the grads dictionary to update the   #
            # parameters of the network (stored in the dictionary             #
            # self.params) using stochastic gradient descent. You'll need to  #
            # use the gradients stored in the grads dictionary defined above. #
            ###################################################################
            self.params['W1'] -= learning_rate * grads['W1']
            self.params['b1'] -= learning_rate * grads['b1']
            self.params['W2'] -= learning_rate * grads['W2']
            self.params['b2'] -= learning_rate * grads['b2']
            ###################################################################
            #                         END OF YOUR CODE                        #
            ###################################################################

            if verbose and it % 100 == 0:
                print 'iteration %d / %d: loss %f' % (it, num_iters, loss)

            # Every epoch, check train and val accuracy and decay learning rate.
            if it % iterations_per_epoch == 0:
                # Check accuracy
                train_acc = (self.predict(X_batch) == y_batch).mean()
                val_acc = (self.predict(X_val) == y_val).mean()
                train_acc_history.append(train_acc)
                val_acc_history.append(val_acc)

                # Decay learning rate
                learning_rate *= learning_rate_decay

        return {
            'loss_history': loss_history,
            'train_acc_history': train_acc_history,
            'val_acc_history': val_acc_history,
        }

    def predict(self, X):
        """
        Use the trained weights of this two-layer network to predict labels for
        data points. For each data point we predict scores for each of the C
        classes, and assign each data point to the class with the highest score.

        Inputs:
        - X: A numpy array of shape (N, D) giving N D-dimensional data points
          to classify.

        Returns:
        - y_pred: A numpy array of shape (N,) giving predicted labels for each
          of the elements of X. For all i, y_pred[i] = c means that X[i] is
          predicted to have class c, where 0 <= c < C.
        """
        y_pred = None
        #######################################################################
        # TODO: Implement this function; it should be VERY simple!            #
        #######################################################################
        scores = self.loss(X)
        y_pred = np.argmax(scores, axis=1)
        #######################################################################
        #                           END OF YOUR CODE                          #
        #######################################################################

        return y_pred