UFLDL Exercise: Learning color features with Sparse Autoencoders
2014-11-23 21:28
This section is fairly simple: we implement a linear decoder. Why a linear one? In some settings the inputs cannot be scaled into the range [0, 1] (for example, PCA/ZCA-whitened data, which has zero mean and unit variance and therefore does not necessarily fall in [0, 1]), while the sigmoid activation can only output values in (0, 1). In those cases the sigmoid output layer cannot reconstruct the input, so we use the identity (linear) function as the output-layer activation instead.
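To make the difference concrete, here is a minimal sketch contrasting the two output-layer choices on toy values; the names (z3, a3, delta3) follow the tutorial's notation, and the sizes are arbitrary rather than taken from the exercise:

% Minimal sketch contrasting the two output-layer activations on toy values
% (sizes here are arbitrary; names follow the tutorial's z3/a3/delta3 notation).
data = randn(4, 10);                  % whitened input: zero mean, not confined to [0,1]
a2   = rand(3, 10);                   % hidden-layer activations (sigmoid, so in (0,1))
W2   = randn(4, 3);  b2 = randn(4, 1);

z3 = bsxfun(@plus, W2 * a2, b2);      % output-layer pre-activation

a3_sigmoid = 1 ./ (1 + exp(-z3));     % sigmoid decoder: reconstructions stuck in (0,1)
a3_linear  = z3;                      % linear decoder: a3 = z3, unbounded, can match whitened data

% Output-layer residuals for the squared-error cost:
delta3_sigmoid = -(data - a3_sigmoid) .* a3_sigmoid .* (1 - a3_sigmoid);
delta3_linear  = -(data - a3_linear); % identity activation, so f'(z3) = 1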
STEP 1: Create and modify sparseAutoencoderLinearCost.m to use a linear decoder
sparseAutoencoderLinearCost.m
function [cost,grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                   lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
%                notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data.  So, data(:,i) is the i-th training example.

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
%                and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
%  W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
%  Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
%  as b1, etc.  Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
%  respect to W1.  I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
%  with respect to the input parameter W1(i,j).  Thus, W1grad should be equal to the term
%  [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
%  of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
%  Stated differently, if we were using batch gradient descent to optimize the parameters,
%  the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.
a1 = sigmoid(bsxfun(@plus, W1 * data, b1));   % hidden-layer activations
a2 = bsxfun(@plus, W2 * a1, b2);              % output layer: identity (linear) activation

p = mean(a1, 2);                              % average activation of each hidden unit
sparsity = sparsityParam .* log(sparsityParam ./ p) + ...
           (1 - sparsityParam) .* log((1 - sparsityParam) ./ (1 - p));   % KL-divergence sparsity penalty

cost = sum(sum((a2 - data).^2)) / 2 / size(data,2) + ...
       lambda / 2 * (sum(sum(W1.^2)) + sum(sum(W2.^2))) + ...
       beta * sum(sparsity);                  % squared error + weight decay + sparsity penalty

delt2 = (a2 - data);                          % output-layer residual; the activation is the identity, so its derivative is 1
delt1 = (W2' * delt2 + beta .* repmat((-sparsityParam ./ p + (1 - sparsityParam) ./ (1 - p)), 1, size(data,2))) ...
        .* a1 .* (1 - a1);                    % hidden-layer residual, including the sparsity term

W2grad = delt2 * a1'   ./ size(data,2) + lambda * W2;
W1grad = delt1 * data' ./ size(data,2) + lambda * W1;
b2grad = sum(delt2, 2) ./ size(data,2);
b1grad = sum(delt1, 2) ./ size(data,2);

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc).  Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

Of course, after writing sparseAutoencoderLinearCost you should still run a gradient check to make sure the code is correct before moving on to the next step.
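A minimal sketch of that check, assuming initializeParameters.m and computeNumericalGradient.m from the earlier sparse autoencoder exercise are on the path; the small debug sizes and hyperparameter values below are assumptions, not guaranteed to match your copy of the starter script:

% Gradient check on a tiny random problem (debug sizes are assumptions).
debugVisibleSize = 8;  debugHiddenSize = 5;
patches = rand(debugVisibleSize, 10);
theta = initializeParameters(debugHiddenSize, debugVisibleSize);   % from the starter code

lambda = 3e-3;  sparsityParam = 0.035;  beta = 5;
[cost, grad] = sparseAutoencoderLinearCost(theta, debugVisibleSize, debugHiddenSize, ...
                                           lambda, sparsityParam, beta, patches);
numGrad = computeNumericalGradient(@(x) sparseAutoencoderLinearCost(x, debugVisibleSize, ...
                                           debugHiddenSize, lambda, sparsityParam, beta, patches), theta);

% The analytic and numerical gradients should agree to very high precision
% (a relative difference on the order of 1e-9).
diff = norm(numGrad - grad) / norm(numGrad + grad);
disp(diff);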
After that, just run the rest of the provided code to see what features the autoencoder has learned.
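For reference, the sketch below shows roughly what the provided exercise script does: load the sampled STL-10 color patches, ZCA-whiten them, train with minFunc, and display the learned features. The file name stlSampledPatches.mat, the helper displayColorNetwork, and the exact hyperparameter values are assumptions reconstructed from the starter code and should be checked against your own copy.

% Rough sketch of the training/visualization steps (names and values are assumptions).
imageChannels = 3;  patchDim = 8;
visibleSize = patchDim * patchDim * imageChannels;    % 192
hiddenSize = 400;  sparsityParam = 0.035;  lambda = 3e-3;  beta = 5;  epsilon = 0.1;

load stlSampledPatches.mat                            % provides 'patches' (192 x 100000)

% ZCA whitening: remove the mean, then rescale in the covariance eigenbasis.
meanPatch = mean(patches, 2);
patches = bsxfun(@minus, patches, meanPatch);
sigma = patches * patches' / size(patches, 2);
[u, s, ~] = svd(sigma);
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u';
patches = ZCAWhite * patches;                         % whitened input, no longer in [0,1]

% Train with minFunc (L-BFGS), using the linear-decoder cost above.
theta = initializeParameters(hiddenSize, visibleSize);
options = struct('Method', 'lbfgs', 'maxIter', 400, 'display', 'on');
optTheta = minFunc(@(p) sparseAutoencoderLinearCost(p, visibleSize, hiddenSize, ...
                        lambda, sparsityParam, beta, patches), theta, options);

% Visualize the learned color features mapped back through the whitening matrix.
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
displayColorNetwork((W * ZCAWhite)');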