UFLDL Tutorial Exercise Solutions (1): Sparse Autoencoder
2016-11-21 13:02
Tutorial: http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial
Exercise: http://deeplearning.stanford.edu/wiki/index.php/Exercise:Sparse_Autoencoder
Code
The implementation avoids explicit loops where possible, preferring vectorized code.
Step 1: Generate training set (sampleIMAGES.m)
Randomly sample 1000 image patches:
for i = 1:numpatches
    r = 0;
    imth = randi([1,10],1);      % pick one of the 10 images at random
    patchth1 = randi([1,504],1); % random top-left row of an 8x8 patch
    patchth2 = randi([1,504],1); % random top-left column
    for j = patchth1:(patchth1+7)
        for k = patchth2:(patchth2+7)
            r = r + 1;
            patches(r,i) = IMAGES(j,k,imth);
        end
    end
end
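The two inner loops simply copy an 8x8 patch element by element. The same sampling can be sketched in NumPy without the inner loops; this is a hypothetical re-implementation, with `images` standing in for the 512x512x10 `IMAGES` array:

```python
import numpy as np

def sample_patches(images, num_patches=1000, patch_size=8):
    """Sample random patch_size x patch_size patches from a stack of images.

    images: array of shape (H, W, num_images) -- stands in for IMAGES.
    Returns (patch_size**2, num_patches), one flattened patch per column.
    """
    h, w, n = images.shape
    patches = np.zeros((patch_size ** 2, num_patches))
    rng = np.random.default_rng(0)
    for i in range(num_patches):
        img = rng.integers(0, n)                 # random image index
        r = rng.integers(0, h - patch_size + 1)  # random top-left row
        c = rng.integers(0, w - patch_size + 1)  # random top-left column
        patch = images[r:r + patch_size, c:c + patch_size, img]
        # Row-major flatten matches the MATLAB loop order (k varies fastest)
        patches[:, i] = patch.flatten(order='C')
    return patches
```

One small note on the MATLAB version: for 512x512 images and 8x8 patches the top-left corner can range over [1, 505]; the code's `randi([1,504],1)` is slightly conservative but still valid.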
Step 2: Sparse autoencoder objective (sparseAutoencoderCost.m)
numSample = size(data,2);
%% Forward propagation: compute the activations of each layer
z2 = W1 * data + repmat(b1,1,numSample);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2,1,numSample);
a3 = sigmoid(z3);
% Average activation of each hidden unit over the training set
averageActiv = (1/numSample)*sum(a2,2);
% KL-divergence sparsity penalty
sparsityPenalty = sum(sparsityParam.*log(sparsityParam./averageActiv)+(1-sparsityParam).*log((1-sparsityParam)./(1-averageActiv)));
% Extra term that the sparsity penalty contributes to delta2
sparsityDelta = -(sparsityParam./averageActiv)+(1-sparsityParam)./(1-averageActiv);
% Error terms (residuals)
delta3 = -(data-a3).*(a3.*(1-a3)); % output layer
delta2 = ((W2'*delta3)+beta*repmat(sparsityDelta,1,numSample)).*(a2.*(1-a2)); % hidden layer
% Overall cost: squared error + weight decay + sparsity penalty
cost = (1/numSample)*(1/2)*sum(sum((a3-data).*(a3-data))) + (lambda/2)*(sum(sum(W1.*W1))+sum(sum(W2.*W2))) + beta*sparsityPenalty;
% Partial derivatives of J_sparse(W,b) with respect to W1, W2, b1, b2
W1grad = (1/numSample)*(delta2 * data') + lambda.*W1;
W2grad = (1/numSample)*(delta3 * a2') + lambda.*W2;
b1grad = (1/numSample)*sum(delta2,2);
b2grad = (1/numSample)*sum(delta3,2);
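As a cross-check of the backpropagation above, here is a rough NumPy transcription of the same cost and gradient computation. Variable names mirror the MATLAB code; `W1`, `W2`, `b1`, `b2` and the hyperparameters are assumed given, so this is a sketch rather than the exercise's official solution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(W1, b1, W2, b2, data, lam, sparsity_param, beta):
    """NumPy transcription of sparseAutoencoderCost.m.

    data: (visible_size, num_samples); returns (cost, dict of gradients).
    """
    m = data.shape[1]
    # Forward propagation
    z2 = W1 @ data + b1[:, None]
    a2 = sigmoid(z2)
    z3 = W2 @ a2 + b2[:, None]
    a3 = sigmoid(z3)
    # Average hidden activation and KL sparsity penalty
    rho_hat = a2.mean(axis=1)
    rho = sparsity_param
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    # Error terms; the sparsity penalty adds an extra term to delta2
    sparsity_delta = -rho / rho_hat + (1 - rho) / (1 - rho_hat)
    delta3 = -(data - a3) * a3 * (1 - a3)
    delta2 = (W2.T @ delta3 + beta * sparsity_delta[:, None]) * a2 * (1 - a2)
    # Cost: squared error + weight decay + sparsity penalty
    cost = (0.5 / m) * np.sum((a3 - data) ** 2) \
         + (lam / 2) * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) \
         + beta * kl
    grads = {
        'W1': delta2 @ data.T / m + lam * W1,
        'W2': delta3 @ a2.T / m + lam * W2,
        'b1': delta2.sum(axis=1) / m,
        'b2': delta3.sum(axis=1) / m,
    }
    return cost, grads
```

Broadcasting with `b1[:, None]` replaces the `repmat` calls in the MATLAB version.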
Step 3: Gradient checking (computeNumericalGradient.m)
EPSILON = 10^(-4);
E = eye(size(theta,1),size(theta,1));
for i = 1:size(theta,1)
    % Central-difference approximation of the i-th partial derivative
    numgrad(i,1) = (J(theta+EPSILON*E(:,i))-J(theta-EPSILON*E(:,i)))./(2*EPSILON);
end
</for>
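The same central-difference check can be sketched in NumPy; `J` is any scalar-valued function of the parameter vector `theta` (both names chosen to mirror the MATLAB code):

```python
import numpy as np

def compute_numerical_gradient(J, theta, epsilon=1e-4):
    """Central-difference numerical gradient of scalar function J at theta."""
    numgrad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = epsilon  # perturb only the i-th coordinate
        numgrad[i] = (J(theta + e) - J(theta - e)) / (2 * epsilon)
    return numgrad
```

Comparing this against the analytic `W1grad`/`W2grad`/`b1grad`/`b2grad` (unrolled into one vector) should give a difference on the order of 1e-9 if the backpropagation is correct.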
Results