
UFLDL Tutorial Exercise Answers (1): Sparse Autoencoder

2016-11-21 13:02
Tutorial: http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial

Exercise: http://deeplearning.stanford.edu/wiki/index.php/Exercise:Sparse_Autoencoder

Code

The implementation avoids explicit loops as much as possible; wherever the computation could be vectorized, it was.

Step 1: Generate training set (sampleIMAGES.m)

Randomly sample 1000 8×8 image patches from the 10 provided 512×512 images:

for i = 1:numpatches
    r = 0;
    imth = randi([1, 10], 1);        % pick one of the 10 images at random
    patchth1 = randi([1, 505], 1);   % top-left row: 512 - 8 + 1 = 505 valid positions
    patchth2 = randi([1, 505], 1);   % top-left column
    for j = patchth1:(patchth1 + 7)
        for k = patchth2:(patchth2 + 7)
            r = r + 1;
            patches(r, i) = IMAGES(j, k, imth);
        end
    end
end
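The same sampling step can be sketched as a NumPy port (a rough sketch, not the exercise's MATLAB; the function name `sample_patches` and the seeded generator are my own choices):

```python
import numpy as np

def sample_patches(images, num_patches=1000, patch_size=8, seed=0):
    """Randomly crop square patches; returns (patch_size**2, num_patches)."""
    h, w, n = images.shape
    patches = np.zeros((patch_size * patch_size, num_patches))
    rng = np.random.default_rng(seed)
    for i in range(num_patches):
        img = rng.integers(0, n)                  # which image
        r = rng.integers(0, h - patch_size + 1)   # top-left row
        c = rng.integers(0, w - patch_size + 1)   # top-left column
        patch = images[r:r + patch_size, c:c + patch_size, img]
        patches[:, i] = patch.ravel()             # flatten into one column
    return patches
```

Each column of the result is one flattened 8×8 patch, matching the column-per-example layout that the MATLAB `patches` matrix uses.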

Step 2: Sparse autoencoder objective (sparseAutoencoderCost.m)

numSample = size(data,2);
% Feedforward pass: compute the activations of each layer
z2 = W1 * data + repmat(b1,1,numSample);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2,1,numSample);
a3 = sigmoid(z3);

% Average activation of each hidden unit
averageActiv = (1/numSample)*sum(a2,2);
% KL-divergence sparsity penalty
aparsityParameter = sum(sparsityParam.*log(sparsityParam./averageActiv)+(1-sparsityParam).*log((1-sparsityParam)./(1-averageActiv)));
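The feedforward pass and the KL penalty can be sketched in NumPy as well (variable names follow the MATLAB above; the function itself is my own framing, not part of the exercise's starter code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_and_kl(W1, b1, W2, b2, data, rho):
    """Feedforward pass plus the KL sparsity penalty on the hidden layer."""
    a2 = sigmoid(W1 @ data + b1[:, None])   # hidden activations
    a3 = sigmoid(W2 @ a2 + b2[:, None])     # reconstruction
    rho_hat = a2.mean(axis=1)               # average activation per hidden unit
    # KL(rho || rho_hat) summed over hidden units; each term is >= 0
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return a2, a3, rho_hat, kl
```

Because the sigmoid keeps every activation strictly inside (0, 1), `rho_hat` never hits 0 or 1 and the logarithms stay finite.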

% Extra term that appears in delta2 because of the sparsity penalty
sparsityDelta = -(sparsityParam./averageActiv)+(1-sparsityParam)./(1-averageActiv);
% Error terms (residuals)
delta3 = -(data-a3).*(a3.*(1-a3)); % output layer
delta2 = ((W2'*delta3)+beta*repmat(sparsityDelta,1,numSample)).*(a2.*(1-a2)); % hidden layer

% Overall cost: reconstruction error + weight decay + sparsity penalty
cost = (1/numSample)*(1/2)*sum(sum((a3-data).*(a3-data))) + (lambda/2)*(sum(sum(W1.*W1))+sum(sum(W2.*W2))) + beta*aparsityParameter;

% Partial derivatives of J_sparse(W,b) with respect to the parameters
W1grad = (1/numSample)* (delta2 * data') + lambda.*W1;
W2grad = (1/numSample)* (delta3 * a2') + lambda.*W2;
b1grad = (1/numSample)* sum(delta2,2);
b2grad = (1/numSample)* sum(delta3,2);
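Putting the whole of Step 2 together, here is a NumPy sketch of the cost and gradients under the same conventions (a rough port for illustration; the function name `sparse_ae_cost` is my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_cost(W1, b1, W2, b2, data, rho, lam, beta):
    """Squared-error cost with weight decay and KL sparsity, plus gradients."""
    m = data.shape[1]
    a2 = sigmoid(W1 @ data + b1[:, None])
    a3 = sigmoid(W2 @ a2 + b2[:, None])
    rho_hat = a2.mean(axis=1)

    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    cost = (0.5 / m) * np.sum((a3 - data) ** 2) \
         + (lam / 2) * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) + beta * kl

    # Error terms: output layer, then hidden layer with the sparsity term
    delta3 = -(data - a3) * a3 * (1 - a3)
    sparsity_delta = -rho / rho_hat + (1 - rho) / (1 - rho_hat)
    delta2 = (W2.T @ delta3 + beta * sparsity_delta[:, None]) * a2 * (1 - a2)

    W1grad = delta2 @ data.T / m + lam * W1
    W2grad = delta3 @ a2.T / m + lam * W2
    b1grad = delta2.sum(axis=1) / m
    b2grad = delta3.sum(axis=1) / m
    return cost, (W1grad, b1grad, W2grad, b2grad)
```

The 1/m factor applied to the gradients also scales the sparsity term inside `delta2`, which is exactly what the derivative of the KL penalty (taken through the averaged `rho_hat`) requires.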

Step 3: Gradient checking (computeNumericalGradient.m)

EPSILON = 1e-4;
numgrad = zeros(size(theta));
for i = 1:size(theta, 1)
    e = zeros(size(theta));   % perturb one component at a time,
    e(i) = EPSILON;           % avoiding a full identity matrix in memory
    numgrad(i) = (J(theta + e) - J(theta - e)) / (2*EPSILON);
end
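The same central-difference check in NumPy (a sketch; `J` stands for any scalar-valued function of a flat parameter vector, as in the MATLAB above):

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Central-difference approximation of the gradient of J at theta."""
    numgrad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)   # perturb one component at a time
        e[i] = eps
        numgrad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return numgrad
```

To check an analytic gradient, compare it entrywise against this approximation; for a correct implementation the two should agree to many decimal places (the exercise suggests a difference on the order of 1e-9).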

Results
