Exercise: Convolution and Pooling (Code Example)

This exercise follows the UFLDL tutorial exercise Convolution and Pooling.

This exercise works with large images and requires implementing two steps: convolutional feature extraction and pooling (subsampling). In the previous exercise, a sparse autoencoder with a linear decoder was trained on small image patches, and the resulting weight matrix W and bias vector b, together with the preprocessing ZCA whitening matrix ZCAWhite and mean vector meanPatch, were saved to STL10Features.mat. This exercise convolves the features stored in STL10Features.mat with the large images to produce the convolved feature maps.
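
The driver script around cnnConvolve is not reproduced in this post. The sketch below shows roughly how the pieces fit together; the file and variable names (STL10Features.mat storing optTheta, stlTrainSubset.mat storing trainImages) follow the UFLDL exercise write-up and are assumptions here, and the sizes (64x64x3 images, 8x8 patches, 400 features) match the comments in the code further down.

% Minimal sketch of the setup around cnnConvolve (file/variable names such as
% optTheta in STL10Features.mat and trainImages in stlTrainSubset.mat are
% assumed from the exercise write-up; adjust them to your own files).
imageDim    = 64;                       % STL-10 subset images are 64x64x3
patchDim    = 8;                        % patch size the features were trained on
hiddenSize  = 400;                      % number of learned features
visibleSize = patchDim * patchDim * 3;  % 8*8*3 = 192

load STL10Features.mat;                 % optTheta, ZCAWhite, meanPatch (assumed)
% Unpack W and b from the flattened autoencoder parameter vector.
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2 * hiddenSize * visibleSize + 1 : 2 * hiddenSize * visibleSize + hiddenSize);

load stlTrainSubset.mat;                % trainImages(r, c, channel, imageNum) (assumed)
convImages = trainImages(:, :, :, 1:8); % a few images keep a test run fast
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);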

The convolution is implemented in cnnConvolve.m. For each image, each feature (hidden unit), and each RGB channel (three nested loops), the corresponding convolution kernel is extracted from W and convolved with the large image. Note that the operation computed here is not the convolution of mathematics: the kernel is slid over the image and, at each position, the overlapping entries are simply multiplied element-wise and summed (a cross-correlation). Because MATLAB's conv2 implements the mathematical convolution, which flips its kernel, the feature matrix is rotated by 180 degrees before calling conv2 so that the two operations coincide. The process is illustrated in the figure below:

[Figure: the feature kernel slides over the large image; at each position the overlapping entries are multiplied element-wise and summed]

The per-channel convolutions are accumulated; adding the feature's bias term and applying the sigmoid then yields the convolved feature map of one feature for one image. Once the three loops finish, we have the convolved feature maps for every feature of every image.
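
To make the kernel flip concrete, here is a short self-contained check (toy sizes, not part of the exercise) verifying that the element-wise multiply-and-sum described above equals conv2 applied with a 180-degree rotated kernel, which is exactly why cnnConvolve.m calls rot90 on the feature before conv2.

% Cross-correlation by hand vs. conv2 with a flipped kernel (toy example).
im     = rand(5, 5);                        % toy "image"
kernel = rand(3, 3);                        % toy "feature"
xc = zeros(3, 3);                           % valid cross-correlation, computed explicitly
for r = 1:3
  for c = 1:3
    patch    = im(r:r+2, c:c+2);            % window currently under the kernel
    xc(r, c) = sum(sum(patch .* kernel));   % multiply element-wise and sum
  end
end
cc = conv2(im, rot90(kernel, 2), 'valid');  % conv2 flips its kernel; pre-flipping undoes it
disp(max(abs(xc(:) - cc(:))))               % should print a value near 0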

cnnConvolve.m

function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  patchDim - patch (feature) dimension
%  numFeatures - number of features
%  images - large images to convolve with, matrix in the form
%           images(r, c, channel, image number)
%  W, b - W, b for features from the sparse autoencoder
%  ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
%                        preprocessing
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(featureNum, imageNum, imageRow, imageCol)

numImages = size(images, 4);
imageDim = size(images, 1);
imageChannels = size(images, 3);

% Instructions:
%   Convolve every feature with every large image here to produce the
%   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1)
%   matrix convolvedFeatures, such that
%   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times:
%   Convolving with 100 images should take less than 3 minutes
%   Convolving with 5000 images should take around an hour
%   (So to save time when testing, you should convolve with fewer images, as
%   described earlier)

% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps.
%
% For a raw image patch x, the learned feature activation is
%   sigmoid(W * ZCAWhite * (x - meanPatch) + b)
%     = sigmoid((W * ZCAWhite) * x + (b - W * ZCAWhite * meanPatch)),
% so the whitening is folded into WT and the mean subtraction into bias.

% patchDim    8
% numFeatures 400 (= hiddenSize)
% images      images(r, c, channel, image number)
% W           hiddenSize x visibleSize
% b           hiddenSize x 1
% ZCAWhite    visibleSize x visibleSize
% meanPatch   visibleSize x 1
WT = W * ZCAWhite;                 % effective filters acting on unwhitened patches
bias = b - WT * meanPatch;         % effective bias after folding in the mean subtraction
patchSize = patchDim * patchDim;   % pixels per channel in one patch

% --------------------------------------------------------

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:imageChannels

      % Obtain the feature (patchDim x patchDim) needed during the convolution
      % ---- YOUR CODE HERE ----
      feature = reshape(WT(featureNum, (channel-1)*patchSize+1 : channel*patchSize), patchDim, patchDim);
      % ------------------------

      % Flip the feature matrix, because conv2 implements the mathematical
      % convolution (which flips its kernel)
      feature = rot90(squeeze(feature), 2);

      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum));

      % Convolve "feature" with "im", adding the result to convolvedImage
      % be sure to do a 'valid' convolution
      % ---- YOUR CODE HERE ----
      convolvedImage = convolvedImage + conv2(im, feature, 'valid');
      % ------------------------

    end

    % Add the effective bias (which already corrects for the mean subtraction),
    % then apply the sigmoid function to get the hidden activation
    % ---- YOUR CODE HERE ----
    convolvedImage = sigmoid(convolvedImage + bias(featureNum));
    % ------------------------

    % The convolved feature is the sum of the convolved values over all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end

end

function sigm = sigmoid(x)

sigm = 1 ./ (1 + exp(-x));
end

Pooling uses mean (average) pooling. Each convolved feature map is divided into disjoint pooling regions, and the mean over each region is taken as one pooled feature. A softmax classifier is then trained and tested on the pooled features.

cnnPool.m

function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
numFeatures  = size(convolvedFeatures, 1);
numImages    = size(convolvedFeatures, 2);
convolvedDim = size(convolvedFeatures, 3);
pooledFeatures = zeros(numFeatures, numImages, floor(convolvedDim / poolDim), floor(convolvedDim / poolDim));
for imageNum = 1:numImages
  for featureNum = 1:numFeatures
    temp = conv2(squeeze(convolvedFeatures(featureNum, imageNum, :, :)), ones(poolDim) / poolDim^2, 'valid'); % mean filter over every poolDim x poolDim window
    pooledFeatures(featureNum, imageNum, :, :) = temp(1:poolDim:end, 1:poolDim:end); % keep one value per disjoint region
  end
end
end
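
As a quick sanity check (toy sizes chosen so that the map divides evenly into regions), the conv2-plus-striding trick above matches an explicit average over each disjoint poolDim x poolDim block:

% Verify the conv2-based mean pooling against an explicit block average.
convolvedDim = 12;  poolDim = 4;                      % toy sizes, divisible for simplicity
A = rand(convolvedDim);
temp   = conv2(A, ones(poolDim) / poolDim^2, 'valid');
pooled = temp(1:poolDim:end, 1:poolDim:end);          % conv2 + striding, as in cnnPool.m
expected = zeros(convolvedDim / poolDim);
for i = 1:convolvedDim / poolDim
  for j = 1:convolvedDim / poolDim
    block = A((i-1)*poolDim+1 : i*poolDim, (j-1)*poolDim+1 : j*poolDim);
    expected(i, j) = mean(block(:));                  % straightforward block mean
  end
end
disp(max(abs(pooled(:) - expected(:))))               % should print a value near 0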


 