
UFLDL Tutorial Exercise Solutions (3.1): PCA in 2D

2016-11-21 13:07
Tutorial: http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial

Exercise: http://deeplearning.stanford.edu/wiki/index.php/Exercise:PCA_in_2D

Code

Step 1a: Implement PCA to obtain U

u = zeros(size(x, 1)); % You need to compute this
sigma = x * x' / size(x,2);  % covariance matrix; x is n-by-m, with one (zero-mean) training example per column
[u,s,v] = svd(sigma);  % the columns of u are the eigenvectors of sigma, ordered from the principal vector down;
%the diagonal of s holds the corresponding eigenvalues (also in descending order);
%since sigma is symmetric, v equals u and can be ignored
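The same step can be sketched in NumPy; the toy data below is made up for illustration, and `x`, `sigma`, `u`, `s` follow the exercise's variable names:

```python
import numpy as np

# Toy zero-mean data: 2 features (rows) x 5 examples (columns), made up for illustration
x = np.array([[1.0, 2.0, -1.0, 0.0, -2.0],
              [0.5, -1.0, 1.5, -0.5, -0.5]])

sigma = x @ x.T / x.shape[1]     # 2x2 covariance matrix (data assumed zero-mean)
u, s, vt = np.linalg.svd(sigma)  # columns of u: eigenvectors; s: eigenvalues, descending
```

For a symmetric positive semidefinite `sigma`, the SVD coincides with the eigendecomposition, so `vt` carries no extra information.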


Step 1b: Compute xRot, the projection on to the eigenbasis

xRot = zeros(size(x)); % You need to compute this
xRot = u' * x;  % rotate the data into the eigenbasis
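A NumPy sketch of the rotation, with made-up toy data; the key property to check is that the rotated components are decorrelated:

```python
import numpy as np

# Toy zero-mean data, made up for illustration
x = np.array([[1.0, 2.0, -1.0, 0.0, -2.0],
              [0.5, -1.0, 1.5, -0.5, -0.5]])
sigma = x @ x.T / x.shape[1]
u, s, _ = np.linalg.svd(sigma)

xRot = u.T @ x  # project each column onto the eigenbasis
# covariance of the rotated data: diagonal, with the eigenvalues on the diagonal
covRot = xRot @ xRot.T / xRot.shape[1]
```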

Step 2: Reduce the number of dimensions from 2 to 1. 

k = 1; % Use k = 1 and project the data onto the first eigenbasis
xHat = zeros(size(x)); % You need to compute this
xTilde = zeros(size(x));
xTilde(1:k,:) = u(:,1:k)' * x;   % reduced-dimension data; k is the number of eigenvectors to keep
xHat = u * xTilde;   % map back to the original space to recover the approximation
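In NumPy the same reduction can be sketched as follows (toy data made up for illustration). A useful sanity check: the mean squared reconstruction error equals the sum of the discarded eigenvalues.

```python
import numpy as np

# Toy zero-mean data, made up for illustration
x = np.array([[1.0, 2.0, -1.0, 0.0, -2.0],
              [0.5, -1.0, 1.5, -0.5, -0.5]])
sigma = x @ x.T / x.shape[1]
u, s, _ = np.linalg.svd(sigma)

k = 1                      # number of eigenvectors to keep
xTilde = u[:, :k].T @ x    # k x m reduced representation
xHat = u[:, :k] @ xTilde   # rank-k approximation back in the original 2-D space
```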

Step 3: PCA Whitening

xPCAWhite = zeros(size(x)); % You need to compute this
xPCAWhite = diag(1./sqrt(diag(s) + epsilon)) * u' * x;  % rescale each rotated component to unit variance; epsilon is a small regularization constant
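A NumPy sketch of PCA whitening (toy data and the `epsilon` value are assumptions for illustration); after whitening, the covariance of the data should be approximately the identity:

```python
import numpy as np

# Toy zero-mean data, made up for illustration
x = np.array([[1.0, 2.0, -1.0, 0.0, -2.0],
              [0.5, -1.0, 1.5, -0.5, -0.5]])
sigma = x @ x.T / x.shape[1]
u, s, _ = np.linalg.svd(sigma)

epsilon = 1e-5  # assumed small regularizer, keeps the division stable for tiny eigenvalues
xPCAWhite = np.diag(1.0 / np.sqrt(s + epsilon)) @ u.T @ x
covWhite = xPCAWhite @ xPCAWhite.T / xPCAWhite.shape[1]
```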

Step 3: ZCA Whitening

xZCAWhite = zeros(size(x)); % You need to compute this
xZCAWhite = u * diag(1./sqrt(diag(s) + epsilon)) * u' * x;  % PCA-whiten, then rotate back into the original basis
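ZCA whitening is PCA whitening followed by a rotation back with `u`; a NumPy sketch (toy data and `epsilon` assumed as above), again checking that the result has identity covariance:

```python
import numpy as np

# Toy zero-mean data, made up for illustration
x = np.array([[1.0, 2.0, -1.0, 0.0, -2.0],
              [0.5, -1.0, 1.5, -0.5, -0.5]])
sigma = x @ x.T / x.shape[1]
u, s, _ = np.linalg.svd(sigma)
epsilon = 1e-5  # assumed small regularizer

xPCAWhite = np.diag(1.0 / np.sqrt(s + epsilon)) @ u.T @ x
xZCAWhite = u @ xPCAWhite  # rotate the whitened data back into the original basis
covZCA = xZCAWhite @ xZCAWhite.T / xZCAWhite.shape[1]
```

Among all whitening transforms, ZCA keeps the result as close as possible to the original data, which is why the extra rotation by `u` is applied.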