
What is an intuitive explanation of the relation between PCA and SVD?


3 Answers




Mike Tamir, CSO - GalvanizeU accredited Masters program creating top tier Data Scientists...

There is a very direct mathematical relation between SVD (Singular Value Decomposition) and PCA (Principal Component Analysis) - see below. For this reason, the two algorithms deliver essentially the same result: a set of "new axes" constructed from linear combinations of the original feature-space axes in which the dataset is plotted. These "new axes" are useful because they systematically break down the variance in the data points (how widely the data points are distributed) based on each direction's contribution to the variance in the data.

The result of this process is a ranked list of "directions" in the feature space, ordered from most variance to least. The directions along which there is greatest variance are referred to as the "principal components" (of variation in the data). The common wisdom is that by focusing exclusively on how the data is distributed along these dimensions, one can capture most of the information represented in the original feature space without having to deal with such a high number of dimensions, which can be of great benefit in statistical modeling and Data Science applications (see: When and where do we use SVD?).
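For concreteness, here is a minimal sketch of that variance ranking, assuming NumPy (which the answer itself does not mention); the synthetic data and all variable names are illustrative only:

    import numpy as np

    rng = np.random.default_rng(0)
    # 500 samples of correlated 2-D data: most variance lies along one direction
    X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])

    Xc = X - X.mean(axis=0)                   # mean-center each column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    explained = s**2 / np.sum(s**2)           # fraction of variance per direction
    print("directions (rows of Vt):", Vt)
    print("variance explained:", explained)   # first entry dominates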

What is the Formal Relation between SVD and PCA?
Let the matrix M be our data matrix, where the m rows represent our data points and the n columns represent the features of each data point. The data may already have been mean-centered and normalized by the standard deviations column-wise (most off-the-shelf implementations provide these options).
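A minimal sketch of that optional column-wise preprocessing, assuming NumPy (the helper name standardize is mine, for illustration):

    import numpy as np

    def standardize(M):
        """Mean-center each column, then divide by its standard deviation."""
        Mc = M - M.mean(axis=0)
        return Mc / Mc.std(axis=0, ddof=1)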

SVD: Because in most cases a data matrix M will not have exactly the same number of data points as features (i.e. m ≠ n), the matrix M will not be square, and a diagonalization of the form M = UΣU^T, where U is an m×m orthogonal matrix of the eigenvectors of M and Σ is the diagonal m×m matrix of the eigenvalues of M, will not exist. However, in cases where n ≠ m, an analogue of this decomposition is possible, and M can be factored as M = UΣV^T, where

U is an m×m orthogonal matrix of the "left singular vectors" of M.

V is an n×n orthogonal matrix of the "right singular vectors" of M.

And Σ is an m×n matrix whose non-zero entries Σ_{i,i} are referred to as the "singular values" of M.

Note: u, v, and σ form a left singular vector, right singular vector, and singular value triple for a given matrix M if they satisfy the following equations:

Mv = σu, and

M^T u = σv
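These defining equations are easy to check numerically. A small sketch, assuming NumPy's np.linalg.svd (not part of the original answer):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.normal(size=(5, 3))               # m = 5 data points, n = 3 features

    U, s, Vt = np.linalg.svd(M, full_matrices=False)

    for i in range(len(s)):                   # check each (u, v, sigma) triple
        u, v, sigma = U[:, i], Vt[i, :], s[i]
        assert np.allclose(M @ v, sigma * u)      # M v   = sigma u
        assert np.allclose(M.T @ u, sigma * v)    # M^T u = sigma v
    print("all singular triples satisfy the defining equations")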

PCA: PCA sidesteps the problem of M not being diagonalizable by working directly with the n×n "covariance matrix" M^T M. Because M^T M is symmetric, it is guaranteed to be diagonalizable. So PCA works by finding the eigenvectors of the covariance matrix and ranking them by their respective eigenvalues. The eigenvectors with the greatest eigenvalues are the Principal Components of the data matrix.
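A minimal sketch of this covariance-eigenvector route, assuming NumPy (np.linalg.eigh is the standard routine for symmetric matrices; the data is synthetic):

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.normal(size=(100, 4))
    Mc = M - M.mean(axis=0)                   # mean-center the data

    C = Mc.T @ Mc                             # n x n symmetric "covariance" matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigh handles symmetric matrices

    order = np.argsort(eigvals)[::-1]         # rank by eigenvalue, largest first
    principal_components = eigvecs[:, order]  # columns are the principal components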

Now, a little bit of matrix algebra shows that the Principal Components obtained by diagonalizing the covariance matrix M^T M are the same right singular vectors found through SVD (i.e. the columns of the matrix V):

From SVD we have M = UΣV^T, so...

M^T M = (UΣV^T)^T (UΣV^T)

M^T M = (VΣ^T U^T)(UΣV^T)

but since U is orthogonal, U^T U = I,

so

M^T M = VΣ²V^T

where Σ² = Σ^TΣ is an n×n diagonal matrix whose diagonal elements are the squares Σ²_{i,i} of the entries of Σ. So the matrix of eigenvectors V in PCA is the same as the matrix of singular vectors from SVD, and the eigenvalues generated in PCA are just the squares of the singular values from SVD.
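This equivalence can be verified numerically. A small sketch, again assuming NumPy; note that eigenvectors and singular vectors are only determined up to sign, hence the absolute-value comparison:

    import numpy as np

    rng = np.random.default_rng(3)
    M = rng.normal(size=(50, 4))

    _, s, Vt = np.linalg.svd(M, full_matrices=False)    # route 1: SVD of M

    eigvals, eigvecs = np.linalg.eigh(M.T @ M)          # route 2: eigs of M^T M
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    assert np.allclose(eigvals, s**2)                   # eigenvalues = sigma^2
    assert np.allclose(np.abs(eigvecs), np.abs(Vt.T))   # same V, up to sign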

So is it ever better to use SVD over PCA?
Yes. While formally both approaches can be used to calculate the same principal components and their corresponding eigen/singular values, the extra step of calculating the covariance matrix M^T M can introduce numerical rounding errors when calculating the eigenvalues/eigenvectors.
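A classic way to see this loss of precision (a sketch assuming NumPy; the specific matrix is an illustrative choice, not from the answer): with entries near the square root of machine epsilon, forming M^T M rounds the small singular value away, while SVD on M keeps it:

    import numpy as np

    eps = 1e-8                  # near sqrt(machine epsilon) for float64
    M = np.array([[1.0, 1.0],
                  [eps, 0.0],
                  [0.0, eps]])

    s = np.linalg.svd(M, compute_uv=False)
    print("squared singular values:", s**2)             # keeps the tiny value
    print("eigenvalues of M^T M:", np.linalg.eigvalsh(M.T @ M))
    # Forming M^T M rounds 1 + eps**2 to 1, so the small eigenvalue is lost.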

Written 1 Dec, 2014

David Beniaguev

I would like to refine two points that I think are important:

I'll be assuming your data matrix is an m×d matrix organized such that rows are data samples (m samples) and columns are features (d features).

The first point is that SVD preforms low rank matrix approximation.
Your input to SVD is a number k (that is smaller than m or d), and the SVD procedure will return a set of k vectors of d dimensions (can be organized in a k×d matrix), and a set of k coefficients for each data sample (there are m data samples, so it can be organized in a m×k matrix), such that for each sample, the linear combination of it's k coefficients multiplied by the k vectors best reconstructs that data sample (in the euclidean distance sense). and this is true for all data samples.
So in a sense, the SVD procedure finds the optimum k vectors that together span a subspace in which most of the data samples lie in (up to a small reconstruction error).
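A minimal sketch of this rank-k reconstruction, assuming NumPy (the helper name best_rank_k is mine, for illustration):

    import numpy as np

    def best_rank_k(M, k):
        """Best rank-k approximation of M in the least-squares sense."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        coeffs = U[:, :k] * s[:k]     # m x k: k coefficients per sample
        vectors = Vt[:k, :]           # k x d: k basis vectors of d dimensions
        return coeffs, vectors        # coeffs @ vectors approximates M

    rng = np.random.default_rng(4)
    M = rng.normal(size=(100, 10))
    coeffs, vectors = best_rank_k(M, k=3)
    print("reconstruction error:", np.linalg.norm(M - coeffs @ vectors))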

PCA, on the other hand, is:
1) subtract the mean sample from each row of the data matrix.
2) perform SVD on the resulting matrix (a sketch of both steps follows below).
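As a sketch, assuming NumPy (the function name pca is mine), those two steps are just:

    import numpy as np

    def pca(M, k):
        """Subtract the mean sample, then run (truncated) SVD."""
        mean = M.mean(axis=0)                     # step 1: the mean sample
        U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)
        return mean, Vt[:k, :]                    # top-k principal directions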

So, the second point is that PCA gives you as output the subspace that spans the deviations from the mean data sample, while SVD provides you with a subspace that spans the data samples themselves (or, you can view this as a subspace that spans the deviations from zero).

Note that these two subspaces are usually NOT the same, and will be the same only if the mean data sample is zero.

In order to understand a little better why they are not the same, let's think of a data set where all feature values for all data samples are in the range 999-1001, and each feature's mean is 1000.

From the SVD point of view, the main way in which these samples deviate from zero is along the vector (1,1,1,...,1).
From the PCA point of view, on the other hand, the main way in which these data samples deviate from the mean data sample depends on the precise distribution of the data around the mean data sample...

In short, we can think of SVD as "something that compactly summarizes the main ways in which my data is deviating from zero" and PCA as "something that compactly summarizes the main ways in which my data is deviating from the mean data sample".
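A small numerical sketch of this contrast, assuming NumPy (the data here is synthetic, chosen to match the 999-1001 example above; directions are printed up to sign):

    import numpy as np

    rng = np.random.default_rng(5)
    X = 1000.0 + rng.uniform(-1.0, 1.0, size=(200, 5))   # values in [999, 1001]

    _, _, Vt_raw = np.linalg.svd(X, full_matrices=False)
    print("top raw-SVD direction:", np.round(Vt_raw[0], 3))
    # ~ (1,1,1,1,1)/sqrt(5): the dominant deviation from zero is the offset

    _, _, Vt_pca = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    print("top principal component:", np.round(Vt_pca[0], 3))
    # depends on the spread around the mean, not on the offset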

Written 11d ago





Tigran Ishkhanov

PCA is a statistical technique in which SVD is used as a low-level linear algebra algorithm. One can apply SVD to any matrix C. In PCA, this matrix C arises from the data and has a statistical meaning: the element c_ij is the covariance between the i-th and j-th coordinates of your dataset after mean-normalization.
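A minimal sketch of that construction, assuming NumPy (the 1/(m-1) normalization is a common convention the answer does not specify):

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.normal(size=(100, 3))

    Xc = X - X.mean(axis=0)                  # mean-normalization
    C = (Xc.T @ Xc) / (len(X) - 1)           # C[i, j]: covariance of coords i, j
    U, s, Vt = np.linalg.svd(C)              # SVD applied to the matrix C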

Written 30 Sep, 2014
