scikit-learn Tutorials(4)
2016-02-22 22:45
Unsupervised learning: seeking representations of the data
Clustering: grouping observations together
The problem solved in clustering: Given the iris dataset, if we knew that there were 3 types of iris but did not have access to a taxonomist to label them, we could try a clustering task: split the observations into well-separated groups called clusters.
K-means clustering
Note that there exist a lot of different clustering criteria and associated algorithms. The simplest clustering algorithm is K-means.
>>> from sklearn import cluster, datasets
>>> iris = datasets.load_iris()
>>> X_iris = iris.data
>>> y_iris = iris.target
>>> k_means = cluster.KMeans(n_clusters=3)
>>> k_means.fit(X_iris)
KMeans(copy_x=True, init='k-means++', ...
>>> print(k_means.labels_[::10])
[1 1 1 1 1 0 0 0 0 0 2 2 2 2 2]
>>> print(y_iris[::10])
[0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
Warning
There is absolutely no guarantee of recovering a ground truth. First, choosing the right number of clusters is hard. Second, the algorithm is sensitive to initialization, and can fall into local minima, although scikit-learn employs several tricks to mitigate
this issue.
[Figures: bad initialization | 8 clusters | ground truth]
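A minimal sketch of the initialization sensitivity mentioned in the warning, assuming only the iris data loaded above: a single run from a random starting point (n_init=1, init='random') can land in a worse local minimum than the default k-means++ seeding with several restarts.

# Sketch: compare one random initialization against the default k-means++
# with multiple restarts (assumes the iris data as in the example above).
from sklearn import cluster, datasets

X_iris = datasets.load_iris().data

# One run from a random starting point: may end up in a poor local minimum.
km_single = cluster.KMeans(n_clusters=3, init='random', n_init=1, random_state=0)
km_single.fit(X_iris)

# Default: k-means++ seeding, the best of n_init runs is kept.
km_default = cluster.KMeans(n_clusters=3, n_init=10)
km_default.fit(X_iris)

# Lower inertia (within-cluster sum of squares) means a better local minimum.
print(km_single.inertia_, km_default.inertia_)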
Application example: vector quantization
Clustering in general, and KMeans in particular, can be seen as a way of choosing a small number of exemplars to compress the information. The problem is sometimes known as vector quantization. For instance, this can be used to posterize an image:
>>> import numpy as np
>>> import scipy as sp
>>> try:
...    face = sp.face(gray=True)
... except AttributeError:
...    from scipy import misc
...    face = misc.face(gray=True)              # face.shape = (768, 1024)
>>> X = face.reshape((-1, 1))                   # We need an (n_sample, n_feature) array: X.shape = (786432, 1)
>>> k_means = cluster.KMeans(n_clusters=5, n_init=1)   # n_init: number of runs with different centroid seeds; the best result is kept
>>> k_means.fit(X)
KMeans(copy_x=True, init='k-means++', ...
>>> values = k_means.cluster_centers_.squeeze() # cluster center values; shape (5, 1) squeezed to (5,)
>>> labels = k_means.labels_
>>> face_compressed = np.choose(labels, values) # replace each pixel by the value of its cluster center
>>> face_compressed.shape = face.shape          # reshape back to the image shape
[Figures: raw image | K-means quantization | equal bins | image histogram]
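As a usage sketch (assuming matplotlib is available and face_compressed was computed above), the quantized image can be displayed next to the original:

# Sketch: show the original and the 5-level quantized image side by side
# (assumes `face` and `face_compressed` from the example above).
import matplotlib.pyplot as plt

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
ax0.imshow(face, cmap=plt.cm.gray)
ax0.set_title('Raw image')
ax1.imshow(face_compressed, cmap=plt.cm.gray)
ax1.set_title('K-means quantization (5 values)')
plt.show()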
Hierarchical agglomerative clustering: Ward
A hierarchical clustering method is a type of cluster analysis that aims to build a hierarchy of clusters. In general, the various approaches of this technique are either:
Agglomerative - bottom-up approaches: each observation starts in its own cluster, and clusters are iteratively merged in such a way as to minimize a linkage criterion. This approach is particularly interesting when the clusters of interest are made of only a few observations. When the number of clusters is large, it is much more computationally efficient than k-means (a minimal sketch follows this list).
Divisive - top-down approaches: all observations start in one cluster, which is iteratively split as one moves down the hierarchy. For estimating large numbers of clusters, this approach is both slow (due to all
observations starting as one cluster, which it splits recursively) and statistically ill-posed.
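A minimal sketch of the agglomerative (Ward) approach on the iris data loaded earlier; the class and parameters are the standard scikit-learn AgglomerativeClustering API.

# Sketch: bottom-up (Ward) agglomerative clustering on the iris data
# (assumes X_iris and y_iris from the K-means example above).
from sklearn.cluster import AgglomerativeClustering

ward = AgglomerativeClustering(n_clusters=3, linkage='ward')
ward.fit(X_iris)

print(ward.labels_[::10])   # cluster label of every 10th observation
print(y_iris[::10])         # ground-truth species, for comparison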
Connectivity-constrained clustering
With agglomerative clustering, it is possible to specify which samples can be clustered together by giving a connectivity graph. Graphs in the scikit are represented by their adjacency matrix. Often, a sparse matrix is used. This can be useful, for instance, to retrieve connected regions (sometimes also referred to as connected components) when clustering an image:
import matplotlib.pyplot as plt
import scipy as sp

from sklearn.feature_extraction.image import grid_to_graph
from sklearn.cluster import AgglomerativeClustering
from sklearn.utils.testing import SkipTest
from sklearn.utils.fixes import sp_version

if sp_version < (0, 12):
    raise SkipTest("Skipping because SciPy version earlier than 0.12.0 and "
                   "thus does not include the scipy.misc.face() image.")

###############################################################################
# Generate data
try:
    face = sp.face(gray=True)
except AttributeError:
    # Newer versions of scipy have face in misc
    from scipy import misc
    face = misc.face(gray=True)

# Resize it to 10% of the original size to speed up the processing
face = sp.misc.imresize(face, 0.10) / 255.
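The snippet above only prepares the image. A sketch of the clustering step that would follow, using the imports already made; the number of clusters here is an assumption chosen for illustration, not a prescribed value.

# Sketch: Ward clustering of the resized image, constrained by the pixel grid
# (assumes `face` from the snippet above).
import numpy as np

X = np.reshape(face, (-1, 1))                 # one sample per pixel, one feature (gray level)
connectivity = grid_to_graph(*face.shape)     # adjacency graph linking neighbouring pixels

n_clusters = 15                               # assumed value, for illustration only
ward = AgglomerativeClustering(n_clusters=n_clusters,
                               linkage='ward',
                               connectivity=connectivity)
ward.fit(X)
label = np.reshape(ward.labels_, face.shape)  # one region label per pixel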
Feature agglomeration
Feature agglomeration: like PCA, the goal is to reduce the number of features.
We have seen that sparsity could be used to mitigate the curse of dimensionality, i.e. an insufficient amount of observations compared to the number of features. Another approach is to merge together similar features: feature agglomeration. This approach can be implemented by clustering in the feature direction, in other words, clustering the transposed data.
>>> digits = datasets.load_digits()
>>> images = digits.images                           # images.shape = (1797, 8, 8)
>>> X = np.reshape(images, (len(images), -1))        # len(images) = 1797, X.shape = (1797, 64)
>>> connectivity = grid_to_graph(*images[0].shape)   # adjacency graph of the 8x8 pixel grid
>>> agglo = cluster.FeatureAgglomeration(connectivity=connectivity, n_clusters=32)
>>> agglo.fit(X)
FeatureAgglomeration(affinity='euclidean', compute_full_tree='auto',...
>>> X_reduced = agglo.transform(X)                   # like Z = X * U_reduce in PCA
>>> X_approx = agglo.inverse_transform(X_reduced)    # like X = Z * U_reduce.T in PCA
>>> images_approx = np.reshape(X_approx, images.shape)
transform and inverse_transform methods (analogous to the conversion between Z and X in PCA): Some estimators expose a transform method, for instance to reduce the dimensionality of the dataset.
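As a quick usage sketch (assuming X, X_reduced and X_approx from the digits example above), the round trip through transform and inverse_transform looks like this:

# Sketch: the transform halves the number of features, and inverse_transform
# maps back to the 64-dimensional pixel space with some reconstruction loss.
print(X.shape, X_reduced.shape, X_approx.shape)   # (1797, 64) (1797, 32) (1797, 64)
print(np.mean((X - X_approx) ** 2))               # mean squared reconstruction error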
Decompositions: from a signal to components and loadings
Components and loadings: If X is our multivariate data, then the problem that we are trying to solve is to rewrite it on a different observational basis: we want to learn loadings L and a set of components C such that X = L C. Different criteria exist to choose the components.
Principal component analysis: PCA
Principal component analysis (PCA) selects the successive components that explain the maximum variance in the signal.
The point cloud spanned by the observations above is very flat in one direction: one of the three univariate features can almost be exactly computed using the other two. PCA finds the directions in which the data is not flat.
When used to transform data, PCA can reduce the dimensionality of the data by projecting on a principal subspace.
>>> # Create a signal with only 2 useful dimensions
>>> import numpy as np
>>> x1 = np.random.normal(size=100)
>>> x2 = np.random.normal(size=100)
>>> x3 = x1 + x2
>>> X = np.c_[x1, x2, x3]
>>> from sklearn import decomposition   # PCA lives in sklearn.decomposition
>>> pca = decomposition.PCA()
>>> pca.fit(X)
PCA(copy=True, n_components=None, whiten=False)
>>> print(pca.explained_variance_)      # heuristic: keep enough components to explain most (e.g. > 0.99) of the variance
[  2.18565811e+00   1.19346747e+00   8.43026679e-32]
>>> # As we can see, only the 2 first components are useful
>>> pca.n_components = 2                # number of components to keep
>>> X_reduced = pca.fit_transform(X)
>>> X_reduced.shape
(100, 2)
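To connect this with the components-and-loadings picture above, a short sketch (assuming the pca, X and X_reduced objects just created): the reduced data play the role of the loadings L, pca.components_ plays the role of C, and inverse_transform rebuilds an approximation of X.

# Sketch: X is recovered as loadings * components + mean, which is what
# inverse_transform computes (assumes pca, X and X_reduced from above).
X_back = pca.inverse_transform(X_reduced)
print(np.allclose(X_back, np.dot(X_reduced, pca.components_) + pca.mean_))  # True
print(np.allclose(X_back, X))  # True here: the third feature is an exact combination of the first two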
Independent Component Analysis: ICA
Independent component analysis (ICA) selects components so that the distribution of their loadings carries a maximum amount of independent information. It is able to recover non-Gaussian independent signals:
>>> # Generate sample data
>>> time = np.linspace(0, 10, 2000)
>>> s1 = np.sin(2 * time)             # Signal 1: sinusoidal signal
>>> s2 = np.sign(np.sin(3 * time))    # Signal 2: square signal
>>> S = np.c_[s1, s2]
>>> S += 0.2 * np.random.normal(size=S.shape)   # Add noise
>>> S /= S.std(axis=0)                # Standardize data
>>> # Mix data
>>> A = np.array([[1, 1], [0.5, 2]])  # Mixing matrix
>>> X = np.dot(S, A.T)                # Generate observations (np.dot: matrix multiplication)
>>> # Compute ICA
>>> ica = decomposition.FastICA()
>>> S_ = ica.fit_transform(X)         # Get the estimated sources
>>> A_ = ica.mixing_.T
>>> np.allclose(X, np.dot(S_, A_) + ica.mean_)
True
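A follow-up sketch (assuming S and S_ from the example above): ICA recovers the sources only up to permutation, sign and scale, which a simple correlation check makes visible.

# Sketch: each estimated source should be strongly correlated with exactly one
# true source, possibly with a flipped sign (assumes S and S_ from above).
corr = np.corrcoef(S.T, S_.T)[:2, 2:]   # 2x2 cross block: true sources vs estimated sources
print(np.round(np.abs(corr), 2))        # one value near 1 in every row and column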