SVM Learning
2016-07-01 10:36
In machine learning, support vector machines (SVMs, also support vector networks[1]) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
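The idea above can be sketched in a few lines with scikit-learn's `SVC` (assuming scikit-learn is installed; the data points are made up for illustration):

```python
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable clusters in 2-D.
X = np.array([[0.0, 0.0], [0.2, 0.3], [1.0, 1.0], [1.2, 0.8]])
y = np.array([0, 0, 1, 1])

# A linear SVM finds the maximum-margin hyperplane between the classes.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# New points are classified by which side of the gap they fall on.
print(clf.predict([[0.1, 0.1], [1.1, 0.9]]))
```

Here `C` controls the trade-off between a wide margin and training errors; with cleanly separable data like this toy set, its exact value matters little.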
In addition to performing linear classification, SVMs can efficiently perform non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
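A minimal sketch of the kernel trick, again using scikit-learn: the XOR pattern below cannot be separated by any line in the input space, but an RBF kernel implicitly maps the points into a higher-dimensional space where they become separable (the `gamma` and `C` values are illustrative choices, not canonical ones):

```python
import numpy as np
from sklearn.svm import SVC

# XOR-like data: no straight line separates the two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# An RBF-kernel SVM separates them without ever computing the
# high-dimensional feature map explicitly.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0)
clf.fit(X, y)
print(clf.predict(X))
```

The kernel function evaluates inner products in the feature space directly from the original inputs, which is what makes the mapping "implicit".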
When data are not labeled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data into groups and then map new data onto these formed groups. The clustering algorithm that extends support vector machines in this way is called support vector clustering[2] and is often used in industrial applications, either when data is not labeled or when only some data is labeled, as a preprocessing step for a classification pass.