
Mahout Series: Core Functionality in Practice

2016-02-14 16:50
The previous post covered Mahout's math module, mahout-math, which contains many commonly used mathematical and statistical utilities; quite a few of them come up in practice, so it pays to understand these basics well. Mahout also exposes many of its tools as command-line programs. All of the commands are listed below (the list changes across versions, and each command has its own parameters); many of them resemble one another, and becoming fluent with every one takes real effort. Still, even a partial look gives a good sense of what Mahout can actually do and which capabilities can be used directly, for reference:

Command: Description
arff.vector: Generate Vectors from an ARFF file or directory
baumwelch: Baum-Welch algorithm for unsupervised HMM training
buildforest: Build the random forest classifier
canopy: Canopy clustering
cat: Print a file or resource as the logistic regression models would see it
cleansvd: Cleanup and verification of SVD output
clusterdump: Dump cluster output to text
clusterpp: Groups Clustering Output In Clusters
cmdump: Dump confusion matrix in HTML or text formats
concatmatrices: Concatenates 2 matrices of same cardinality into a single matrix
cvb: LDA via Collapsed Variation Bayes (0th deriv. approx)
cvb0_local: LDA via Collapsed Variation Bayes, in memory locally
describe: Describe the fields and target variable in a data set
evaluateFactorization: Compute RMSE and MAE of a rating matrix factorization against probes
fkmeans: Fuzzy K-means clustering
hmmpredict: Generate random sequence of observations by given HMM
itemsimilarity: Compute the item-item-similarities for item-based collaborative filtering
kmeans: K-means clustering
lucene.vector: Generate Vectors from a Lucene index
lucene2seq: Generate Text SequenceFiles from a Lucene index
matrixdump: Dump matrix in CSV format
matrixmult: Take the product of two matrices
parallelALS: ALS-WR factorization of a rating matrix
qualcluster: Runs clustering experiments and summarizes results in a CSV
recommendfactorized: Compute recommendations using the factorization of a rating matrix
recommenditembased: Compute recommendations using item-based collaborative filtering
regexconverter: Convert text files on a per line basis based on regular expressions
resplit: Splits a set of SequenceFiles into a number of equal splits
rowid: Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
rowsimilarity: Compute the pairwise similarities of the rows of a matrix
runAdaptiveLogistic: Score new production data using a probably trained and validated AdaptiveLogisticRegression model
runlogistic: Run a logistic regression model against CSV data
seq2encoded: Encoded Sparse Vector generation from Text sequence files
seq2sparse: Sparse Vector generation from Text sequence files
seqdirectory: Generate sequence files (of Text) from a directory
seqdumper: Generic Sequence File dumper
seqmailarchives: Creates SequenceFile from a directory containing gzipped mail archives
seqwiki: Wikipedia xml dump to sequence file
spectralkmeans: Spectral k-means clustering
split: Split Input data into test and train sets
splitDataset: Split a rating dataset into training and probe parts
ssvd: Stochastic SVD
streamingkmeans: Streaming k-means clustering
svd: Lanczos Singular Value Decomposition
testforest: Test the random forest classifier
testnb: Test the Vector-based Bayes classifier
trainAdaptiveLogistic: Train an AdaptiveLogisticRegression model
trainlogistic: Train a logistic regression using stochastic gradient descent
trainnb: Train the Vector-based Bayes classifier
transpose: Take the transpose of a matrix
validateAdaptiveLogistic: Validate an AdaptiveLogisticRegression model against hold-out data set
vecdist: Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
vectordump: Dump vectors from a sequence file to text
viterbi: Viterbi decoding of hidden states from given output states sequence
Of course, the descriptions above are only short summaries, and I have not used every command myself; each one has many usage details of its own. As a small example of how they can also be driven from Java rather than the shell, see the sketch below.
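The bin/mahout script essentially maps each command name above to a driver class and runs its main() method, so the same tools can be launched from Java code as well. Here is a minimal sketch for the kmeans entry; the paths are made-up placeholders, and the flags follow the usual Mahout 0.x k-means options, so verify them with `mahout kmeans --help` for your version:

import org.apache.mahout.clustering.kmeans.KMeansDriver;

public class KMeansFromJava {
    public static void main(String[] args) throws Exception {
        // Flags mirror the common CLI options: -i input vectors, -c initial centroids,
        // -o output, -k number of clusters, -x max iterations, -ow overwrite output,
        // -cl assign points to clusters at the end. Paths below are placeholders.
        String[] kmeansArgs = {
            "-i",  "iris-vectors",       // SequenceFile of VectorWritable (e.g. from seq2sparse)
            "-c",  "initial-centroids",  // directory where the sampled initial centers are written
            "-o",  "kmeans-output",
            "-k",  "3",
            "-x",  "10",
            "-ow",
            "-cl"
        };
        KMeansDriver.main(kmeansArgs);   // same entry point the mahout script invokes
    }
}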

Mahout provides many algorithms for clustering, classification, and recommendation (collaborative filtering), which is a real help for data analysis. The most mature of these in practice is recommendation, which has been applied in many production systems with good results; clustering and classification, by comparison, are used in fewer scenarios and deserve further study.

The previous posts already covered recommendation, from theory to hands-on use; below is an example of a Logistic Regression model.
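Briefly, the model trained below is multinomial logistic regression: with K target categories and a coefficient vector \beta_k for each category, the probability of category k given a feature vector x is

P(y = k \mid x) = \frac{\exp(\beta_k \cdot x)}{\sum_{j=1}^{K} \exp(\beta_j \cdot x)}

Mahout's OnlineLogisticRegression fits the coefficients with stochastic gradient descent, updating them after every sample; that is why the training code below simply calls train() on each CSV line in a loop, with lambda controlling the regularization strength and learningRate the SGD step size.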

1. Data preparation

We use the iris data set, one of the most widely used example data sets in data analysis, so I will not say much about it.

Open R and type iris to see what the data looks like, then export it with the following command:

write.csv(iris,file="D:/work_doc/Doc/iris.csv")

The data looks like this:

"ID","Sepal.Length","Sepal.Width","Petal.Length","Petal.Width","Species"

"1",5.1,3.5,1.4,0.2,"setosa"

"2",4.9,3,1.4,0.2,"setosa"

"3",4.7,3.2,1.3,0.2,"setosa"

"4",4.6,3.1,1.5,0.2,"setosa"

2. Working through it with Java code.

import java.io.File;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.util.List;
import java.util.Locale;

import org.apache.commons.io.FileUtils;
import org.apache.mahout.classifier.sgd.CsvRecordFactory;
import org.apache.mahout.classifier.sgd.LogisticModelParameters;
import org.apache.mahout.classifier.sgd.OnlineLogisticRegression;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.SequentialAccessSparseVector;
import org.apache.mahout.math.Vector;

import com.google.common.base.Charsets;
import com.google.common.collect.Lists;

public class IrisLRTest {

    private static LogisticModelParameters lmp;
    private static PrintWriter output;

    public static void main(String[] args) throws IOException {
        // Initialization
        lmp = new LogisticModelParameters();
        output = new PrintWriter(new OutputStreamWriter(System.out, Charsets.UTF_8), true);
        lmp.setLambda(0.001);             // regularization strength
        lmp.setLearningRate(50);          // SGD learning rate
        lmp.setMaxTargetCategories(3);
        lmp.setNumFeatures(4);
        // The three categories of the Species attribute
        List<String> targetCategories = Lists.newArrayList("setosa", "versicolor", "virginica");
        lmp.setTargetCategories(targetCategories);
        lmp.setTargetVariable("Species"); // the attribute we want to predict
        // Type and name of each predictor attribute
        List<String> typeList = Lists.newArrayList("numeric", "numeric", "numeric", "numeric");
        List<String> predictorList = Lists.newArrayList("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width");
        lmp.setTypeMap(predictorList, typeList);

        // Read the data
        List<String> raw = FileUtils.readLines(new File("D:\\work_doc\\Doc\\iris.csv"));
        String header = raw.get(0);
        List<String> content = raw.subList(1, raw.size());
        CsvRecordFactory csv = lmp.getCsvRecordFactory();
        csv.firstLine(header);

        // Training
        OnlineLogisticRegression lr = lmp.createRegression();
        for (int i = 0; i < 100; i++) {   // number of passes over the data
            for (String line : content) {
                Vector input = new RandomAccessSparseVector(lmp.getNumFeatures());
                int targetValue = csv.processLine(line, input);
                lr.train(targetValue, input);
            }
        }

        // Evaluate the classification result (on the training data)
        double correctRate = 0;
        double sampleCount = content.size();
        for (String line : content) {
            Vector v = new SequentialAccessSparseVector(lmp.getNumFeatures());
            int target = csv.processLine(line, v);
            int score = lr.classifyFull(v).maxValueIndex();
            // System.out.println("Target:" + target + "\tPredicted:" + score);
            if (score == target) {
                correctRate++;
            }
        }
        output.printf(Locale.ENGLISH, "Rate = %.2f%n", correctRate / sampleCount);
    }
}

The comments in the code explain each step, and the overall flow is easy to follow. This pattern is not specific to this model: many other algorithms go through the same process of setting parameters, reading data, training, and evaluating; only the specific training method or algorithm differs.
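One step the example above stops short of is scoring a new observation with the trained model. A minimal sketch, assuming it is appended at the end of main() after training (the sample line and its values are made up for illustration): since CsvRecordFactory encodes each CSV line into a feature vector, a new record should go through the same processLine() call before classification.

// Hypothetical new observation in the same CSV layout as iris.csv
// (ID, Sepal.Length, Sepal.Width, Petal.Length, Petal.Width, Species);
// the Species column is only needed so the line parses, it is not used for prediction.
String newLine = "\"151\",6.0,2.9,4.5,1.5,\"versicolor\"";
Vector features = new RandomAccessSparseVector(lmp.getNumFeatures());
csv.processLine(newLine, features);
Vector scores = lr.classifyFull(features);   // one probability per category
int best = scores.maxValueIndex();           // index into targetCategories
output.println("Predicted species: " + targetCategories.get(best));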

Of course, the code here is based on Mahout; many of the same models can also be built in R, and the basic steps are similar.