The AdaBoost Ensemble Classifier on the Spark Platform
2016-07-11 10:58
First, I found a ready-made AdaBoost package on GitHub and wanted to test whether it could be used directly:
https://github.com/tizfa/sparkboost
Its Java API expects data as JavaRDD<MultilabelPoint>, while what we read from a LibSVM file is an RDD<LabeledPoint> (converted to JavaRDD<LabeledPoint>). So we need to convert between the two data formats, pairing each label with its feature vector.
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.linalg.SparseVector;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.util.MLUtils;
// plus the sparkboost classes: MultilabelPoint, AdaBoostMHLearner,
// BoostClassifier, ClassificationResults

public class ClassifierTask {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("ClassifierTask").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // MLUtils.loadLibSVMFile needs the underlying Scala SparkContext
        SparkContext sc1 = sc.sc();
        String inputFile = "D:\\softs\\spark-1.6.0-bin-hadoop2.6\\data\\mllib\\sample_binary_classification_data.txt";
        JavaRDD<LabeledPoint> points = MLUtils.loadLibSVMFile(sc1, inputFile).toJavaRDD();
        // Convert each LabeledPoint into a MultilabelPoint. zipWithIndex gives
        // every document a unique ID; a constant docID of 0 would make all
        // documents indistinguishable to the learner.
        JavaRDD<MultilabelPoint> rdd = points.zipWithIndex().map(tuple -> {
            LabeledPoint point = tuple._1();
            int docID = tuple._2().intValue();
            int[] labels = {(int) point.label()};
            SparseVector features = (SparseVector) point.features();
            return new MultilabelPoint(docID, features, labels);
        });
        // Randomly split the data: 80% training set, 20% test set.
        double[] weights = {0.8, 0.2};
        JavaRDD<MultilabelPoint>[] data = rdd.randomSplit(weights);
        // Configure the AdaBoost.MH learner.
        AdaBoostMHLearner learner = new AdaBoostMHLearner(sc);
        learner.setNumIterations(100);
        learner.setNumDocumentsPartitions(2);
        learner.setNumFeaturesPartitions(2);
        learner.setNumLabelsPartitions(2);
        // Train on the 80% split, then classify the 20% split.
        BoostClassifier classifier = learner.buildModel(data[0]);
        ClassificationResults results = classifier.classifyWithResults(sc, data[1], 1);
        // Collect the results into a StringBuilder and print them.
        StringBuilder sb = new StringBuilder();
        sb.append("**** Effectiveness\n");
        sb.append(results.getCt().toString()).append("\n");
        sb.append("********\n");
        for (int i = 0; i < results.getNumDocs(); i++) {
            int docID = results.getDocuments()[i];
            int[] labels = results.getLabels()[i];
            int[] goldLabels = results.getGoldLabels()[i];
            sb.append("DocID: ").append(docID)
              .append(", Labels assigned: ").append(Arrays.toString(labels))
              .append(", Labels scores: ").append(Arrays.toString(results.getScores()[i]))
              .append(", Gold labels: ").append(Arrays.toString(goldLabels)).append("\n");
        }
        System.out.print(sb);
    }
}
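To make the label-to-feature pairing concrete, here is a minimal sketch, independent of Spark and sparkboost, of what a single line of the LibSVM input file actually contains: a label followed by sparse `index:value` pairs. The class name `LibSvmLine` and its `parse` method are illustrative helpers I am introducing here, not part of either library; this is just the same association of one label with one sparse feature vector that the LabeledPoint-to-MultilabelPoint conversion above performs.

```java
import java.util.Arrays;

// Illustrative helper (not from Spark or sparkboost): parse one LibSVM line,
// e.g. "1 3:0.5 7:1.0", into a label plus sparse index/value arrays.
public class LibSvmLine {
    final int label;
    final int[] indices;
    final double[] values;

    LibSvmLine(int label, int[] indices, double[] values) {
        this.label = label;
        this.indices = indices;
        this.values = values;
    }

    static LibSvmLine parse(String line) {
        String[] parts = line.trim().split("\\s+");
        int label = (int) Double.parseDouble(parts[0]);
        int[] indices = new int[parts.length - 1];
        double[] values = new double[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            String[] kv = parts[i].split(":");
            // LibSVM feature indices are 1-based; store them 0-based,
            // matching how MLlib's loadLibSVMFile exposes them.
            indices[i - 1] = Integer.parseInt(kv[0]) - 1;
            values[i - 1] = Double.parseDouble(kv[1]);
        }
        return new LibSvmLine(label, indices, values);
    }

    public static void main(String[] args) {
        LibSvmLine p = LibSvmLine.parse("1 3:0.5 7:1.0");
        System.out.println(p.label + " " + Arrays.toString(p.indices)
                + " " + Arrays.toString(p.values));
    }
}
```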