Spark as an Offline Lightweight Big Data Platform: An MLlib Word2Vec Example
2016-11-07 15:13
Word2Vec maps the words in a text to vectors, capturing contextual information while compressing the data. Word2Vec is actually two different methods: Continuous Bag of Words (CBOW) and Skip-gram. CBOW predicts the probability of the current word from its context; Skip-gram does the opposite, predicting the context from the current word. Both methods use a neural network as the classifier. Initially each word is a random N-dimensional vector; after training with CBOW or Skip-gram, the algorithm arrives at an optimized vector for each word.
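To make "words become vectors" concrete: once words are vectors, their semantic closeness can be measured with cosine similarity. The sketch below is not part of Spark, and the 3-dimensional vectors in it are made up for illustration; it only shows the distance measure that word embeddings are typically compared with.

```java
// Toy illustration (not Spark): cosine similarity between word vectors.
public class CosineSimilarity {
    // Cosine of the angle between vectors a and b: dot(a,b) / (|a| * |b|).
    static double cosine(double[] a, double[] b) {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Hypothetical 3-dimensional word vectors after training.
        double[] king  = { 0.8, 0.3,  0.1};
        double[] queen = { 0.7, 0.4,  0.1};
        double[] car   = {-0.2, 0.9, -0.5};
        // Related words should score closer to 1.0 than unrelated ones.
        System.out.printf("king~queen: %.3f%n", cosine(king, queen));
        System.out.printf("king~car:   %.3f%n", cosine(king, car));
    }
}
```

With well-trained vectors, related word pairs yield a cosine near 1.0 while unrelated pairs score much lower.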
The example code is as follows:
package sk.mlib;

import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.ml.feature.Word2VecModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.*;

public class Word2VecDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("Word2VecDemo").getOrCreate();

        // Input data: each row is a bag of words from a sentence or document.
        List<Row> data = Arrays.asList(
            RowFactory.create(Arrays.asList("Hi I heard about Spark".split(" "))),
            RowFactory.create(Arrays.asList("I wish Java could use case classes".split(" "))),
            RowFactory.create(Arrays.asList("Logistic regression models are neat".split(" ")))
        );
        StructType schema = new StructType(new StructField[]{
            new StructField("text", new ArrayType(DataTypes.StringType, true), false, Metadata.empty())
        });
        Dataset<Row> documentDF = spark.createDataFrame(data, schema);

        // Learn a mapping from words to vectors.
        Word2Vec word2Vec = new Word2Vec()
            .setInputCol("text")
            .setOutputCol("result")
            .setVectorSize(3)
            .setMinCount(0);
        Word2VecModel model = word2Vec.fit(documentDF);

        Dataset<Row> result = model.transform(documentDF);
        for (Row r : result.select("text", "result").takeAsList(3)) {
            System.out.println(r);
        }
        spark.stop();
    }
}
/*
Output:
[WrappedArray(Hi, I, heard, about, Spark),[-0.028139343485236168,0.04554025698453188,-0.013317196490243079]]
[WrappedArray(I, wish, Java, could, use, case, classes),[0.06872416580361979,-0.02604914902310286,0.02165239889706884]]
[WrappedArray(Logistic, regression, models, are, neat),[0.023467857390642166,0.027799883112311366,0.0331136979162693]]
*/
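In the output above, each document's "result" column is a single fixed-length vector: Spark's `Word2VecModel.transform` represents a document as the element-wise average of the vectors of the words it contains. The averaging step itself can be sketched in plain Java (the per-word vectors below are made up for illustration):

```java
// Sketch of how a document vector is derived from word vectors:
// the element-wise average. The word vectors here are made up.
import java.util.Arrays;
import java.util.List;

public class DocVectorDemo {
    // Averages a list of equal-length word vectors into one document vector.
    static double[] average(List<double[]> wordVectors, int size) {
        double[] doc = new double[size];
        for (double[] v : wordVectors) {
            for (int i = 0; i < size; i++) {
                doc[i] += v[i];
            }
        }
        for (int i = 0; i < size; i++) {
            doc[i] /= wordVectors.size();
        }
        return doc;
    }

    public static void main(String[] args) {
        // Two hypothetical 3-dimensional word vectors for a 2-word document.
        List<double[]> words = Arrays.asList(
            new double[]{0.2, -0.4, 0.6},
            new double[]{0.4,  0.0, 0.2}
        );
        double[] doc = average(words, 3);
        System.out.printf("[%.2f, %.2f, %.2f]%n", doc[0], doc[1], doc[2]);
        // prints [0.30, -0.20, 0.40]
    }
}
```

This is why the example sets `setVectorSize(3)`: both the per-word vectors and the resulting document vectors are 3-dimensional.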