Spark transform operator: groupByKey
2017-07-18 11:42
```scala
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by liupeng on 2017/6/16.
  */
object T_groupByKey {
  System.setProperty("hadoop.home.dir", "F:\\hadoop-2.6.5")

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("groupByKey_test").setMaster("local")
    val sc = new SparkContext(conf)

    val scoreMap = List("liupeng" -> 150, "liupeng" -> 50, "liusi" -> 120, "xiaoma" -> 100)
    // groupByKey gathers all values that share the same key into one group
    val rdd = sc.parallelize(scoreMap)
    val result = rdd.groupByKey()
    result.foreach(x => println(x._1 + ":" + x._2))
  }
}
```
Output:

```
liusi:CompactBuffer(120)
liupeng:CompactBuffer(150, 50)
xiaoma:CompactBuffer(100)
```
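Each grouped value is an `Iterable`, so a common next step is to aggregate within each group. The sketch below (not from the original post; the object name `T_groupByKeySum` is made up here) sums each person's scores with `mapValues`. Note that for large datasets `reduceByKey` is usually preferred, since `groupByKey` shuffles every value across the network before any aggregation happens.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical follow-up example: aggregate inside each group.
object T_groupByKeySum {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("groupByKey_sum").setMaster("local")
    val sc = new SparkContext(conf)

    val scores = List("liupeng" -> 150, "liupeng" -> 50, "liusi" -> 120, "xiaoma" -> 100)
    val rdd = sc.parallelize(scores)

    // mapValues keeps each key and replaces its CompactBuffer
    // of scores with the sum of that buffer
    val totals = rdd.groupByKey().mapValues(_.sum)
    totals.foreach { case (name, total) => println(name + ":" + total) }

    sc.stop()
  }
}
```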