spark--actions operator--reduce
2017-07-12 18:32
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by liupeng on 2017/6/16.
 */
object A_reduce {

  System.setProperty("hadoop.home.dir", "F:\\hadoop-2.6.5")

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("reduce_test").setMaster("local")
    val sc = new SparkContext(conf)
    // Prepare some data: a collection whose elements we will sum with reduce
    val list = List(1, 2, 3, 4, 5)
    val rdd = sc.parallelize(list)
    // reduce takes a function of two arguments that returns one value,
    // and uses it to aggregate the elements of the dataset
    val sum = rdd.reduce((x, y) => x + y)
    println(sum)
  }
}
Output:
15
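Note that the function passed to `reduce` should be associative and commutative, because Spark reduces each partition in parallel and then combines the partial results. The same semantics can be illustrated with plain Scala collections, without a Spark cluster (a minimal sketch; `ReduceDemo` is a hypothetical object name for illustration):

```scala
object ReduceDemo {
  def main(args: Array[String]): Unit = {
    val list = List(1, 2, 3, 4, 5)

    // Sum: the same function as rdd.reduce((x, y) => x + y) above
    val sum = list.reduce((x, y) => x + y)
    println(sum)  // 15

    // Any associative, commutative function works, e.g. taking the maximum
    val max = list.reduce((x, y) => if (x > y) x else y)
    println(max)  // 5
  }
}
```

A non-associative function (such as subtraction) may give different results on an RDD depending on how the data is partitioned, so it should not be used with `reduce`.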