Java Spark WordCount
2016-03-10 16:05
Spark is yet another much-talked-about distributed computing framework; see http://spark-project.org/ for details.
Installation is not covered here, since introductions are already available online; this post is only meant as a getting-started guide. As with Hadoop, the first thing to study is the word-count example that Spark ships with: JavaWordCount.
The code is as follows:
import scala.Tuple2;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import java.util.Arrays;
import java.util.List;

public class JavaWordCount {
    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            System.err.println("Usage: JavaWordCount <master> <file>");
            System.exit(1);
        }

        JavaSparkContext ctx = new JavaSparkContext(args[0], "JavaWordCount",
                System.getenv("SPARK_HOME"), System.getenv("SPARK_EXAMPLES_JAR"));

        // Read the input file as an RDD of lines.
        JavaRDD<String> lines = ctx.textFile(args[1], 1);

        // Split each line into words.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) {
                return Arrays.asList(s.split(" "));
            }
        });

        // Map each word to a (word, 1) pair.
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<String, Integer> tuple : output) {
            System.out.println(tuple._1 + ": " + tuple._2);
        }
        ctx.stop();
    }
}
Run it: ./run spark/examples/JavaWordCount local input.txt
local: the master URL (not explained here; look it up yourself)
input.txt: the input file, with the following contents:

Hello World Bye World goole

The output is the same as that of the JavaWordCount example run on Hadoop:

goole: 1
World: 2
Hello: 1
Bye: 1
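To see what the flatMap / mapToPair / reduceByKey chain actually computes, here is the same word count as a plain single-machine Java sketch, with no Spark involved (the class name `LocalWordCount` is mine, added for illustration):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LocalWordCount {
    // The same flatMap -> (word, 1) -> reduceByKey pipeline, collapsed
    // into a loop over an in-memory list of lines.
    static Map<String, Integer> count(List<String> lines) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String line : lines) {
            for (String word : line.split(" ")) {           // flatMap: line -> words
                Integer old = counts.get(word);              // emit (word, 1)...
                counts.put(word, old == null ? 1 : old + 1); // ...and reduce by key
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = count(Arrays.asList("Hello World Bye World goole"));
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getKey() + ": " + e.getValue());
        }
    }
}
```

The difference, of course, is that Spark partitions the lines across a cluster and performs the per-key reduction in parallel, while this version keeps everything in one HashMap.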
To build the example with Maven, the corresponding dependency is:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.1.0</version>
</dependency>