Spark in Action, Part 1: KafkaWordCount
2015-08-25 09:55
Set up the Kafka environment

Step 1: Extract Kafka 0.8.1
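Step 1 above only names the action (extract Kafka 0.8.1); a possible command sequence follows. The archive name is an assumption here (it depends on the Scala build you downloaded), so adjust it to match your file:

```shell
# Assumed archive name for the Scala 2.9.2 build of Kafka 0.8.1;
# substitute whatever you actually downloaded.
tar -xzf kafka_2.9.2-0.8.1.tgz
cd kafka_2.9.2-0.8.1
```

All of the bin/... commands in the following steps are run from this directory.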
Step 2: Start ZooKeeper
bin/zookeeper-server-start.sh config/zookeeper.properties
Step 3: Edit config/server.properties and add the following:
host.name=localhost

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=localhost
Step 4: Start the Kafka server
bin/kafka-server-start.sh config/server.properties
Step 5: Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Verify that the topic was created:
bin/kafka-topics.sh --list --zookeeper localhost:2181
Step 6: Start a console producer and send messages
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# Once it is up, type a couple of test lines:
This is a message
This is another message
Step 7: Start a console consumer and receive messages
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
# If everything works, the consumer prints what the producer sent:
This is a message
This is another message
Run KafkaWordCount
The KafkaWordCount source file is located at examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala:

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 * Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>
 *   <zkQuorum> is a list of one or more zookeeper servers that make quorum
 *   <group> is the name of kafka consumer group
 *   <topics> is a list of one or more kafka topics to consume from
 *   <numThreads> is the number of threads the kafka consumer should use
 *
 * Example:
 *    `$ bin/run-example \
 *      org.apache.spark.examples.streaming.KafkaWordCount zoo01,zoo02,zoo03 \
 *      my-consumer-group topic1,topic2 1`
 */
object KafkaWordCount {
  def main(args: Array[String]) {
    if (args.length < 4) {
      System.err.println("Usage: KafkaWordCount <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(zkQuorum, group, topics, numThreads) = args
    val sparkConf = new SparkConf().setAppName("KafkaWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(2))
    ssc.checkpoint("checkpoint")

    val topicpMap = topics.split(",").map((_, numThreads.toInt)).toMap
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicpMap).map(_._2)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1L))
      .reduceByKeyAndWindow(_ + _, _ - _, Minutes(10), Seconds(2), 2)
    wordCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
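Note that reduceByKeyAndWindow is given both a forward function (_ + _) and an inverse (_ - _). This lets Spark slide the 10-minute window every 2 seconds incrementally rather than recounting the whole window: the counts from the batch sliding out are subtracted and the counts from the batch sliding in are added. A toy sketch of that update rule in shell arithmetic:

```shell
# Incremental sliding-window update, as reduceByKeyAndWindow performs per key:
# new_window = old_window + count_in_entering_batch - count_in_leaving_batch
old_window=42   # count of some word over the previous window
entering=5      # occurrences in the batch sliding into the window
leaving=3       # occurrences in the batch sliding out of the window
new_window=$((old_window + entering - leaving))
echo "$new_window"
```

This is why the example needs checkpointing enabled: the inverse-function form of the window operation requires it.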
Step 1: Stop the kafka-console-producer and kafka-console-consumer started above
Step 2: Run KafkaWordCountProducer
bin/run-example org.apache.spark.examples.streaming.KafkaWordCountProducer localhost:9092 test 3 5
About the arguments: localhost:9092 is the address and port of the Kafka broker the producer connects to, test is the topic, 3 is the number of messages to send per second, and 5 is the number of words in each message.
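KafkaWordCountProducer builds each message as a space-separated string of random single digits (hence the "words per message" argument). A rough, Spark-free shell analogue of one such message, assuming 5 words per message:

```shell
# Build one message of 5 random digits, mimicking the kind of
# payload KafkaWordCountProducer sends (approximation only).
msg=$(awk 'BEGIN { srand(); for (i = 1; i <= 5; i++) printf "%s%d", (i > 1 ? " " : ""), int(rand() * 10) }')
echo "$msg"   # e.g. "3 7 0 7 1"
```

Because the vocabulary is just the digits 0-9, the word counts printed later by KafkaWordCount grow quickly and are easy to eyeball.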
Step 3: Start Hadoop
If Hadoop is not started, the job fails with the following error (the example calls ssc.checkpoint("checkpoint"), and that path resolves against HDFS):
Call From moon/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
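The article does not show the start command itself. On a Hadoop 2.x single-node setup it is typically the following, run from $HADOOP_HOME (an assumption; adjust for your install and version):

```shell
# Start HDFS (NameNode + DataNodes); the streaming job's checkpoint
# directory resolves against this filesystem.
sbin/start-dfs.sh
# Optionally confirm the daemons are up:
jps
```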
Step 4: Run KafkaWordCount
bin/run-example org.apache.spark.examples.streaming.KafkaWordCount localhost:2181 test-consumer-group test 1
About the arguments: localhost:2181 is the ZooKeeper listen address; test-consumer-group is the name of the consumer group, which must match the group.id setting in $KAFKA_HOME/config/consumer.properties; test is the topic; and 1 is the number of consumer threads.
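With all four pieces running, the driver prints per-window (word, count) pairs every 2 seconds. The transformation the job applies to each batch can be emulated with standard Unix tools (illustration only, no Spark or Kafka involved):

```shell
# Split two sample messages into words and count occurrences --
# the same flatMap/split/count pipeline the streaming job applies.
printf 'This is a message\nThis is another message\n' \
  | tr ' ' '\n' | sort | uniq -c | sort -rn
```

Each output line pairs a count with a word, e.g. "2 message"; in the streaming job the same pairs are maintained continuously over the sliding window.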