
Spark Streaming Study Notes 4 (2020-2-15) -- Spark Streaming Real-Time Stream Processing Project in Practice


12-8 Generating a batch of data every minute with a scheduling tool

1. Online tool for checking crontab expressions

https://tool.lu/crontab/

Edit the current user's crontab:

crontab -e

Add an entry that runs the log generator script once every minute (*/1 in the minute field means "every 1 minute"):

              */1 * * * * /home/hadoop/data/project/log_generator.sh

To disable the job later, comment the line out with #.
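For reference, log_generator.sh is just a thin wrapper that invokes the Python log generator (remember to make it executable with chmod +x, or cron cannot run it). A minimal sketch, assuming the generator script is named generate_log.py -- substitute your actual file name:

#!/bin/bash
# hypothetical wrapper -- adjust the path/name to match your Python log generator
python /home/hadoop/data/project/generate_log.py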

 

 

2. Feeding the logs produced by the Python log generator into Flume

Create a Flume agent configuration file named streaming_project.conf.

 

Component selection: access.log ==> console output

          source:  exec
          channel: memory
          sink:    logger

Full contents of streaming_project.conf:

# agent: exec source -> memory channel -> logger sink
exec-memory-logger.sources = exec-source
exec-memory-logger.sinks = logger-sink
exec-memory-logger.channels = memory-channel

# exec source: continuously tail the generated access log
exec-memory-logger.sources.exec-source.type = exec
exec-memory-logger.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-logger.sources.exec-source.shell = /bin/sh -c

exec-memory-logger.channels.memory-channel.type = memory

# logger sink: print events to the agent's console log
exec-memory-logger.sinks.logger-sink.type = logger

# wire the source and sink to the channel
exec-memory-logger.sources.exec-source.channels = memory-channel
exec-memory-logger.sinks.logger-sink.channel = memory-channel

Start command:

flume-ng agent --name exec-memory-logger --conf $FLUME_HOME/conf --conf-file /home/hadoop/data/project/streaming_project.conf -Dflume.root.logger=INFO,console
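To smoke-test this pipeline, append a line to the tailed file and watch the agent's console; the logger sink should print each event as a line like Event: { headers:... body:... }:

echo "flume test $(date)" >> /home/hadoop/data/project/logs/access.log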

3. Logs ==> Kafka

(1) Start ZooKeeper:

          Go to the bin directory:

          cd /home/hadoop/app/zookeeper-3.4.5-cdh5.7.0/bin

          Start command:

           ./zkServer.sh start
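As a quick sanity check before moving on, confirm ZooKeeper is actually up (a single-node install should report standalone mode):

           ./zkServer.sh status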

 

(2) Start the Kafka server:

          Go to the bin directory: cd /home/hadoop/app/kafka_2.11-0.9.0.0/bin/

          Start command: ./kafka-server-start.sh -daemon /home/hadoop/app/kafka_2.11-0.9.0.0/config/server.properties
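If the streamingtopic topic does not exist yet (and topic auto-creation is disabled on the broker), create it first; a minimal sketch for this single-broker setup:

          ./kafka-topics.sh --create --zookeeper hadoop000:2181 --replication-factor 1 --partitions 1 --topic streamingtopic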

Modify the Flume configuration so that the sink writes to Kafka instead of the console. Save the following as streaming_project2.conf:

exec-memory-kafka.sources = exec-source
exec-memory-kafka.sinks = kafka-sink
exec-memory-kafka.channels = memory-channel

exec-memory-kafka.sources.exec-source.type = exec
exec-memory-kafka.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-kafka.sources.exec-source.shell = /bin/sh -c

exec-memory-kafka.channels.memory-channel.type = memory

# Kafka sink: publish events to topic streamingtopic on broker hadoop000:9092,
# 5 events per producer batch, waiting for the partition leader's ack (requiredAcks = 1)
exec-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
exec-memory-kafka.sinks.kafka-sink.brokerList = hadoop000:9092
exec-memory-kafka.sinks.kafka-sink.topic = streamingtopic
exec-memory-kafka.sinks.kafka-sink.batchSize = 5
exec-memory-kafka.sinks.kafka-sink.requiredAcks = 1

exec-memory-kafka.sources.exec-source.channels = memory-channel
exec-memory-kafka.sinks.kafka-sink.channel = memory-channel

(3) Start a Kafka console consumer to watch the topic:

kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic streamingtopic

(4) Start Flume:

flume-ng agent --name exec-memory-kafka --conf $FLUME_HOME/conf --conf-file /home/hadoop/data/project/streaming_project2.conf -Dflume.root.logger=INFO,console
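With the cron job from step 1 appending a batch of logs every minute, new records should show up in the console-consumer window shortly after each run; you can also push one through by hand as an end-to-end check:

echo "kafka test $(date)" >> /home/hadoop/data/project/logs/access.log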
