flume+kafka+storm+redis/mysql startup command notes
2014-09-19 09:41
1. Start Flume
bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name fks -Dflume.root.logger=INFO,console
2. Start Kafka
[root@Cassandra kafka]# bin/zookeeper-server-start.sh config/zookeeper.properties&
[root@Cassandra kafka]# bin/kafka-server-start.sh config/server.properties &
3. Start Storm
Start ZooKeeper on each of the three machines first.
(1) On the master: bin/storm nimbus&
bin/storm ui&
(2) On the slaves: bin/storm supervisor&
Submit the topology: storm jar tools/Storm4.jar com.qihoo.datacenter.step7kafka2redis.Kafka2RedisTopology flume2redis
4. Start Redis
src/redis-server
Copy a file into the designated spool directory on the front-end Flume machine; Redis is updated with the new data. The topologies for the MySQL and Redis paths are submitted with:
storm jar tools/Storm4.jar com.qihoo.datacenter.step3tomysql.ToMysqlTopology tomysql
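ToMysqlTopology itself is not listed in the post; the following is only a rough sketch of the kind of bolt such a topology would need, where the JDBC URL, credentials, table and column names are all made-up placeholders:

package com.qihoo.datacenter.step3tomysql;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Map;

import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

// Sketch of a Storm bolt that inserts each incoming line into MySQL.
public class MysqlBolt extends BaseBasicBolt {
    private transient Connection conn;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        try {
            // Assumed MySQL endpoint and credentials; adjust to the real environment.
            conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/logdb", "storm", "storm");
        } catch (Exception e) {
            throw new RuntimeException("Cannot open MySQL connection", e);
        }
    }

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String line = tuple.getString(0);   // raw line emitted by the Kafka spout
        try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO logs(line) VALUES (?)")) {
            ps.setString(1, line);
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException("Insert failed", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt, nothing emitted downstream.
    }

    @Override
    public void cleanup() {
        try {
            if (conn != null) conn.close();
        } catch (Exception ignored) {
        }
    }
}

Wired behind the same Kafka spout shown in the topology sketch below, this bolt gives the tomysql path its MySQL sink.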
storm jar tools/Storm4.jar com.qihoo.datacenter.step7kafka2redis.Kafka2RedisTopology flume2redis
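Kafka2RedisTopology is likewise not included in the post; a minimal sketch of what it could look like, assuming the storm-kafka spout API of that era and a Jedis client, with the topic name, ZooKeeper address, Redis endpoint and Redis key all being illustrative assumptions:

package com.qihoo.datacenter.step7kafka2redis;

import java.util.Map;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;
import redis.clients.jedis.Jedis;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class Kafka2RedisTopology {

    // Bolt that pushes every Kafka message onto a Redis list.
    public static class RedisBolt extends BaseBasicBolt {
        private transient Jedis jedis;

        @Override
        public void prepare(Map stormConf, TopologyContext context) {
            jedis = new Jedis("localhost", 6379);   // assumed Redis endpoint
        }

        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String line = tuple.getString(0);       // StringScheme emits one string field
            jedis.lpush("flume2redis", line);       // hypothetical Redis key
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // Terminal bolt, nothing emitted downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        // Spout reads the topic the Flume KafkaSink writes to (topic name assumed).
        SpoutConfig spoutConfig = new SpoutConfig(
                new ZkHosts("localhost:2181"), "flume2kafka", "/kafka2redis", "kafka2redis-spout");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);
        builder.setBolt("redis-bolt", new RedisBolt(), 1).shuffleGrouping("kafka-spout");

        StormSubmitter.submitTopology(args[0], new Config(), builder.createTopology());
    }
}

Submitting it with the storm jar command above passes flume2redis in as args[0], the topology name.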
The Flume agent configuration (conf/flume-conf.properties, agent name fks) referenced in step 1:
# fks : flume kafka storm integration
fks.sources=r1
fks.sinks=k1
fks.channels=c1
# configure r1
fks.sources.r1.type=spooldir
fks.sources.r1.spoolDir=/data/flumeread
fks.sources.r1.fileHeader = false
# configure k1
fks.sinks.k1.type=com.qihoo.datacenter.sink.KafkaSink
# configure c1
fks.channels.c1.type=file
fks.channels.c1.checkpointDir=/data/flumewrite/example_fks_001
fks.channels.c1.dataDirs=/data/flumewrite2/example_fks_001
# bind source and sink
fks.sources.r1.channels=c1
fks.sinks.k1.channel=c1
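The sink class com.qihoo.datacenter.sink.KafkaSink above is custom code rather than a stock Flume sink (Flume only shipped its own Kafka sink starting with 1.6). A minimal sketch of such a sink, using the Kafka producer client, with the property names (topic, brokerList), default topic and broker list invented here:

package com.qihoo.datacenter.sink;

import java.util.Properties;

import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch of a custom Flume sink that forwards each event body to a Kafka topic.
public class KafkaSink extends AbstractSink implements Configurable {

    private KafkaProducer<String, byte[]> producer;
    private String topic;

    @Override
    public void configure(Context context) {
        // Hypothetical property names; the original sink's config keys are not shown in the post.
        topic = context.getString("topic", "flume2kafka");
        Properties props = new Properties();
        props.put("bootstrap.servers", context.getString("brokerList", "localhost:9092"));
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        producer = new KafkaProducer<>(props);
    }

    @Override
    public Status process() throws EventDeliveryException {
        Channel channel = getChannel();
        Transaction tx = channel.beginTransaction();
        try {
            Event event = channel.take();
            if (event == null) {
                tx.commit();
                return Status.BACKOFF;          // nothing to send this round
            }
            producer.send(new ProducerRecord<>(topic, event.getBody()));
            tx.commit();
            return Status.READY;
        } catch (Exception e) {
            tx.rollback();
            throw new EventDeliveryException("Failed to publish event to Kafka", e);
        } finally {
            tx.close();
        }
    }

    @Override
    public synchronized void stop() {
        if (producer != null) {
            producer.close();
        }
        super.stop();
    }
}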