
flume+kafka+storm+redis/mysql startup command notes

2014-09-19 09:41

1. Start Flume

bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name fks -Dflume.root.logger=INFO,console

2. Start Kafka

 [root@Cassandra kafka]# bin/zookeeper-server-start.sh config/zookeeper.properties&

 [root@Cassandra kafka]# bin/kafka-server-start.sh config/server.properties &
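Backgrounding the servers with a bare `&` ties them to the login shell, so they die when the session ends. A more robust pattern is `nohup` with explicit log redirection and a saved PID. A minimal sketch, using `sleep 5` as a stand-in for the real server script (paths and the `/tmp/fks-logs` directory are illustrative, not from the original setup):

```shell
# Start a long-running service detached from the terminal.
# 'sleep 5' stands in for e.g. bin/kafka-server-start.sh config/server.properties
LOG_DIR=${LOG_DIR:-/tmp/fks-logs}
mkdir -p "$LOG_DIR"
nohup sleep 5 > "$LOG_DIR/kafka.out" 2>&1 &
# remember the PID so the service can be stopped cleanly later
echo $! > "$LOG_DIR/kafka.pid"
echo "started pid $(cat "$LOG_DIR/kafka.pid")"
```

The same pattern applies to the ZooKeeper, Storm nimbus/ui/supervisor, and Redis commands below.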

3. Start Storm

 Start ZooKeeper on each of the three machines

 (1) On the master, start: bin/storm nimbus&

   bin/storm ui&

 (2) On the slaves, start: bin/storm supervisor&

 Submit the topology: storm jar tools/Storm4.jar com.qihoo.datacenter.step7kafka2redis.Kafka2RedisTopology flume2redis

4. Start Redis

 src/redis-server

Copy a file into the designated spool folder on the front-end Flume machine, and Redis is updated accordingly.

Topology submission commands (the MySQL and Redis variants, per their class names):

storm jar tools/Storm4.jar com.qihoo.datacenter.step3tomysql.ToMysqlTopology tomysql

storm jar tools/Storm4.jar com.qihoo.datacenter.step7kafka2redis.Kafka2RedisTopology flume2redis
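The end-to-end check above amounts to dropping a file into the spooldir source and watching the data reach Redis. A minimal sketch of the drop step (using a /tmp directory instead of the real /data/flumeread). Note that Flume's spooldir source requires files to be complete and immutable once placed, and renames consumed files with a .COMPLETED suffix, so each drop should be a new, fully written file moved in atomically:

```shell
SPOOL_DIR=${SPOOL_DIR:-/tmp/flumeread}   # the real config uses /data/flumeread
mkdir -p "$SPOOL_DIR"
# write the file elsewhere first, then move it in, so the spooldir
# source never sees a half-written file
TMP_FILE=$(mktemp)
echo "$(date +%s) sample event" > "$TMP_FILE"
mv "$TMP_FILE" "$SPOOL_DIR/events-$(date +%s).log"
ls "$SPOOL_DIR"
```

After the drop, the event should flow spooldir → KafkaSink → Storm topology → Redis.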

The Flume agent configuration referenced above (conf/flume-conf.properties, agent name fks):

# fks : flume kafka storm integration
fks.sources=r1
fks.sinks=k1
fks.channels=c1

# configure r1
fks.sources.r1.type=spooldir
fks.sources.r1.spoolDir=/data/flumeread
fks.sources.r1.fileHeader=false

# configure k1
fks.sinks.k1.type=com.qihoo.datacenter.sink.KafkaSink

# configure c1
fks.channels.c1.type=file
fks.channels.c1.checkpointDir=/data/flumewrite/example_fks_001
fks.channels.c1.dataDirs=/data/flumewrite2/example_fks_001

# bind source and sink
fks.sources.r1.channels=c1
fks.sinks.k1.channel=c1
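The wiring in this file must be internally consistent: the agent name (fks) prefixes every key, and the channel named under sources.r1.channels and sinks.k1.channel must match a channel declared in fks.channels, or the agent fails to start. A quick sanity grep over the fragment (written to a scratch file here purely for illustration):

```shell
# write the agent fragment to a scratch file (illustration only)
CONF=/tmp/fks-check.conf
cat > "$CONF" <<'EOF'
fks.sources=r1
fks.sinks=k1
fks.channels=c1
fks.sources.r1.channels=c1
fks.sinks.k1.channel=c1
EOF
# the source's channel list and the sink's channel must name a declared channel
grep -E '^fks\.(sources\.r1\.channels|sinks\.k1\.channel)=' "$CONF"
```

Both matched lines should point at c1, the channel declared in fks.channels.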

