Setting up a flume-ng load-balance / failover cluster
2015-12-11 16:53
The cluster uses three machines:
host1   load-balance agent
host2   sink machine 1
host3   sink machine 2
Configuration for host1:
# Agent a1 on host1: file channel c1, avro source, two avro sinks in a load-balancing sink group
a1.channels = c1
a1.sources = r1
a1.sinks = k1 k2
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.selector = round_robin
a1.sinkgroups.g1.processor.backoff = true
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /tmp/flume/loadcheckpoint
a1.channels.c1.dataDirs = /tmp/flume/loaddata
a1.sources.r1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41415
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = host2
a1.sinks.k1.port = 41414
a1.sinks.k2.channel = c1
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = host3
a1.sinks.k2.port = 41414
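The `load_balance` processor with `backoff = true` already gives implicit failover (a failed sink is blacklisted and traffic shifts to the other one). To test an explicit active/standby failover instead, the sink group processor on host1 can be switched to the `failover` type. A sketch using the standard Flume sink-group properties; the priority values here are arbitrary:

```
a1.sinkgroups.g1.processor.type = failover
# Higher priority wins; k2 only receives events while k1 is down.
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
# Maximum back-off (ms) for a failed sink before it is retried.
a1.sinkgroups.g1.processor.maxpenalty = 10000
```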
Configuration for host2:
a2.channels = c1
a2.sources = r1
a2.sinks = k1
a2.channels.c1.type = file
a2.channels.c1.checkpointDir = /tmp/flume/checkpoint
a2.channels.c1.dataDirs = /tmp/flume/data
a2.sources.r1.channels = c1
a2.sources.r1.type = avro
a2.sources.r1.bind = 0.0.0.0
a2.sources.r1.port = 41414
a2.sinks.k1.channel = c1
a2.sinks.k1.type = file_roll
a2.sinks.k1.sink.directory = /tmp/load/
a2.sinks.k1.sink.rollInterval = 0
Configuration for host3:
a2.channels = c1
a2.sources = r1
a2.sinks = k1
a2.channels.c1.type = file
a2.channels.c1.checkpointDir = /tmp/flume/checkpoint
a2.channels.c1.dataDirs = /tmp/flume/data
a2.sources.r1.channels = c1
a2.sources.r1.type = avro
a2.sources.r1.bind = 0.0.0.0
a2.sources.r1.port = 41414
a2.sinks.k1.channel = c1
a2.sinks.k1.type = file_roll
a2.sinks.k1.sink.directory = /tmp/load/
a2.sinks.k1.sink.rollInterval = 0
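A rough way to confirm the round-robin distribution is to count the events each `file_roll` sink has written so far. Run this separately on host2 and host3; with `round_robin` selection the two totals should end up close. The directory matches `sink.directory` above (as an assumption, `OUT_DIR` can be overridden for a different layout):

```shell
# Count events written by the file_roll sink so far.
OUT_DIR="${OUT_DIR:-/tmp/load}"
total=$(cat "$OUT_DIR"/* 2>/dev/null | wc -l)
echo "events in $OUT_DIR: $total"
```

With `rollInterval = 0` the sink never rolls to a new file, so each directory typically holds a single growing output file.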
For now the client machine also sends with a flume-ng agent, configured as follows:
# Client agent a1: file channel c1, exec source, avro sink pointing at host1
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /tmp/flume/checkpoint
a1.channels.c1.dataDirs = /tmp/flume/data
a1.sources.r1.channels = c1
a1.sources.r1.type = exec
a1.sources.r1.command = cat /tmp/linux.log
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = host1
a1.sinks.k1.port = 41415
a1.channels = c1
a1.sources = r1
a1.sinks = k1
The client's /tmp/linux.log file is about 3 GB and is sent to host1.
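One caveat for anything beyond testing: the exec source running `cat` does not track file position or retry delivery, so events can be lost or re-read in full if the client agent restarts mid-file. A more robust sketch (assuming Flume 1.3+; the spool directory path is hypothetical) swaps in the spooling directory source:

```
a1.sources.r1.type = spooldir
# Files dropped here are ingested once and renamed with a .COMPLETED suffix.
a1.sources.r1.spoolDir = /tmp/flume-spool
```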
Start-up:
host2:
bin/flume-ng agent -c conf -f conf/load-sink1.conf -n a2
host3:
bin/flume-ng agent -c conf -f conf/load-sink2.conf -n a2
host1:
bin/flume-ng agent -c conf -f conf/load-balance.conf -n a1
Client:
bin/flume-ng agent -c conf -f conf/client.conf -n a1
Note: it is best to start the agents from the bottom up, i.e. start host2 and host3 first, then host1, and finally the client.
During testing you can stop host2 or host3 at any time and restart it a while later.
This exercises both the load-balance and failover behavior of flume-ng.