Flume-ng Distributed Deployment and Configuration
2014-04-07 15:35
A summary of deploying Flume-ng in a distributed setup
1. Configuring the central collector (the agent that receives log data from every node). Edit conf/flume-conf.properties under the Flume install directory
(the file does not exist by default; create it with cp flume-conf.properties.template flume-conf.properties)
agent.sources = avrosrc
agent.channels = memoryChannel
agent.sinks = hdfsSink
# For each one of the sources, the type is defined
agent.sources.avrosrc.type = avro
agent.sources.avrosrc.bind = 192.168.35.100
agent.sources.avrosrc.port = 44444
# The channel can be defined as follows.
agent.sources.avrosrc.channels = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.keep-alive = 10
agent.channels.memoryChannel.capacity = 100000
agent.channels.memoryChannel.transactionCapacity = 100000
# Each sink's type must be defined
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = memoryChannel
# Directory layout for the stored files (properties files have no trailing
# comments, so comments must sit on their own line)
agent.sinks.hdfsSink.hdfs.path = /flume_logs/%Y%m%d
# File-name prefix; "datacenter" is the header key set in each node's config
agent.sinks.hdfsSink.hdfs.filePrefix = %{datacenter}_
agent.sinks.hdfsSink.hdfs.rollInterval = 0
agent.sinks.hdfsSink.hdfs.rollSize = 4000000
agent.sinks.hdfsSink.hdfs.rollCount = 0
agent.sinks.hdfsSink.hdfs.writeFormat = Text
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.batchSize = 10
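The %Y%m%d escape in hdfs.path is resolved from each event's timestamp header, which the nodes supply through their timestamp interceptor (step 2). If events could ever arrive without that header, the sink can fall back to the collector's own clock; a one-line sketch using the standard hdfs.useLocalTimeStamp property:

```properties
# Optional fallback: resolve %Y%m%d from the collector's local clock
# instead of requiring a timestamp header on every event
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
```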
2. Configuring Flume on each log-collecting node: edit conf/flume-conf.properties under the Flume install directory
agent.sources = nodesource
agent.channels = nodeMemoryChannel
agent.sinks = nodeSink
agent.sources.nodesource.type = exec
# The log file to follow (a trailing comment here would be passed to tail
# as extra arguments, so it goes on its own line)
agent.sources.nodesource.command = tail -F /root/logs/log1.log
agent.sources.nodesource.channels = nodeMemoryChannel
agent.sources.nodesource.interceptors = host_int timestamp_int inter1
agent.sources.nodesource.interceptors.host_int.type = host
agent.sources.nodesource.interceptors.host_int.hostHeader = hostname
agent.sources.nodesource.interceptors.timestamp_int.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
agent.sources.nodesource.interceptors.inter1.type = static
# The collector reads the "datacenter" header value to name its output files
agent.sources.nodesource.interceptors.inter1.key = datacenter
# Becomes the file-name prefix generated on the collector
agent.sources.nodesource.interceptors.inter1.value = log102
agent.channels.nodeMemoryChannel.type = memory
agent.channels.nodeMemoryChannel.keep-alive = 10
agent.channels.nodeMemoryChannel.capacity = 100000
agent.channels.nodeMemoryChannel.transactionCapacity = 100000
agent.sinks.nodeSink.type = avro
agent.sinks.nodeSink.hostname = 192.168.35.100
agent.sinks.nodeSink.port = 44444
agent.sinks.nodeSink.channel = nodeMemoryChannel
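With a single collector at 192.168.35.100, that host is a single point of failure. Flume's sink groups let a node fail over to a standby collector; a hedged sketch for a node config, where backupSink and the 192.168.35.101 address are hypothetical:

```properties
# Hypothetical second avro sink pointing at a standby collector
agent.sinks = nodeSink backupSink
agent.sinks.backupSink.type = avro
agent.sinks.backupSink.hostname = 192.168.35.101
agent.sinks.backupSink.port = 44444
agent.sinks.backupSink.channel = nodeMemoryChannel
# Failover processor prefers nodeSink, falls back to backupSink
agent.sinkgroups = g1
agent.sinkgroups.g1.sinks = nodeSink backupSink
agent.sinkgroups.g1.processor.type = failover
agent.sinkgroups.g1.processor.priority.nodeSink = 10
agent.sinkgroups.g1.processor.priority.backupSink = 5
agent.sinkgroups.g1.processor.maxpenalty = 10000
```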
3. Starting the agents (run from the Flume install directory)
# Print the agent's log output to the console
bin/flume-ng agent -n agent -c conf -f conf/flume-conf.properties -Dflume.root.logger=INFO,console
# Write the agent's log output to a file and run in the background
# (nohup keeps it alive after logout; 2>&1 also captures stderr)
nohup bin/flume-ng agent -n agent -c conf -f conf/flume-conf.properties > logs/flume_log.log 2>&1 &
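Before starting a node agent, it is worth confirming that the collector's Avro port is reachable from the node host. A small check, assuming bash (for its /dev/tcp device) and coreutils timeout; it prints "reachable" or "unreachable":

```shell
# Try to open a TCP connection to the collector's Avro port (addresses
# taken from the configs above), giving up after 3 seconds
if timeout 3 bash -c 'exec 3<>/dev/tcp/192.168.35.100/44444' 2>/dev/null; then
  echo reachable
else
  echo unreachable
fi
```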