
About Flume-ng (Part 4)

2014-03-23 00:00
This is the final installment. The flume-ng user guide always lags a little behind the releases, but the examples and diagrams in it are still well worth studying:
http://flume.apache.org/FlumeUserGuide.html — it still covers version 1.4, and some parameters have changed since, so take care to update them. Below is an example of writing events from Flume-ng to Hadoop HDFS, updated from the flume-ng 1.3 example. Agent configuration:

#List sources, sinks and channels in the agent
weblog-agent.sources = tail
weblog-agent.sources.tail.interceptors = ts host
weblog-agent.sources.tail.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
weblog-agent.sources.tail.interceptors.host.type = org.apache.flume.interceptor.HostInterceptor$Builder
weblog-agent.sources.tail.interceptors.host.useIP = false
weblog-agent.sources.tail.interceptors.host.preserveExisting = true
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01

#define the flow
#weblog-agent sources config
weblog-agent.sources.tail.channels = jdbc-channel01
weblog-agent.sources.tail.type = exec
weblog-agent.sources.tail.restart = true
#weblog-agent.sources.tail. = true
weblog-agent.sources.tail.command = tail -f /opt/nginx/logs/access.log
#weblog-agent.sources.tail.selector.type = replicating

#hdfs sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://ttlsa-hadoop-master.:9000/user/flume/webtest/%{host}/%Y-%m-%d/
weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.TTLSA.COM
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 120
weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream

#channels config
weblog-agent.channels.jdbc-channel01.type = memory
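A minimal launch sketch, assuming the configuration above is saved as /etc/flume-ng/conf/weblog-agent.conf (the file name and conf directory here are just example paths, not from the original post). The --name argument must match the agent name used in the property keys, i.e. weblog-agent. The %{host} and %Y-%m-%d escapes in hdfs.path are filled in by the host and timestamp interceptors configured on the source.

# start the agent in the foreground with console logging for debugging
flume-ng agent \
  --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/weblog-agent.conf \
  --name weblog-agent \
  -Dflume.root.logger=INFO,console

# after events start flowing, the rolled files can be checked on HDFS, e.g.:
hdfs dfs -ls /user/flume/webtest/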
Tags: flume-ng hadoop