Flume: Watching a Directory for New Files and Uploading Them to HDFS
2017-05-08 16:21
vim /usr/local/flume/conf/exec-hdfs.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /opt/data/logs
a1.sources.r1.fileHeader = true
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://master:9000/input/flume/%y/%m
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 40
a1.sinks.k1.hdfs.roundUnit = second
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
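The %y/%m escapes in hdfs.path are expanded from the event timestamp (taken from the local clock here, since useLocalTimeStamp = true), so files land in year/month subdirectories; with round = true, roundValue = 40 and roundUnit = second, the timestamp is first rounded down to the nearest 40 seconds. The escapes follow the same strftime-style patterns as date(1), so a quick local sketch of the resulting HDFS path is:

```shell
# strftime-style expansion of the escapes used in hdfs.path above;
# the event's local timestamp drives this when useLocalTimeStamp = true.
date +"/input/flume/%y/%m"
```

For the current date this prints something like /input/flume/17/05, i.e. a two-digit year and month directory under /input/flume.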
// Startup script
flume-ng agent --conf conf --conf-file exec-hdfs.conf --name a1 -Dflume.root.logger=INFO,console
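With the agent running, any file dropped into the spool directory (/opt/data/logs) is ingested and, once fully consumed, renamed with a .COMPLETED suffix by the spooldir source. Files must be complete and immutable when they appear in the directory, so write elsewhere and move them in atomically. A local sketch of that drop step, using the hypothetical stand-in directory /tmp/spool-demo:

```shell
# Stand-in for /opt/data/logs; the path is illustrative only.
SPOOL_DIR=/tmp/spool-demo
mkdir -p "$SPOOL_DIR"

# Write the finished file outside the spool directory first, then
# mv it in -- spooldir must never see a file that is still growing.
echo "sample log line" > /tmp/app.log.tmp
mv /tmp/app.log.tmp "$SPOOL_DIR/app.log"

ls "$SPOOL_DIR"
```

After Flume processes it, the file would appear as app.log.COMPLETED in the spool directory, and its contents would show up under the dated path on HDFS (e.g. via hdfs dfs -ls /input/flume).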