Flume and Kafka Integration: A Worked Example
2017-10-17 23:11
Environment setup

| Name | Version | Download |
|---|---|---|
| CentOS 7.0 | x64 | Baidu |
| Zookeeper | 3.4.5 | |
| Flume | 1.6.0 | |
| Kafka | 2.10-0.10.1.1 | |
Here is the Flume configuration file, pasted directly:
```
[root@zero239 kafka_2.10-0.10.1.1]# cat /opt/hadoop/apache-flume-1.6.0-bin/conf/kafka-conf.properties
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent'
agent.sources = r1
agent.channels = c1
agent.sinks = s1

# For each one of the sources, the type is defined
#agent.sources.r1.type = spooldir
#agent.sources.r1.command = /opt/test/logs/data
#agent.sources.r1.fileHeader = true
#agent.sources.r1.channels = c1
agent.sources.r1.type = spooldir
agent.sources.r1.spoolDir = /opt/test/logs/data
agent.sources.r1.fileHeader = true

# Each sink's type must be defined
#agent.sinks.s1.type = logger
agent.sinks.s1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.s1.topic = logstest
agent.sinks.s1.brokerList = zero230:9092
agent.sinks.s1.requiredAcks = 1
agent.sinks.s1.batchSize = 2

# Each channel's type is defined.
agent.channels.c1.type = memory
agent.channels.c1.capacity = 100

agent.sources.r1.channels = c1
agent.sinks.s1.channel = c1
```
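The spooldir source only reads complete, immutable files from the directory it watches, so it helps to stage test data elsewhere and then move it in. A minimal sketch (the directory comes from the config above; the file name and contents are just placeholders):

```bash
# create the directory watched by the spooldir source
mkdir -p /opt/test/logs/data

# write a sample file somewhere else first, then move it in atomically;
# the spooldir source must never see a file that is still being written
echo "sample log line $(date)" > /tmp/sample.log
mv /tmp/sample.log /opt/test/logs/data/
```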
Configure Kafka
```
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/opt/hadoop/kafka_2.10-0.10.1.1/logs/tmp

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zero230:2181,zero231:2181,zero239:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=60001
```
I have already set up a Zookeeper cluster, so `zookeeper.connect` above points at my own Zookeeper nodes. If you have not set up a cluster, you can simply use the Zookeeper that ships with Kafka.

See: Zookeeper cluster setup and configuration.
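If you are running your own ensemble, it is worth confirming that every node is up before starting the brokers. A quick check, assuming the Zookeeper `bin` directory is on each host's PATH and the standard client port 2181 is in use:

```bash
# run on each Zookeeper node: one should report Mode: leader, the others Mode: follower
zkServer.sh status

# or probe all three nodes remotely with the "ruok" four-letter word
for host in zero230 zero231 zero239; do
  printf '%s: ' "$host"
  echo ruok | nc "$host" 2181 && echo   # a healthy node replies "imok"
done
```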
Start Kafka and verify it works
1. Start Zookeeper (skip this step if you did not set up a cluster).

2. Start the Zookeeper that ships with Kafka (only if you are not running your own cluster):

```
bin/zookeeper-server-start.sh config/zookeeper.properties
```

3. Start Kafka (`server1.properties` is the file edited above):

```
bin/kafka-server-start.sh config/server1.properties
```

4. Create a topic named `logstest`:

```
./bin/kafka-topics.sh --create --zookeeper zero230:2181 --replication-factor 1 --partitions 1 --topic logstest
```

5. Check that the topic was created:

```
./bin/kafka-topics.sh --list --zookeeper localhost:2181
```

6. Start a console producer (think of it as a client that already has data to send; that makes it easier to picture):

```
bin/kafka-console-producer.sh --broker-list zero230:9092 --topic logstest
```

7. Start a console consumer (so you can see the data the producer sends):

```
bin/kafka-console-consumer.sh --zookeeper zero230:2181 --topic logstest --from-beginning
```
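With the producer and consumer terminals both open, anything you type into the producer should appear in the consumer. The console producer also reads from stdin, so a scripted smoke test looks like this (the message text is arbitrary):

```bash
# push two test messages into the logstest topic
printf 'test message 1\ntest message 2\n' | \
  bin/kafka-console-producer.sh --broker-list zero230:9092 --topic logstest

# read everything back from the beginning of the topic (Ctrl+C to stop)
bin/kafka-console-consumer.sh --zookeeper zero230:2181 --topic logstest --from-beginning
```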
Start Flume and verify it can connect to Kafka

```
[root@zero239 apache-flume-1.6.0-bin]# ./bin/flume-ng agent --conf conf -f ./conf/kafka-conf.properties -n agent -Dflume.root.logger=INFO,console
```
Screenshot of the successful hookup:

As you can see from the Flume sink configuration, the sink type is set to Kafka, which means the events are written out to Kafka:

```
agent.sinks.s1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.s1.topic = logstest           # the topic created above
agent.sinks.s1.brokerList = zero230:9092  # the machine the producer was created on
```
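To check the whole pipeline end to end, drop a new file into the spooling directory while the agent and the console consumer are running; a sketch, with a placeholder file name:

```bash
# stage a file outside the spool directory, then move it in
echo "flume-to-kafka pipeline test $(date)" > /tmp/pipeline-test.log
mv /tmp/pipeline-test.log /opt/test/logs/data/

# once Flume has consumed the file it renames it with a .COMPLETED suffix,
# and the line should appear in the console consumer on the logstest topic
ls /opt/test/logs/data/
```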
With that, the Flume and Kafka integration is complete.
Sneak peek at the next post