
Kafka + Flume + Hive Integration

2018-01-10 14:20
This task is a data migration job: Kafka serves as the data entry point and message bus, and Flume lands the data in Hive for analysis.

The data source is simulated with a Java program; in a real deployment, choose whatever source fits your situation. The program is as follows:

package com.unicom.kafka;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
import kafka.serializer.StringEncoder;

public class kafkaProducer extends Thread {

    private String topic;

    public kafkaProducer(String topic) {
        super();
        this.topic = topic;
    }

    @Override
    public void run() {
        Producer<Integer, String> producer = createProducer();
        int i = 0;
        // Send one message per second, forever
        while (true) {
            producer.send(new KeyedMessage<Integer, String>(topic, "message: " + i++));
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    private Producer<Integer, String> createProducer() {
        Properties props = new Properties();
        // ZooKeeper ensemble
        props.put("zookeeper.connect", "192.168.6.41:2181,192.168.6.42:2181,192.168.6.43:2181");
        props.put("serializer.class", StringEncoder.class.getName());
        // Kafka broker list
        props.put("metadata.broker.list", "192.168.6.41:9092,192.168.6.42:9092,192.168.6.43:9092,192.168.6.44:9092");
        return new Producer<Integer, String>(new ProducerConfig(props));
    }

    // Entry point: produces to the "test" topic consumed by the Flume agent below
    public static void main(String[] args) {
        new kafkaProducer("test").start();
    }
}
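To confirm that messages are actually arriving on the topic before wiring up Flume, you can run Kafka's console consumer. This is just a quick sanity check; the topic name test matches the Flume configuration below, and the script location and ZooKeeper address depend on your installation:

kafka-console-consumer.sh --zookeeper 192.168.6.41:2181 --topic test --from-beginning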

Run the Java program, then prepare the Flume configuration file:

#--------- Agent component names ----------------
flume2HDFS_agent.sources = source_from_kafka
flume2HDFS_agent.sinks = hdfs_sink
flume2HDFS_agent.channels = mem_channel

#auto.commit.enable = true

## kerberos config ##
#flume2HDFS_agent.sinks.hdfs_sink.hdfs.kerberosPrincipal = flume/datanode2.hdfs.alpha.com@OMGHADOOP.COM
#flume2HDFS_agent.sinks.hdfs_sink.hdfs.kerberosKeytab = /root/apache-flume-1.6.0-bin/conf/flume.keytab

#-------- kafkaSource configuration -----------------
# Source type
# For each one of the sources, the type is defined
flume2HDFS_agent.sources.source_from_kafka.type = org.apache.flume.source.kafka.KafkaSource
flume2HDFS_agent.sources.source_from_kafka.channels = mem_channel
flume2HDFS_agent.sources.source_from_kafka.batchSize = 5000

# Kafka broker addresses (same brokers as the producer above)
flume2HDFS_agent.sources.source_from_kafka.kafka.bootstrap.servers = 192.168.6.41:9092,192.168.6.42:9092,192.168.6.43:9092,192.168.6.44:9092

# Kafka topic to consume
#flume2HDFS_agent.sources.source_from_kafka.topic = itil_topic_4097
flume2HDFS_agent.sources.source_from_kafka.kafka.topics = test

# Kafka consumer group id
#flume2HDFS_agent.sources.source_from_kafka.groupId = flume4097
flume2HDFS_agent.sources.source_from_kafka.kafka.consumer.group.id = flumetest

#--------- hdfsSink configuration ------------------
flume2HDFS_agent.sinks.hdfs_sink.type = hdfs

# Specify the channel the sink should use
flume2HDFS_agent.sinks.hdfs_sink.channel = mem_channel

#flume2HDFS_agent.sinks.hdfs_sink.filePrefix = %{host}
flume2HDFS_agent.sinks.hdfs_sink.hdfs.path = hdfs://192.168.6.41:8020/tmp/ds=%Y%m%d

# File size to trigger roll, in bytes (0: never roll based on file size)
flume2HDFS_agent.sinks.hdfs_sink.hdfs.rollSize = 0
# Number of events written to a file before it is rolled (0 = never roll based on number of events)
flume2HDFS_agent.sinks.hdfs_sink.hdfs.rollCount = 0
flume2HDFS_agent.sinks.hdfs_sink.hdfs.rollInterval = 3600
flume2HDFS_agent.sinks.hdfs_sink.hdfs.threadsPoolSize = 30

#flume2HDFS_agent.sinks.hdfs_sink.hdfs.codeC = gzip
#flume2HDFS_agent.sinks.hdfs_sink.hdfs.fileType = CompressedStream
flume2HDFS_agent.sinks.hdfs_sink.hdfs.fileType = DataStream
flume2HDFS_agent.sinks.hdfs_sink.hdfs.writeFormat = Text

#------- memoryChannel configuration -------------------------
# Channel type
flume2HDFS_agent.channels.mem_channel.type = memory
# Other config values specific to each type of channel (sink or source)
# can be defined as well
# Maximum number of events stored in the channel
flume2HDFS_agent.channels.mem_channel.capacity = 100000
# Transaction capacity
flume2HDFS_agent.channels.mem_channel.transactionCapacity = 10000

Start the agent with the following command:

flume-ng agent -n flume2HDFS_agent -f flume-conf-kafka2hdfs.properties
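If the agent cannot find its environment, or you want to watch it run, the same command can be given the conf directory and console logging explicitly (standard flume-ng options; adjust paths to your installation):

flume-ng agent -n flume2HDFS_agent -c conf -f flume-conf-kafka2hdfs.properties -Dflume.root.logger=INFO,console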

In this setup the files land under /tmp/ on HDFS, partitioned by ds=YYYYMMDD as configured in the sink path, and can be listed with hadoop fs:
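For example (the partition directory below matches the ds=%Y%m%d escape in hdfs.path):

hadoop fs -ls /tmp/ds=20180109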

-rw-r--r--   3 hadoop supergroup      53970 2018-01-09 01:00 /tmp/ds=20180109/FlumeData.1515427201560
-rw-r--r--   3 hadoop supergroup      13290 2018-01-09 01:14 /tmp/ds=20180109/FlumeData.1515430801937
-rw-r--r--   3 hadoop supergroup      49454 2018-01-09 10:53 /tmp/ds=20180109/FlumeData.1515462800875
-rw-r--r--   3 hadoop supergroup      50358 2018-01-09 11:53 /tmp/ds=20180109/FlumeData.1515466403582
-rw-r--r--   3 hadoop supergroup      51229 2018-01-09 12:53 /tmp/ds=20180109/FlumeData.1515470004538
-rw-r--r--   3 hadoop supergroup      53970 2018-01-09 13:53 /tmp/ds=20180109/FlumeData.1515473605797

To bring the data into Hive, create a table with a Hive DDL statement and then load the data files from HDFS; LOAD DATA moves the files directly into the table's directory under the Hive warehouse. A sketch of this step is shown below.
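The following is a minimal sketch, assuming a single-column text table named kafka_msg (the table and column names are illustrative, not from the original setup); it matches the plain-text output produced by the DataStream/Text settings of the HDFS sink:

CREATE TABLE kafka_msg (msg STRING)
STORED AS TEXTFILE;

LOAD DATA INPATH '/tmp/ds=20180109' INTO TABLE kafka_msg;

Because LOAD DATA INPATH moves rather than copies, the FlumeData files disappear from /tmp/ds=20180109 once they are loaded.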

Run a test query in Hive:
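The rows below came from a simple scan; a query of roughly this shape (using the illustrative table name from above) returns the landed messages:

SELECT * FROM kafka_msg LIMIT 100;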

message: 3702
message: 3703
message: 3704
message: 3705
message: 3706
message: 3707
message: 3708
message: 3709
message: 3710
message: 3711
message: 3712
message: 3713
message: 3714
message: 3715
message: 3716
Time taken: 0.16 seconds, Fetched: 100 row(s)

hive> 
Tags: Kafka, Flume