
A Small Java Example of Kafka Message Production and Consumption (Pseudo-Distributed)

2016-05-03 21:16
This post is a study note from Day 7 of the Chuanzhi Boke (传智播客) eight-day Hadoop course.


At first glance Kafka feels a bit like the JMS point-to-point model: one producer, one (group of) consumer(s), and within a consumer group each message is delivered to exactly one consumer. Across groups, however, Kafka behaves more like publish-subscribe: every group that subscribes to a topic independently receives the full message stream (a second-group sketch follows the consumer code below).

Producer:

package cn.kafka;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerDemo {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The 0.8 producer talks directly to the brokers via metadata.broker.list;
        // zk.connect is a leftover from the 0.7 API and is simply ignored here.
        props.put("zk.connect", "localhost:2181");
        props.put("metadata.broker.list", "localhost:9092");
        // Serialize message values as strings.
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);

        // Send 1000 messages to the "order" topic, one every 200 ms.
        for (int i = 1; i <= 1000; i++) {
            Thread.sleep(200);
            producer.send(new KeyedMessage<String, String>("order",
                    "the message no is " + i));
        }

        producer.close();
    }

}
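As a small extension (not part of the original course notes), the sketch below assumes the same Kafka 0.8 "old" producer API and shows two common refinements: asking the partition leader to acknowledge each send via request.required.acks, and attaching a key to each message so that messages with the same key land in the same partition. The class name and key scheme are purely illustrative.

package cn.kafka;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

// Hypothetical variant of ProducerDemo with acknowledgements and keyed messages.
public class AckedProducerDemo {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Serialize keys as strings as well (otherwise the value serializer is reused).
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // Wait for the partition leader to acknowledge each message.
        props.put("request.required.acks", "1");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

        for (int i = 1; i <= 100; i++) {
            // The key ("order-" + i % 4 here) decides which partition the message goes to.
            producer.send(new KeyedMessage<String, String>(
                    "order", "order-" + (i % 4), "the message no is " + i));
        }

        // Flush and release network resources.
        producer.close();
    }

}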


Consumer:

package cn.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class ConsumerDemo {

    private static final String topic = "order";

    private static final Integer threads = 1;

    public static void main(String[] args) {
        Properties props = new Properties();
        // The high-level consumer's ConsumerConfig requires the property name
        // "zookeeper.connect" and throws an exception if it is missing, so
        // "zk.connect" fails here; the producer merely ignores unknown
        // properties, which is why "zk.connect" appears to work there.
        props.put("zookeeper.connect", "localhost:2181");
        // Consumers sharing the same group.id form a consumer group. Within a
        // group each message is delivered to only one consumer; a different
        // group receives its own full copy of the stream.
        props.put("group.id", "1111");
        // With no committed offset for this group, "smallest" starts from the
        // earliest available message in the topic.
        props.put("auto.offset.reset", "smallest");

        ConsumerConfig config = new ConsumerConfig(props);
        // Create the Java high-level consumer connector.
        ConsumerConnector consumer = Consumer
                .createJavaConsumerConnector(config);
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        // Several topics can be subscribed to at once; the value is the number
        // of streams (threads) per topic.
        topicCountMap.put(topic, threads);
        topicCountMap.put("topic1", threads);
        topicCountMap.put("topic2", threads);

        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer
                .createMessageStreams(topicCountMap);
        // Fetch the streams for whichever topic we want to read.
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        // One thread per stream; iterating a KafkaStream blocks until the
        // next message arrives.
        for (final KafkaStream<byte[], byte[]> kafkaStream : streams) {
            new Thread(new Runnable() {

                public void run() {
                    for (MessageAndMetadata<byte[], byte[]> mm : kafkaStream) {
                        String msg = new String(mm.message());
                        System.out.println(msg);
                    }
                }
            }).start();
        }
    }

}
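To see the cross-group behaviour mentioned at the top, a second consumer with a different group.id can be started alongside ConsumerDemo; each group then prints the full "order" stream independently. The sketch below is my own addition under the same 0.8 high-level consumer API (the class name and group id "2222" are illustrative); it also shows consumer.shutdown() for releasing the ZooKeeper session when done.

package cn.kafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

// Hypothetical second consumer: only the group.id differs from ConsumerDemo,
// so this group receives its own full copy of every "order" message.
public class SecondGroupConsumerDemo {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        // A different group than the "1111" used above.
        props.put("group.id", "2222");
        props.put("auto.offset.reset", "smallest");

        ConsumerConnector consumer = Consumer
                .createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("order", 1);

        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer
                .createMessageStreams(topicCountMap);

        // Single stream, read in the main thread; the iterator blocks until
        // the next message arrives.
        KafkaStream<byte[], byte[]> stream = consumerMap.get("order").get(0);
        int read = 0;
        for (MessageAndMetadata<byte[], byte[]> mm : stream) {
            System.out.println("group 2222 got: " + new String(mm.message()));
            if (++read >= 100) {
                break;
            }
        }

        // Releases the fetcher threads and the ZooKeeper session.
        consumer.shutdown();
    }

}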