
Notes on setting up a Kafka cluster on CentOS 7

2018-01-09 20:25
Install Kafka
cd   /soft
tar -zxvf kafka_2.11-1.0.0.tgz -C /usr/local/
mv /usr/local/kafka_2.11-1.0.0/  /usr/local/kafka_2.11

Environment variables:
echo "export KAFKA_HOME=/usr/local/kafka_2.11" >> /etc/profile
echo 'export PATH=$PATH:$KAFKA_HOME/bin' >> /etc/profile
source /etc/profile
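
To confirm the variables took effect in the current shell, a quick check might look like this (paths match the layout above):
echo $KAFKA_HOME                 # should print /usr/local/kafka_2.11
which kafka-server-start.sh      # should resolve under /usr/local/kafka_2.11/bin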

Create the log directory
mkdir  /usr/local/kafka_2.11/kafka-logs

Copy Kafka to node2 and node3 via scp (i.e. copy the kafka_2.11 directory into /usr/local/ on each node):
sudo scp -r /usr/local/kafka_2.11  node2:/usr/local/
sudo scp -r /usr/local/kafka_2.11  node3:/usr/local/

Following the same steps as above, configure the Kafka environment variables on node2 and node3 as well (see the sketch after the ownership commands below).

chown -R hadoop /usr/local/kafka_2.11
chgrp -R hadoop /usr/local/kafka_2.11
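
A rough sketch for repeating the profile entries and ownership change on node2 and node3 over ssh; it assumes you can ssh to both nodes as a user allowed to write /etc/profile (e.g. root), and it uses chown -R hadoop:hadoop as a shorthand for the chown/chgrp pair above:
for n in node2 node3; do
  ssh $n "echo 'export KAFKA_HOME=/usr/local/kafka_2.11' >> /etc/profile; \
          echo 'export PATH=\$PATH:\$KAFKA_HOME/bin' >> /etc/profile; \
          chown -R hadoop:hadoop /usr/local/kafka_2.11"
done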

Edit the configuration file:
sudo vim  /usr/local/kafka_2.11/config/server.properties

node1 configuration:
broker.id=1
port=9092     # added
host.name=node1   # added
log.dirs=/usr/local/kafka_2.11/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181    # ZooKeeper hosts and port

node2 configuration:
broker.id=2
port=9092     # added
host.name=node2   # added
log.dirs=/usr/local/kafka_2.11/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181    # ZooKeeper hosts and port

node3 configuration:
broker.id=3
port=9092     # added
host.name=node3   # added
log.dirs=/usr/local/kafka_2.11/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181    # ZooKeeper hosts and port
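
If you prefer not to edit the file by hand on every node, a rough sketch like the following applies the same changes; it assumes a fresh copy of the stock server.properties shipped with Kafka 1.0.0, and NODE/ID must be set per node:
NODE=node1; ID=1                                          # change per node (node2/2, node3/3)
CFG=/usr/local/kafka_2.11/config/server.properties
sed -i "s/^broker.id=.*/broker.id=$ID/" $CFG
echo "port=9092"        >> $CFG
echo "host.name=$NODE"  >> $CFG
sed -i "s|^log.dirs=.*|log.dirs=/usr/local/kafka_2.11/kafka-logs|" $CFG
sed -i "s/^zookeeper.connect=.*/zookeeper.connect=node1:2181,node2:2181,node3:2181/" $CFG
Note that in Kafka 1.0 the port and host.name settings are legacy; the preferred equivalent is a single listeners=PLAINTEXT://nodeX:9092 entry per node.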

Start Kafka; this needs to be run on each node:
/usr/local/kafka_2.11/bin/kafka-server-start.sh  /usr/local/kafka_2.11/config/server.properties
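
The command above keeps Kafka in the foreground. kafka-server-start.sh also accepts a -daemon flag, so an alternative on each node is to start the broker in the background and check it with jps:
/usr/local/kafka_2.11/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.11/config/server.properties
jps    # each node should list a Kafka process once the broker is up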

Example 1:
Create a new topic with a replication factor of 3:
/usr/local/kafka_2.11/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Describe the topic:
/usr/local/kafka_2.11/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
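
The describe command should print output along these lines (the exact leader, replica order, and ISR will vary):
Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: my-replicated-topic  Partition: 0    Leader: 1   Replicas: 1,2,3 Isr: 1,2,3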

Produce and consume messages
Produce: /usr/local/kafka_2.11/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
Consume: /usr/local/kafka_2.11/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
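
Since my-replicated-topic has a replication factor of 3, it should tolerate the loss of a single broker. A rough optional check: note the leader from the describe output above, stop that node's broker, and describe the topic again from a surviving node; the leader should move to another replica.
# On the node that is currently the partition leader:
/usr/local/kafka_2.11/bin/kafka-server-stop.sh
# On one of the surviving nodes:
/usr/local/kafka_2.11/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic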

Example 2:
Create a new topic with a replication factor of 1 and 3 partitions:
/usr/local/kafka_2.11/bin/kafka-topics.sh --create --zookeeper 192.168.209.129:2181,192.168.209.130:2181,192.168.209.131:2181 --replication-factor 1 --partitions 3 --topic first

Produce messages from the shell:
/usr/local/kafka_2.11/bin/kafka-console-producer.sh --broker-list 192.168.209.129:9092,192.168.209.130:9092,192.168.209.131:9092 --topic first

Consume messages from the shell:
/usr/local/kafka_2.11/bin/kafka-console-consumer.sh --zookeeper 192.168.209.129:2181,192.168.209.130:2181,192.168.209.131:2181 --from-beginning --topic first
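
Note that the --zookeeper option selects the old consumer, which Kafka 1.0 marks as deprecated; the equivalent command with the new consumer talks to the brokers directly:
/usr/local/kafka_2.11/bin/kafka-console-consumer.sh --bootstrap-server 192.168.209.129:9092,192.168.209.130:9092,192.168.209.131:9092 --from-beginning --topic first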
Tags: kafka, centos