Notes on setting up a Kafka cluster on CentOS 7
2018-01-09 20:25
Install Kafka:

```
cd /soft
tar -zxvf kafka_2.11-1.0.0.tgz -C /usr/local/
mv /usr/local/kafka_2.11-1.0.0/ /usr/local/kafka_2.11
```

Set the environment variables:

```
echo 'export KAFKA_HOME=/usr/local/kafka_2.11' >> /etc/profile
echo 'export PATH=$PATH:$KAFKA_HOME/bin' >> /etc/profile
source /etc/profile
```

Create the log directory:

```
mkdir /usr/local/kafka_2.11/kafka-logs
```

Copy Kafka to the node2 and node3 nodes via scp (i.e., copy the kafka_2.11 directory into /usr/local/ on each node):

```
sudo scp -r /usr/local/kafka_2.11 node2:/usr/local/
sudo scp -r /usr/local/kafka_2.11 node3:/usr/local/
```

Following the steps above, configure the Kafka environment variables on node2 and node3 as well, and fix the ownership on each node:

```
chown -R hadoop /usr/local/kafka_2.11
chgrp -R hadoop /usr/local/kafka_2.11
```

Edit the configuration file (on every node):

```
sudo vim /usr/local/kafka_2.11/config/server.properties
```

node1:

```
broker.id=1
# the following two lines are newly added
port=9092
host.name=node1
log.dirs=/usr/local/kafka_2.11/kafka-logs
# ZooKeeper hosts and ports
zookeeper.connect=node1:2181,node2:2181,node3:2181
```

node2 and node3 use the same settings, except broker.id=2 / host.name=node2 and broker.id=3 / host.name=node3 respectively. (Kafka 1.0 still accepts the legacy port and host.name keys, though newer releases prefer the listeners setting.)

Start the broker; this must be done on each node:

```
/usr/local/kafka_2.11/bin/kafka-server-start.sh /usr/local/kafka_2.11/config/server.properties
```

Example 1: create a new topic with a replication factor of 3:

```
/usr/local/kafka_2.11/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
```

Show the topic details:

```
/usr/local/kafka_2.11/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
```

Publish and consume messages:

```
# publish
/usr/local/kafka_2.11/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
# consume
/usr/local/kafka_2.11/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
```

Example 2: create a new topic with a replication factor of 1 and three partitions:

```
/usr/local/kafka_2.11/bin/kafka-topics.sh --create --zookeeper 192.168.209.129:2181,192.168.209.130:2181,192.168.209.131:2181 --replication-factor 1 --partitions 3 --topic first
```

Send messages from the shell:

```
/usr/local/kafka_2.11/bin/kafka-console-producer.sh --broker-list 192.168.209.129:9092,192.168.209.130:9092,192.168.209.131:9092 --topic first
```

Consume messages from the shell (this uses the old ZooKeeper-based consumer, which is deprecated in Kafka 1.0 but still works):

```
/usr/local/kafka_2.11/bin/kafka-console-consumer.sh --zookeeper 192.168.209.129:2181,192.168.209.130:2181,192.168.209.131:2181 --from-beginning --topic first
```
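The three per-node server.properties files above differ only in broker.id and host.name, which makes hand-editing them on each node error-prone. As a minimal sketch (the /tmp/kafka-conf output directory and the per-host file naming are illustrative assumptions, not part of the setup above), all three can be generated from one loop:

```shell
# Sketch: generate one server.properties per node from a shared template.
# Assumes node1..node3 map to broker.id 1..3; output paths are illustrative.
OUT=/tmp/kafka-conf
mkdir -p "$OUT"
id=1
for host in node1 node2 node3; do
  cat > "$OUT/server.properties.$host" <<EOF
broker.id=$id
port=9092
host.name=$host
log.dirs=/usr/local/kafka_2.11/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181
EOF
  id=$((id + 1))
done
grep -h broker.id "$OUT/server.properties.node3"   # -> broker.id=3
```

Each generated file would then be copied to /usr/local/kafka_2.11/config/server.properties on its matching node.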
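The long broker-list and ZooKeeper connect strings used above are easy to mistype. A small helper (the join function here is a hypothetical convenience, not a Kafka tool) can derive both strings from a single node list:

```shell
# Sketch: build the comma-separated connect strings from one node list,
# so the broker (9092) and ZooKeeper (2181) lists cannot drift apart.
NODES="192.168.209.129 192.168.209.130 192.168.209.131"
join() {
  local port=$1 out=""
  for n in $NODES; do out="$out$n:$port,"; done
  echo "${out%,}"   # strip the trailing comma
}
BROKER_LIST=$(join 9092)
ZK_CONNECT=$(join 2181)
echo "$BROKER_LIST"   # -> 192.168.209.129:9092,192.168.209.130:9092,192.168.209.131:9092
echo "$ZK_CONNECT"    # -> 192.168.209.129:2181,192.168.209.130:2181,192.168.209.131:2181
```

These variables can then be passed as --broker-list "$BROKER_LIST" and --zookeeper "$ZK_CONNECT" in the commands above.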