ZooKeeper Cluster Deployment (Distributed)
2014-12-09 11:32
Overview
ZooKeeper can be used to guarantee transactional consistency of data across a ZooKeeper ensemble. This article walks through setting up a ZooKeeper cluster.
1. A ZooKeeper ensemble should have no fewer than three nodes, and the system clocks of all nodes must be kept in sync.
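The three-node minimum follows from ZooKeeper's majority quorum: the ensemble can serve requests only while more than half of its servers are up. A quick sketch of the arithmetic in plain shell:

```shell
# Quorum math for an ensemble of N servers: a majority of
# (N / 2) + 1 servers must be up for the ensemble to stay available.
N=3
QUORUM=$(( N / 2 + 1 ))
echo "quorum=$QUORUM, tolerates $(( N - QUORUM )) failed server(s)"
```

With N=3 this gives a quorum of 2, so the cluster survives one failed node; note that 4 servers tolerate no more failures than 3, which is why odd sizes are preferred.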
2. On hadoop0, extract the ZooKeeper tarball into /usr/local (run tar -zxvf zookeeper.tar.gz).
3. Set the environment variables.
Open /etc/profile and add the following:
#set java & hadoop
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=.:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:$JAVA_HOME/bin:$PATH
Note: after editing profile, remember to run source /etc/profile.
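To confirm the new variables took effect after source /etc/profile, you can check that the ZooKeeper bin directory actually appears on PATH. A minimal sketch (the install path is the one assumed throughout this article):

```shell
# Assumed install location from this article.
ZOOKEEPER_HOME=/usr/local/zookeeper
PATH=$ZOOKEEPER_HOME/bin:$PATH

# Check whether $ZOOKEEPER_HOME/bin appears as a PATH component.
case ":$PATH:" in
  *":$ZOOKEEPER_HOME/bin:"*) echo "zookeeper bin is on PATH" ;;
  *)                         echo "zookeeper bin is MISSING from PATH" ;;
esac
```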
4. In the conf directory under the extracted ZooKeeper directory, prepare the configuration file.
Rename the sample: mv zoo_sample.cfg zoo.cfg
5. Edit zoo.cfg (vi zoo.cfg).
Change dataDir=/usr/local/zookeeper/data
Add:
server.0=hadoop0:2888:3888
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
The file should look like this:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.0=hadoop0:2888:3888
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
Note:
server.0=hadoop0:2888:3888
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
These three lines define the machines in the ZooKeeper ensemble (hadoop0, hadoop1, hadoop2), identified as server.0, server.1, and server.2. The two port numbers are not leader/follower markers: 2888 is the port followers use to connect to the leader for synchronization, and 3888 is the port the servers use for leader election. The ensemble elects one leader; the remaining servers are followers.
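When scripting against the ensemble, the member host names can be recovered from these server.N lines. A small sketch with awk (the three lines are inlined here so the snippet is self-contained; on a real node you would read them from /usr/local/zookeeper/conf/zoo.cfg instead):

```shell
# Extract the host name from each server.N=host:2888:3888 line.
# Splitting on both '=' and ':' makes the host field 2.
CFG='server.0=hadoop0:2888:3888
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888'
HOSTS=$(printf '%s\n' "$CFG" | awk -F'[=:]' '/^server\./ {print $2}')
echo "$HOSTS"
```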
6. Create the data directory: mkdir /usr/local/zookeeper/data
7. In the data directory, create a file named myid with the value 0 (0 identifies the ZooKeeper instance on hadoop0).
With this, the configuration on hadoop0 is complete; next, configure hadoop1 and hadoop2.
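Creating the myid file boils down to writing a single digit. A sketch, using a local scratch directory as a stand-in so it can be run anywhere (on the real node the directory is the dataDir from zoo.cfg, /usr/local/zookeeper/data, which typically needs root to write):

```shell
# ZK_DATA stands in for dataDir from zoo.cfg; on hadoop0 this would be
# /usr/local/zookeeper/data.
ZK_DATA=./zk-data-sketch
mkdir -p "$ZK_DATA"

# myid holds only the server's numeric id: 0 on hadoop0.
echo 0 > "$ZK_DATA/myid"
cat "$ZK_DATA/myid"
```

The id written here must match the N in the corresponding server.N line of zoo.cfg, or the server will not join the ensemble correctly.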
8. Copy the zookeeper directory to hadoop1 and hadoop2 (scp -r /usr/local/zookeeper hadoop1:/usr/local, and likewise for hadoop2).
9. Copy the modified /etc/profile to hadoop1 and hadoop2.
(After copying, remember to run source /etc/profile on hadoop1 and hadoop2.)
10. Change the value in myid to 1 on hadoop1 and to 2 on hadoop2.
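Steps 8–10 can be scripted from hadoop0 if passwordless SSH to the other nodes is set up. The sketch below is a dry run: it only prints the command it would issue for each node, so nothing touches the remote hosts until you remove the leading echo.

```shell
# Dry run: print the command that would set myid on each remote node.
# Drop the outer `echo` to actually run it (assumes passwordless SSH
# from hadoop0 to hadoop1 and hadoop2).
for i in 1 2; do
  echo "ssh hadoop$i 'echo $i > /usr/local/zookeeper/data/myid'"
done
```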
11. Start: on each of the three nodes, run zkServer.sh start.
12. Verify: on each of the three nodes, run zkServer.sh status.
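On a healthy ensemble, zkServer.sh status reports "Mode: leader" on exactly one node and "Mode: follower" on the others. If you script a health check across the nodes, the mode can be pulled out of that output; here is a sketch of just the parsing step, fed a canned line so it needs no running ensemble:

```shell
# Classify one line of `zkServer.sh status` output.
mode_of() {
  case "$1" in
    *"Mode: leader"*)   echo leader ;;
    *"Mode: follower"*) echo follower ;;
    *)                  echo unknown ;;
  esac
}

# On a real node you would feed it the live output, e.g.:
#   mode_of "$(zkServer.sh status 2>/dev/null | grep Mode)"
mode_of "Mode: leader"
```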
ZooKeeper shell operations
Start ZooKeeper: zkServer.sh start
Enter the ZooKeeper shell: zkCli.sh