ZooKeeper Cluster Deployment
2018-04-04 18:25
ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby and a key component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.
How it works:
ZooKeeper's consensus is commonly explained in terms of Fast Paxos (the protocol ZooKeeper actually implements is ZAB, a Paxos-like atomic broadcast). Plain Paxos can livelock: when multiple proposers submit proposals concurrently, they can keep preempting one another so that no proposal ever commits. Fast Paxos optimizes this by electing a leader, and only the leader may submit proposals; see the Fast Paxos paper for the full algorithm. Understanding this leader-based consensus is the key to understanding ZooKeeper.
ZooKeeper's basic operating flow:
1. Elect a leader.
2. Synchronize data.
3. Many election algorithms exist, but the criteria a valid election must satisfy are the same.
4. The leader must hold the highest transaction ID (somewhat like root privileges).
5. A majority of the machines in the cluster must respond to and accept the elected leader.
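Step 4 can be illustrated with a toy sketch (a deliberate simplification: a real ZooKeeper election compares epoch, zxid, and server ID tuples, not a single number):

```shell
# Toy sketch: model the "highest ID wins" rule from step 4.
# Not the real ZAB election, which compares (epoch, zxid, myid).
pick_leader() {
    # $@ = candidate IDs; print the largest one
    printf '%s\n' "$@" | sort -n | tail -n 1
}

pick_leader 1 3 2   # prints 3
```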
Cluster deployment:
[root@benet /]# tar xf zookeeper-3.4.8.tar.gz
[root@benet /]# cd zookeeper-3.4.8/conf/
[root@benet conf]# cp zoo_sample.cfg zoo.cfg
[root@benet conf]# vim zoo.cfg
Edit the zoo.cfg configuration file:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.222.130:2888:3888 #the three servers are separate (real) VMs
server.2=192.168.222.131:2888:3888
server.3=192.168.222.129:2889:3889
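In each server.N line, the first port (2888 here) is used by followers to connect to the leader, and the second (3888) is used for leader election; clientPort (2181) is a third, separate port. The timing settings are counted in ticks: assuming the zoo_sample.cfg default of tickTime=2000 ms, the limits above translate to wall-clock timeouts as follows:

```shell
# Sketch: convert tick-based limits to wall-clock milliseconds.
# tickTime=2000 is zoo_sample.cfg's default; adjust for your deployment.
tick_ms=2000
init_limit=10
sync_limit=5

init_timeout_ms=$(( tick_ms * init_limit ))   # initial sync window
sync_timeout_ms=$(( tick_ms * sync_limit ))   # request/ack round trip
echo "initLimit timeout: ${init_timeout_ms} ms"
echo "syncLimit timeout: ${sync_timeout_ms} ms"
```

So followers get 20 s for the initial sync with the leader and 10 s per request/acknowledgement exchange before being dropped.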
[root@benet conf]# mkdir -p /tmp/zookeeper/data
The "1" written to myid must match the 1 in server.1;
on the other two machines: echo "2" > /tmp/zookeeper/data/myid #the 2 in server.2
echo "3" > /tmp/zookeeper/data/myid #the 3 in server.3
[root@benet conf]# echo "1" > /tmp/zookeeper/data/myid
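The myid convention above can be sketched locally (this simulates the three machines with three directories under a temp path; in a real cluster each echo runs on its own machine):

```shell
# Sketch: each node's dataDir needs a myid file whose content matches
# the N in its server.N line. Simulated here with local directories.
base=$(mktemp -d)
for id in 1 2 3; do
    mkdir -p "$base/node$id/data"
    echo "$id" > "$base/node$id/data/myid"
done
cat "$base/node2/data/myid"   # prints 2
```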
[root@benet conf]# ../bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@localhost conf]# ../bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower
[root@localhost conf]# ../bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: ./zoo1.cfg
Mode: leader
Client login:
[root@localhost bin]# ./zkCli.sh -server 192.168.222.131:2181
Connecting to 192.168.222.131:2181
2018-04-04 17:58:46,964 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT
2018-04-04 17:58:46,969 [myid:] - INFO [main:Environment@100] - Client environment:host.name=localhost
2018-04-04 17:58:46,969 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_45
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/zookeeper-3.4.8/bin/../build/classes:/zookeeper-3.4.8/bin/../build/lib/*.jar:/zookeeper-3.4.8/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.8/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.8/bin/../lib/netty-3.7.0.Final.jar:/zookeeper-3.4.8/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.8/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.8/bin/../zookeeper-3.4.8.jar:/zookeeper-3.4.8/bin/../src/java/lib/*.jar:/zookeeper-3.4.8/bin/../conf:
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
2018-04-04 17:58:46,971 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2018-04-04 17:58:46,972 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2018-04-04 17:58:46,972 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/zookeeper-3.4.8/bin
2018-04-04 17:58:46,973 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=192.168.222.131:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@6d8083b3
Welcome to ZooKeeper!
2018-04-04 17:58:47,005 [myid:] - INFO [main-SendThread(192.168.222.131:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server 192.168.222.131/192.168.222.131:2181. Will not attempt to authenticate using SASL (unknown error)
2018-04-04 17:58:47,129 [myid:] - INFO [main-SendThread(192.168.222.131:2181):ClientCnxn$SendThread@876] - Socket connection established to 192.168.222.131/192.168.222.131:2181, initiating session
JLine support is enabled
[zk: 192.168.222.131:2181(CONNECTING) 0] 2018-04-04 17:58:47,251 [myid:] - INFO [main-SendThread(192.168.222.131:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server 192.168.222.131/192.168.222.131:2181, sessionid = 0x26290161cba0000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.222.131:2181(CONNECTED) 0] ls /
[dubbo, project, zookeeper]
[zk: 192.168.222.131:2181(CONNECTED) 1]
Typing an unrecognized command (cd is not a zkCli command) prints the built-in help:
[zk: 192.168.222.131:2181(CONNECTED) 1] cd
ZooKeeper -server host:port cmd args
connect host:port
get path [watch]
ls path [watch]
set path data [version]
rmr path
delquota [-n|-b] path
quit
printwatches on|off
create [-s] [-e] path data acl
stat path [watch]
close
ls2 path [watch]
history
listquota path
setAcl path acl
getAcl path
sync path
redo cmdno
addauth scheme auth
delete path [version]
setquota -n|-b val path
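When scripting health checks against nodes, it can help to extract the role from zkServer.sh status output. A small hypothetical helper (not part of ZooKeeper), assuming the "Mode: ..." line format shown earlier:

```shell
# Sketch: extract the role ("leader"/"follower"/"standalone") from
# zkServer.sh status output. Hypothetical helper, not shipped with ZooKeeper.
zk_mode() {
    # reads status text on stdin, prints the word after "Mode: "
    awk -F': ' '/^Mode:/ {print $2}'
}

printf 'ZooKeeper JMX enabled by default\nMode: follower\n' | zk_mode
# prints: follower
```

In practice you would pipe the real output: `../bin/zkServer.sh status 2>/dev/null | zk_mode`.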
Pseudo-cluster:
This simply means running multiple ZooKeeper instances on a single machine.
Copy zoo.cfg into several uniquely named configuration files:
[root@benet conf]# pwd
/zookeeper-3.4.8/conf
[root@benet conf]# ls
configuration.xsl log4j.properties zoo1.cfg zoo2.cfg zoo.cfg zookeeper.out zoo_sample.cfg
[root@benet conf]#
How to start:
Each zoo.cfg file represents one server instance.
[root@benet /]# sh zookeeper-3.4.8/bin/zkServer.sh start zookeeper-3.4.8/conf/zoo1.cfg
[root@benet /]# sh zookeeper-3.4.8/bin/zkServer.sh start zookeeper-3.4.8/conf/zoo2.cfg
In each configuration file you must change:
the client port, clientPort=XXXXX
and the data directory, dataDir=XXXXX
Below, two machines each run two ZooKeeper instances:
[root@benet /]# vim zookeeper-3.4.8/conf/zoo1.cfg
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper1/data
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance #
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.222.130:2888:3888 #different IPs may reuse the same ports; instances sharing an IP must use different ports
server.2=192.168.222.131:2888:3888
server.3=192.168.222.130:2889:3889
server.4=192.168.222.131:2889:3889
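Generating the per-instance files can be scripted; a sketch that varies only clientPort and dataDir between instances (the ports, paths, and two-server topology here are illustrative):

```shell
# Sketch: generate zoo1.cfg / zoo2.cfg for a pseudo-cluster on one host.
# Only clientPort and dataDir differ; the server.N list is identical.
confdir=$(mktemp -d)
for i in 1 2; do
    cat > "$confdir/zoo$i.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper$i/data
clientPort=$(( 2181 + i ))
server.1=192.168.222.130:2888:3888
server.2=192.168.222.130:2889:3889
EOF
done
grep '^clientPort=' "$confdir"/zoo1.cfg "$confdir"/zoo2.cfg
```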
Start ZooKeeper:
[root@localhost conf]# ../bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@localhost conf]# ../bin/zkServer.sh start ./zoo1.cfg
ZooKeeper JMX enabled by default
Using config: ./zoo1.cfg
Starting zookeeper ... STARTED
Before configuring, remember to disable the firewall (or add matching iptables rules) and SELinux; most deployment errors are caused by the firewall or SELinux being left on.
A ZooKeeper ensemble follows the 2n+1 quorum rule: it stays available only while a majority of its servers is up. Once more than half are down, the cluster loses availability.
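The 2n+1 rule can be made concrete: an ensemble of N servers needs a majority (N/2 rounded down, plus 1) up, so it tolerates at most (N-1)/2 failures:

```shell
# Sketch: quorum arithmetic for a ZooKeeper ensemble of N servers.
majority() { echo $(( $1 / 2 + 1 )); }      # servers that must be up
tolerated() { echo $(( ($1 - 1) / 2 )); }   # failures survivable

for n in 3 4 5; do
    echo "N=$n majority=$(majority $n) tolerates=$(tolerated $n)"
done
```

Note that N=4 tolerates only one failure, the same as N=3, which is why odd ensemble sizes are preferred.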