
Hadoop HA+Zookeeper

2016-05-23 16:36

A quick, no-frills walkthrough.

System environment

Hadoop 2.6.0 (hadoop-2.6.0-cdh5.6.0)

ZooKeeper 3.4.5 (zookeeper-3.4.5-cdh5.6.0)

CentOS 6.5

Hostname    IP               Roles
canbot130   192.168.186.130  active NameNode, DataNode, JournalNode, ZooKeeper, ResourceManager, NodeManager
canbot131   192.168.186.131  standby NameNode, DataNode, JournalNode, ZooKeeper, NodeManager
canbot132   192.168.186.132  DataNode, JournalNode, ZooKeeper, NodeManager

Disable the firewall

chkconfig iptables off


Check on each node that the firewall is stopped:

[hadoop@canbot130 ~]$ sudo service iptables status
"iptables: Firewall is not running."


Adding the hadoop user and configuring SSH

The details of creating the hadoop user on each node and setting up passwordless SSH between the nodes are covered separately.

Zookeeper

Download ZooKeeper

Extract it:

tar -zxvf zookeeper-3.4.5-cdh5.6.0.tar.gz -C /home/hadoop/development/src/


Edit the ZooKeeper configuration

Copy conf/zoo_sample.cfg to a new file named zoo.cfg:

cp ./conf/zoo_sample.cfg ./conf/zoo.cfg


The main settings to change:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/development/src/zookeeper-3.4.5-cdh5.6.0/data
dataLogDir=/home/hadoop/development/src/zookeeper-3.4.5-cdh5.6.0/logs
clientPort=2181   # client connection port
server.1=192.168.186.130:2888:3888
server.2=192.168.186.131:2888:3888
server.3=192.168.186.132:2888:3888
# IP addresses are used here; hostnames work as well


Once zoo.cfg is ready, copy the ZooKeeper directory to every node, then create the data and logs directories and the myid file on each one (a sketch covering all three nodes follows the example below).

# create the myid file (on canbot130, whose server id is 1)
echo 1 > /home/hadoop/development/src/zookeeper-3.4.5-cdh5.6.0/data/myid
# on 192.168.186.131 the myid is "2", matching server.2=192.168.186.131:2888:3888 in zoo.cfg
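
A minimal sketch of pushing the configured directory to the other nodes and writing the matching myid on each one; the paths and hostnames follow this article's layout, and passwordless SSH for the hadoop user is assumed:

ZK_HOME=/home/hadoop/development/src/zookeeper-3.4.5-cdh5.6.0
# copy the configured ZooKeeper directory to the other two nodes
scp -r $ZK_HOME canbot131:/home/hadoop/development/src/
scp -r $ZK_HOME canbot132:/home/hadoop/development/src/
# create the data/logs directories and write the server id matching zoo.cfg on every node
ssh canbot130 "mkdir -p $ZK_HOME/data $ZK_HOME/logs && echo 1 > $ZK_HOME/data/myid"
ssh canbot131 "mkdir -p $ZK_HOME/data $ZK_HOME/logs && echo 2 > $ZK_HOME/data/myid"
ssh canbot132 "mkdir -p $ZK_HOME/data $ZK_HOME/logs && echo 3 > $ZK_HOME/data/myid"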


Configure Hadoop HA

The main files to edit are core-site.xml and hdfs-site.xml.

core-site.xml

<configuration>
  <property>
    <!-- HDFS entry point. With HA there are two NameNodes, so this must be the
         logical nameservice name rather than a single NameNode address -->
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <!-- ZooKeeper quorum used for automatic failover -->
    <name>ha.zookeeper.quorum</name>
    <value>canbot130:2181,canbot131:2181,canbot132:2181</value>
  </property>
  <property>
    <!-- base directory for Hadoop's temporary files -->
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/development/src/hadoop-2.6.0-cdh5.6.0/tmp</value>
  </property>
</configuration>


hdfs-site.xml

<configuration>
  <!-- nameservice ID -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>

  <!-- NameNode IDs nn1,nn2 under the nameservice mycluster -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>

  <!-- RPC addresses of nn1 and nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>canbot130:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>canbot131:8020</value>
  </property>

  <!-- web UI addresses of nn1 and nn2 -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>canbot130:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>canbot131:50070</value>
  </property>

  <!-- JournalNode quorum that stores the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://canbot130:8485;canbot131:8485;canbot132:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/development/src/hadoop-2.6.0-cdh5.6.0/tmp/dfs/journalnode</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/development/src/hadoop-2.6.0-cdh5.6.0/tmp/dfs/name</value>
  </property>

  <!-- DataNode storage directory; the property is dfs.datanode.data.dir -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/development/src/hadoop-2.6.0-cdh5.6.0/tmp/dfs/data</value>
  </property>

  <!-- proxy provider clients use to find the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- fence the previous active NameNode over SSH during failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>

  <!-- enable automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.replication.max</name>
    <value>32767</value>
  </property>

</configuration>


mapred-site.xml

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

</configuration>


yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>canbot130</value>
  </property>
</configuration>
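
These files also need to be present on canbot131 and canbot132. A minimal sketch of pushing the edited files to the other two nodes, assuming Hadoop is installed under the same path everywhere (the path follows this article's layout):

HADOOP_CONF=/home/hadoop/development/src/hadoop-2.6.0-cdh5.6.0/etc/hadoop
# copy the four edited configuration files to the other nodes
for host in canbot131 canbot132; do
  scp $HADOOP_CONF/core-site.xml $HADOOP_CONF/hdfs-site.xml \
      $HADOOP_CONF/mapred-site.xml $HADOOP_CONF/yarn-site.xml \
      $host:$HADOOP_CONF/
done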


Start ZooKeeper

Start it on canbot130, canbot131 and canbot132 in turn:

zkServer.sh start
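
Before moving on it is worth confirming that the quorum formed. On each node, zkServer.sh status should report one leader and two followers:

zkServer.sh status
# expected on one node:    Mode: leader
# expected on the others:  Mode: follower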


On canbot130, format the failover controller state in ZooKeeper:

hdfs zkfc -formatZK


To verify that the format succeeded, open the ZooKeeper CLI; if a hadoop-ha znode now exists, it worked.

zkCli.sh

[zk: localhost:2181(CONNECTED) 0] ls /
[hadoop-ha, zookeeper]
[zk: localhost:2181(CONNECTED) 1]

Start the JournalNode cluster

Run the following on canbot130, canbot131 and canbot132 in turn:

hadoop-daemon.sh start journalnode
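
A quick way to confirm the daemons came up is jps on each of the three nodes; the output should include a JournalNode process:

jps
# expected to include a line such as
# 3217 JournalNode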


Format one NameNode of the cluster (here canbot130):

hdfs namenode -format


Start the NameNode that was just formatted:

hadoop-daemon.sh start namenode


Once it is running, browse to http://canbot130:50070/dfshealth.jsp to see its status.

On canbot131, copy the metadata over from canbot130 by running:

hdfs namenode -bootstrapStandby


Start the NameNode on canbot131:

hadoop-daemon.sh start namenode
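
At this point neither NameNode has been made active yet, because automatic failover is enabled and the ZKFCs have not been started. A quick check, run from any node with the client configuration in place:

hdfs haadmin -getServiceState nn1   # expected: standby
hdfs haadmin -getServiceState nn2   # expected: standby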


Start YARN on canbot130:

start-yarn.sh
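
In addition to the web UI mentioned below, the ResourceManager can be queried from the command line to confirm that the NodeManagers registered:

yarn node -list
# expected: three RUNNING nodes, one per machine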


Then browse to http://canbot130:8088/cluster to see the ResourceManager UI.

Start the ZKFC

Start the ZKFailoverController on canbot130 and canbot131 in turn with the command below. After both are running, refresh the 50070 pages again: canbot130 should now show as active while canbot131 remains standby.

hadoop-daemon.sh start zkfc
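
With both ZKFCs running, automatic failover can be exercised. A minimal sketch of one way to test it, assuming the cluster is otherwise idle:

# check which NameNode is active (nn1 = canbot130, nn2 = canbot131)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# on canbot130, stop the active NameNode
hadoop-daemon.sh stop namenode
# after a few seconds nn2 should report "active"
hdfs haadmin -getServiceState nn2
# restart the stopped NameNode on canbot130; it rejoins as standby
hadoop-daemon.sh start namenode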