
Adding a JournalNode service node in Ambari

The manual (non-Ambari) way:

Perform the following steps as the hadoop user:

1. Edit etc/hadoop/hdfs-site.xml and add the new JournalNode's address and port to the dfs.namenode.shared.edits.dir property (see the example after this list).

2. Distribute etc/hadoop/hdfs-site.xml to every server in the cluster.

3. Copy the data directory from an existing JournalNode to the new JournalNode server.

4. On the new JournalNode server, run hadoop-daemon.sh start journalnode to start the JournalNode.

5. On the standby NameNode server, run hadoop-daemon.sh stop namenode to stop the NameNode service.

6. On the standby NameNode server, run hadoop-daemon.sh start namenode to start the NameNode service. You should now see the additional JournalNode on the web UI.

7. Run hdfs haadmin -failover nn1 nn2 to switch the active NameNode.

8. On the previously active NameNode, run the following commands to restart the NameNode:

    hadoop-daemon.sh stop namenode  

    hadoop-daemon.sh start namenode  
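A minimal sketch of the change in step 1, assuming three existing JournalNodes and a new node called jn4.example.com on a nameservice called mycluster (all names here are hypothetical, substitute your own):

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485;jn4.example.com:8485/mycluster</value>
</property>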

If you installed the Hadoop cluster by hand, the method above is enough. If the cluster was installed with Ambari and you also want the manually added JournalNode to show up in the Ambari UI, use the method below.

The Ambari way:

Ambari sets up 3 JournalNodes by default. If one of them fails and a replacement has to be added, the Ambari UI offers no option for this, so it can only be done with other commands.

An earlier article I saw downgrades HA and then rebuilds it; that is far too risky and the procedure is complicated. I found another approach online and am sharing it for anyone who needs it. I also hope a future Ambari release adds the ability to add JournalNodes.

A word of caution before you start: if you are not familiar with these Ambari operations at all, do not attempt this, or the Ambari management UI may end up in an abnormal state and become unmanageable.

Practice in a test environment first, and only move on to production once you have verified the procedure.

Adding the JournalNode

1. Assign the role:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X POST http://localhost:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE
In my environment:

curl -u admin:admin -H "X-Requested-By: Ambari" -X POST http://10.11.32.50:8080/api/v1/clusters/testhadoop/hosts/testserver2.bj/host_components/JOURNALNODE
Check the result:

curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://10.11.32.50:8080/api/v1/clusters/testhadoop/hosts/testserver2.bj/host_components/JOURNALNODE
2. Install the JournalNode:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Install JournalNode"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://10.11.32.50:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_JN_NODE/host_components/JOURNALNODE
In my environment (the whole command is a single line):

curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Install JournalNode"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://10.11.32.50:8080/api/v1/clusters/testhadoop/hosts/testserver2.bj/host_components/JOURNALNODE
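The PUT returns a request resource that you can poll to watch the install progress (a sketch; the request id 25 below is hypothetical, use the id returned by the PUT):

curl -u admin:admin -H 'X-Requested-By: Ambari' -X GET http://10.11.32.50:8080/api/v1/clusters/testhadoop/requests/25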

3. Update the HDFS configuration

Log in to the Ambari web UI and modify the HDFS configuration. Search for dfs.namenode.shared.edits.dir and add the new JournalNode. Make sure you don't break the format of the JournalNode list. The following is the format of a typical 3-JournalNode shared edits definition.

qjournal://my-jn-node-1.host.com:8485;my-jn-node-2.host.com:8485;my-jn-node-3.host.com:8485/MyLAB

In my environment:

qjournal://testserver4.bj:8485;testserver1.bj:8485;testserver2.bj:8485;testserver3.bj:8485/testcluster

4. Create the JournalNode directory

Time to create the required directory structure on the new JournalNode. You have to create this directory structure based on your cluster installation. If unsure, you can find this value in the $HADOOP_CONF/hdfs-site.xml file; look for the parameter value of dfs.journalnode.edits.dir. In my case, it happens to be /hadoop/qjournal/namenode/.

In my environment:

dfs.journalnode.edits.dir /hadoop/hdfs/journal

ll -d /hadoop/hdfs/journal

drwxr-xr-x 3 hdfs hadoop 4096 Feb  2 10:56 /hadoop/hdfs/journal

Make sure you add the HDFS nameservice directory. You can find this value in the $HADOOP_CONF/hdfs-site.xml file; look for the dfs.nameservices parameter. In my example I have "MyLab", so I will create the directory structure as /hadoop/qjournal/namenode/MyLab.

In my environment:

dfs.nameservices testcluster

ll -d /hadoop/hdfs/journal/testcluster/

drwxr-xr-x 3 hdfs hadoop 4096 Mar 16 18:40 /hadoop/hdfs/journal/testcluster/

mkdir -p /hadoop/hdfs/journal/testcluster/

chown hdfs:hadoop -R /hadoop/hdfs/journal/

5. Sync the data

Copy or sync the 'current' directory under the shared edits location from an existing JournalNode. Make sure the ownership of all the newly created directories and synced files is correct.

In my environment:

scp -r /hadoop/hdfs/journal/testcluster/* 10.11.32.51:/hadoop/hdfs/journal/testcluster/

chown hdfs:hadoop -R /hadoop/hdfs/journal/
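As an alternative to scp (my own sketch, not part of the original procedure), rsync -a preserves permissions, timestamps and, when run as root on both ends, file ownership, which can save the extra chown:

rsync -a /hadoop/hdfs/journal/testcluster/ 10.11.32.51:/hadoop/hdfs/journal/testcluster/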

The path where Ambari stores the JournalNode files is /hadoop/hdfs/journal
ll /hadoop/hdfs/journal/testcluster/

-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 11:39 edits_inprogress_0000000000000014177

-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 11:39 edits_inprogress_0000000000000021001

-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 13:44 edits_inprogress_0000000000000021215

-rw-r--r--. 1 hdfs hadoop 1048576 Mar 27 14:30 edits_inprogress_0000000000000021306

When you list the new node you may see four edits_inprogress files. The older ones are the files copied over with scp; after startup two new edits_inprogress files are created. The old ones are no longer needed and can be deleted, or you can simply keep them.

6. Start the service and check the logs

Start the JournalNode service from the Ambari web UI.
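If you prefer the REST API, a sketch that reuses the same pattern as the install step (same credentials, host, and cluster assumed) is to set the component state to STARTED:

curl -u admin:admin -H 'X-Requested-By: Ambari' -X PUT -d '{"RequestInfo":{"context":"Start JournalNode"},"Body":{"HostRoles":{"state":"STARTED"}}}' http://10.11.32.50:8080/api/v1/clusters/testhadoop/hosts/testserver2.bj/host_components/JOURNALNODE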

2016-03-17 11:48:27,644 INFO  server.Journal (Journal.java:scanStorageForLatestEdits(187)) - Scanning storage FileJournalManager(root=/hadoop/hdfs/journal/testcluster)

2016-03-17 11:48:27,791 INFO  server.Journal (Journal.java:scanStorageForLatestEdits(193)) - Latest log is EditLogFile (file=/hadoop/hdfs/journal/testcluster/current/edits_inprogress_0000000000000010224,first=0000000000000010224,last=0000000000000010232,inProgress=true,hasCorruptHeader=false)

2016-03-17 11:49:37,304 INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(133)) - Finalizing edits file /hadoop/hdfs/journal/testcluster/current/edits_inprogress_0000000000000010238 -> /hadoop/hdfs/journal/testcluster/current/edits_0000000000000010238-0000000000000010251
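To confirm the new JournalNode is actually receiving edits (my own quick check, not from the original article), list the newest files in its edits directory and watch for fresh edits segments appearing:

ls -lt /hadoop/hdfs/journal/testcluster/current/ | head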

Reference:
http://gaganonthenet.com/2015/09/14/add-journalnode-to-ambari-managed-hadoop-cluster/