
【Hadoop】Hadoop starts normally but cannot be shut down properly

2013-10-21 14:29
On a cluster with one master and two slave nodes, Hadoop formats normally:
hadoop@hadoop1:~/hadoop/conf$ hadoop namenode -format

13/10/21 12:02:15 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = hadoop1/192.168.1.148

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 1.2.1

STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013

STARTUP_MSG: java = 1.6.0_43

************************************************************/

13/10/21 12:02:15 INFO util.GSet: Computing capacity for map BlocksMap

13/10/21 12:02:15 INFO util.GSet: VM type = 64-bit

13/10/21 12:02:15 INFO util.GSet: 2.0% max memory = 932118528

13/10/21 12:02:15 INFO util.GSet: capacity = 2^21 = 2097152 entries

13/10/21 12:02:15 INFO util.GSet: recommended=2097152, actual=2097152

13/10/21 12:02:15 INFO namenode.FSNamesystem: fsOwner=hadoop

13/10/21 12:02:15 INFO namenode.FSNamesystem: supergroup=supergroup

13/10/21 12:02:15 INFO namenode.FSNamesystem: isPermissionEnabled=true

13/10/21 12:02:15 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100

13/10/21 12:02:15 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)

13/10/21 12:02:15 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0

13/10/21 12:02:15 INFO namenode.NameNode: Caching file names occuring more than 10 times

13/10/21 12:02:15 INFO common.Storage: Image file /home/hadoop/hdfs_tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.

13/10/21 12:02:15 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/hadoop/hdfs_tmp/dfs/name/current/edits

13/10/21 12:02:15 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/hadoop/hdfs_tmp/dfs/name/current/edits

13/10/21 12:02:16 INFO common.Storage: Storage directory /home/hadoop/hdfs_tmp/dfs/name has been successfully formatted.

13/10/21 12:02:16 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.1.148

************************************************************/
It also starts normally:
hadoop@hadoop1:~/hbase/logs$ start-all.sh

starting namenode, logging to /home/hadoop/hadoop/libexec/../logs/hadoop-hadoop-namenode-hadoop1.out

hadoop3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out

hadoop2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out

hadoop1: starting secondarynamenode, logging to /home/hadoop/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-hadoop1.out

starting jobtracker, logging to /home/hadoop/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-hadoop1.out

hadoop3: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop3.out

hadoop2: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop2.out

But it cannot be stopped properly:
hadoop@hadoop1: stop-all.sh

stopping jobtracker

hadoop3: stopping tasktracker

hadoop2: stopping tasktracker

stopping namenode

hadoop3: no datanode to stop

hadoop2: no datanode to stop

hadoop1: stopping secondarynamenode
Checking the Java processes on each node with jps, I found that the worker nodes hadoop2 and hadoop3 had no DataNode process at all.
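As a quick sanity check (a sketch, assuming passwordless ssh between the nodes, which start-all.sh already relies on), the daemons on every node can be listed in one go; on a healthy cluster the slaves should show DataNode and TaskTracker, and the master should show NameNode, SecondaryNameNode and JobTracker:

# list the Java daemons on every node; a healthy slave shows DataNode and
# TaskTracker, the master shows NameNode, SecondaryNameNode and JobTracker
# (jps must be on the PATH of the remote non-interactive shell)
for host in hadoop1 hadoop2 hadoop3; do
    echo "== $host =="
    ssh "$host" jps
done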
The cluster had been running hadoop 1.1.2 with hbase 0.94.8 and worked fine. After switching to hadoop 1.2.1 and hbase 0.94.12, with only the installation packages replaced and the files under conf kept essentially the same as before, this problem appeared.

After some digging, the problem turned out to be dfs.name.dir. When re-formatting the distributed file system, the paths configured as dfs.name.dir on the NameNode and the DataNodes must first be deleted or moved out of the way, otherwise Hadoop will not work properly. According to one source, this is meant to keep a format from silently wiping out existing, still-useful HDFS data, so the path that dfs.name.dir points to should not exist before formatting.
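As an aside (a sketch, not the exact commands I ran): a quick way to confirm where dfs.name.dir actually points on a node is to check the config file and the directory itself; when the property is not set explicitly, Hadoop 1.x defaults it to ${hadoop.tmp.dir}/dfs/name, which matches the /home/hadoop/hdfs_tmp/dfs/name path seen in the format log above.

# show the explicit dfs.name.dir setting, if any, and the current storage contents
grep -A 1 dfs.name.dir /home/hadoop/hadoop/conf/hdfs-site.xml
ls /home/hadoop/hdfs_tmp/dfs/name/current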
After deleting the hdfs_tmp directory under /home/hadoop/hdfs_tmp/ on all three nodes in my environment and re-formatting, everything finally ran normally.
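For completeness, the recovery boiled down to roughly the following sequence (a sketch, assuming the same /home/hadoop/hdfs_tmp path on every node; move the directory aside instead of deleting it if the data might still be needed):

# stop everything, remove the old storage directory on all three nodes,
# re-format the namenode, then start the cluster again
stop-all.sh
for host in hadoop1 hadoop2 hadoop3; do
    ssh "$host" "rm -rf /home/hadoop/hdfs_tmp"
done
hadoop namenode -format
start-all.sh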