Analysis of the "Incompatible clusterIDs" Error When Starting Hadoop HDFS
2014-06-24 16:20
The "Incompatible clusterIDs" error occurs because the DataNode's data directory was not cleared before running "hdfs namenode -format". Many articles and posts online say the directory to clear is the tmp directory; that advice is not wrong for their setups, but on this Hadoop 2.4.0 cluster it is the data directory, as the path "/data/hadoop/hadoop-2.4.0/data" in the log below already shows. So do not follow online fixes blindly; read the log carefully when troubleshooting.

From the above, the fix is to empty the data directory on every DataNode, taking care not to delete the data directory itself. The directory is specified by the "dfs.datanode.data.dir" property in hdfs-site.xml. A cleanup sketch follows the log excerpt below.

The DataNode log at the time of the failure:

2014-04-17 19:30:33,075 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/hadoop/hadoop-2.4.0/data/in_use.lock acquired by nodename 28326@localhost
2014-04-17 19:30:33,078 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to /172.25.40.171:9001
java.io.IOException: Incompatible clusterIDs in /data/hadoop/hadoop-2.4.0/data: namenode clusterID = CID-50401d89-a33e-47bf-9d14-914d8f1c4862; datanode clusterID = CID-153d6fcb-d037-4156-b63a-10d6be224091
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:472)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
        at java.lang.Thread.run(Thread.java:744)
2014-04-17 19:30:33,081 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to /172.25.40.171:9001
2014-04-17 19:30:33,184 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
java.lang.Exception: trace
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
        at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
        at java.lang.Thread.run(Thread.java:744)
2014-04-17 19:30:33,184 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2014-04-17 19:30:33,184 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
java.lang.Exception: trace
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:861)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
        at java.lang.Thread.run(Thread.java:744)
2014-04-17 19:30:35,185 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-04-17 19:30:35,187 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-04-17 19:30:35,189 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1
************************************************************/
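As a concrete illustration, here is a minimal shell sketch of the cleanup. The path /data/hadoop/hadoop-2.4.0/data is taken from the log above; substitute whatever your dfs.datanode.data.dir points to, and note that $HADOOP_HOME and the hadoop-daemon.sh stop/start steps are assumptions about a standard Hadoop 2.x layout, not part of the original article. This wipes all block data on the DataNode, which is only acceptable right after a fresh "hdfs namenode -format".

    # Run on every DataNode after "hdfs namenode -format" has been executed on the NameNode.
    # WARNING: this removes all block data stored on this DataNode.

    # Path comes from the log above; adjust it to your dfs.datanode.data.dir value.
    DATA_DIR=/data/hadoop/hadoop-2.4.0/data

    # Stop the DataNode first (standard Hadoop 2.x sbin script assumed).
    $HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode

    # Empty the directory but keep the directory itself, as noted above.
    rm -rf "$DATA_DIR"/*

    # Restart the DataNode; it will re-register and pick up the NameNode's new clusterID.
    $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode

After the restart, the DataNode writes a new VERSION file under the data directory whose clusterID matches the freshly formatted NameNode, and the error no longer appears.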