
Pitfalls when testing MapReduce programs on a Hadoop cluster

2017-11-30 14:12 · 316 views



Error 1: The datanode fails to start because the namenode and datanode clusterIDs do not match

Cause: the datanode log shows

java.io.IOException: Incompatible clusterIDs in /opt/hadoop-2.7.3/tmp/dfs/data: namenode clusterID = CID-add6cc33-56f0-4d7c-8484-60740bf85c7c; datanode clusterID = CID-d616c28f-a2f3-4196-8b55-bfc77d08e678

This typically happens when the namenode has been reformatted while a datanode keeps its old data directory.

Fix: manually change the clusterID in the datanode's VERSION file (/opt/hadoop-2.7.3/tmp/dfs/data/current/VERSION) to match the clusterID in the namenode's VERSION file (/opt/hadoop-2.7.3/tmp/dfs/name/current/VERSION), then restart the cluster.
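The manual edit can be scripted. This is a sketch assuming the directory layout from the error message above (adjust NAME_VERSION and DATA_VERSION to your dfs.namenode.name.dir and dfs.datanode.data.dir), to be run on the affected datanode while the cluster is stopped:

```shell
# Copy the namenode's clusterID into the datanode's VERSION file.
# Paths below assume the layout shown in the log; override via env vars.
NAME_VERSION=${NAME_VERSION:-/opt/hadoop-2.7.3/tmp/dfs/name/current/VERSION}
DATA_VERSION=${DATA_VERSION:-/opt/hadoop-2.7.3/tmp/dfs/data/current/VERSION}

# Extract the namenode's clusterID (e.g. CID-add6cc33-...)
CID=$(grep '^clusterID=' "$NAME_VERSION" 2>/dev/null | cut -d= -f2)

# Rewrite the datanode's clusterID in place, only if one was found
if [ -n "$CID" ]; then
  sed -i "s/^clusterID=.*/clusterID=${CID}/" "$DATA_VERSION"
fi
```

After the edit, `grep clusterID` on both VERSION files should print the same value.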


Error 2: Input path does not exist: hdfs://master:9000/user/root/input

Cause: following a tutorial, the job was run as

hadoop jar /opt/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar wordcount input output

Without a leading slash, input and output are resolved relative to the user's HDFS home directory, so Hadoop looks for /user/root/input. But wordcount.txt had been uploaded to /input, not to /user/root/input.

Fix: use the absolute paths /input and /output in the command:

hadoop jar /opt/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar wordcount /input /output
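The underlying rule: HDFS treats a path with a leading slash as absolute, and anything else as relative to the user's home directory, /user/<username>. The helper below is a hypothetical illustration of that rule (not a Hadoop API); the HADOOP_USER_NAME default of root matches the error message above:

```shell
# Illustrates HDFS path resolution: a leading slash means absolute; anything
# else lands under the user's HDFS home directory, /user/<username>.
# resolve_hdfs_path is a hypothetical helper, not part of Hadoop.
resolve_hdfs_path() {
  case "$1" in
    /*) echo "$1" ;;                                  # absolute: used as-is
    *)  echo "/user/${HADOOP_USER_NAME:-root}/$1" ;;  # relative: under home
  esac
}

resolve_hdfs_path input    # /user/root/input -- where Error 2 was looking
resolve_hdfs_path /input   # /input           -- where wordcount.txt actually is
```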

Error 3: MapReduce hangs at "Running job: job_1512019590518_0003"

Cause: the input given to the test job was a directory rather than a file:

hadoop jar /opt/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar wordcount /input /output

The job then sat waiting with "waiting for AM container to be allocated".

Fix: pass the file itself as the input:

hadoop jar /opt/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar wordcount /input/wordcount.txt /output
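A quick pre-submit check that the argument is a file, not a directory, catches this before the job hangs. The sketch below uses a local `test -d` as a stand-in for `hdfs dfs -test -d <path>` on a real cluster; `check_wordcount_input` is a hypothetical helper, not part of Hadoop:

```shell
# Refuse to submit when the input path is a directory; the hang above went
# away once a single file was passed. Uses local test -d as a stand-in for
# `hdfs dfs -test -d`.
check_wordcount_input() {
  if [ -d "$1" ]; then
    echo "directory: pass a file (e.g. $1/wordcount.txt) instead"
    return 1
  fi
  echo "file: ok to submit"
}
```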

If this helped you, give it a like b( ̄▽ ̄)d
Tags: mapreduce, testing, stuck job