
Hadoop pseudo-distributed cluster: could only be replicated to 0 nodes, instead of 1

2014-05-03 19:08
Reposted from: http://blog.csdn.net/weijonathan/article/details/9162619

could only be replicated to 0 nodes, instead of 1


2013-06-24 11:39:32,383 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:zqgame cause:java.io.IOException: File /data/zqhadoop/data/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2013-06-24 11:39:32,384 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000, call addBlock(/data/zqhadoop/data/mapred/system/jobtracker.info, DFSClient_NONMAPREDUCE_-344066732_1, null) from 192.168.216.133:59866: error: java.io.IOException: File /data/zqhadoop/data/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /data/zqhadoop/data/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

Here the NameNode is looking for DataNodes that can accept a replica of the block, but it finds none available.

The problem lies in /etc/hosts and in $HADOOP_HOME/conf/mapred-site.xml and core-site.xml.
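
Before changing anything, it is worth confirming that no DataNode has actually registered with the NameNode. A minimal check, assuming a Hadoop 1.x installation run from $HADOOP_HOME:

$ jps                           # DataNode should be listed alongside NameNode, SecondaryNameNode, JobTracker and TaskTracker
$ bin/hadoop dfsadmin -report   # "Datanodes available: 0" confirms that no DataNode has registered

If the DataNode process is missing from jps, its log under $HADOOP_HOME/logs usually says why; a common culprit in pseudo-distributed setups is a namespaceID mismatch after reformatting the NameNode.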

Solution:

1. Edit $HADOOP_HOME/conf/mapred-site.xml and core-site.xml, replacing the hostname with the IP address

core-site.xml


zqgame@master:~/hadoop-1.2.0/bin$ more ../conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.216.133:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/zqhadoop/data</value>
  </property>
</configuration>

mapred-site.xml


zqgame@master:~/hadoop-1.2.0/bin$ more ../conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.216.133:9001</value>
  </property>
</configuration>

2. Edit /etc/hosts and add a binding for the machine's own IP address


zqgame@master:~/hadoop-1.2.0/bin$ more /etc/hosts
127.0.0.1 localhost
127.0.1.1 master
192.168.216.133 localhost.localdomain localhost
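
As a sanity check, the address used in fs.default.name should resolve and its port should be reachable from the node itself, for example:

$ ping -c 1 192.168.216.133     # the NameNode address configured above
$ telnet 192.168.216.133 9000   # should connect if the NameNode is listening and not blocked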

3. Disable the firewall
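
Which command disables the firewall depends on the distribution; the ones below are the usual defaults. Once all three changes are in place, restart Hadoop so the daemons pick up the new settings (again assuming a Hadoop 1.x layout under $HADOOP_HOME):

$ sudo ufw disable              # Ubuntu/Debian
$ sudo service iptables stop    # CentOS 6 / RHEL 6
$ bin/stop-all.sh               # stop NameNode, DataNode, JobTracker, TaskTracker
$ bin/start-all.sh              # start everything with the new configuration
$ bin/hadoop dfsadmin -report   # "Datanodes available: 1" means the DataNode has registered and replication can succeed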