Hadoop cluster not showing DataNodes (Datanode denied communication with namenode because hostname cannot be resolved)
2018-03-08 02:35
Error: org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode because hostname cannot be resolved (ip=172.31.34.70, hostname=172.31.34.70): DatanodeRegistration(0.0.0.0, datanodeUuid=e218bf57-41b5-46a6-8343-ced968272b9a, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-28174aeb-5c3e-4500-ba25-1d9ca001538f;nsid=122644145;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:904)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5085)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1140)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:27329)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
Analysis: This error is thrown from org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
public void registerDatanode(DatanodeRegistration nodeReg)
    throws DisallowedDatanodeException, UnresolvedTopologyException {
  InetAddress dnAddress = Server.getRemoteIp();
  if (dnAddress != null) {
    // Mostly called inside an RPC, update ip and peer hostname
    String hostname = dnAddress.getHostName();
    String ip = dnAddress.getHostAddress();
    if (checkIpHostnameInRegistration && !isNameResolved(dnAddress)) {
      // Reject registration of unresolved datanode to prevent performance
      // impact of repetitive DNS lookups later.
      final String message = "hostname cannot be resolved (ip="
          + ip + ", hostname=" + hostname + ")";
      LOG.warn("Unresolved datanode registration: " + message);
      throw new DisallowedDatanodeException(nodeReg, message);
    }
    ...
  }
  ...
}
checkIpHostnameInRegistration is controlled by the hdfs-site.xml property "dfs.namenode.datanode.registration.ip-hostname-check", and isNameResolved(dnAddress) depends on reverse DNS and /etc/hosts.
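The failure mode hinges on a quirk of java.net.InetAddress: when reverse DNS and /etc/hosts have no entry for an address, getHostName() falls back to returning the literal IP string, so the NameNode treats hostname == ip as "unresolved". A minimal sketch of that check (the real isNameResolved() in DatanodeManager is slightly more involved, and the hostname dn1 below is purely illustrative):

```java
import java.net.InetAddress;

public class ResolveCheck {
    // Sketch of the NameNode's unresolved-address test: if no name could be
    // found for the address, getHostName() returns the IP literal itself,
    // so a hostname equal to the IP string means "unresolved".
    static boolean isNameResolved(InetAddress addr) {
        return !addr.getHostName().equals(addr.getHostAddress());
    }

    public static void main(String[] args) throws Exception {
        byte[] raw = {(byte) 172, 31, 34, 70};

        // A name was supplied up front (no lookup performed): resolved.
        InetAddress named = InetAddress.getByAddress("dn1", raw);
        System.out.println(isNameResolved(named));   // true

        // Hostname is just the IP literal, which is exactly what getHostName()
        // yields when reverse DNS / /etc/hosts fail: registration rejected.
        InetAddress bare = InetAddress.getByAddress("172.31.34.70", raw);
        System.out.println(isNameResolved(bare));    // false
    }
}
```

This is why adding the datanode to /etc/hosts on the NameNode (or fixing reverse DNS) makes the registration succeed: getHostName() then returns a real hostname instead of the IP string.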
Resolution: Add the datanode's ip/hostname mapping to /etc/hosts on the namenode, or enable reverse DNS (make sure you can obtain the hostname with the command: host <ip>), or add the following property to hdfs-site.xml to disable the check:
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
Add the datanode host entry to the hosts configuration.
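For the /etc/hosts route, the entry on the namenode might look like the following (dn1 is a hypothetical datanode hostname; substitute your own):

172.31.34.70  dn1

You can then confirm the reverse mapping works with `getent hosts 172.31.34.70` or `host 172.31.34.70` before restarting the datanode. Note that disabling dfs.namenode.datanode.registration.ip-hostname-check merely skips the safeguard; fixing name resolution is the cleaner option, since the check exists to avoid repeated failed DNS lookups later.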