
ipc.Client: Retrying connect to server, failed on socket timeout exception (Resolved)

2018-02-08 16:16
While formatting the NameNode, the exception below was thrown. From the message we can make a first guess that the problem lies with ipc.Client: the NameNode cannot reach the JournalNode hosts in the cluster.

18/02/08 15:47:47 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/02/08 15:47:47 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/02/08 15:47:48 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/02/08 15:47:48 INFO util.GSet: VM type       = 64-bit
18/02/08 15:47:48 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
18/02/08 15:47:48 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/02/08 15:47:50 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:50 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:50 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:51 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:51 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:51 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:52 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:52 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:52 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:53 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:53 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:53 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:54 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:54 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:54 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:55 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:55 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:55 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:56 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:56 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:56 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:57 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:57 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:57 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:58 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:58 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:58 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:59 INFO ipc.Client: Retrying connect to server: hadoop2/192.168.1.112:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:59 WARN namenode.NameNode: Encountered exception during format:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.1.112:8485: No Route to Host from  hadoop1/192.168.1.111 to hadoop2:8485 failed on socket timeout exception: java.net.NoRouteToHostException: 没有到主机的路由; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1011)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1457)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
18/02/08 15:47:59 INFO ipc.Client: Retrying connect to server: hadoop4/192.168.1.114:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:59 INFO ipc.Client: Retrying connect to server: hadoop3/192.168.1.113:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/02/08 15:47:59 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
192.168.1.112:8485: No Route to Host from  hadoop1/192.168.1.111 to hadoop2:8485 failed on socket timeout exception: java.net.NoRouteToHostException: 没有到主机的路由; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1011)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1457)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
18/02/08 15:47:59 INFO util.ExitUtil: Exiting with status 1
18/02/08 15:47:59 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.1.111
************************************************************/
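Before touching the configuration, it is worth confirming that the JournalNode RPC port really is unreachable from the NameNode host. Here is a minimal check run on hadoop1, assuming nc (netcat) is installed (telnet works just as well); the hostnames and port 8485 come from the log above:

#Test whether each JournalNode's RPC port is reachable from hadoop1
nc -zv hadoop2 8485
nc -zv hadoop3 8485
nc -zv hadoop4 8485

If the check reports "No route to host" for a node, the packets are being dropped on the way, which usually points to a firewall or routing problem rather than a stopped JournalNode process.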


On inspection, it turned out that I had forgotten to disable the firewall on the cluster nodes, which blocked the connections to the JournalNodes. The commands to disable the firewall are:

#Check firewall status
service iptables status
#Stop the firewall
service iptables stop
#Check whether the firewall starts on boot
chkconfig iptables --list
#Disable the firewall on boot
chkconfig iptables off
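The service and chkconfig commands above apply to CentOS/RHEL 6. If the cluster nodes run CentOS/RHEL 7 or later (an assumption, since the original setup is not stated), the firewall is managed by firewalld and the equivalent commands would be:

#Check firewalld status
systemctl status firewalld
#Stop firewalld
systemctl stop firewalld
#Prevent firewalld from starting on boot
systemctl disable firewalld

Alternatively, instead of disabling the firewall entirely, you could open only the JournalNode port, e.g. firewall-cmd --permanent --add-port=8485/tcp followed by firewall-cmd --reload. Once port 8485 is no longer blocked on every JournalNode host, re-run hdfs namenode -format on hadoop1 and the format should complete.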