Configuring a Hadoop cluster on CentOS 7.0: Slave1 fails with "failed on socket timeout exception: java.net.NoRouteToHostException"
2014-09-29 10:43
Hadoop version: 2.5.0
While setting up the Hadoop cluster, I ran ./start-all.sh from /usr/hadoop/sbin/ on the Master. On the Master host:
[hadoop@Master sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Master.Hadoop]
Master.Hadoop: starting namenode, logging to /usr/hadoop/logs/hadoop-hadoop-namenode-Master.Hadoop.out
192.168.86.129: starting datanode, logging to /usr/hadoop/logs/hadoop-hadoop-datanode-Slaver1.Hadoop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/hadoop/logs/hadoop-hadoop-secondarynamenode-Master.Hadoop.out
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop/logs/yarn-hadoop-resourcemanager-Master.Hadoop.out
192.168.86.129: starting nodemanager, logging to /usr/hadoop/logs/yarn-hadoop-nodemanager-Slaver1.Hadoop.out
[hadoop@Master sbin]$ jps
10474 Jps
10217 ResourceManager
9885 NameNode
10069 SecondaryNameNode
So the Master daemons started successfully. On the Slave1 datanode machine:
[hadoop@Slave1 ~]$ jps
5098 NodeManager
5000 DataNode
5184 Jps
So Slave1's daemons are also up. But running dfsadmin -report produces the following error:
[hadoop@Slave1 ~]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
14/09/29 10:10:58 INFO ipc.Client: Retrying connect to server: Master.Hadoop/192.168.86.128:9000. Already tried 0 time(s); maxRetries=45
report: No Route to Host from Slave1.Hadoop/192.168.86.129 to Master.Hadoop:9000 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:
http://wiki.apache.org/hadoop/NoRouteToHost
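NoRouteToHostException usually means the connection attempt was answered with an ICMP host-unreachable, which is typical of a firewall REJECT rule rather than a dead daemon. Before digging into Hadoop itself, the NameNode RPC port can be probed from the slave with bash's /dev/tcp pseudo-device; this is a minimal sketch, and the host/port below are this cluster's fs.defaultFS address:

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device.
# Prints "open" when something is listening and reachable; anything else
# (refused, filtered, no route) prints "unreachable-or-closed".
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "unreachable-or-closed"
  fi
}

check_port 192.168.86.128 9000   # the NameNode RPC address used above
```

If this prints open while the firewall is still up, the problem lies elsewhere (for example the /etc/hosts mapping or the fs.defaultFS address).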
Some searching showed the cause: the firewall had not been stopped.
On earlier Linux releases the firewall is stopped with service iptables stop, but on CentOS 7.0 that command no longer works.
On CentOS 7.0, services are started, stopped, and restarted through systemctl.
Moreover, the default firewall on CentOS 7.0 is no longer iptables but firewalld, so run the following on both Master and Slave1 (switching to root first):
[hadoop@Slave1 ~]$ su - root
Password:
Last login: Mon Sep 29 10:20:31 CST 2014 on pts/0
[root@Slave1 ~]# systemctl stop firewalld.service
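Note that systemctl stop only stops firewalld for the current boot; it will come back after a reboot. A sketch of the longer-term options follows (assuming root on CentOS 7; the port number is this cluster's NameNode RPC port):

```shell
# Stop firewalld now and also keep it from starting at the next boot.
systemctl stop firewalld.service
systemctl disable firewalld.service

# A safer alternative for a production cluster: leave firewalld running
# and open just the Hadoop ports instead, e.g. the NameNode RPC port:
#   firewall-cmd --permanent --add-port=9000/tcp
#   firewall-cmd --reload
```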
Now check the HDFS storage report on Master and Slave1 again.
MASTER
[hadoop@Master sbin]$ hadoop dfadmin -report
Error: Could not find or load main class dfadmin
[hadoop@Master sbin]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 12572426240 (11.71 GB)
Present Capacity: 6669115392 (6.21 GB)
DFS Remaining: 6669111296 (6.21 GB)
DFS Used: 4096 (4 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 192.168.86.129:50010 (Slaver1.Hadoop)
Hostname: Slaver1.Hadoop
Decommission Status : Normal
Configured Capacity: 12572426240 (11.71 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 5903310848 (5.50 GB)
DFS Remaining: 6669111296 (6.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 53.05%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 29 10:57:04 CST 2014
SLAVE1
[hadoop@Slaver1 ~]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 12572426240 (11.71 GB)
Present Capacity: 6669012992 (6.21 GB)
DFS Remaining: 6669008896 (6.21 GB)
DFS Used: 4096 (4 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 192.168.86.129:50010 (Slaver1.Hadoop)
Hostname: Slaver1.Hadoop
Decommission Status : Normal
Configured Capacity: 12572426240 (11.71 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 5903413248 (5.50 GB)
DFS Remaining: 6669008896 (6.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 53.04%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Sep 29 10:28:18 CST 2014