Hadoop 2.6.0: running the bundled WordCount example fails
2015-04-07 21:39
Running the WordCount example that ships with Hadoop fails with the error below.
Error log:
2015-04-07 21:27:01,842 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application
application_1428407210365_0007 failed 2 times due to Error launching appattempt_1428407210365_0007_000002. Got exception: java.net.ConnectException: Call From Master.Hadoop/192.168.1.1 to localhost:60387 failed on connection exception: java.net.ConnectException:
Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 9 more
Solution:
When a job writes its output, Hadoop must be able to create the output files on HDFS.
In core-site.xml, set both fs.defaultFS and fs.default.name (the deprecated older name for the same setting) to exactly the same value. With both present and identical, Hadoop can create files on HDFS; otherwise it cannot create the job's output files and fails with the error above.
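A minimal core-site.xml sketch of the fix described above. The host name `master` and port `9000` are placeholders; substitute the NameNode address of your own cluster:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Current property name in Hadoop 2.x -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <!-- fs.default.name is the deprecated alias of fs.defaultFS;
       set it to the identical value -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

Restart HDFS and YARN after editing the file so the change takes effect. Note also that the stack trace shows the ResourceManager trying to reach `localhost:60387`; per the ConnectionRefused wiki page linked in the log, it is worth checking that /etc/hosts does not map the node's hostname to 127.0.0.1.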