Hadoop: HMaster keeps stopping on its own — what's going on?
2015-07-01 21:24
Problem: after setting up a Hadoop test environment, HMaster stops by itself. Right after startup, jps shows the HMaster process, but a few seconds later jps no longer lists it. What's going on?
The log shows:
[root@master2 bin]# more /data/hbase-0.96.0-hadoop2/logs/hbase-root-master-master2.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hbase-0.96.0-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Solution:
Delete the slf4j-log4j12 jar from HBase's lib/ directory (slf4j-log4j12-1.6.4.jar in this install, per the log above); it duplicates the binding under /home/xuhui/hadoop-2.2.0/share/hadoop/common/lib.
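The duplicate-binding check and removal can be sketched as shell commands (the paths are taken from the log above; adjust them for your layout):

```shell
# SLF4J wants exactly one StaticLoggerBinder on the classpath.
# List the slf4j-log4j12 bindings contributed by each lib dir.
HBASE_LIB=/data/hbase-0.96.0-hadoop2/lib
HADOOP_LIB=/data/hadoop-2.2.0/share/hadoop/common/lib

ls "$HBASE_LIB"/slf4j-log4j12-*.jar "$HADOOP_LIB"/slf4j-log4j12-*.jar

# Keep the Hadoop copy and drop the one bundled with HBase.
rm "$HBASE_LIB"/slf4j-log4j12-*.jar
```

Note that SLF4J's multiple-bindings message is only a warning (SLF4J picks one binding and carries on), so by itself it would not kill the HMaster — consistent with the later discovery that the real problem was elsewhere.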
Deleted it, but it still fails, and now the log file is empty. How do I fix this?
Solution:
Cleared Hadoop's tmp directory — still no luck. Now what?
Solution:
Reformat the namenode, on all machines:
hdfs namenode -format
Then replace the Hadoop jars under hbase/lib with the 2.2 jars from the Hadoop installation.
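The jar swap can be sketched like this (a sketch, assuming the install paths from the log above; the exact set of hadoop-*.jar files varies by Hadoop layout, so check what ships under share/hadoop first):

```shell
# Replace the Hadoop jars bundled under hbase/lib with the jars from the
# installed Hadoop 2.2.0, so HBase's HDFS client matches the cluster.
HBASE_LIB=/data/hbase-0.96.0-hadoop2/lib
HADOOP_HOME=/data/hadoop-2.2.0

rm -f "$HBASE_LIB"/hadoop-*.jar
find "$HADOOP_HOME/share/hadoop" -name 'hadoop-*-2.2.0.jar' \
  -not -name '*test*' -exec cp {} "$HBASE_LIB"/ \;
```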
Still not working. Now what?
The log shows:
2015-05-14 13:58:46,284 INFO [master:master1:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-05-14 13:58:47,002 WARN [Thread-45] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
2015-05-14 13:58:47,010 WARN [master:master1:60000] util.FSUtils: Unable to create version file at hdfs://master1:9001/hbase, retrying
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
2015-05-14 13:58:57,051 WARN [Thread-48] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
2015-05-14 13:59:17,088 FATAL [master:master1:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
2015-05-14 13:59:17,165 ERROR [Thread-5] hdfs.DFSClient: Failed to close file /hbase/hbase.version
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
Solution:
After cleaning out the namenode and datanode directories on every machine and reformatting the namenode, the datanodes came up — and the HBase master finally stayed up.
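The cleanup that worked can be sketched end to end. The directory locations below are assumptions — check hadoop.tmp.dir, dfs.namenode.name.dir, and dfs.datanode.data.dir in your core-site.xml/hdfs-site.xml for the real paths before deleting anything:

```shell
# 1. Stop HBase and HDFS.
stop-hbase.sh
stop-dfs.sh

# 2. On every node, wipe the old HDFS state. A stale VERSION file in a
#    datanode directory leaves its clusterID mismatched with a freshly
#    formatted namenode, a classic cause of the
#    "There are 0 datanode(s) running" error seen in the log above.
rm -rf /tmp/hadoop-* /data/dfs/name/* /data/dfs/data/*

# 3. Format the namenode (on the namenode host only), then restart HDFS.
hdfs namenode -format
start-dfs.sh

# 4. Verify live datanodes > 0 before starting HBase.
hdfs dfsadmin -report
start-hbase.sh
```

This also explains why reformatting alone (the earlier attempt) did not help: formatting the namenode without clearing the datanode directories leaves the clusterIDs out of sync.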