
ERROR: org.apache.hadoop.hbase.MasterNotRunningException

2012-09-14 11:25
While running HBase today, I hit this error:

ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times

Checking the logs, I found a large number of entries like:

2012-04-26 08:13:39,600 INFO org.apache.hadoop.hbase.util.FSUtils: Waiting for dfs to exit safe mode...

It turned out that HDFS was still in safe mode. Running fsck to check the filesystem:

./hadoop fsck /

/hbase/.logs/slave1,60020,1333159627316/slave1%2C60020%2C1333159627316.1333159637444: Under replicated blk_-4160280099734447327_1626. Target Replicas is 3 but found 2 replica(s).

....

/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201203211238_0002/job.jar: Under replicated blk_-7807519084475423360_1012. Target Replicas is 10 but found 2 replica(s).

......................................................................Status: HEALTHY

Corrupt blocks: 0

Missing replicas: 9 (3.0612245 %)

Number of data-nodes: 2
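In a long fsck report, the under-replicated lines can be tallied with a small grep helper rather than read one by one. A minimal sketch, assuming the 0.20.x-era report format shown above and the `hadoop` binary on the PATH:

```shell
# count_under_replicated: count "Under replicated" lines in the fsck report
# for a given HDFS path. Sketch; the report format is version-dependent.
count_under_replicated() {
    hadoop fsck "$1" 2>/dev/null | grep -c 'Under replicated'
}

# usage: count_under_replicated /
```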

There are no corrupt blocks, only 9 missing replicas, and the overall status is HEALTHY, so it is safe to force HDFS out of safe mode:

hadoop dfsadmin -safemode get

Warning: $HADOOP_HOME is deprecated.

Safe mode is ON

hadoop dfsadmin -safemode leave

Warning: $HADOOP_HOME is deprecated.

Safe mode is OFF

After that, the HBase commands ran successfully.
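Instead of forcing safe mode off, one can also simply poll until HDFS exits safe mode on its own. A minimal sketch, assuming the `hadoop` binary is on the PATH and the 0.20.x-era `dfsadmin` output shown above:

```shell
# wait_safemode_off: poll dfsadmin until HDFS reports that safe mode is OFF.
wait_safemode_off() {
    while hadoop dfsadmin -safemode get | grep -q 'Safe mode is ON'; do
        echo "HDFS still in safe mode, waiting..."
        sleep 10
    done
    echo "Safe mode is OFF; proceeding"
}

# usage: wait_safemode_off && start-hbase.sh
```

Waiting is the safer default: HDFS leaves safe mode by itself once enough block replicas have reported in, whereas forcing it off only makes sense after fsck confirms there are no corrupt or missing blocks.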

Another possible cause is a mismatch between Hadoop jar versions. To confirm, you need to check the HBase master log file, hbase-<machine name>-master-ubuntu.log.

Looking at the master log under the logs directory, I found the following:

2012-02-01 14:41:52,867 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.

org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 42, server = 41)

at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:364)

at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)

at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:215)

at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)

at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)

at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)

at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)

at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)

at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)

at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)

at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)

at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81)

at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:342)

at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:279)

2012-02-01 14:41:52,870 INFO org.apache.hadoop.hbase.master.HMaster: Aborting

2012-02-01 14:41:52,870 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads

The log clearly states that the cause is an RPC protocol version mismatch. It suddenly made sense: in my HBase configuration I had pointed rootdir at HDFS, and when the Hadoop client code bundled with HBase and the HDFS server speak different RPC protocol versions, this error appears.

Solution:

Delete the hadoop-core jar file under hbase/lib, copy hadoop-0.20.2-core.jar from the Hadoop directory into hbase/lib, and then restart HBase.
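The jar swap can be sketched as a small helper. The directory layout and jar names below are assumptions (HBase installations name the bundled client jar `hadoop-core-<version>.jar`; adjust the pattern and jar name to your versions):

```shell
# swap_hadoop_jar: replace the hadoop-core jar bundled with HBase by the
# jar from the cluster's Hadoop installation, so client and server agree
# on the RPC protocol version. Sketch; paths and jar names are assumed.
swap_hadoop_jar() {
    hbase_home="$1"; hadoop_home="$2"; jar="$3"
    rm -f "$hbase_home"/lib/hadoop-core-*.jar   # drop the bundled client jar
    cp "$hadoop_home/$jar" "$hbase_home/lib/"
}

# e.g. swap_hadoop_jar /opt/hbase /opt/hadoop hadoop-0.20.2-core.jar
# then restart: $HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh
```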