
Problems encountered when installing Hadoop 2.x

2017-02-12 17:47
After installing Hadoop and formatting the NameNode, the DataNode failed to start when HDFS was brought up.

Checking the DataNode log:

2017-02-07 14:29:47,741 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-02-07 14:29:47,758 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-02-07 14:29:53,973 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2017-02-07 14:29:54,113 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/data/tmp/dfs/data/in_use.lock acquired by nodename 5400@hadoop-master
2017-02-07 14:29:54,203 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/opt/data/tmp/dfs/data/
java.io.IOException: Incompatible clusterIDs in /opt/data/tmp/dfs/data: namenode clusterID = CID-2ca58eab-b3ef-4f08-b3f9-6246c4d6d0be; datanode clusterID = CID-2121b6fc-ca1c-4f87-a700-1f7314390f13
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:745)
2017-02-07 14:29:54,298 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to hadoop-master/192.168.8.88:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:745)
2017-02-07 14:29:54,298 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to hadoop-master/192.168.8.88:9000
2017-02-07 14:29:54,324 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2017-02-07 14:29:56,324 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-02-07 14:29:56,342 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-02-07 14:29:56,377 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

The cause: after installing Hadoop, HDFS was formatted and started, and then formatted a second time. Each format generates a new clusterID on the NameNode, so the NameNode's clusterID and the DataNode's clusterID no longer match (as the `Incompatible clusterIDs` line in the log shows), and the DataNode refuses to start.

Solution: in the temporary-file directory configured for Hadoop, find the VERSION files that record the clusterID. Make the clusterID in /opt/data/tmp/dfs/data/current/VERSION match the one in /opt/data/tmp/dfs/name/current/VERSION, i.e. copy the NameNode's clusterID into the DataNode's VERSION file.
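The fix above can be sketched as a couple of shell commands. This sketch uses mock VERSION files under /tmp/dfsdemo so it is safe to run anywhere; on a real node you would use the /opt/data/tmp/dfs/name/current/VERSION and /opt/data/tmp/dfs/data/current/VERSION paths from the log above, and you should stop the DataNode before editing the file.

```shell
# Mock layout standing in for the real dfs.namenode.name.dir / dfs.datanode.data.dir.
mkdir -p /tmp/dfsdemo/name/current /tmp/dfsdemo/data/current
echo 'clusterID=CID-2ca58eab-b3ef-4f08-b3f9-6246c4d6d0be' > /tmp/dfsdemo/name/current/VERSION
echo 'clusterID=CID-2121b6fc-ca1c-4f87-a700-1f7314390f13' > /tmp/dfsdemo/data/current/VERSION

# Read the NameNode's clusterID ...
NN_CID=$(grep '^clusterID=' /tmp/dfsdemo/name/current/VERSION | cut -d= -f2)

# ... and overwrite the DataNode's clusterID with it (GNU sed's in-place edit).
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /tmp/dfsdemo/data/current/VERSION

cat /tmp/dfsdemo/data/current/VERSION
```

After the edit, both VERSION files carry the same clusterID and the DataNode can be restarted.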

How to fix the "ssh: Could not resolve hostname" errors when starting Hadoop 2.6.0

The errors:

You: ssh: Could not resolve hostname You: Temporary failure in name resolution
warning:: ssh: Could not resolve hostname warning:: Temporary failure in name resolution
VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution
have: ssh: Could not resolve hostname have: Temporary failure in name resolution
library: ssh: Could not resolve hostname library: Temporary failure in name resolution
loaded: ssh: Could not resolve hostname loaded: Temporary failure in name resolution
might: ssh: Could not resolve hostname might: Temporary failure in name resolution
which: ssh: Could not resolve hostname which: Temporary failure in name resolution
disabled: ssh: Could not resolve hostname disabled: Temporary failure in name resolution
stack: ssh: Could not resolve hostname stack: Temporary failure in name resolution
guard.: ssh: Could not resolve hostname guard.: Temporary failure in name resolution
The: ssh: Could not resolve hostname The: Temporary failure in name resolution
try: ssh: Could not resolve hostname try: Temporary failure in name resolution
will: ssh: Could not resolve hostname will: Temporary failure in name resolution
to: ssh: Could not resolve hostname to: Temporary failure in name resolution
fix: ssh: Could not resolve hostname fix: Temporary failure in name resolution

(... a long run of similar "Could not resolve hostname ...: Temporary failure in name resolution" lines omitted)

Solution:

Note that the "hostnames" in these errors are not hostnames at all: they are the words of the JVM's native-library warning ("...VM might have disabled stack guard. The VM will try to fix the stack..."), which the start-up script misparses as a host list. The root cause is that the native-library environment variables are not set. Add the following to ~/.bash_profile or /etc/profile:

vi /etc/profile    (or: vi ~/.bash_profile)

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"


Then re-read the file with source so the variables take effect:

source /etc/profile    (or: source ~/.bash_profile)
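Putting the two steps together, here is a minimal sketch. It writes the exports to a scratch file under /tmp instead of the real ~/.bash_profile, and /opt/hadoop is a hypothetical install path used only so the expansion can be observed; on a real machine HADOOP_HOME would already point at your Hadoop installation.

```shell
# Scratch profile standing in for ~/.bash_profile or /etc/profile.
PROFILE=/tmp/demo_profile
cat > "$PROFILE" <<'EOF'
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
EOF

# Hypothetical install path, for demonstration only.
HADOOP_HOME=/opt/hadoop

# Re-read the file; `.` is the portable spelling of `source`.
. "$PROFILE"

echo "$HADOOP_OPTS"
```

Because the heredoc is quoted, `$HADOOP_HOME` is expanded when the profile is sourced, not when it is written, which mirrors how the real profile behaves across shells.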