
HDFS append operation: DataStreamer Exception <ERROR>Failed to close inode 347753

2016-07-16 14:05
When a client appended content to an HDFS file via the append operation, the following exception was thrown:

<WARN>DataStreamer Exception

java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.50.208:50010, 192.168.50.207:50010], original=[192.168.50.208:50010, 192.168.50.207:50010]). The current
failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1040)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1106)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1253)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:594)

[2016-07-16 10:30:15] org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:941)

<ERROR>Failed to close inode 347753

After investigating, I found that two parameters need to be set in hdfs-site.xml (under the DEFAULT policy the client tries to replace the failed datanode with a new one, which fails here because no other good datanode is available; NEVER tells it to keep writing to the remaining nodes instead):

dfs.client.block.write.replace-datanode-on-failure.policy : NEVER
dfs.client.block.write.replace-datanode-on-failure.enable : true
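
For reference, these would look roughly like the following in hdfs-site.xml (a minimal sketch; only the property names and values above come from the fix itself):

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>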

I added the two property entries above, restarted, and ran the job again, but the error persisted. It was finally resolved by setting these two parameters on the client side:
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
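
For context, a minimal client-side sketch in Java of what the append looks like with these settings (the file path and appended content are placeholders, not from the original issue):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side settings that worked around the pipeline-recovery failure
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

        FileSystem fs = FileSystem.get(conf);
        // Placeholder path; replace with the file actually being appended to
        try (FSDataOutputStream out = fs.append(new Path("/tmp/example.log"))) {
            out.writeBytes("appended line\n");
        }
    }
}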

Reference:
http://blog.csdn.net/map_lixiupeng/article/details/32153251