HDFS append operation: DataStreamer Exception <ERROR>Failed to close inode 347753
2016-07-16 14:05
When a client appends content to an HDFS file via the append operation, the following exception occurs:
<WARN>DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.50.208:50010, 192.168.50.207:50010], original=[192.168.50.208:50010, 192.168.50.207:50010]). The current
failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1040)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1106)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1253)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:594)
[2016-07-16 10:30:15] org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:941)
<ERROR>Failed to close inode 347753
After investigation, the fix requires setting two parameters in hdfs-site.xml. With the DEFAULT policy, the client tries to add a replacement datanode to the write pipeline when one fails; in a small cluster there are no spare datanodes to add, so the append aborts. Setting the policy to NEVER tells the client to keep writing with the remaining datanodes instead:
dfs.client.block.write.replace-datanode-on-failure.policy : NEVER
dfs.client.block.write.replace-datanode-on-failure.enable : true
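In hdfs-site.xml, the two settings above take the usual property form (a config sketch with the values from this post):

```xml
<!-- Enable the replace-datanode-on-failure feature so the policy below applies. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<!-- NEVER: do not try to add a replacement datanode; continue with the remaining ones. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
```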
After adding the properties above, restarting, and re-running, the same error still occurred. It was finally resolved by setting the two parameters directly in the client's Configuration:
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");
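Putting the client-side fix in context, a minimal append sketch using the Hadoop FileSystem API might look like the following. The namenode URI and file path are placeholders, not values from this post:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep writing with the remaining datanodes instead of trying to
        // replace a failed one (a small cluster has no spares to add).
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");

        // Placeholder namenode URI and path -- substitute your own.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        try (FSDataOutputStream out = fs.append(new Path("/tmp/app.log"))) {
            out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
        } finally {
            fs.close();
        }
    }
}
```

This requires the hadoop-client dependency and a reachable HDFS cluster with append enabled, so it cannot run standalone.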
Reference:
http://blog.csdn.net/map_lixiupeng/article/details/32153251