
Exception when uploading from the local filesystem to HDFS

2015-12-03 22:45
Running hdfs dfs -put to upload a local file to HDFS failed with an exception.
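
For reference, the failing operation looked roughly like the following; the local file and the target HDFS directory are placeholders, not the paths actually used:

# Hypothetical reproduction of the failed upload (paths are placeholders)
hdfs dfs -put /data/local/app.log /user/hadoop/input/
# The put fails with an IOException once the receiving DataNode runs out of disk space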



Error log from the DataNode running on the same machine as the NameNode:

2015-12-03 09:54:03,083 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:727ms (threshold=300ms)
2015-12-03 09:54:03,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting CheckDiskError Thread
2015-12-03 09:54:03,991 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:613)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:745)
2015-12-03 09:54:04,050 WARN org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Block BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023 unfinalized and removed.
2015-12-03 09:54:04,054 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-254367353-10.172.153.46-1448878000030:blk_1073741847_1023 received exception java.io.IOException: No space left on device
2015-12-03 09:54:04,054 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hd1:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.165.114.138:57315 dst: /10.172.153.46:50010
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:345)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:613)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:745)
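
A quick way to confirm the diagnosis on the affected DataNode is to check the filesystem that holds its data directories; the mount point below is an assumption, the real path is whatever dfs.datanode.data.dir in hdfs-site.xml points to:

# Run on the DataNode that logged the exception; the data directory path is an assumption
df -h /data/hadoop/dfs/data
# Cluster-wide view of configured capacity and DFS Remaining per DataNode
hdfs dfsadmin -report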

Error log from the other DataNode's machine:

2015-12-03 17:54:04,111 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.172.218.18, datanodeUuid=7c882efa-f159-4477-a322-30cf55c84598, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-183048c9-89b2-44b4-a224-21f04d2a8065;nsid=275180848;c=0):Failed to transfer BP-254367353-10.172.153.46-1448878000030:blk_1073741850_1026 to 10.172.153.46:50010 got
java.net.SocketException: Original Exception : java.io.IOException: Connection reset by peer
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:433)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:565)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2017)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Connection reset by peer
... 8 more
2015-12-03 17:54:04,146 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting CheckDiskError Thread
2015-12-03 17:57:39,288 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-254367353-10.172.153.46-1448878000030:blk_1073741850_1026

The logs show that the disk on the DataNode ran out of space (No space left on device); the Connection reset by peer on the second DataNode is just the downstream effect of the full node aborting the write. The server's disk is small, so the only remedy was to delete some junk data to free up space.
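
A minimal cleanup sketch, assuming the junk data sits under an HDFS path such as /tmp/old-logs (a placeholder; substitute whatever actually occupies the space):

# Find the largest top-level directories in HDFS
hdfs dfs -du -h /
# Delete the junk data; -skipTrash releases the blocks immediately instead of moving them to the trash
hdfs dfs -rm -r -skipTrash /tmp/old-logs
# Verify that DFS Remaining has increased on the previously full DataNode
hdfs dfsadmin -report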
