
Flume-ng HdfsSink "Lease mismatch" error

2014-03-17 14:21
Several Flume-ng agents were deployed for HA, and in the production environment a Lease mismatch error appeared. The full error is as follows:
11 Mar 2014 12:21:02,971 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:418)  - HDFS IO error
java.io.IOException: IOException flush:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /logdata/2014/03/11/91pc/FlumeData.1394467200001.tmp owned by DFSClient_NONMAPREDUCE_1235353284_49 but is accessed by DFSClient_NONMAPREDUCE_1765113176_43
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3106)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:823)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:45010)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)

        at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1643)
        at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1525)
        at org.apache.hadoop.hdfs.DFSOutputStream.sync(DFSOutputStream.java:1510)
        at org.apache.hadoop.fs.FSDataOutputStream.sync(FSDataOutputStream.java:116)
        at org.apache.flume.sink.hdfs.HDFSDataStream.sync(HDFSDataStream.java:117)
        at org.apache.flume.sink.hdfs.BucketWriter$5.call(BucketWriter.java:356)
        at org.apache.flume.sink.hdfs.BucketWriter$5.call(BucketWriter.java:353)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:536)
        at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:160)
        at org.apache.flume.sink.hdfs.BucketWriter.access$1000(BucketWriter.java:56)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:533)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)


This happens because multiple Flume agents are configured for HA, and each agent runs its own HDFS client. When several HDFS clients write to the same HDFS file at the same time, a lease conflict occurs; an HDFS lease can be thought of as a write lock on a file, held by exactly one client. The fix is to give each agent's HDFS sink a distinct file prefix or suffix, so that no two agents ever open the same file, as shown in the sketch below.
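
For illustration, a minimal sketch of the relevant HDFS sink properties. The agent/sink names a1/k1 and the path are assumptions chosen to match the path in the error above; only hdfs.filePrefix differs between the two agents (the default prefix is FlumeData, which is why both agents collided on the same FlumeData.*.tmp file):

    # flume-conf.properties on agent 1 (sketch, names assumed)
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = /logdata/%Y/%m/%d/91pc
    a1.sinks.k1.hdfs.filePrefix = FlumeData-agent1

    # flume-conf.properties on agent 2
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = /logdata/%Y/%m/%d/91pc
    a1.sinks.k1.hdfs.filePrefix = FlumeData-agent2

With distinct prefixes, agent 1 writes FlumeData-agent1.<timestamp>.tmp and agent 2 writes FlumeData-agent2.<timestamp>.tmp, so each file has exactly one lease holder. Using hdfs.fileSuffix instead would achieve the same separation.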