
A problem encountered when compressing data in Hadoop

2012-12-24 13:22
Recently, while compressing output data, a job failed with an out-of-memory error caused by an array allocation that was too large:

2012-12-19 13:15:03,866 FATAL org.apache.hadoop.mapred.Child: Error running child :
java.lang.OutOfMemoryError: allocLargeObjectOrArray: [I, size 3600016
    at org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream$Data.<init>(CBZip2OutputStream.java:2074)
    at org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream.init(CBZip2OutputStream.java:747)
    at org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream.<init>(CBZip2OutputStream.java:637)
    at org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream.<init>(CBZip2OutputStream.java:594)
    at org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionOutputStream.internalReset(BZip2Codec.java:186)
    at org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionOutputStream.write(BZip2Codec.java:205)
    at org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat$1.write(HiveIgnoreKeyTextOutputFormat.java:86)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:588)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:761)
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:848)
    at org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:265)
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:198)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

The error occurs during the reduce phase. I tried increasing the number of reducers: it had been set to 10 (in hive-site.xml), I raised it to 25, and finally to 40, at which point the job ran fine.
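For reference, a minimal sketch of the session-level equivalent, assuming the reducer count is controlled by the mapred.reduce.tasks property used by this generation of Hadoop (the same value can be set permanently in hive-site.xml):

-- Override the reducer count for the current Hive session only.
-- 40 is the value that worked here; tune it against your own data volume.
set mapred.reduce.tasks=40;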

Workaround: increase the number of reducers. The exact value has to be tested yourself and judged against your data volume; in this case the job was a dynamic-partition insert with bzip2 compression over the largest dataset we work with, about 130 GB in total.
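For context, here is a sketch of the session settings a dynamic-partition insert with bzip2 output typically uses on this stack. The table and column names below are hypothetical; the codec class is the one from the stack trace, and the reducer setting from the previous sketch applies on top of these:

-- Allow dynamic partitioning, with all partition columns dynamic.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
-- Compress the final job output with the bzip2 codec.
set hive.exec.compress.output=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;

-- Hypothetical dynamic-partition insert; the partition column dt is taken per row.
INSERT OVERWRITE TABLE target_table PARTITION (dt)
SELECT col1, col2, dt FROM source_table;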

We also tried running the same job without compression, and it completed normally.
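To rule compression in or out as the trigger, turning it off for a session is a one-line change, sketched here:

-- Disable final output compression for this session only.
set hive.exec.compress.output=false;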

This suggests there are still issues in how we use compression that we have not properly resolved; we plan to study compression usage in more depth later.