New Memory allocation 1046759 bytes is smaller than the minimum allocation size of 1048576 bytes.
2017-04-14 18:01
1406 views
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 78, hdp57.car.bj2.yongche.com): org.apache.hadoop.hive.ql.metadata.HiveException:
parquet.hadoop.MemoryManager$1: New Memory allocation 1046759 bytes is smaller than the minimum allocation size of 1048576 bytes.
at org.apache.spark.sql.hive.SparkHiveDynamicPartitionWriterContainer.org$apache$spark$sql$hive$SparkHiveDynamicPartitionWriterContainer$$newWriter$1(hiveWriterContainers.scala:240)
at org.apache.spark.sql.hive.SparkHiveDynamicPartitionWriterContainer$$anonfun$getLocalFileWriter$1.apply(hiveWriterContainers.scala:249)
at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:91)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1$1.apply(InsertIntoHiveTable.scala:112)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1$1.apply(InsertIntoHiveTable.scala:104)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1(InsertIntoHiveTable.scala:104)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:84)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
Cause: Parquet's MemoryManager splits its memory pool across every open Parquet writer, and a dynamic-partition insert opens one writer per partition value. With enough partitions open at once, each writer's share falls below the minimum allocation (parquet.memory.min.chunk.size, default 1048576 bytes), and the write is aborted with the exception above.

Solution: lower the Parquet minimum (and, since the job creates many partitions, raise Hive's dynamic-partition cap as well):
hiveContext.setConf("parquet.memory.min.chunk.size", "100000")
hiveContext.setConf("hive.exec.max.dynamic.partitions", "100000")