
"There is insufficient memory for the Java Runtime Environment to continue" — solved

2014-07-15 09:04
Running Mahout 0.9 on Hadoop 1.2.1 (CentOS 6.4 x86_64, JDK 1.7u21) against a 5 GB dataset, the job failed with: "There is insufficient memory for the Java Runtime Environment to continue."

14/07/15 08:46:05 INFO mapred.JobClient: Task Id : attempt_201407141818_0002_m_000018_0, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
attempt_201407141818_0002_m_000018_0: #
attempt_201407141818_0002_m_000018_0: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201407141818_0002_m_000018_0: # Cannot create GC thread. Out of system resources.
attempt_201407141818_0002_m_000018_0: # An error report file with more information is saved as:
attempt_201407141818_0002_m_000018_0: # /home/hadoop/hd_space/mapred/local/taskTracker/hadoop/jobcache/job_201407141818_0002/attempt_201407141818_0002_m_000018_0/work/hs_err_pid25377.log
14/07/15 08:46:07 INFO mapred.JobClient: map 15% reduce 0%
14/07/15 08:46:09 INFO mapred.JobClient: map 16% reduce 0%
14/07/15 08:46:09 INFO mapred.JobClient: Task Id : attempt_201407141818_0002_m_000018_1, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
attempt_201407141818_0002_m_000018_1: #
attempt_201407141818_0002_m_000018_1: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201407141818_0002_m_000018_1: # Cannot create GC thread. Out of system resources.
attempt_201407141818_0002_m_000018_1: # An error report file with more information is saved as:
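The hs_err file named in the trace states the failing resource in its first few lines, so it is worth reading before touching any configuration. A runnable sketch (it writes a stand-in log to a temp file, since the real path above only exists on the failing node):

```shell
# Sketch: pull the headline cause out of a JVM fatal-error log.
# On the real cluster the file is the hs_err_pid*.log path shown in the
# task output; here a two-line stand-in is created so this runs anywhere.
LOG=$(mktemp)
printf '%s\n' \
  '# There is insufficient memory for the Java Runtime Environment to continue.' \
  '# Cannot create GC thread. Out of system resources.' > "$LOG"

# The opening lines name which resource ran out:
head -n 2 "$LOG"
# "Cannot create GC thread" points at a process/thread limit, not heap:
grep -q 'Cannot create GC thread' "$LOG" && echo "suspect nproc limit"
```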

Check the system limits:

[root@NameNode ~]# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 2066288
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8

The open-file count looked too low. I checked /etc/security/limits.conf and /etc/sysctl.conf, swapped JDK versions, and so on — no luck.

Setting ulimit -c unlimited as root didn't help either.

[hadoop@NameNode mahout-distribution-0.9]$ ulimit -a
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
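The number that matters here is "max user processes" — the GC-thread failure means the hadoop user hit its process/thread cap, not a memory limit. You can read the soft and hard values directly (on the box in this post, the hadoop user's soft limit came back as 1024):

```shell
# Read the per-user process limits the TaskTracker's child JVMs inherit.
SOFT=$(ulimit -Su)   # soft limit: what processes actually get
HARD=$(ulimit -Hu)   # hard limit: the ceiling the user may raise to
echo "soft=$SOFT hard=$HARD"
```

Run it as the user that owns the Hadoop daemons (e.g. via `su - hadoop`), since limits are applied per login session.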

Digging further, I looked under /etc/security/ again. CentOS 6 adds a limits.d directory there, containing a file named 90-nproc.conf.

Its contents:

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     1024
root       soft    nproc     unlimited

There's the 1024 cap. I commented that line out.

Problem solved.
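Commenting the wildcard line out removes the fork-bomb protection for everyone. A gentler variant (not from the original post; the user name matches this setup and 65536 is an illustrative value) is to add an explicit entry for just the Hadoop account — in pam_limits, an exact-user line takes precedence over the `*` wildcard. The sketch below edits a temp copy so it is safe to run; on a real CentOS 6 host the target is /etc/security/limits.d/90-nproc.conf:

```shell
# Sketch: override the wildcard nproc cap for one account instead of
# deleting it. A temp copy stands in for 90-nproc.conf here.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
*          soft    nproc     1024
root       soft    nproc     unlimited
EOF
# Explicit user entries beat the "*" wildcard; 65536 is illustrative.
echo 'hadoop     soft    nproc     65536' >> "$CONF"
grep '^hadoop' "$CONF"
```

Note that pam_limits applies limits at session setup, so after editing the real file the new cap only takes effect for fresh logins — restart the Hadoop daemons from a new session.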