OutOfMemoryError: Cannot create GC thread. Out of system resources

From: Scott Carey <sc...@richrelevance.com>
Subject: Re: OutOfMemoryError: Cannot create GC thread. Out of system resources
Date: Thu, 01 Apr 2010 19:40:54 GMT
The default size of Java's young GC generation is 1/3 of the heap (-XX:NewRatio defaults to 2).
You have told it to use 100MB for the in-memory file system, and there is a default of 64MB of sort space.

If -Xmx is 128M, then the above sums to over 200MB and won't fit. Turning down any of the three above could help, as could increasing -Xmx.
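
For example, something along these lines (fs.inmemory.size.mb and mapred.child.java.opts are from your config below; io.sort.mb is the sort-space knob if I'm remembering the property name right, and the values are illustrative, not tuned):

 <property>
   <name>fs.inmemory.size.mb</name>
   <value>50</value>   <!-- shrink the in-memory file system from 100MB -->
 </property>

 <property>
   <name>io.sort.mb</name>
   <value>32</value>   <!-- shrink the sort space from its default -->
 </property>

 <property>
   <name>mapred.child.java.opts</name>
   <value>-Xmx256M</value>   <!-- or instead give each task JVM more headroom -->
 </property>

Any one of these changes shifts the arithmetic; with -Xmx256M, the 100MB + 64MB reservations plus the young generation (~85MB at NewRatio=2) should just fit.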

Additionally, when a thread can't be allocated, it could be due to an OS-side limit on file handles per process or per user.
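
If you want to rule that out, the usual suspects on a Linux node are the per-user limits (stock shell commands; run them as the user that owns the TaskTracker process):

 ulimit -n    # max open file descriptors for this user
 ulimit -u    # max user processes; each JVM thread counts against this

 # Persistent limits live in /etc/security/limits.conf, e.g.:
 #   hadoop  soft  nofile  65536
 #   hadoop  soft  nproc   32768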

On Mar 31, 2010, at 11:48 AM, Edson Ramiro wrote:

> Hi all,
>
> When I run the pi Hadoop sample I get this error:
>
> 10/03/31 15:46:13 WARN mapred.JobClient: Error reading task outputhttp://h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stdout
> 10/03/31 15:46:13 WARN mapred.JobClient: Error reading task outputhttp://h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stderr
> 10/03/31 15:46:20 INFO mapred.JobClient: Task Id : attempt_201003311545_0001_m_000006_1, Status : FAILED
> java.io.IOException: Task process exit with nonzero status of 134.
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
>
> Maybe it's because the datanode can't create more threads.
>
> ramiro@lcpad:~/hadoop-0.20.2$ cat logs/userlogs/attempt_201003311457_0001_r_000001_2/stdout
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # java.lang.OutOfMemoryError: Cannot create GC thread. Out of system resources.
> #
> #  Internal Error (gcTaskThread.cpp:38), pid=28840, tid=140010745776400
> #  Error: Cannot create GC thread. Out of system resources.
> #
> # JRE version: 6.0_17-b04
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (14.3-b01 mixed mode linux-amd64 )
> # An error report file with more information is saved as:
> # /var-host/tmp/hadoop-ramiro/mapred/local/taskTracker/jobcache/job_201003311457_0001/attempt_201003311457_0001_r_000001_2/work/hs_err_pid28840.log
> #
> # If you would like to submit a bug report, please visit:
> #   http://java.sun.com/webapps/bugreport/crash.jsp
> #
>
> I configured the limits below, but I'm still getting the same error.
>
>  <property>
>    <name>fs.inmemory.size.mb</name>
>    <value>100</value>
>  </property>
>
>  <property>
>    <name>mapred.child.java.opts</name>
>    <value>-Xmx128M</value>
>  </property>
>
> Do you know which limit I should configure to fix it?
>
> Thanks in Advance
>
> Edson Ramiro

