
Hadoop yarn OutOfMemoryError: unable to create new native thread

2015-08-23 19:00

Bug

2015-08-23 18:00:12,084 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
at java.lang.UNIXProcess.initStreams(UNIXProcess.java:172)
at java.lang.UNIXProcess$2.run(UNIXProcess.java:145)
at java.lang.UNIXProcess$2.run(UNIXProcess.java:143)
at java.security.AccessController.doPrivileged(Native Method)
at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:143)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:485)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.containerIsAlive(DefaultContainerExecutor.java:430)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.signalContainer(DefaultContainerExecutor.java:401)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.cleanupContainer(ContainerLaunch.java:419)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:139)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher.handle(ContainersLauncher.java:55)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106)
at java.lang.Thread.run(Thread.java:744)
2015-08-23 18:00:12,086 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
2015-08-23 18:13:35,544 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: STARTUP_MSG:
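Before changing anything, it helps to confirm the node really is running out of threads rather than heap. A quick diagnostic sketch using standard Linux tools (run on the NodeManager host; nothing here is specific to my cluster):

```shell
# Count live threads per user -- the user running the NodeManager
# should stand out if thread creation is the problem.
ps -eLf | awk '{print $1}' | sort | uniq -c | sort -rn | head

# The soft nproc limit for the current shell; on many distros threads
# count against this "max user processes" limit.
ulimit -u
```

If the thread count for the Hadoop user is close to the `ulimit -u` value, the limit (or plain memory exhaustion for thread stacks) is the likely culprit.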


Solution

At first, I thought my MapReduce program might need more memory, and that there might be a limit on nproc. So I changed the configuration in Linux and in Hadoop:

/etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536

/etc/security/limits.d/90-nproc.conf

* soft nproc unlimited
* hard nproc unlimited
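Edits to `limits.conf` and `90-nproc.conf` only apply to new login sessions, so it is worth sanity-checking the effective values before concluding the change did or didn't work. A sketch with standard Linux commands (the NodeManager PID is something you would look up yourself, e.g. with `jps`):

```shell
# Soft nproc limit as seen by the current shell.
ulimit -u

# The same limit as the kernel records it for this process.
grep 'Max processes' /proc/self/limits

# For an already-running NodeManager, check its live limits instead
# (restarting it is required to pick up new values):
#   grep 'Max processes' /proc/<NM_PID>/limits
```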

mapred-site.xml

mapreduce.map.memory.mb  4096
mapreduce.reduce.memory.mb 8192
mapreduce.map.java.opts  -Xmx3072m
mapreduce.reduce.java.opts  -Xmx7168m


But it didn't work. The truth is that I did not have enough memory to allocate to every map and reduce task. In fact, they don't need a lot of memory; I had over-allocated. The solution is:

mapred-site.xml

mapreduce.map.memory.mb  1024
mapreduce.reduce.memory.mb 2048
mapreduce.map.java.opts  -Xmx800m
mapreduce.reduce.java.opts  -Xmx1600m
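The intuition behind the fix can be checked with simple arithmetic: smaller containers mean more of them fit within the NodeManager's memory budget, and less physical memory is committed to idle heaps, leaving room for native thread stacks. A sketch, assuming a hypothetical node with `yarn.nodemanager.resource.memory-mb=49152` (48 GB, not a value from this post):

```shell
# Containers per NodeManager at each map container size.
NM_MEM_MB=49152                                   # assumed node budget
echo "4096 MB maps: $((NM_MEM_MB / 4096)) containers"   # 12
echo "1024 MB maps: $((NM_MEM_MB / 1024)) containers"   # 48
```

Note that both of the author's configurations keep `-Xmx` at roughly 80% of the container size (e.g. 800m inside a 1024 MB container), leaving headroom for non-heap memory such as thread stacks and metaspace.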


Environment

CentOS 6.4, kernel 3.10.80
Hadoop 2.6
Tags: hadoop yarn