MR job on YARN fails with "Java heap space"

A job submitted to the YARN framework fails with the following error.
//0, Error message
The execution engine on our hadoop-2.7 cluster (an old cluster) is MR, not Tez.
Error: Java heap space
Container killed by the ApplicationMaster.

//1, Locate the error log
[root@my-hadoop-cluster hive]# grep -C 3 --color "log.dir" {HIVE_HOME}/conf/hive-log4j.properties

# Define some default values that can be overridden by system properties
hive.log.threshold=ALL
hive.root.logger=INFO,DRFA
hive.log.dir=/mnt/log/hive/scratch/${user.name}
hive.log.file=hive.log

# Define the root logger to the system property "hadoop.root.logger".
--

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender

log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}

# Rollver at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

//2, Go to the Hive log directory and inspect hive.log
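A minimal way to get there and jump straight to the failing task's diagnostics, assuming the job was submitted as root so that ${user.name} in hive.log.dir resolves to root (adjust the path to your own environment):

$ cd /mnt/log/hive/scratch/root                                  # hive.log.dir found in step 1
$ grep -n -A 10 "Diagnostic Messages for this Task" hive.log     # locate the task-level diagnostics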

2018-08-05 13:26:28,570 ERROR [Thread-35]: exec.Task (SessionState.java:printError(948)) -
Task with the most failures(4):
-----
Task ID:
  task_1532952070023_22931_r_000852

URL:
  http://my-hadoop-cluster:8088/taskdetails.jsp?jobid=job_1532952070023_22931&tipid=task_1532952070023_22931_r_000852
-----
Diagnostic Messages for this Task:
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2018-08-05 13:26:28,649 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(401)) - Killed application application_1532952070023_22931
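If YARN log aggregation is enabled on the cluster (an assumption; otherwise the container logs stay on the individual NodeManager hosts), the same per-container diagnostics can also be pulled straight from YARN using the application id shown above:

$ yarn logs -applicationId application_1532952070023_22931 | grep -B 2 -A 5 "Java heap space"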

//3, Analyze the "Diagnostic Messages for this Task" from the error log
This reduce-phase task was killed by the ApplicationMaster because the Java heap used by its container exceeded the limit. Which limit exactly? Check the reduce-phase memory limit for MapReduce jobs in mapred-site.xml; on this cluster it is set to 2 GB (the default is 1 GB):
[root@my-hadoop-cluster conf]# grep -iC 2 --color "reduce.memory.mb" mapred-site.xml

    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>2048</value>
    </property>
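Besides grepping mapred-site.xml, it can be worth checking the value the current Hive session will actually use (it may be overridden per session) and the YARN-side ceiling that any increase must stay under. The yarn-site.xml path below assumes a standard {HADOOP_HOME} layout, and yarn.scheduler.maximum-allocation-mb may simply be absent if the default (8192 MB) is in effect:

$ hive -e "set mapreduce.reduce.memory.mb;"                                      # effective value for this session
$ grep -iA 1 "maximum-allocation-mb" {HADOOP_HOME}/etc/hadoop/yarn-site.xml      # yarn.scheduler.maximum-allocation-mb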

To dig into the cause: a task runs inside a container, i.e. inside a JVM, and the heap of the JVM running this job's reduce tasks has exceeded 2 GB. Why would it exceed 2 GB? Either the task creates too many objects and fills the heap space, or the heap is simply too small for the job, in which case it can be raised to fit the workload, for example to 3072 MB.

//4, Fix
Either increase reduce.memory.mb, or shrink the amount of data each reduce task has to process (for example by increasing the number of reducers so the work is spread across more nodes).
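A sketch of both options at the session level, without touching mapred-site.xml (the numbers and the script name my_failing_query.sql are only illustrative; keep -Xmx below the new container size, and keep the container size below yarn.scheduler.maximum-allocation-mb):

# Option 1: enlarge the reduce containers and their heap for this job only
$ hive --hiveconf mapreduce.reduce.memory.mb=3072 \
       --hiveconf mapreduce.reduce.java.opts=-Xmx2458m \
       -f my_failing_query.sql
# Option 2: keep the container size and spread the same data over more reducers
$ hive --hiveconf hive.exec.reducers.bytes.per.reducer=134217728 -f my_failing_query.sql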

//5, A brief overview of memory allocation for MR jobs

$ cd {HIVE_HOME}/conf/
$ grep -iEC 2 --color "map.java.opts|reduce.java.opts" mapred-site.xml
    <property>
      <!-- Upper bound on the JVM heap available to the Java child process (the map task) in this container; exceeding it throws an OOM ("Java heap space") -->
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx800m -verbose:gc -Xloggc:/tmp/@[email protected]</value>
    </property>
--
    <property>
      <!-- Upper bound on the JVM heap available to the Java child process (the reduce task) in this container; exceeding it throws an OOM ("Java heap space") -->
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx1736m -verbose:gc -Xloggc:/tmp/@[email protected]</value>
    </property>

$ grep -iEC 2 --color "reduce.memory.mb|map.memory.mb" mapred-site.xml
    <property>
      <!-- Memory ceiling of a map-task container, monitored by the NodeManager; a container that exceeds it is killed by the NM. The -Xmx in mapreduce.map.java.opts must be smaller than this value -->
      <name>mapreduce.map.memory.mb</name>
      <value>512</value>
    </property>
--

    <property>
      <!-- Memory ceiling of a reduce-task container, monitored by the NodeManager; a container that exceeds it is killed (exit code 143). The -Xmx in mapreduce.reduce.java.opts must be smaller than this value -->
      <name>mapreduce.reduce.memory.mb</name>
      <value>2048</value>
    </property>
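A common sizing rule (a convention, not something Hadoop enforces) is to keep the -Xmx in mapreduce.{map,reduce}.java.opts at roughly 75-80% of the matching *.memory.mb, so that non-heap JVM memory (permgen/metaspace, thread stacks, native buffers) still fits inside the container:

$ echo $((2048 * 80 / 100))    # => 1638; the -Xmx1736m above (~85% of 2048) still leaves some headroom
# Note: the map-side pair above (memory.mb=512 but -Xmx800m) is inverted -- the heap alone can
# outgrow the container, which would get map tasks killed by the NM rather than failing with an OOM.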


Reposted from blog.csdn.net/qq_31598113/article/details/81432040