This article is reposted from http://www.cnblogs.com/scw2901/p/4331682.html
Running count 'tablename' in the HBase shell to count a table's rows is far too slow, so I switched to
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'tablename'
which immediately failed with: Container [pid=13875,containerID=container_1480991516670_0003_01_000003] is running beyond virtual memory limits. Current usage: 165.5 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Note that this command launches a MapReduce job, so YARN must be started first.
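For reference (my addition, not part of the original post), the full sequence is roughly as follows; the script locations assume a standard Hadoop/HBase layout with HADOOP_HOME and HBASE_HOME set, and 'tablename' is a placeholder:

# start YARN if it is not already running
$HADOOP_HOME/sbin/start-yarn.sh
# confirm that the NodeManagers have registered
yarn node -list
# run the MapReduce-based row count
$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'tablename'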
Problem description:
When running an application on Hadoop, a "running beyond virtual memory limits" error occurs. The message looks like this:
Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Cause: a Container running on a slave node tried to use more memory than allowed and was killed by the NodeManager.
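As background (this explanation and the snippet below are my addition, not from the original post): the 2.1 GB virtual memory ceiling in the message is the container's 1 GB physical allocation multiplied by yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1. Besides raising the task memory as shown later, some clusters loosen this ratio in yarn-site.xml; the value here is only an illustrative assumption:

<property>
  <!-- virtual memory allowed per unit of physical memory; the Hadoop default is 2.1 -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>  <!-- illustrative value, tune for your cluster -->
</property>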
[Excerpt] The NodeManager is killing your container. It sounds like you are trying to use hadoop streaming which is running as a child process of the map-reduce task. The NodeManager monitors the entire process tree of the task and if it eats up more memory than the maximum set in mapreduce.map.memory.mb or mapreduce.reduce.memory.mb respectively, we would expect the NodeManager to kill the task; otherwise your task is stealing memory belonging to other containers, which you don't want.
Solution:
Set the memory for map and reduce tasks in mapred-site.xml as follows (adjust the actual values according to your machine's memory and your application's needs):
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024M</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560M</value>
</property>
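Two notes on these values (my own additions, not from the original post): the -Xmx heap sizes are deliberately smaller than the corresponding *.memory.mb container sizes, because the container must also hold JVM overhead beyond the Java heap. And if you only need the fix for this one RowCounter job and your RowCounter version accepts Hadoop generic options, you can pass the same settings on the command line instead of editing mapred-site.xml; this is a sketch under that assumption:

$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter \
  -Dmapreduce.map.memory.mb=1536 -Dmapreduce.map.java.opts=-Xmx1024M 'tablename'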
Appendix:
Container is running beyond memory limits
http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits