Debugging an OOM/Pod-restarting issue in Kubernetes


Recently, a customer reported an issue where a pod kept restarting after being migrated from node to node, and the Java process inside the pod exited abnormally.
After some troubleshooting, we identified the root cause: an OOM kill triggered by the LimitRange of the namespace; the kernel killed the newly created process as soon as the JVM's memory usage exceeded the default limit. In this article, I will explain the troubleshooting method step by step; it should serve as a common problem-determination method for most application OOM/pod-restarting issues in Kubernetes.

Failed to start the Java/Jetty application

kubectl logs console-54dc5566b4-nq6gs -n test

The Jetty process failed to start without producing any log:

Starting Jetty: -Djava.io.tmpdir=/tmp/jetty
start-stop-daemon -S -p/var/run/jetty.pid -croot -d/var/lib/jetty -b -m -a /usr/bin/java -- -Djava.io.tmpdir=/tmp/jetty -Xms512m -Xmx1g -Djetty.logs=/usr/local/jetty/logs -Dspring.profiles.active=StandaloneLogin,Agility,NotAliCloud,NotPublicCloud -Djetty.home=/usr/local/jetty -Djetty.base=/var/lib/jetty -Djava.io.tmpdir=/tmp/jetty -jar /usr/local/jetty/start.jar jetty.state=/var/lib/jetty/jetty.state jetty-started.xml start-log-file=/usr/local/jetty/logs/start.log
jetty.state=/var/lib/jetty/jetty.state
FAILED Thu Mar 14 09:43:55 UTC 2019
tail: cannot open '/var/lib/jetty/logs/*.log' for reading: No such file or directory
tail: no files remaining
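
The container exits before it writes anything to Jetty's log files, so the output above is all we get from the current instance. If the container has already been restarted, the logs of the previous instance can be retrieved with the --previous flag (the pod name is the one from this case):

kubectl logs console-54dc5566b4-nq6gs -n test --previous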

The console pod kept being re-created on the Kubernetes cluster:

kubectl get events --all-namespaces | grep console

test      1m        1m        1         console-54dc5566b4-sx2r6.158bc8b1f2a076ce   Pod       spec.containers{console}   Normal    Killing   kubelet, k8s003   Killing container with id docker://console:Container failed liveness probe.. Container will be killed and recreated.
test      1m        6m        2         console-54dc5566b4-hx6wb.158bc86c4379c4e7   Pod       spec.containers{console}   Normal    Started   kubelet, k8s001   Started container
test      1m        6m        2         console-54dc5566b4-hx6wb.158bc86c355ab395   Pod       spec.containers{console}   Normal    Created   kubelet, k8s001   Created container
test      1m        6m        2         console-54dc5566b4-hx6wb.158bc86c2fe32c76   Pod       spec.containers{console}   Normal    Pulled    kubelet, k8s001   Container image "registry.cn-hangzhou.aliyuncs.com/eric-dev/console:0.9-62f837e" already present on machine
test      1m        1m        1         console-54dc5566b4-hx6wb.158bc8b87083e752   Pod       spec.containers{console}   Normal    Killing   kubelet, k8s001   Killing container with id docker://console:Container failed liveness probe.. Container will be killed and recreated.
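
The cluster-wide event list is noisy; to narrow it down to a single pod, a field selector on the involved object can be used, as in this sketch (pod and namespace names are the ones from this case):

kubectl get events -n test --field-selector involvedObject.name=console-54dc5566b4-hx6wb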

Determine the OOM from the pod state

kubectl get pod console-54dc5566b4-hx6wb -n test -o yaml | grep reason -C5

lastState:
      terminated:
        containerID: docker://90e5c9e618f3e745ebf510b8f215da3a165e3d03be58e0369e27c1773e75ef70
        exitCode: 137
        finishedAt: 2019-03-14T09:29:51Z
        reason: OOMKilled
        startedAt: 2019-03-14T09:24:51Z
    name: console
    ready: true
    restartCount: 3
    state:

kubectl get pod console-54dc5566b4-hx6wb -n test -o jsonpath='{.status.containerStatuses[].lastState}'

map[terminated:map[exitCode:137 reason:OOMKilled startedAt:2019-03-14T09:24:51Z finishedAt:2019-03-14T09:29:51Z containerID:docker://90e5c9e618f3e745ebf510b8f215da3a165e3d03be58e0369e27c1773e75ef70]]
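
Exit code 137 is 128 + 9 (SIGKILL) and, combined with reason: OOMKilled, confirms that the kernel OOM killer terminated the container. To scan a whole namespace for containers that were last terminated this way, a jsonpath loop like the following sketch can be used:

kubectl get pods -n test -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}{end}'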

Detect the OOM through the system log to validate the assumption

The following kernel messages indicate a Java OOM kill triggered by the cgroup memory setting:

# grep oom /var/log/messages

/var/log/messages:2019-03-14T09:15:17.541049+00:00 iZbp185dy2o3o6lnlo4f07Z kernel: [8040341.949064] java invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=0, order=0, oom_score_adj=968
/var/log/messages:2019-03-14T09:15:17.541117+00:00 iZbp185dy2o3o6lnlo4f07Z kernel: [8040341.949153]  [<ffffffff81191de4>] oom_kill_process+0x214/0x3f0
/var/log/messages:2019-03-14T09:15:17.541119+00:00 iZbp185dy2o3o6lnlo4f07Z kernel: [8040341.949171]  [<ffffffff811f9481>] mem_cgroup_oom_synchronize+0x2f1/0x310
/var/log/messages:2019-03-14T09:15:17.541147+00:00 iZbp185dy2o3o6lnlo4f07Z kernel: [8040341.950571] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name

# grep oom /var/log/warn

2019-03-14T09:15:17.541049+00:00 iZbp185dy2o3o6lnlo4f07Z kernel: [8040341.949064] java invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=0, order=0, oom_score_adj=968
2019-03-14T09:15:17.541117+00:00 iZbp185dy2o3o6lnlo4f07Z kernel: [8040341.949153]  [<ffffffff81191de4>] oom_kill_process+0x214/0x3f0
2019-03-14T09:15:17.541119+00:00 iZbp185dy2o3o6lnlo4f07Z kernel: [8040341.949171]  [<ffffffff811f9481>] mem_cgroup_oom_synchronize+0x2f1/0x310
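
The oom_kill_process and mem_cgroup_oom_synchronize frames show this is a cgroup (memory limit) OOM rather than a node-wide memory shortage. If /var/log/messages is not available on the node, dmesg -T | grep -i oom shows the same kernel messages. The effective limit can also be double-checked from inside the container; the sketch below assumes cgroup v1, the default at the time (on cgroup v2 the file is /sys/fs/cgroup/memory.max). For a 512Mi limit the value is 536870912 bytes.

kubectl exec console-54dc5566b4-hx6wb -n test -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes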

Root cause:

kubectl get pod console-54dc5566b4-hx6wb -n test -o yaml | grep limits -A4

limits:
  memory: 512Mi
requests:
  memory: 256Mi

In this case, the console pod does not declare its own resources, so it inherits the default memory limit from the test namespace's LimitRange, while the JVM is started with -Xms512m -Xmx1g; as soon as memory usage exceeds the 512Mi cgroup limit, the kernel kills the Java process.

kubectl describe pod console-54dc5566b4-hx6wb -n test | grep limit

Annotations:        kubernetes.io/limit-ranger=LimitRanger plugin set: memory request for container console; memory limit for container console

kubectl get limitrange -n test

NAME              CREATED AT
mem-limit-range   2019-03-14T09:04:10Z

kubectl describe ns test

Name:         test
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

Resource Limits
 Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
 ----       --------  ---  ---  ---------------  -------------  -----------------------
 Container  memory    -    -    256Mi            512Mi          -
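
The LimitRange that produces these defaults looks roughly like the following; this is reconstructed from the describe output above, and the actual object on the cluster may differ in detail:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: test
spec:
  limits:
  - type: Container
    default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi

Any container created in this namespace without its own resources section gets these values injected by the LimitRanger admission plugin, as the pod annotation above shows.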

Action to fix the OOM issue

After removing the LimitRange and recreating the pod, the application became healthy; an alternative fix is sketched after the commands below.

kubectl delete limitrange mem-limit-range -n test
kubectl delete pod console-54dc5566b4-hx6wb -n test
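
Deleting the LimitRange is the quickest fix; an alternative that keeps the namespace defaults in place is to give the deployment explicit limits large enough for the JVM. Below is a hypothetical excerpt of the console container spec; the 1536Mi figure is an assumption chosen to leave headroom above the -Xmx1g heap for metaspace, thread stacks and other off-heap memory:

    resources:
      requests:
        memory: 1Gi
      limits:
        memory: 1536Mi

A container that declares its own requests and limits is not given the LimitRange defaults, so the 512Mi limit no longer applies.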