k8s Pod Basics (Part 2)

Summary: A Pod is the smallest resource-management unit in Kubernetes and the smallest resource object for running containerized applications; a Pod represents a single running process in the cluster. Most other Kubernetes components exist to support Pods and extend their functionality, for example the StatefulSet and Deployment controllers that manage how Pods run, the Service and Ingress objects that expose Pod applications, and the PersistentVolume resources that provide storage for Pods.

7. Image Pull Policy (imagePullPolicy)


The core of a Pod is running containers, so a container engine such as Docker must be specified. Starting a container requires pulling its image, and Kubernetes lets the user choose the image pull policy:

● IfNotPresent: if the image already exists locally, kubelet does not pull it again; it pulls from the registry only when the image is missing on the node. This is the default pull policy;

● Always: the image is pulled again every time the Pod is created;

● Never: kubelet never tries to pull the image and only uses a local image.

Note: for images tagged ":latest", the default pull policy is "Always"; for images with any other tag, the default is "IfNotPresent".
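
For example, a minimal Pod manifest that states the policy explicitly might look like the following sketch (the Pod and container names are placeholders chosen for illustration):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pull-policy-demo          # hypothetical name, for illustration only
  spec:
    containers:
    - name: app
      image: nginx:1.14             # a fixed tag; with no explicit policy this would default to IfNotPresent
      imagePullPolicy: Always       # override the default and force a pull on every container start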


7.1 Official Example


https://kubernetes.io/docs/concepts/containers/images

Create a Pod that uses a private image to verify:


  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: private-image-test-1
  spec:
    containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
  EOF


The output is similar to:

pod/private-image-test-1 created


If everything goes smoothly, then after a while you can run:

kubectl logs private-image-test-1


and you should see SUCCESS.

If you suspect the command failed, you can run:

kubectl describe pods/private-image-test-1 | grep 'Failed'


If the command did fail, the output is similar to:

Fri, 26 Jun 2015 15:36:13 -0700    Fri, 26 Jun 2015 15:39:13 -0700    19    {kubelet node-i2hq}    spec.containers{uses-private-image}    failed        Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found

You must make sure that the content of the .docker/config.json file is identical on all nodes in the cluster. Otherwise, Pods will run normally on some nodes but fail to start on others. For example, if you use node auto-scaling, every instance template needs to include .docker/config.json or mount a drive that contains it.

Once the private registry key is configured in .docker/config.json, all Pods will be able to read images from the private registry.
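
One way to keep the file consistent is to copy it from a machine that has already logged in to every node. A minimal sketch, assuming password-less SSH from the master and reusing the Harbor address and node names that appear later in this article:

  # log in once so Docker writes the credentials into /root/.docker/config.json
  docker login hub.test.com
  # copy the credential file to each worker node (node names are assumptions; adjust to your cluster)
  for node in node01 node02; do
      scp /root/.docker/config.json root@${node}:/root/.docker/config.json
  done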


7.2 Check the Default Pull Policy When No Tag Is Specified


7.2.1 Create a Pod without specifying a tag


kubectl run nginx-test1 --image=nginx

  [root@master test]# kubectl run nginx-test1 --image=nginx
  kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  deployment.apps/nginx-test1 created


7.2.2 Check the default pull policy


kubectl edit pod nginx-test1-7c4c56845c-hndnk


  [root@master test]# kubectl get pod
  NAME READY STATUS RESTARTS AGE
  nginx-test1-7c4c56845c-hndnk 1/1 Running 0 75s
  [root@master test]# kubectl edit pod nginx-test1-7c4c56845c-hndnk
  ......
  spec:
    containers:
    - image: nginx
      imagePullPolicy: Always
      # no tag was specified, so the default tag latest is used and the default pull policy is Always
      name: nginx-test1
  ......


7.2.3 View the creation events


kubectl describe pod nginx-test1

  [root@master test]# kubectl describe pod nginx-test1
  ......
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 5m35s default-scheduler Successfully assigned default/nginx-test1-7c4c56845c-hndnk to node01
  Normal Pulling 5m34s kubelet, node01 Pulling image "nginx"
  Normal Pulled 5m19s kubelet, node01 Successfully pulled image "nginx"
  Normal Created 5m19s kubelet, node01 Created container nginx-test1
  Normal Started 5m19s kubelet, node01 Started container nginx-test1


Because the pull policy is Always, kubelet pulls the latest version of the image from the public/private registry every time, regardless of whether a local copy already exists.
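
Instead of opening kubectl edit, you can also read just the pull policy with a JSONPath query (the Pod name below is the one generated above and will differ in your environment):

  kubectl get pod nginx-test1-7c4c56845c-hndnk \
    -o jsonpath='{.spec.containers[0].imagePullPolicy}{"\n"}'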


7.3 Test Case (Non-Looping Command)


7.3.1 Create the test manifest nginx-test1.yaml

  [root@master test]# vim nginx-test1.yaml
   
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-test1
  spec:
    containers:
    - name: nginx
      image: nginx
      imagePullPolicy: Always
      command: [ "echo","SUCCESS" ]

7.3.2 Apply the nginx-test1 manifest

kubectl apply -f nginx-test1.yaml

  [root@master test]# kubectl apply -f nginx-test1.yaml
  pod/nginx-test1 created


7.3.3 Check the Pod status


kubectl get pod -o wide


  [root@master test]# kubectl get pod -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  nginx-test1 0/1 CrashLoopBackOff 1 51s 10.244.1.48 node01 <none> <none>


The Pod status is CrashLoopBackOff, which means the Pod has entered an abnormal restart loop.

The reason is that once echo finishes, the process exits, and with it the container's lifecycle ends.


7.3.4 View the creation events


kubectl describe pod nginx-test1

  [root@master test]# kubectl describe pod nginx-test1
  ......
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 3m54s default-scheduler Successfully assigned default/nginx-test1 to node01
  Normal Created 2m8s (x4 over 3m38s) kubelet, node01 Created container nginx
  Normal Started 2m8s (x4 over 3m38s) kubelet, node01 Started container nginx
  Warning BackOff 100s (x7 over 3m21s) kubelet, node01 Back-off restarting failed container
  Normal Pulling 86s (x5 over 3m53s) kubelet, node01 Pulling image "nginx"
  Normal Pulled 71s (x5 over 3m38s) kubelet, node01 Successfully pulled image "nginx"



You can see that after the container in the Pod finished, it was restarted again because the Pod's restart policy is Always, and the image was pulled again each time.
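
If a one-shot command like echo is really what you want, a simpler fix than swapping the image is to set the Pod-level restartPolicy. A minimal sketch, with a placeholder Pod name chosen for illustration:

  apiVersion: v1
  kind: Pod
  metadata:
    name: echo-once                # hypothetical name, for illustration only
  spec:
    restartPolicy: Never           # the default is Always; Never lets the Pod finish instead of looping
    containers:
    - name: echo
      image: nginx
      imagePullPolicy: IfNotPresent
      command: [ "echo", "SUCCESS" ]

With restartPolicy: Never the Pod would end with a Completed status (phase Succeeded) instead of cycling through CrashLoopBackOff.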


7.3.5 Modify nginx-test1.yaml


  [root@master test]# vim nginx-test1.yaml
   
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-test1
  spec:
    containers:
    - name: nginx
      # pin the nginx version to 1.14
      image: nginx:1.14
      imagePullPolicy: Always
      # comment out the command
      #command: [ "echo","SUCCESS" ]


7.3.6 Apply the new nginx-test1.yaml


Delete the existing resource:

kubectl delete -f nginx-test1.yaml


  [root@master test]# kubectl delete -f nginx-test1.yaml
  pod "nginx-test1" deleted

Create the new resource:

kubectl apply -f nginx-test1.yaml

  [root@master test]# kubectl apply -f nginx-test1.yaml
  pod/nginx-test1 created


7.3.7 Check the Pod status


kubectl get pod -o wide

  [root@master test]# kubectl get pod -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  nginx-test1 1/1 Running 0 50s 10.244.1.49 node01 <none> <none>

The Pod was created successfully.


7.3.8 Check the application version in the Pod


curl -I 10.244.1.49

  [root@master test]# curl -I 10.244.1.49
  HTTP/1.1 200 OK
  Server: nginx/1.14.2
  # the nginx:1.14 tag resolves to version 1.14.2
  Date: Thu, 04 Nov 2021 15:04:43 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
  Connection: keep-alive
  ETag: "5c0692e1-264"
  Accept-Ranges: bytes
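
Besides curl, you can confirm which image the running container actually uses straight from the Pod status (the image digest in your output will differ):

  kubectl get pod nginx-test1 \
    -o jsonpath='{.status.containerStatuses[0].image}{"\n"}{.status.containerStatuses[0].imageID}{"\n"}'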


7.4 Test Case (Looping Command)


7.4.1 Modify nginx-test1.yaml


  [root@master test]# vim nginx-test1.yaml
   
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-test1
  spec:
    containers:
    - name: nginx
      # switch the image back to the latest version
      image: nginx
      # set the image pull policy to IfNotPresent
      imagePullPolicy: IfNotPresent
      # run a looping command so the Pod does not exit on its own
      command: [ "sh","while true;do echo SUCCESS;done;" ]


7.4.2 Apply the new nginx-test1.yaml


Delete the existing resource:

kubectl delete -f nginx-test1.yaml

  [root@master test]# kubectl delete -f nginx-test1.yaml
  pod "nginx-test1" deleted

Create the new resource:

kubectl apply -f nginx-test1.yaml

  [root@master test]# kubectl apply -f nginx-test1.yaml
  pod/nginx-test1 created


7.4.3 Check the Pod status


kubectl get pod -o wide

  [root@master test]# kubectl get pod -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  nginx-test1 0/1 CrashLoopBackOff 3 67s 10.244.1.50 node01 <none> <none>

The Pod has entered the abnormal restart loop again.


7.4.4 View the creation events


kubectl describe pod nginx-test1

  [root@master test]# kubectl describe pod nginx-test1
  ......
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 2m59s default-scheduler Successfully assigned default/nginx-test1 to node01
  Normal Pulled 95s (x5 over 2m59s) kubelet, node01 Container image "nginx" already present on machine
  Normal Created 95s (x5 over 2m59s) kubelet, node01 Created container nginx
  Normal Started 95s (x5 over 2m59s) kubelet, node01 Started container nginx
  Warning BackOff 70s (x10 over 2m57s) kubelet, node01 Back-off restarting failed container

This indicates that the failure happens at the command step.


7.4.5 Check the Pod logs


kubectl logs nginx-test1

  [root@master test]# kubectl logs nginx-test1
  sh: 0: Can't open while true;do echo SUCCESS;done;

The logs show that the command itself is wrong.


7.4.6 Modify nginx-test1.yaml again


  [root@master test]# vim nginx-test1.yaml
   
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-test1
  spec:
    containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      # the "-c" option was missing; add it, then save and exit
      command: [ "sh","-c","while true;do echo SUCCESS;done;" ]


7.4.7 Apply the new nginx-test1.yaml again


Delete the existing resource:

kubectl delete -f nginx-test1.yaml

  [root@master test]# kubectl delete -f nginx-test1.yaml
  pod "nginx-test1" deleted

Create the new resource:

kubectl apply -f nginx-test1.yaml

  [root@master test]# kubectl apply -f nginx-test1.yaml
  pod/nginx-test1 created

7.4.8 Check the Pod status again

kubectl get pod -o wide

  [root@master test]# kubectl get pod -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  nginx-test1 1/1 Running 0 7s 10.244.1.51 node01 <none> <none>

The Pod is running successfully.


7.4.9 View the creation events


kubectl describe pod nginx-test1

  [root@master test]# kubectl describe pod nginx-test1
  ......
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 108s default-scheduler Successfully assigned default/nginx-test1 to node01
  Normal Pulled 107s kubelet, node01 Container image "nginx" already present on machine
  Normal Created 107s kubelet, node01 Created container nginx
  Normal Started 107s kubelet, node01 Started container nginx

Because the image pull policy is set to IfNotPresent, kubelet first checks the local image store; if an image with the matching tag already exists it is used directly, and only otherwise does kubelet go to the registry to pull it.
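
You can confirm the local copy directly on the node where the Pod was scheduled (this cluster uses Docker as the container runtime; on a containerd-based node you would use crictl images instead):

  # run on node01: list the locally cached nginx images
  docker images nginx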

7.4.10 Check the Pod's image information

kubectl describe pod nginx-test1 | grep "Image"

  [root@master test]# kubectl describe pod nginx-test1 | grep "Image"
  Image: nginx
  Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36


8. Working with Harbor Login Credentials


8.1 Deploy Harbor


For the deployment steps, see the earlier post in this series.


8.2 View the Login Credential


cat /root/.docker/config.json | base64 -w 0

node01

  [root@node01 ~]# cat /root/.docker/config.json | base64 -w 0
  # base64 -w 0: base64-encode the file and disable line wrapping
  ewoJImF1dGhzIjogewoJCSJodWIudGVzdC5jb20iOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9Cn0=
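
If you want to sanity-check the string before pasting it into a Secret, decode it back with base64 -d; it should print the original .docker/config.json content:

  echo 'ewoJImF1dGhzIjogewoJCSJodWIudGVzdC5jb20iOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9Cn0=' | base64 -d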


8.3 Create the Harbor Login Credential Manifest


master

  [root@master ~]# vim harbor-pull-secret.yaml
   
  apiVersion: v1
  kind: Secret
  metadata:
    name: harbor-pull-secret
  data:
    # paste the login credential string captured above
    .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJodWIudGVzdC5jb20iOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9Cn0=
  type: kubernetes.io/dockerconfigjson

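Equivalently, kubectl can generate the same kind of Secret without hand-copying base64. A sketch, where the username and password are placeholders to replace with your own Harbor account:

  kubectl create secret docker-registry harbor-pull-secret \
    --docker-server=hub.test.com \
    --docker-username=<harbor-user> \
    --docker-password=<harbor-password>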

8.4 Create the Secret Resource


kubectl create -f harbor-pull-secret.yaml

master

  [root@master test]# kubectl create -f harbor-pull-secret.yaml
  secret/harbor-pull-secret created


8.5 View the Secret Resource


kubectl get secret

master

  [root@master test]# kubectl get secret
  NAME TYPE DATA AGE
  default-token-7lsdx kubernetes.io/service-account-token 3 3d5h
  harbor-pull-secret kubernetes.io/dockerconfigjson 1 2m3s


8.6 Create Resources That Pull Images from Harbor


master

  [root@master test]# vim nginx-deployment.yaml
   
  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: my-nginx
  spec:
    replicas: 2
    template:
      metadata:
        labels:
          app: my-nginx
      spec:
        # add the imagePullSecrets field
        imagePullSecrets:
        # reference the Secret created above by name
        - name: harbor-pull-secret
        containers:
        - name: my-nginx
          # use the image name as it is stored in Harbor
          image: hub.test.com/library/nginx:v1
          ports:
          - containerPort: 80
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: my-nginx
  spec:
    type: NodePort
    ports:
    - port: 8080
      targetPort: 8080   # note: nginx listens on 80, so traffic will not reach it with targetPort 8080 as written
      nodePort: 11111    # note: outside the default NodePort range (30000-32767); needs a customized apiserver range
    selector:
      app: my-nginx


8.7 Delete the nginx Image Previously Downloaded on the Node


node01

docker rmi -f hub.test.com/library/nginx:v1

  [root@node01 ~]# docker images
  REPOSITORY TAG IMAGE ID CREATED SIZE
  nginx latest 87a94228f133 3 weeks ago 133MB
  hub.test.com/library/nginx v1 87a94228f133 3 weeks ago 133MB
  k8s.gcr.io/kube-apiserver v1.15.1 68c3eb07bfc3 2 years ago 207MB
  k8s.gcr.io/kube-proxy v1.15.1 89a062da739d 2 years ago 82.4MB
  registry.aliyuncs.com/google_containers/kube-proxy v1.15.1 89a062da739d 2 years ago 82.4MB
  k8s.gcr.io/kube-controller-manager v1.15.1 d75082f1d121 2 years ago 159MB
  k8s.gcr.io/kube-scheduler v1.15.1 b0b3c4c404da 2 years ago 81.1MB
  nginx 1.15 53f3fd8007f7 2 years ago 109MB
  nginx 1.14 295c7be07902 2 years ago 109MB
  wl/flannel v0.11.0-amd64 ff281650a721 2 years ago 52.6MB
  k8s.gcr.io/coredns 1.3.1 eb516548c180 2 years ago 40.3MB
  registry.aliyuncs.com/google_containers/coredns 1.3.1 eb516548c180 2 years ago 40.3MB
  wl/kubernetes-dashboard-amd64 v1.10.1 f9aed6605b81 2 years ago 122MB
  k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 2 years ago 258MB
  nginx 1.15.4 bc26f1ed35cf 3 years ago 109MB
  busybox 1.28 8c811b4aec35 3 years ago 1.15MB
  k8s.gcr.io/pause 3.1 da86e6ba6ca1 3 years ago 742kB
  registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 3 years ago 742kB
  [root@node01 ~]# docker rmi -f hub.test.com/library/nginx:v1
  Untagged: hub.test.com/library/nginx:v1
  Untagged: hub.test.com/library/nginx@sha256:7250923ba3543110040462388756ef099331822c6172a050b12c7a38361ea46f


8.8 Create the Resources


master

kubectl apply -f nginx-deployment.yaml

  [root@master test]# kubectl apply -f nginx-deployment.yaml
  deployment.extensions/my-nginx created
  service/my-nginx created


8.9 View the Pod Information


master

kubectl get pods -o wide


  [root@master test]# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  my-nginx-74f9bccc8c-5z9nv 1/1 Running 0 49s 10.244.2.30 node02 <none> <none>
  my-nginx-74f9bccc8c-txkg9 1/1 Running 0 49s 10.244.1.52 node01 <none> <none>


8.10 View the Pod Description


kubectl describe pod my-nginx-74f9bccc8c-txkg9

  [root@master test]# kubectl describe pod my-nginx-74f9bccc8c-txkg9
  ......
  Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 4m38s default-scheduler Successfully assigned default/my-nginx-74f9bccc8c-txkg9 to node01
  # the image was pulled from the Harbor registry
  Normal Pulling 4m37s kubelet, node01 Pulling image "hub.test.com/library/nginx:v1"
  Normal Pulled 4m37s kubelet, node01 Successfully pulled image "hub.test.com/library/nginx:v1"
  Normal Created 4m37s kubelet, node01 Created container my-nginx
  Normal Started 4m37s kubelet, node01 Started container my-nginx

kubectl describe pod my-nginx-74f9bccc8c-txkg9 | grep "Image:"

  [root@master test]# kubectl describe pod my-nginx-74f9bccc8c-txkg9 | grep "Image:"
  Image: hub.test.com/library/nginx:v1

8.11 Refresh the Harbor Page to See the Increased Pull Count



