Kubernetes Minimum Scheduling Unit: Pod Explained (Part 1)


Pod Overview

A Pod is the smallest schedulable unit in Kubernetes. In k8s you define a Pod resource and run containers inside it; each container specifies an image, and that image is what actually runs the service.

A Pod wraps a single container (or several containers), and the containers in a Pod share storage, networking, and so on. In other words, think of the whole Pod as a virtual machine and each container as a process running inside that VM.

A Pod must be scheduled onto a worker node of the k8s cluster to run; which node it lands on is decided by the scheduler.

In plain terms: think of a Pod as a pea pod and the containers as the peas inside it. The peas in one pod absorb the same nutrients, fertilizer, and water; in the same way, the containers in a Pod share the Pod's network, storage, and so on.

A Pod acts as a logical host. For example, to deploy a Tomcat application without containers we would put it on a physical machine, a virtual machine, or a cloud host; with k8s we instead define a Pod resource and place the Tomcat container inside it, so the Pod plays the role of a logical host.

1. How does a Pod manage multiple containers?

A Pod can run multiple containers at the same time, and the containers of one Pod are always placed on the same node. They share resources and the network environment and are always scheduled together. Running multiple containers in a single Pod is a fairly advanced pattern; consider it only when your containers need to cooperate closely.

For example, one container runs as a web server serving files from a shared volume, while another "sidecar" container fetches content from a remote source and keeps those files up to date; a minimal sketch of this pattern follows.
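The sketch assumes two illustrative images (nginx and busybox) and made-up names; it is not from the original article, just one way to express the shared-volume sidecar idea:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar            #hypothetical name, for illustration only
spec:
  volumes:
  - name: shared-data               #emptyDir volume shared by both containers
    emptyDir: {}
  containers:
  - name: web                       #serves the files in the shared volume
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar                   #periodically refreshes the shared content
    image: busybox
    command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data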

Some Pods have init containers as well as app containers; the init containers run before the application containers start.

Configuration files are usually managed with a ConfigMap, which serves as a central place for configuration.
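As a rough sketch of that idea (all names here are made up for illustration), a ConfigMap can be created and mounted into a Pod like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                  #hypothetical ConfigMap name
data:
  app.properties: |
    log.level=info
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config
spec:
  volumes:
  - name: config-volume
    configMap:
      name: app-config              #the ConfigMap defined above
  containers:
  - name: app
    image: busybox                  #illustrative image
    command: ["sh", "-c", "cat /etc/config/app.properties && sleep 3600"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config        #each ConfigMap key appears here as a file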

2. Pod networking:

Every Pod gets its own IP address; each Pod is assigned a unique IP (allocated by the network plugin, e.g. Calico, Flannel, or Weave). The containers in a Pod share the network namespace, including the IP address and network ports, so they can communicate with each other over localhost. Through the network plugin (Calico here), a Pod can also talk to Pods on other nodes.


[root@master01 ~ ]# kubectl get po -n kube-system -owide

NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
calico-kube-controllers-58f755f869-4ljmm   1/1     Running   2          3h25m   172.31.112.132   master01
calico-node-gmvp4                          1/1     Running   1          3h25m   10.10.0.221      master02
calico-node-njbwz                          1/1     Running   1          3h25m   10.10.0.220      master01
calico-node-sm486                          1/1     Running   1          3h25m   10.10.0.224      node01
calico-node-zx7t5                          1/1     Running   1          3h25m   10.10.0.223      master03
coredns-59d64cd4d4-6cbmv                   1/1     Running   1          3h43m   172.31.112.133   master01
coredns-59d64cd4d4-mwph6                   1/1     Running   1          3h43m   172.31.112.134   master01
etcd-master01                              1/1     Running   1          3h43m   10.10.0.220      master01
etcd-master02                              1/1     Running   1          3h39m   10.10.0.221      master02
etcd-master03                              1/1     Running   1          3h37m   10.10.0.223      master03
kube-apiserver-master01                    1/1     Running   1          3h43m   10.10.0.220      master01
kube-apiserver-master02                    1/1     Running   1          3h39m   10.10.0.221      master02
kube-apiserver-master03                    1/1     Running   1          3h37m   10.10.0.223      master03
kube-controller-manager-master01           1/1     Running   2          3h43m   10.10.0.220      master01
kube-controller-manager-master02           1/1     Running   2          3h39m   10.10.0.221      master02
kube-controller-manager-master03           1/1     Running   1          3h37m   10.10.0.223      master03
kube-proxy-9tqnz                           1/1     Running   1          3h29m   10.10.0.224      node01
kube-proxy-b52wx                           1/1     Running   1          3h37m   10.10.0.223      master03
kube-proxy-h6tfj                           1/1     Running   1          3h43m   10.10.0.220      master01
kube-proxy-nmctj                           1/1     Running   1          3h39m   10.10.0.221      master02
kube-scheduler-master01                    1/1     Running   2          3h43m   10.10.0.220      master01
kube-scheduler-master02                    1/1     Running   2          3h39m   10.10.0.221      master02
kube-scheduler-master03                    1/1     Running   1          3h37m   10.10.0.223      master03

How to read the READY column: it shows two numbers. The number on the left is how many containers in the Pod are ready; the number on the right is how many containers the Pod has (here 1/1 means the Pod's single container is ready).

3. Pod storage:

When creating a Pod you can specify volumes to mount. All containers in the Pod can access the shared volumes, which lets them share data. As long as a Pod mounts a persistent volume, the data is still there after the Pod restarts. For example, if you create a MySQL Pod, the data in the persistent volume survives even after the Pod is deleted. Persistent volume types include Ceph, PVC, GlusterFS, and NFS.
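A minimal sketch of a Pod backed by a PersistentVolumeClaim might look like the following; the Pod name, image, and the mysql-pvc claim are assumptions for illustration, and the PVC itself would have to exist already:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-test                  #hypothetical name
spec:
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: mysql-pvc          #assumes a PVC named mysql-pvc already exists
  containers:
  - name: mysql
    image: mysql:5.7                #illustrative image
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123456"               #example value only
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql     #MySQL data directory, backed by the PVC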

4. How Pods are run: two common ways

In K8s every resource can be created from a YAML file, and Pods are no exception. You can also create a Pod on the command line with kubectl run (less common); a quick example follows.
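A throwaway Pod can be started with a single command; this sketch reuses the tomcat image from the manifest further down, and the Pod name tomcat-cli is made up:

kubectl run tomcat-cli --image=xianchao/tomcat-8.5-jre8:v1 --port=8080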

1. Standalone (self-managed) Pods:

A standalone Pod is one you define directly as a Pod resource. Create the resource manifest as follows:

[root@master01 ~ ]# cat pod-tomcat.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-test
  namespace: default
  labels:
    app: tomcat
spec:
  containers:
  - name: tomcat-java
    ports:
    - containerPort: 8080
    image: xianchao/tomcat-8.5-jre8:v1
    imagePullPolicy: IfNotPresent   #use the local image if it exists, otherwise pull it from the registry

[root@master01 ~ ]# mkdir k8s-test

[root@master01 ~ ]#

[root@master01 ~ ]#

[root@master01 ~ ]# mv pod-tomcat.yaml k8s-test/

The image needs to be loaded on the worker node:

[root@node01 ~ ]# docker load -i xianchao-tomcat.tar.gz 
df64d3292fd6: Loading layer [==================================================>]  4.672MB/4.672MB
0c3170905795: Loading layer [==================================================>]  3.584kB/3.584kB
9bca1faaa73e: Loading layer [==================================================>]  79.44MB/79.44MB
e927085edc33: Loading layer [==================================================>]   2.56kB/2.56kB
e5f8376fd9dc: Loading layer [==================================================>]  27.08MB/27.08MB
e82a3681bb38: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: xianchao/tomcat-8.5-jre8:v1



[root@node01 ~ ]# docker images 
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
xianchao/tomcat-8.5-jre8                                          v1         4ac473a3dd92   3 years ago     108MB

#Apply the resource manifest

[root@master01 k8s-test ]# kubectl apply -f pod-tomcat.yaml

pod/tomcat-test created

#Check whether the Pod was created successfully

[root@master01 k8s-test ]# kubectl get pods -owide -l app=tomcat

NAME          READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
tomcat-test   1/1     Running   0          4m34s   172.29.55.1   node01

But a standalone Pod has a problem. Suppose we delete it by accident:

[root@master01 k8s-test ]# kubectl delete pods tomcat-test

pod "tomcat-test" deleted

#Check whether the Pod is still there

[root@master01 k8s-test ]# kubectl get pods -o wide -l app=tomcat

No resources found in default namespace.

#The result is empty, which means the Pod has been deleted

As the above shows, if you define a Pod resource directly, then once that Pod is deleted it is gone for good and no new Pod is created to replace it. In production this is very risky: if the Pod serves a core business workload, deleting it causes a serious outage.

What we want is that when a Pod is deleted, an identical Pod is created to keep providing the service.

So from here on, the Pods we work with will all be managed by controllers.

Pods created through a controller can specify a replica count, so several identical Pods are created, giving high availability.

Standalone Pods are generally used only for testing, where they do not affect the business and can be created and deleted at will; just keep the manifest file so they can be recreated.

2. Controller-managed Pods:

Common controllers that manage Pods: ReplicaSet, Deployment, Job, CronJob, DaemonSet, StatefulSet.

Controller-managed Pods are always kept at the specified number of replicas.

For example, managing Pods with a Deployment allows rolling updates and canary releases; a sketch of the relevant commands follows.
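Once the nginx-test Deployment created below exists, a rolling update and rollback look roughly like this (the v2 image tag is hypothetical):

#rolling update: change the image and watch the rollout
kubectl set image deployment/nginx-test my-nginx=xianchao/nginx:v2
kubectl rollout status deployment/nginx-test

#roll back if the new version misbehaves
kubectl rollout undo deployment/nginx-test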

First upload the image to the worker node, then load it:

#Load the image archive:

Upload xianchao-nginx.tar.gz to the node01 node

[root@node01 ~ ]# docker load -i xianchao-nginx.tar.gz 
7e718b9c0c8c: Loading layer [==================================================>]  72.52MB/72.52MB
4dc529e519c4: Loading layer [==================================================>]  64.81MB/64.81MB
23c959acc3d0: Loading layer [==================================================>]  3.072kB/3.072kB
15aac1be5f02: Loading layer [==================================================>]  4.096kB/4.096kB
974e9faf62f1: Loading layer [==================================================>]  3.584kB/3.584kB
64ee8c6d0de0: Loading layer [==================================================>]  7.168kB/7.168kB
Loaded image: xianchao/nginx:v1

#Create a resource manifest file

[root@master01 k8s-pod ]# vim nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  labels:
    app: nginx-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: xianchao/nginx:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8088

#Apply the resource manifest

[root@master01 pod-test ]# kubectl apply -f nginx-deploy.yaml

deployment.apps/nginx-test created

#Check the Deployment

[root@master01 pod-test ]# kubectl get deploy -l app=nginx-deploy

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test   2/2     2            2           6m23s

When a Deployment is created, a ReplicaSet is created automatically; the ReplicaSet name is the Deployment name plus a random suffix.

#Check the ReplicaSet

[root@master01 pod-test ]# kubectl get rs -l app=nginx

NAME                    DESIRED   CURRENT   READY   AGE
nginx-test-64b444bff5   2         2         2       3h13m

#Check the Pods; each Pod name is the ReplicaSet name plus a random suffix

[root@master01 pod-test ]# kubectl get pod -l app=nginx -owide

NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
nginx-test-64b444bff5-pl8bv   1/1     Running   0          3h16m   172.29.55.2   node01
nginx-test-64b444bff5-qzzqq   1/1     Running   0          3h16m   172.29.55.3   node01

#Delete the Pod nginx-test-64b444bff5-pl8bv

[root@master01 pod-test ]# kubectl delete pods nginx-test-64b444bff5-pl8bv

pod "nginx-test-64b444bff5-pl8bv" deleted

Checking again, another Pod has been created:

[root@master01 pod-test ]# kubectl get pods

NAME                          READY   STATUS    RESTARTS   AGE
nginx-test-64b444bff5-d97f9   1/1     Running   0          18s
nginx-test-64b444bff5-qzzqq   1/1     Running   0          3h23m

This shows that Pods managed by a Deployment are always kept at the specified replica count.
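Because the Deployment reconciles toward the declared replica count, scaling up or down is just a matter of changing that number; a quick sketch using the Deployment above:

kubectl scale deployment nginx-test --replicas=3
kubectl get pods -l app=nginx   #a third Pod appears shortly afterwards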

5. How a Pod resource gets created:

The K8s Pod creation flow

A Pod is the most basic deployment and scheduling unit in Kubernetes. It can contain containers and logically represents one instance of an application. For example, a web site built from a frontend, a backend, and a database runs these three components in their own containers, so we could create one Pod containing those three containers.

Pod resources are created with the kubectl command, using kubectl apply or kubectl run.

On the master node: kubectl -> kube-apiserver -> kubelet -> CRI container runtime initialization

Step 1:

The client submits a request to create the Pod, either by calling the API Server's REST API or through the kubectl command-line tool, e.g. kubectl apply -f filename.yaml (the resource manifest).

Step 2:

After receiving the creation request, the apiserver writes the attribute information from the YAML (the metadata) into etcd.

Step 3:

The apiserver's watch mechanism fires to prepare the Pod creation: the information is passed to the scheduler, which runs its scheduling algorithm to choose a node, reports the chosen node back to the apiserver, and the apiserver writes the node binding into etcd.

The scheduler filters out unsuitable hosts with a set of rules. For example, if the Pod requests a certain amount of resources, hosts with less free capacity than requested are filtered out.

The scheduler watches the k8s API, which works like a notification mechanism.

It first checks: is pod.spec.nodeName == null?

If it is null, this Pod request is new and needs to be scheduled, so the scheduler runs its calculation and picks the most suitable (least loaded) node.

The assignment is then recorded in etcd: pod.spec.nodeName = nodeA (a concrete node).

Note: the information produced by all of these steps is also written to etcd.
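You can see the outcome of this binding on any running Pod, since the scheduler's decision ends up in spec.nodeName; a quick check, using the tomcat Pod created earlier, might look like this:

kubectl get pod tomcat-test -o jsonpath='{.spec.nodeName}'
#prints the node the scheduler bound the Pod to, e.g. node01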

Step 4:

Again via the watch mechanism, the kubelet on the chosen node picks up the Pod bound to it and calls the Docker API to create and start the Pod's containers.

Step 5:

When creation finishes, the result is reported back to the kubelet; the kubelet passes the Pod's status to the apiserver, and the apiserver writes the Pod status into etcd.

1. Tips for writing the YAML resource manifest:

The YAML file name should make clear which application it brings up.

[root@master01 pod-test ]# vim pod-tomcat.yaml
apiVersion: v1                           #API version
kind: Pod                                #type of resource to create
metadata:
  name: tomcat-test                      #Pod name
  namespace: default                     #namespace the Pod belongs to
  labels:
    app: tomcat                          #Pod labels
spec:
  containers:
  - name: tomcat-java                    #name of the container in the Pod
    ports:
    - containerPort: 8080                #port exposed by the container
    image: xianchao/tomcat-8.5-jre8:v1   #image used by the container
    imagePullPolicy: IfNotPresent        #image pull policy

There are many apiVersion values:

[root@master01 pod-test ]# kubectl api-versions

admissionregistration.k8s.io/v1

admissionregistration.k8s.io/v1beta1

apiextensions.k8s.io/v1

apiextensions.k8s.io/v1beta1

apiregistration.k8s.io/v1

apiregistration.k8s.io/v1beta1

apps/v1

authentication.k8s.io/v1

authentication.k8s.io/v1beta1

authorization.k8s.io/v1

authorization.k8s.io/v1beta1

autoscaling/v1

autoscaling/v2beta1

autoscaling/v2beta2

batch/v1

batch/v1beta1

certificates.k8s.io/v1

certificates.k8s.io/v1beta1

coordination.k8s.io/v1

coordination.k8s.io/v1beta1

crd.projectcalico.org/v1

discovery.k8s.io/v1

discovery.k8s.io/v1beta1

events.k8s.io/v1

events.k8s.io/v1beta1

extensions/v1beta1

flowcontrol.apiserver.k8s.io/v1beta1

networking.k8s.io/v1

networking.k8s.io/v1beta1

node.k8s.io/v1

node.k8s.io/v1beta1

policy/v1

policy/v1beta1

rbac.authorization.k8s.io/v1

rbac.authorization.k8s.io/v1beta1

scheduling.k8s.io/v1

scheduling.k8s.io/v1beta1

storage.k8s.io/v1

storage.k8s.io/v1beta1

v1   #the core API group

Tips for writing a Pod resource manifest:

Use kubectl explain to see which fields a Pod resource definition contains.

For whatever resource you want to create, kubectl explain shows which apiVersion and kind it uses.

[root@master01 pod-test ]# kubectl explain pod

KIND: Pod

VERSION: v1

DESCRIPTION:

Pod is a collection of containers that can run on a host. This resource is

created by clients and scheduled onto hosts.

FIELDS:

apiVersion <string>   #apiVersion defines the versioned schema of this object, i.e. which API version it uses

APIVersion defines the versioned schema of this representation of an

object. Servers should convert recognized schemas to the latest internal

value, and may reject unrecognized values. More info:

https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

kind <string>   #kind is a string value representing the resource to create; the server can infer it from the endpoint the client submits the request to

Kind is a string value representing the REST resource this object

represents. Servers may infer this from the endpoint the client submits

requests to. Cannot be updated. In CamelCase. More info:

https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

metadata <Object>   #metadata is an object that defines the metadata attributes

Standard object’s metadata. More info:

https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

spec <Object>   #spec defines the Pod's specification; the containers, ports, images, and so on are all defined under spec

Specification of the desired behavior of the pod. More info:

https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

status <Object>   #status reports the Pod's state; it cannot be modified and does not need to be set when defining a Pod

Most recently observed status of the pod. This data may not be up to date.

Populated by the system. Read-only. More info:

https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

#See how the pod.metadata field is defined:

[root@master01 pod-test ]# kubectl explain pod.metadata

KIND: Pod

VERSION: v1

RESOURCE: metadata

DESCRIPTION:

Standard object’s metadata. More info:

https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

 ObjectMeta is metadata that all persisted resources must have, which
 includes all objects users must create.

FIELDS:

annotations <map[string]string>   #annotations are a map, i.e. a set of key-value pairs where both key and value are strings

Annotations is an unstructured key value map stored with a resource that

may be set by external tools to store and retrieve arbitrary metadata. They

are not queryable and should be preserved when modifying objects. More

info: http://kubernetes.io/docs/user-guide/annotations

"metadata": {
  "annotations": {
    "key1": "value1",
    "key2": "value2"
  }
}

Information typically recorded in annotations includes:
build info, release info, and Docker image info, such as timestamps, release IDs, image hashes, and docker registry addresses;
addresses of logging, monitoring, and analytics repositories;
debugging tool info, such as the tool name and version;
team contact info, such as phone numbers, owner names, and websites.
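For instance, such information can also be attached imperatively with kubectl annotate; the following is only a sketch with made-up keys and values, using the tomcat Pod from earlier:

kubectl annotate pod tomcat-test buildinfo=abc123 release-id=2024-05-01
kubectl get pod tomcat-test -o jsonpath='{.metadata.annotations}'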

clusterName

The name of the cluster which the object belongs to. This is used to

distinguish resources with same name and namespace in different clusters.

This field is not set anywhere right now and apiserver is going to ignore

it if set in create or update request.

#The name of the cluster the object belongs to; it is used to distinguish resources with the same name and namespace in different clusters.
This field is not currently set anywhere, and the apiserver will ignore it if it is set in a create or update request.

creationTimestamp

CreationTimestamp is a timestamp representing the server time when this

object was created. It is not guaranteed to be set in happens-before order

across separate operations. Clients may not set this value. It isrepresented in RFC3339 form and is in UTC.

 Populated by the system. Read-only. Null for lists. More info:
 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

deletionGracePeriodSeconds <integer>   #how many seconds the object is allowed to keep running before it is removed; the default grace period is 30 seconds

Number of seconds allowed for this object to gracefully terminate before it

will be removed from the system. Only set when deletionTimestamp is also

set. May only be shortened. Read-only.

Kubernetes provides a grace period that takes effect when a Pod is deleted: the deletion is delayed for a while, and if nothing is set it waits 30 seconds before removing the Pod.

When deleting a Pod you can set the grace period manually:

time -p kubectl delete pod testbox --force --grace-period=0   #delete the pod immediately

time -p measures how long the command takes to run
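Related to this, the grace period a Pod's containers get on shutdown can also be declared in the Pod spec via terminationGracePeriodSeconds; the following is only a sketch (the pod name testbox and the busybox image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: testbox
spec:
  terminationGracePeriodSeconds: 10   #wait at most 10s after SIGTERM before sending SIGKILL
  containers:
  - name: box
    image: busybox
    command: ["sleep", "3600"]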

deletionTimestamp

DeletionTimestamp is RFC 3339 date and time at which this resource will be

deleted. This field is set by the server when a graceful deletion is

requested by the user, and is not directly settable by a client. The

resource is expected to be deleted (no longer visible from resource lists,

and not reachable by name) after the time in this field, once the

finalizers list is empty. As long as the finalizers list contains items,

deletion is blocked. Once the deletionTimestamp is set, this value may not

be unset or be set further into the future, although it may be shortened or

the resource may be deleted prior to this time. For example, a user may

request that a pod is deleted in 30 seconds. The Kubelet will react by

sending a graceful termination signal to the containers in the pod. After

that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)

to the container and after cleanup, remove the pod from the API. In the

presence of network partitions, this object may still exist after this

timestamp, until an administrator or automated process can determine the

resource is fully terminated. If not set, graceful deletion of the object

has not been requested.

 Populated by the system when a graceful deletion is requested. Read-only.
 More info:
 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

finalizers <[]string>

Must be empty before the object is deleted from the registry. Each entry is

an identifier for the responsible component that will remove the entry from

the list. If the deletionTimestamp of the object is non-nil, entries in

this list can only be removed. Finalizers may be processed and removed in

any order. Order is NOT enforced because it introduces significant risk of

stuck finalizers. finalizers is a shared field, any actor with permission

can reorder it. If the finalizer list is processed in order, then this can

lead to a situation in which the component responsible for the first

finalizer in the list is waiting for a signal (field value, external

system, or other) produced by a component responsible for a finalizer later

in the list, resulting in a deadlock. Without enforced ordering finalizers

are free to order amongst themselves and are not vulnerable to ordering

changes in the list.

generateName

GenerateName is an optional prefix, used by the server, to generate a

unique name ONLY IF the Name field has not been provided. If this field is

used, the name returned to the client will be different than the name

passed. This value will also be combined with a unique suffix. The provided

value has the same validation rules as the Name field, and may be truncated

by the length of the suffix required to make the value unique on the

server.

 If this field is specified and the generated name exists, the server will
 NOT return a 409 - instead, it will either return 201 Created or 500 with
 Reason ServerTimeout indicating a unique name could not be found in the
 time allotted, and the client should retry (optionally after the time
 indicated in the Retry-After header).

 Applied only if Name is not specified. More info:
 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency

generation

A sequence number representing a specific generation of the desired state.

Populated by the system. Read-only.

labels <map[string]string>   #labels attached to the resource being created

Map of string keys and values that can be used to organize and categorize

(scope and select) objects. May match selectors of replication controllers

and services. More info: http://kubernetes.io/docs/user-guide/labels

#labels is a map type; a map means the value is a set of key-value pairs, and <string,string> means both the key and the value are strings

managedFields <[]Object>

ManagedFields maps workflow-id and version to the set of fields that are

managed by that workflow. This is mostly for internal housekeeping, and

users typically shouldn’t need to set or understand this field. A workflow

can be the user’s name, a controller’s name, or the name of a specific

apply path like “ci-cd”. The set of fields is always in the version that

the workflow used when modifying the object.

name <string>   #name of the resource being created

Name must be unique within a namespace. Is required when creating

resources, although some resources may allow a client to request the

generation of an appropriate name automatically. Name is primarily intended

for creation idempotence and configuration definition. Cannot be updated.

More info: http://kubernetes.io/docs/user-guide/identifiers#names

namespace <string>   #namespace the resource belongs to

Namespace defines the space within which each name must be unique. An empty

namespace is equivalent to the “default” namespace, but “default” is the

canonical representation. Not all objects are required to be scoped to a

namespace - the value of this field for those objects will be empty.

 Must be a DNS_LABEL. Cannot be updated. More info:
 http://kubernetes.io/docs/user-guide/namespaces
 
 # A namespace delimits a scope within which resource names must be unique; the default namespace is default.

ownerReferences <[]Object>

List of objects depended by this object. If ALL objects in the list have

been deleted, this object will be garbage collected. If this object is

managed by a controller, then an entry in this list will point to this

controller, with the controller field set to true. There cannot be more

than one managing controller.

resourceVersion

An opaque value that represents the internal version of this object that

can be used by clients to determine when objects have changed. May be used

for optimistic concurrency, change detection, and the watch operation on a

resource or set of resources. Clients must treat these values as opaque and

passed unmodified back to the server. They may only be valid for a

particular resource or set of resources.

 Populated by the system. Read-only. Value must be treated as opaque by
 clients and . More info:
 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency

selfLink

SelfLink is a URL representing this object. Populated by the system.

Read-only.

 DEPRECATED Kubernetes will stop propagating this field in 1.20 release and
 the field is planned to be removed in 1.21 release.

uid

UID is the unique in time and space value for this object. It is typically

generated by the server on successful creation of a resource and is not

allowed to change on PUT operations.

 Populated by the system. Read-only. More info:
 http://kubernetes.io/docs/user-guide/identifiers#uids

#See how the pod.spec field is defined

[root@master01 pod-test ]# kubectl explain pod.spec

KIND: Pod

VERSION: v1

RESOURCE: spec

DESCRIPTION:

Specification of the desired behavior of the pod. More info:

https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

 PodSpec is a description of a pod.
 
 #The Pod's spec field describes the Pod

FIELDS:

activeDeadlineSeconds <integer>   #the maximum time the Pod may run; once this value is reached the Pod is stopped automatically

Optional duration in seconds the pod may be active on the node relative to

StartTime before the system will actively try to mark it failed and kill

associated containers. Value must be a positive integer.

affinity <Object>   #defines affinity rules

If specified, the pod’s scheduling constraints

automountServiceAccountToken

AutomountServiceAccountToken indicates whether a service account token

should be automatically mounted.

containers <[]Object> -required-

List of containers belonging to the pod. Containers cannot currently be

added or removed. There must be at least one container in a Pod. Cannot be

updated.

#containers is a required field and a list of objects used to define the containers; a list of objects means there can be several objects underneath, and each entry in the list starts with a -.
spec:
  containers:
  - name: tomcat-java          #name of the container in the Pod
    ports:
    - containerPort: 8080      #port exposed by the container

Kubernetes Minimum Scheduling Unit: Pod Explained (Part 2): https://developer.aliyun.com/article/1495566

【5月更文挑战第3天】在微服务架构和容器化部署日益普及的背景下,Kubernetes 已成为众多企业的首选容器编排平台。然而,随着集群规模的增长和业务复杂度的提升,有效的集群监控和性能优化成为确保系统稳定性和提升资源利用率的关键。本文将深入探讨针对 Kubernetes 集群的监控工具选择、监控指标的重要性解读以及基于数据驱动的性能优化实践,为运维人员提供一套系统的持续监控与优化策略。