The Deployment Controller
1. Deployments and Replica Counts
In Kubernetes, the smallest schedulable unit is the Pod, but a bare Pod is not resilient: if it dies, nothing brings it back, so there is no self-healing.
A cluster may need hundreds or thousands of Pods for its workloads. To manage them, Kubernetes provides a number of controllers, and Deployment is one of them.
You simply tell the Deployment how many Pods you want: extra Pods are deleted, and missing Pods are recreated.
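This "delete extras, recreate missing" behavior is a reconciliation loop: the controller continuously compares the desired replica count against the actual Pod count and converges the two. A minimal sketch of the idea in Python (the function and Pod names are illustrative, not part of any Kubernetes API):

```python
# Minimal sketch of replica reconciliation: compare desired vs. actual
# Pod count and converge toward the desired state.
def reconcile(desired: int, pods: list) -> list:
    pods = list(pods)
    while len(pods) > desired:            # too many: delete the extras
        pods.pop()
    i = 0
    while len(pods) < desired:            # too few: create replacements
        pods.append(f"pod-new-{i}")
        i += 1
    return pods

print(reconcile(3, ["pod-a"]))                   # scales up to 3 pods
print(reconcile(3, ["a", "b", "c", "d", "e"]))   # scales down to 3 pods
```

The real controller runs this comparison continuously, which is why a deleted Pod is replaced within seconds.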
A Deployment can be created in two ways: from a YAML file, or from the command line.
Up to v1.17, `kubectl run` created a Deployment by default.
From v1.18 on, `kubectl run` creates a plain Pod instead.
Generating the YAML template with a command:
```
# Generation works just like it does for a Pod; the only difference is using "create deployment" instead of "run"
[root@master k8s]# kubectl create deployment deploy1 --image=nginx --dry-run=client -o yaml > deploy1.yaml
```
This generates a YAML file that we can open and edit.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: deploy1
  name: deploy1
spec:
  # This controls the number of replicas; change it as needed
  replicas: 3
  selector:
    # Note this field
    matchLabels:
      app: deploy1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deploy1
    spec:
      # This section defines the image the controller's Pods will run, the pull policy, and so on
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
status: {}
```
Once it is edited, we can use this YAML file to create the controller. We specified 3 replicas; let's see the result.
```
[root@master k8s]# kubectl apply -f deploy1.yaml
deployment.apps/deploy1 created
[root@master k8s]# kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
deploy1   1/3     3            1           10s
```
Only one replica is ready so far. Wait a moment and query again.
```
[root@master k8s]# kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
deploy1   3/3     3            3           33s
# Now check the Pods
[root@master k8s]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
deploy1-7f56c55d4c-6fjgc   1/1     Running   0          3m18s   10.244.104.25    node2   <none>           <none>
deploy1-7f56c55d4c-h4xj2   1/1     Running   0          3m18s   10.244.104.22    node2   <none>           <none>
deploy1-7f56c55d4c-zhx6n   1/1     Running   0          3m18s   10.244.166.130   node1   <none>           <none>
```
It really did start 3 Pods. We said earlier that if a Pod goes missing, the controller creates a new one. Let's check that this is true.
```
# Delete the Pod on node1, named deploy1-7f56c55d4c-zhx6n
[root@master k8s]# kubectl delete pod/deploy1-7f56c55d4c-zhx6n
pod "deploy1-7f56c55d4c-zhx6n" deleted
[root@master k8s]# kubectl get pod -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
deploy1-7f56c55d4c-6fjgc   1/1     Running   0          4m34s   10.244.104.25    node2   <none>           <none>
deploy1-7f56c55d4c-h4xj2   1/1     Running   0          4m34s   10.244.104.22    node2   <none>           <none>
deploy1-7f56c55d4c-rwd4l   1/1     Running   0          8s      10.244.166.129   node1   <none>           <none>
```
How does the controller manage Pods?
We can see that the old Pod really was deleted, but a new Pod started immediately in its place. Both its name and its age show that it is freshly created.
So how does the controller decide which Pods it manages? Through labels. Let's delete the controller first.
Note: don't try to clean up by deleting the Pods directly, because you can't. As explained above, the controller keeps the Pod count at the number you specified, so you will never delete them all.
After deleting the controller, we create a standalone Pod, recreate the controller, and then change that Pod's labels to match the labels of the Pods the controller created. Let's see what happens.
```
# First create a Pod by hand
[root@master k8s]# kubectl run pod01 --image=nginx --image-pull-policy=IfNotPresent
pod/pod01 created
[root@master k8s]# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
pod01   1/1     Running   0          5s
# Then create the controller from the YAML file
[root@master k8s]# kubectl apply -f deploy1.yaml
deployment.apps/deploy1 created
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-7f56c55d4c-5czhw   1/1     Running   0          5s
deploy1-7f56c55d4c-bgtft   1/1     Running   0          5s
deploy1-7f56c55d4c-tqlz2   1/1     Running   0          5s
pod01                      1/1     Running   0          35s
```
Now there are 4 Pods: one created by hand, which the controller does not manage, and 3 created by the controller. Let's start changing the labels on pod01.
```
# Show the labels
[root@master k8s]# kubectl get pods --show-labels
NAME                       READY   STATUS    RESTARTS   AGE   LABELS
deploy1-7f56c55d4c-5czhw   1/1     Running   0          90s   app=deploy1,pod-template-hash=7f56c55d4c
deploy1-7f56c55d4c-bgtft   1/1     Running   0          90s   app=deploy1,pod-template-hash=7f56c55d4c
deploy1-7f56c55d4c-tqlz2   1/1     Running   0          90s   app=deploy1,pod-template-hash=7f56c55d4c
pod01                      1/1     Running   0          2m    run=pod01
# Change pod01's labels: first remove the existing label
[root@master k8s]# kubectl label pod/pod01 run-
pod/pod01 unlabeled
# Add the 2 labels that the other 3 Pods carry
[root@master k8s]# kubectl label pod/pod01 app=deploy1 pod-template-hash=7f56c55d4c
pod/pod01 labeled
# Now look at the Pods again
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-7f56c55d4c-5czhw   1/1     Running   0          2m48s
deploy1-7f56c55d4c-bgtft   1/1     Running   0          2m48s
deploy1-7f56c55d4c-tqlz2   1/1     Running   0          2m48s
```
The hand-made Pod is gone: as soon as its labels matched, the controller saw 4 matching Pods where it wanted 3 and deleted one. When you run this experiment yourself, the controller may keep your hand-made Pod and delete one of its own instead; which Pod gets removed depends on the controller's selection logic, so it is not guaranteed to be the manual one.
In the YAML file above I flagged the matchLabels field. As the name says, it matches labels: only Pods whose labels match what is defined there are managed by the controller.
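The matching rule itself is simple: a Pod is selected when every key/value pair in matchLabels appears in the Pod's labels; extra labels on the Pod are fine. A small illustrative sketch in Python:

```python
# A selector matches a Pod when every key/value pair in matchLabels
# is present in the Pod's labels; extra Pod labels are ignored.
def selector_matches(match_labels: dict, pod_labels: dict) -> bool:
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

selector = {"app": "deploy1"}

# A Pod created by the Deployment (carries the extra pod-template-hash label)
print(selector_matches(selector, {"app": "deploy1", "pod-template-hash": "7f56c55d4c"}))  # True
# pod01 before relabeling
print(selector_matches(selector, {"run": "pod01"}))  # False
```

This is exactly why relabeling pod01 pulled it under the controller: once its labels matched the selector, the Deployment counted 4 Pods against a desired 3 and deleted one.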
2. Ways to Change the Replica Count
There are three ways to change the replica count.
1. Editing in place, in a vim-like editor session
```
[root@master k8s]# kubectl edit deployments.apps deploy1
# Once inside, find "replicas: 3" and change 3 to the number you want, then save and quit with :wq. I changed 3 to 5.
deployment.apps/deploy1 edited
[root@master k8s]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-7f56c55d4c-22897   1/1     Running   0          107s
deploy1-7f56c55d4c-djh5c   1/1     Running   0          10s
deploy1-7f56c55d4c-qzvpx   1/1     Running   0          10s
deploy1-7f56c55d4c-w5znc   1/1     Running   0          107s
deploy1-7f56c55d4c-xpwjn   1/1     Running   0          107s
```
That is editing in place.
2. The command line
This one is simple.
```
# --replicas sets the replica count
[root@master k8s]# kubectl scale deployment deploy1 --replicas 3
deployment.apps/deploy1 scaled
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-7f56c55d4c-djh5c   1/1     Running   0          51s
deploy1-7f56c55d4c-w5znc   1/1     Running   0          2m28s
deploy1-7f56c55d4c-xpwjn   1/1     Running   0          2m28s
```
3. Editing the YAML file
With this approach you modify the YAML file you originally created the Deployment from, then apply it again.
```yaml
# [root@master k8s]# vim deploy1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: deploy1
  name: deploy1
spec:
  # Change this line, then re-apply the file
  replicas: 8
  selector:
    matchLabels:
      app: deploy1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deploy1
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
status: {}
```
Re-apply it:
```
[root@master k8s]# kubectl apply -f deploy1.yaml
deployment.apps/deploy1 configured
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-7f56c55d4c-787bw   1/1     Running   0          18s
deploy1-7f56c55d4c-8smn7   1/1     Running   0          18s
deploy1-7f56c55d4c-djh5c   1/1     Running   0          3m31s
deploy1-7f56c55d4c-kqqfc   1/1     Running   0          18s
deploy1-7f56c55d4c-pqrmh   1/1     Running   0          18s
deploy1-7f56c55d4c-w5znc   1/1     Running   0          5m8s
deploy1-7f56c55d4c-xpqcl   1/1     Running   0          18s
deploy1-7f56c55d4c-xpwjn   1/1     Running   0          5m8s
```
All three methods are straightforward; any of them works.
3. Dynamic Scaling with HPA
Prerequisite: metrics-server must be installed.
HPA stands for Horizontal Pod Autoscaler. If you have used a public cloud, HPA works on the same principle as the cloud's auto-scaling feature. Why do we need it?
Suppose your traffic is low during the day but high at night. You could keep 3 Pods by day and run 10 or more at night, but you can't keep editing the Deployment's replica count by hand around the clock. That is what the HPA is for: it watches your Pods, and if their resource utilization climbs too high it creates additional Pods (and scales back down when utilization drops).
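The scaling decision follows a simple rule described in the Kubernetes documentation: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped between the HPA's min and max. A simplified sketch (it ignores the HPA's tolerance band and stabilization window):

```python
import math

# Simplified HPA scaling rule:
# desired = ceil(current_replicas * current_utilization / target_utilization),
# clamped to [min_replicas, max_replicas].
def desired_replicas(current: int, utilization: float, target: float,
                     min_replicas: int, max_replicas: int) -> int:
    desired = math.ceil(current * utilization / target)
    return max(min_replicas, min(max_replicas, desired))

# 3 Pods running at 160% of an 80% target: scale out to 6
print(desired_replicas(3, 160, 80, 2, 6))   # 6
# 6 idle Pods at 0% utilization: scale in, clamped at the minimum of 2
print(desired_replicas(6, 0, 80, 2, 6))     # 2
```

The clamping explains the behavior seen below: even at 0% utilization the HPA never goes under its --min, and under heavy load it never exceeds its --max.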
The HPA targets a Deployment. Let's look at an example.
```
# First set the Deployment's replica count to 8
[root@master k8s]# kubectl scale deployment deploy1 --replicas 8
deployment.apps/deploy1 scaled
# Then create an HPA for this Deployment: at least 2 Pods, at most 6
[root@master k8s]# kubectl autoscale deployment deploy1 --min 2 --max 6
horizontalpodautoscaler.autoscaling/deploy1 autoscaled
# Now check the Pod count again
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-7f56c55d4c-787bw   1/1     Running   0          27m
deploy1-7f56c55d4c-djh5c   1/1     Running   0          30m
deploy1-7f56c55d4c-f4v8s   1/1     Running   0          5m54s
deploy1-7f56c55d4c-twhwn   1/1     Running   0          5m54s
deploy1-7f56c55d4c-xpqcl   1/1     Running   0          27m
deploy1-7f56c55d4c-xpsld   1/1     Running   0          5m54s
```
We never touched the Deployment's replica count, yet only 6 Pods remain. We can also query the HPA itself:
```
[root@master k8s]# kubectl get hpa
NAME      REFERENCE            TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
deploy1   Deployment/deploy1   <unknown>/80%   2         6         6          3m17s
```
Looking at the HPA, its TARGETS column shows <unknown>. While it reads <unknown>, the HPA cannot measure resource utilization and therefore cannot scale anything, so we need to get a real value to show up there.
The reason it cannot compute a value is that our Pods declare no CPU request, and utilization is measured against the request. Let's delete the HPA first, then edit the Deployment's configuration in place.
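What the HPA reports as utilization is, roughly, the Pods' measured CPU usage divided by their declared CPU request. A tiny illustrative calculation (simplified; the real metric is averaged per container by metrics-server):

```python
# Utilization as the HPA sees it (simplified): average actual CPU usage
# across the Pods, divided by the per-Pod CPU request.
def cpu_utilization_percent(usage_millicores: list, request_millicores: float) -> float:
    avg_usage = sum(usage_millicores) / len(usage_millicores)
    return 100.0 * avg_usage / request_millicores

# Six idle nginx Pods with a 400m request each: 0% utilization
print(cpu_utilization_percent([0, 0, 0, 0, 0, 0], 400))  # 0.0
# Two busy Pods using 200m each against a 400m request: 50%
print(cpu_utilization_percent([200, 200], 400))  # 50.0
```

Without a request the denominator is undefined, which is exactly why the TARGETS column shows <unknown> until we set one.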
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"deploy1"},"name":"deploy1","namespace":"default"},"spec":{"replicas":8,"selector":{"matchLabels":{"app":"deploy1"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"deploy1"}},"spec":{"containers":[{"image":"nginx","imagePullPolicy":"IfNotPresent","name":"nginx","resources":{}}]}}},"status":{}}
  creationTimestamp: "2024-01-13T03:34:01Z"
  generation: 10
  labels:
    app: deploy1
  name: deploy1
  namespace: default
  resourceVersion: "232702"
  uid: 7a1e57af-4fdd-475d-bc72-be95413e7885
spec:
  progressDeadlineSeconds: 600
  replicas: 6
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: deploy1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deploy1
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        # Right here: resources is empty by default, so set a CPU request
        resources:
          requests:
            cpu: 400m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 6
  conditions:
  - lastTransitionTime: "2024-01-13T04:00:05Z"
    lastUpdateTime: "2024-01-13T04:00:05Z"
    message: Deployment has minimum availability.
```
After making that change, wait a moment and check the HPA again.
```
[root@master k8s]# kubectl get hpa
NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
deploy1   Deployment/deploy1   0%/80%    2         6         6          69s
```
The value now displays normally. Current utilization is 0%, so in theory the HPA should reduce the Pod count. Wait a little while and check the Pods.
```
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-588569cb96-4cjdk   1/1     Running   0          8m9s
deploy1-588569cb96-d944g   1/1     Running   0          8m9s
```
Sure enough, only 2 Pods are left.
The threshold here is 80%. Can we change that threshold? Of course we can.
```
# First delete the HPA
[root@master k8s]# kubectl delete hpa deploy1
horizontalpodautoscaler.autoscaling "deploy1" deleted
# Then recreate it, this time specifying the CPU threshold
[root@master k8s]# kubectl autoscale deployment deploy1 --max 6 --min 2 --cpu-percent 60
horizontalpodautoscaler.autoscaling/deploy1 autoscaled
# Check the HPA; give it a moment
[root@master k8s]# kubectl get hpa
NAME      REFERENCE            TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
deploy1   Deployment/deploy1   <unknown>/60%   2         6         0          3s
# Query again: the value is now correct
[root@master k8s]# kubectl get hpa
NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
deploy1   Deployment/deploy1   0%/60%    2         6         2          28s
```
Now let's try to make it scale out dynamically:
- Install ab (Apache Bench)
- Create a Service so the Pods are reachable from outside
- Run a load test and watch the scaling result
```
# Install the tool
[root@master k8s]# yum install httpd-tools
# Create the Service
[root@master k8s]# kubectl expose deployment --port=80 --target-port=80 deploy1 --type=NodePort
# It helps to open a second session: one for the load test, one for watching
# Set the minimum replica count to 1 and the threshold to 50%
[root@master k8s]# kubectl autoscale deployment deploy1 --min 1 --max 6 --cpu-percent 50
# Delete the previous Service and create a new one
[root@master k8s]# kubectl expose deployment deploy1 --port=80 --target-port=80 --type=NodePort
# First look at the current Pods
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-588569cb96-czqnf   1/1     Running   0          5m25s
# Start the load test
[root@master k8s]# ab -t 600 -n 1000000 -c 1000 http://master:30860/index.html
# Switch to the other session and watch
[root@master k8s]# kubectl get hpa
NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
deploy1   Deployment/deploy1   57%/50%   1         6         2          2m50s
[root@master k8s]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
deploy1-588569cb96-5rr4j   1/1     Running   0          23s
deploy1-588569cb96-czqnf   1/1     Running   0          7m36s
```
Under load, the Pod count rose from 1 to 2; if we keep the test running it will climb further.
4. Rolling Image Upgrades and Rollbacks
Upgrading
There are also three ways to perform a rolling upgrade.
1. Editing in place
```
# Check the current version; we are running the latest nginx
[root@master k8s]# kubectl get deployments.apps -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
deploy1   8/8     8            8           14s   nginx        nginx    app=deploy1
[root@master k8s]# kubectl edit deployments.apps deploy1
# Once inside, find the image field, change it to the version you want, then save and quit
deployment.apps/deploy1 edited
# Check the version after the change
[root@master k8s]# kubectl get deployments.apps -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       SELECTOR
deploy1   6/8     4            6           40s   nginx        nginx:1.19   app=deploy1
# We can also look at the state of the Pods
[root@master k8s]# kubectl get pods
NAME                       READY   STATUS              RESTARTS   AGE
deploy1-58459c85c6-2kwgd   0/1     ContainerCreating   0          15s
deploy1-58459c85c6-645pg   0/1     ContainerCreating   0          15s
deploy1-58459c85c6-6nwgp   0/1     ContainerCreating   0          15s
deploy1-58459c85c6-jtzvd   0/1     ContainerCreating   0          15s
deploy1-7f56c55d4c-6nrz6   1/1     Running             0          53s
deploy1-7f56c55d4c-b8xhn   1/1     Running             0          53s
deploy1-7f56c55d4c-m2ljc   1/1     Running             0          53s
deploy1-7f56c55d4c-q8b7h   1/1     Running             0          53s
deploy1-7f56c55d4c-qmlws   1/1     Running             0          53s
deploy1-7f56c55d4c-z86tb   1/1     Running             0          53s
```
It replaces Pods in batches rather than all at once: by default it takes down at most 25% of the old Pods at a time (maxUnavailable) while creating up to 25% extra new ones (maxSurge).
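Per the Kubernetes documentation, a percentage maxSurge is rounded up and a percentage maxUnavailable is rounded down. A quick calculation shows how the defaults produce the 6 Running / 4 ContainerCreating snapshot above:

```python
import math

# Rolling-update bounds: with percentage values, maxSurge rounds UP
# and maxUnavailable rounds DOWN (per the Kubernetes docs).
def rolling_update_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int) -> dict:
    max_surge = math.ceil(replicas * max_surge_pct / 100)
    max_unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return {
        "max_total_pods": replicas + max_surge,          # old + new Pods during the rollout
        "min_available_pods": replicas - max_unavailable,
    }

# Defaults are 25%/25%. With 8 replicas: at most 10 Pods in flight,
# at least 6 of them available at any moment.
print(rolling_update_bounds(8, 25, 25))  # {'max_total_pods': 10, 'min_available_pods': 6}
```

The rounding directions guarantee progress without downtime: even with 3 replicas, maxSurge rounds up to 1 extra Pod while maxUnavailable rounds down to 0, so all 3 old Pods stay available until a new one is ready.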
2. Editing the YAML file
Modify the image field in the YAML file we created the Deployment from, then apply the file again.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: deploy1
  name: deploy1
spec:
  replicas: 8
  selector:
    matchLabels:
      app: deploy1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deploy1
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
status: {}
```
Apply it again once edited:
```
[root@master k8s]# kubectl apply -f deploy1.yaml
deployment.apps/deploy1 configured
# Check the version: it is back on the latest
[root@master k8s]# kubectl get deployments.apps -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
deploy1   8/8     8            8           25m   nginx        nginx    app=deploy1
```
3. The command line
Change the image directly from the command line.
```
[root@master k8s]# kubectl set image deploy deploy1 nginx=nginx:1.19
deployment.apps/deploy1 image updated
# Check the version
[root@master k8s]# kubectl get deployments.apps -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       SELECTOR
deploy1   6/8     4            6           26m   nginx        nginx:1.19   app=deploy1
```
By default, upgrades and rollbacks are not annotated in the revision history. It looks like this:
```
[root@master k8s]# kubectl rollout history deployment deploy1
deployment.apps/deploy1
REVISION  CHANGE-CAUSE
4         <none>
5         <none>
```
CHANGE-CAUSE shows <none>. If you want your changes recorded, add --record=true to the command (note that --record is deprecated in recent kubectl releases). Let's see the effect:
```
[root@master k8s]# kubectl edit deployments.apps deploy1 --record=true
# Any of the methods works, as long as you remember the --record=true flag
deployment.apps/deploy1 edited
[root@master k8s]# kubectl rollout history deployment deploy1
deployment.apps/deploy1
REVISION  CHANGE-CAUSE
4         <none>
5         <none>
6         kubectl edit deployments.apps deploy1 --record=true
```
Now our operations are recorded.
Rolling back
That was upgrading. What if, after an upgrade, we discover the image is flawed and want to return to the previous version? We can: Deployments support rollback as well as rolling upgrades.
Rolling back is simple; one command does it.
```
# We can first look up the exact revision to return to, or skip that: this form rolls back to the previous revision
[root@master k8s]# kubectl rollout undo deployment deploy1
deployment.apps/deploy1 rolled back
# We can also roll back to a specific revision, which does require checking the history first
[root@master k8s]# kubectl rollout history deployment deploy1
deployment.apps/deploy1
REVISION  CHANGE-CAUSE
4         <none>
7         kubectl set image deploy deploy1 nginx=nginx --record=true
8         kubectl edit deployments.apps deploy1 --record=true
# We want to return to revision 7
[root@master k8s]# kubectl rollout undo deployment deploy1 --to-revision 7
deployment.apps/deploy1 rolled back
# And the rollback is done
```
This post is from cnblogs. Author: FuShudi. When reposting, please credit the original link: https://www.cnblogs.com/fsdstudy/p/17961428
Category: CKA