Deployment and Service Resources



I. Deployment

Exercise: create a Deployment resource object named bdqn1, with replicas: 3, using the httpd image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bdqn1
  labels:
    app: bdqn1 
spec:
  selector:
    matchLabels:
      app: bdqn1
  replicas: 3
  template:
    metadata:
      labels:
        app: bdqn1
    spec:
      containers:
        - name: bdqn1
          image: httpd:latest
          ports:      
            - containerPort: 80
kubectl apply -f deploy1.yaml 
# PS: note that in a Deployment resource object you may add the ports field, but it is informational only and does not actually take effect.

Analysis: the Deployment is called a high-level Pod controller. Why is that? Let's take a closer look.

kubectl describe deployment bdqn1
# Events:
#   Type    Reason             Age    From                   Message
#   ----    ------             ----   ----                   -------
#   Normal  ScalingReplicaSet  3m50s  deployment-controller  Scaled up replica set bdqn1-7f49d8f579 to 3

> Events: the event log. It is very important: it records everything the resource has done from creation until now.

// Reading the Events description carefully, we find:

The Deployment did not directly create and control the backend Pods as we might expect. Instead, it created another resource object: a ReplicaSet (bdqn1-7f49d8f579). Describing that RS in turn shows the RS's own Events.

kubectl describe rs bdqn1-7f49d8f579
...
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  32m   replicaset-controller  Created pod: bdqn1-7f49d8f579-s6mhr
  Normal  SuccessfulCreate  32m   replicaset-controller  Created pod: bdqn1-7f49d8f579-rsf5c
  Normal  SuccessfulCreate  32m   replicaset-controller  Created pod: bdqn1-7f49d8f579-2bw9g

// Describing any one of these Pods now shows that Pod's complete workflow.

kubectl describe pod bdqn1-7f49d8f579-2bw9g
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  34m   default-scheduler  Successfully assigned default/bdqn1-7f49d8f579-2bw9g to node02
  Normal  Pulling    34m   kubelet, node02    Pulling image "httpd"
  Normal  Pulled     34m   kubelet, node02    Successfully pulled image "httpd"
  Normal  Created    34m   kubelet, node02    Created container bdqn1
  Normal  Started    34m   kubelet, node02    Started container bdqn1

Exercises:

1) Change bdqn1 above to replicas: 5.

2) Change the bdqn1 image to nginx.

Afterwards, use kubectl get pod to watch the Pods start.

Then run kubectl describe rs bdqn1-5c8b555849 (the new ReplicaSet) to see the controller scaling up or down. One way to carry out both changes is sketched below.
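A quick way to do both changes without editing the YAML (a sketch using standard kubectl subcommands; you can equally edit deploy1.yaml and re-apply):

# Scale the Deployment to 5 replicas
kubectl scale deployment bdqn1 --replicas=5
# Swap the container image to nginx (the container name bdqn1 comes from the YAML above)
kubectl set image deployment/bdqn1 bdqn1=nginx
# Watch the rollout
kubectl get pod -w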

# Enter a Pod
kubectl get pod
# You can go to the corresponding node and run docker exec, or simply:
kubectl exec -it <pod-name> -- /bin/bash   # note: kubectl exec only applies to Pods, so no resource type is needed
# Inspect the Pod controller: ReplicaSet
kubectl get pods
kubectl get rs
kubectl get replicaset
# Understanding how Pods are created by controllers: the Deployment flow
# 1. Deployment controller ──> creates a ReplicaSet ──> creates Pods
kubectl describe rs bdqn1-7f49d8f579   # inspect the ReplicaSet bdqn1-7f49d8f579
# Find "Controlled By: Deployment/bdqn1"
# meaning this ReplicaSet was created by Deployment/bdqn1.
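# The same ownership chain can be read straight from the object metadata (a jsonpath sketch):
kubectl get rs bdqn1-7f49d8f579 -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# Expected output: Deployment/bdqn1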
# Now pick one of the Pods created by bdqn1 and describe it; the Events record the whole process
kubectl get pod
kubectl describe pod bdqn1-5c8b555849-cdx9k
Name:         bdqn1-5c8b555849-cdx9k
Namespace:    default
Priority:     0
Node:         node01/192.168.10.169
Start Time:   Mon, 15 Mar 2021 22:46:28 +0800
Labels:       app=bdqn1
              pod-template-hash=5c8b555849
Annotations:  <none>
Status:       Running
IP:           10.244.1.19
IPs:
  IP:           10.244.1.19
Controlled By:  ReplicaSet/bdqn1-5c8b555849       # see which controller owns this Pod
Containers:
  bdqn1:
    Container ID:   docker://34cb607905dbbd7f4c3aac21bb005bd321cf4f4218fc5527fc3779c8f203dc23
    Image:          httpd:latest
    Image ID:       docker-pullable://httpd@sha256:2fab99fb3b1c7ddfa99d7dc55de8dad0a62dbe3e7c605d78ecbdf2c6c49fd636
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 15 Mar 2021 22:46:45 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cqsd2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-cqsd2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cqsd2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:                                # the complete creation flow on node01
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  27m   default-scheduler  Successfully assigned default/bdqn1-5c8b555849-cdx9k to node01
  Normal  Pulling    27m   kubelet, node01    Pulling image "httpd:latest"
  Normal  Pulled     27m   kubelet, node01    Successfully pulled image "httpd:latest"
  Normal  Created    27m   kubelet, node01    Created container bdqn1
  Normal  Started    27m   kubelet, node01    Started container bdqn1

II. Service

// Create a Service resource associated with the bdqn1 Deployment above.

apiVersion: v1
kind: Service
metadata:
  name: bdqn-svc
spec:
  selector:
    app: bdqn1
  ports:              # port settings
    - port: 1000      # port the Service listens on
      targetPort: 80  # port inside the container
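Save the manifest and apply it (the filename svc1.yaml is only an assumption; use whatever you saved it as):

kubectl apply -f svc1.yaml
kubectl get svc bdqn-svc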

// By default a Service's type is ClusterIP. In the YAML above, spec.ports.port is the ClusterIP's port. It merely provides a unified access entry for the backend Pods, and is valid only inside the k8s cluster.

kubectl get svc
# NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
# bdqn-svc     ClusterIP   10.105.143.2   <none>        1000/TCP   6m27s
# kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP    3d23h
curl 10.105.143.2:1000
3333
curl 10.105.143.2:1000
22222
curl 10.105.143.2:1000
1111111
# (each Pod's index page was made distinct beforehand, so the rotating responses show the load balancing)
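Inside the cluster the Service is also reachable by its DNS name, <service>.<namespace>. A quick check from a throwaway Pod (the busybox image and tag are assumptions):

kubectl run tmp --rm -it --image=busybox:1.36 --restart=Never -- wget -qO- http://bdqn-svc.default.svc.cluster.local:1000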

// To let clients outside the cluster reach the backend Pods, change the Service type to NodePort.

kind: Service
apiVersion: v1
metadata:
  name: bdqn-svc
spec:
  type: NodePort     # NodePort exposes the Service for external access
  selector:
    app: bdqn1
  ports:             # port settings
    - port: 1000
      targetPort: 80
      nodePort: 32034  # external clients access this port on every node (note the capital P in nodePort)

// Note: the valid range for nodePort is 30000-32767.
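Once applied, the Service is reachable on any node's IP at the nodePort; using the node address seen earlier in the describe output (substitute your own node's IP):

curl 192.168.10.169:32034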



// We saw above that when accessing the ClusterIP, the backend Pods take turns serving requests, i.e. there is load balancing. From the earlier lessons on k8s architecture and components, we know that kube-proxy is the component responsible for load balancing, and it does nothing without Service resources. Now that we have a Service, how exactly does it achieve load balancing? What is the underlying mechanism?


// On the surface, describing the SVC resource shows its Endpoints, which reveal the real backend Pods.

kubectl describe svc bdqn-svc

...

Endpoints:                10.244.1.23:80,10.244.2.20:80,10.244.2.21:80
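The same backend list is stored in a dedicated Endpoints object, which can be queried directly:

kubectl get endpoints bdqn-svc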



-----------------------------------------------------------------------------


#### Switching kube-proxy to ipvs mode (kubeadm deployment)

# kubectl get pod -n kube-system
# kubectl logs -n kube-system kube-proxy-xxxx
W1013 06:55:35.773739       1 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.868822       1 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.869786       1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.870800       1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.876832       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I1013 06:55:35.890892       1 server_others.go:143] Using iptables Proxier.
I1013 06:55:35.892136       1 server.go:534] Version: v1.15.0
I1013 06:55:35.909025       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1013 06:55:35.909053       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 06:55:35.919298       1 conntrack.go:83] Setting conntrack hashsize to 32768
I1013 06:55:35.945969       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1013 06:55:35.946044       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1013 06:55:35.946623       1 config.go:96] Starting endpoints config controller
I1013 06:55:35.946660       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1013 06:55:35.946695       1 config.go:187] Starting service config controller
I1013 06:55:35.946713       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1013 06:55:36.047121       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I1013 06:55:36.047195       1 controller_utils.go:1036] Caches are synced for service config controller

# Edit the kube-proxy ConfigMap and set mode to ipvs (the default is empty):
# kubectl edit cm kube-proxy -n kube-system
mode: "ipvs"   # add ipvs here
# ipvs mode requires the ip_vs kernel modules to be loaded:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Restart the kube-proxy Pods
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-62gvr" deleted
pod "kube-proxy-n2rml" deleted
# After the Pods restart, check the logs again: the mode has changed to ipvs.
# kubectl get pod -n kube-system |grep kube-proxy
kube-proxy-cbm8p                     1/1     Running   0          85s
kube-proxy-d97pn                     1/1     Running   0          83s
# kubectl logs -n kube-system kube-proxy-cbm8p 
I1013 07:34:38.685794       1 server_others.go:170] Using ipvs Proxier.
W1013 07:34:38.686066       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I1013 07:34:38.687224       1 server.go:534] Version: v1.15.0
I1013 07:34:38.692777       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 07:34:38.693378       1 config.go:187] Starting service config controller
# Test again: pinging the Service IP now works (in ipvs mode the ClusterIP is bound to a local dummy interface, so it answers ping; in iptables mode it would not)
[root@master kubernetes]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
bdqn-svc     NodePort    10.111.134.107   <none>        80:32034/TCP   68m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        29h
[root@master kubernetes]# ping 10.111.134.107
PING 10.111.134.107 (10.111.134.107) 56(84) bytes of data.
64 bytes from 10.111.134.107: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 10.111.134.107: icmp_seq=2 ttl=64 time=0.043 ms
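With ipvs active you can also inspect the load-balancing table itself. The ipvsadm tool may need installing first (e.g. yum install -y ipvsadm, assuming a CentOS-style host like the prompts above):

ipvsadm -Ln
# Expect one virtual server entry per Service (scheduler rr) with one line per backend Pod IP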

III. Exercise: use YAML to roll back to a specified revision. The manifests below use the k8s v1.15 syntax; the exercise asks you to use a private registry and to rewrite the YAML in the v1.18 style.

Prepare private images for the three versions (see the Docker chapter on private registries and image creation) to simulate upgrading to a different image each time.


To verify access from an outside host, remember to create a Service YAML as shown in the section above.
--------------------------------------

# k8s v1.15 syntax:
deploy1.yaml
apiVersion: extensions/v1beta1     
kind: Deployment
metadata:
  name: web1
spec:                              
  revisionHistoryLimit: 10        
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: web1
        image: nginx:1.17   
deploy2.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web1
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: web1
        image: nginx:1.18
deploy3.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web1
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: web1
        image: nginx:latest
# k8s v1.18 syntax:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1
  labels:
    run: nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: web1
        image: nginx:1.17

---------------------------------------------------

The three YAML files above specify different image versions.

Run the service and record a revision each time:

kubectl apply -f deploy1.yaml --record    # --record stores the change-cause shown in rollout history
kubectl apply -f deploy2.yaml --record    # upgrade to 1.18
kubectl apply -f deploy3.yaml --record    # upgrade to latest

Check which revisions are recorded:

kubectl rollout history deployment web1
# Output:
deployment.apps/web1
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=deploy1.yaml --record=true
2         kubectl apply --filename=deploy2.yaml --record=true
3         kubectl apply --filename=deploy3.yaml --record=true
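Before rolling back, you can inspect what a given revision contains:

kubectl rollout history deployment web1 --revision=2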

Roll back to a specified revision:

kubectl rollout undo deployment web1 --to-revision=1
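Then verify the rollback took effect:

kubectl rollout status deployment web1
kubectl describe deployment web1 | grep -i image    # should show nginx:1.17 again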

IV. Controlling Pod placement with labels

Add a node label:

kubectl label nodes node02 disk=ssd
node/node02 labeled
kubectl get nodes --show-labels    # node02 now carries an extra label
node02   Ready    <none>   47h   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux
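You can also filter nodes by the new label:

kubectl get nodes -l disk=ssd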

// Schedule Pods via the node label

apiVersion: apps/v1    # header lines added so the snippet can be applied as-is; the name is illustrative
kind: Deployment
metadata:
  name: test-web
spec:
  revisionHistoryLimit: 10
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: test-web
        image: nginx
        ports:
        - containerPort: 80
      nodeSelector:       # add the node selector
        disk: ssd         # must match the node label
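After applying this, confirm that the replicas all landed on the labeled node:

kubectl get pod -o wide    # the NODE column should show node02 for every replica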

Delete the node label:

kubectl label nodes node02 disk-

