Kubernetes Hands-On Practice 2 (1)


Table of Contents

1.1 Check the Cluster

1.2 Create the RC

1.3 Redis Master Service

1.4 Redis Slave RC

1.5 Redis Slave Service

1.6 Frontend RC

1.7 Guestbook Frontend Service

1.8 Access Guestbook Frontend

2. Networking Introduction

2.1 Cluster IP

2.2 TargetPort

2.3 NodePort

2.4 External IPs

2.5 Load Balancer

3. Create Ingress Routing

3.1 Create HTTP Deployments

3.2 Deploy Ingress

3.3 Deploy Ingress Rules

3.4 Test

4. Liveness and Readiness Healthchecks

4.1 Create an HTTP Application

4.2 Readiness Probe

4.3 Liveness Probe

Kubernetes Hands-On Practice 1

Kubernetes Hands-On Practice 2

Kubernetes Hands-On Practice 3

Kubernetes Quick Learning Handbook

1. Deploying the Guestbook Example on Kubernetes

This scenario shows how to launch a simple multi-tier web application with Kubernetes and Docker. The guestbook example application stores visitors' notes in Redis via JavaScript API calls. Redis consists of one master (used for storage) and a set of replicated Redis 'slaves'.

1.1 Check the Cluster

controlplane $ kubectl cluster-info
Kubernetes master is running at https://172.17.0.29:6443
KubeDNS is running at https://172.17.0.29:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
controlplane $ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
controlplane   Ready    master   2m57s   v1.14.0
node01         Ready    <none>   2m31s   v1.14.0

1.2 Create the RC

controlplane $ cat redis-master-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: redis:3.0.7-alpine
        ports:
        - containerPort: 6379

Create it:

controlplane $ kubectl create -f redis-master-controller.yaml
replicationcontroller/redis-master created
controlplane $ kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         0       2s
controlplane $ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
redis-master-2j4qm   1/1     Running   0          4s

1.3 Redis Master Service

The second part is the service. A Kubernetes service is a named load balancer that proxies traffic to one or more containers. The proxy works even if the containers are on different nodes.

Services proxy communication within the cluster and rarely expose ports to an outside interface.

When you launch a service, it can seem as if you cannot connect to it with curl or netcat unless it is started as part of Kubernetes. The recommended approach for handling external communication is a LoadBalancer service.

controlplane $ cat redis-master-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master

Create it:

controlplane $ kubectl create -f redis-master-service.yaml
service/redis-master created
controlplane $ kubectl get services
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP    6m58s
redis-master   ClusterIP   10.111.64.45   <none>        6379/TCP   1s
controlplane $ kubectl describe services redis-master
Name:              redis-master
Namespace:         default
Labels:            name=redis-master
Annotations:       <none>
Selector:          name=redis-master
Type:              ClusterIP
IP:                10.111.64.45
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.32.0.193:6379
Session Affinity:  None
Events:            <none>
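
To sanity-check the service from inside the cluster, one option (an illustrative command, not part of the original scenario) is to run a throwaway Redis pod and ping the master through the service's DNS name; a healthy setup replies with PONG:

controlplane $ kubectl run redis-test --rm -it --restart=Never --image=redis:3.0.7-alpine -- redis-cli -h redis-master ping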

1.4 Redis Slave RC

In this example we will run Redis slaves that replicate data from the master. For more details on Redis replication, see http://redis.io/topics/replication


As described before, the controller defines how the service runs. In this example we need to determine how the service discovers the other pods. The YAML sets the GET_HOSTS_FROM property to dns. You could change it to use environment variables instead, but that introduces a creation-order dependency, as the service needs to be running before its environment variables can be defined.

In this case we will launch two instances of the pod using the image gcr.io/google_samples/gb-redisslave:v1 (the image named in the YAML below). They link to redis-master via DNS.

controlplane $ cat redis-slave-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: worker
        image: gcr.io/google_samples/gb-redisslave:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379

Run:

controlplane $ kubectl create -f redis-slave-controller.yaml
replicationcontroller/redis-slave created
controlplane $ kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         1       4m29s
redis-slave    2         2         2       3s
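
To verify that replication is actually wired up, a hedged check (assuming the gb-redisslave image bundles redis-cli) is to exec into one of the slave pods and inspect its replication state; role:slave with master_host pointing at redis-master indicates success:

controlplane $ POD=$(kubectl get pods -l name=redis-slave -o jsonpath='{.items[0].metadata.name}')
controlplane $ kubectl exec -it $POD -- redis-cli info replication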

1.5 Redis Slave Service

As before, we need to make our Redis slaves accessible to incoming requests. This is done by starting a service that knows how to communicate with redis-slave.

Because we have two replicated pods, the service also provides load balancing across the two pods.

controlplane $ cat redis-slave-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    name: redis-slave

Run:

controlplane $ kubectl create -f redis-slave-service.yaml
controlplane $ kubectl get services
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    14m
redis-master   ClusterIP   10.111.64.45    <none>        6379/TCP   7m13s
redis-slave    ClusterIP   10.109.135.21   <none>        6379/TCP   41s
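
Because the service selects pods by label, both slave pods should appear as its endpoints. A quick way to confirm this (illustrative, not from the original scenario) is:

controlplane $ kubectl get endpoints redis-slave

The output should list two IP:port pairs, one per slave pod.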

1.6 Frontend RC

With the data services started, we can now deploy the web application. The pattern for deploying it is the same as for the pods we deployed earlier. The YAML defines a replication controller named frontend that uses the image gcr.io/google_samples/gb-frontend:v3. The replication controller ensures that three pods always exist.

controlplane $ cat frontend-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80

Run:

controlplane $ kubectl create -f frontend-controller.yaml
replicationcontroller/frontend created
controlplane $ kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
frontend       3         3         1       2s
redis-master   1         1         1       20m
redis-slave    2         2         2       15m
controlplane $ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
frontend-bkcsj       1/1     Running   0          3s
frontend-ftjrk       1/1     Running   0          3s
frontend-jnckp       1/1     Running   0          3s
redis-master-2j4qm   1/1     Running   0          20m
redis-slave-79w2b    1/1     Running   0          15m
redis-slave-j8zqj    1/1     Running   0          15m

The PHP code communicates with Redis using HTTP and JSON. When a value is set, the request goes to redis-master, while reads come from the redis-slave nodes.

1.7 Guestbook Frontend Service

To make the frontend accessible, we need to start a service to configure the proxy.

The YAML defines the service as a NodePort. A NodePort exposes a well-known port that is shared across the entire cluster, much like -p 80:80 in Docker.

In this case we define that our web application runs on port 80, but we will expose the service on port 30080.

controlplane $ cat frontend-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 80
      nodePort: 30080
  selector:
    name: frontend
controlplane $ kubectl create -f frontend-service.yaml
service/frontend created
controlplane $ kubectl get services
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend       NodePort    10.105.214.152   <none>        80:30080/TCP   2s
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        28m
redis-master   ClusterIP   10.111.64.45     <none>        6379/TCP       21m
redis-slave    ClusterIP   10.109.135.21    <none>        6379/TCP       15m

1.8 Access Guestbook Frontend

With all the controllers and services defined, Kubernetes starts launching them as pods. A pod can have different statuses depending on what is happening. For example, if the Docker image is still downloading, the pod will be in the Pending state, as it cannot start. Once ready, the status changes to Running.

Check the pod status:

controlplane $ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
frontend-bkcsj       1/1     Running   0          4m55s
frontend-ftjrk       1/1     Running   0          4m55s
frontend-jnckp       1/1     Running   0          4m55s
redis-master-2j4qm   1/1     Running   0          24m
redis-slave-79w2b    1/1     Running   0          20m
redis-slave-j8zqj    1/1     Running   0          20m
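
If a pod lingers in Pending or ContainerCreating, watching the status transitions live can help; for example (an optional check, not part of the scenario):

controlplane $ kubectl get pods --watch

Press Ctrl+C to stop watching once everything reports Running.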

Find the NodePort:

controlplane $ kubectl describe service frontend | grep NodePort
Type:                     NodePort
NodePort:                 <unset>  30080/TCP
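
As an alternative to grepping the describe output, the node port can be extracted directly with a go-template (a sketch in the same style as the queries used later in this scenario):

controlplane $ kubectl get service frontend -o go-template='{{(index .spec.ports 0).nodePort}}'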

View the UI

Once the pods are in the Running state, you will be able to view the UI on port 30080. View the page at https://2886795293-30080-elsy05.environments.katacoda.com


Behind the scenes, the PHP service discovers the Redis instances via DNS. You have now deployed a working multi-tier application on Kubernetes.
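
If you want to see the DNS-based discovery for yourself, a hedged check is to resolve the service name from a throwaway pod (busybox:1.28 is used here because its nslookup is known to behave; this command is illustrative, not part of the original scenario):

controlplane $ kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup redis-master

The lookup should return the cluster IP of the redis-master service.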

2. Networking Introduction

Kubernetes has advanced networking capabilities that allow pods and services to communicate inside and outside the cluster network.

In this scenario you will learn about the following types of Kubernetes services:

Cluster IP

Target Port

Node Port

External IPs

Load Balancer

A Kubernetes service is an abstraction that defines a policy and approach for accessing a set of pods. The set of pods a service targets is based on a label selector.

2.1 Cluster IP

Cluster IP is the default approach when creating a Kubernetes service. The service is allocated an internal IP that other components can use to access the pods.

By having a single IP address, the service can be load-balanced across multiple pods.

controlplane $ cat clusterip.yaml 
apiVersion: v1
kind: Service
metadata:
  name: webapp1-clusterip-svc
  labels:
    app: webapp1-clusterip
spec:
  ports:
  - port: 80
  selector:
    app: webapp1-clusterip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-clusterip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-clusterip
    spec:
      containers:
      - name: webapp1-clusterip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
controlplane $ kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
webapp1-clusterip-deployment-669c7c65c4-gqlkc   1/1     Running   0          112s
webapp1-clusterip-deployment-669c7c65c4-hwkrl   1/1     Running   0          112s
controlplane $ kubectl get svc
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes              ClusterIP   10.96.0.1      <none>        443/TCP   6m28s
webapp1-clusterip-svc   ClusterIP   10.100.49.56   <none>        80/TCP    116s
controlplane $ kubectl describe svc/webapp1-clusterip-svc
Name:              webapp1-clusterip-svc
Namespace:         default
Labels:            app=webapp1-clusterip
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-clusterip"},"name":"webapp1-clusterip-svc","name...
Selector:          app=webapp1-clusterip
Type:              ClusterIP
IP:                10.100.49.56
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.32.0.5:80,10.32.0.6:80
Session Affinity:  None
Events:            <none>
controlplane $ export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-svc -o go-template='{{(index .spec.clusterIP)}}')
controlplane $ echo CLUSTER_IP=$CLUSTER_IP
CLUSTER_IP=10.100.49.56
controlplane $ curl $CLUSTER_IP:80
<h1>This request was processed by host: webapp1-clusterip-deployment-669c7c65c4-gqlkc</h1>
controlplane $ curl $CLUSTER_IP:80
<h1>This request was processed by host: webapp1-clusterip-deployment-669c7c65c4-gqlkc</h1>

Sending multiple requests demonstrates the service load-balancing across the pods based on the common label selector.
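
A simple way to observe this (illustrative) is to issue several requests in a loop and compare the reported host names; with two replicas you should eventually see both pods answer:

controlplane $ for i in $(seq 1 5); do curl -s $CLUSTER_IP:80; done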

2.2 TargetPort

Target ports allow us to separate the port the service is available on from the port the application is listening on. TargetPort is the port the application is configured to listen on; Port is how the application is accessed from the outside.

controlplane $ cat clusterip-target.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-clusterip-targetport-svc
  labels:
    app: webapp1-clusterip-targetport
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: webapp1-clusterip-targetport
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-clusterip-targetport-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-clusterip-targetport
    spec:
      containers:
      - name: webapp1-clusterip-targetport-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f clusterip-target.yaml
service/webapp1-clusterip-targetport-svc created
deployment.extensions/webapp1-clusterip-targetport-deployment created
controlplane $ kubectl get svc
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes                         ClusterIP   10.96.0.1       <none>        443/TCP    11m
webapp1-clusterip-svc              ClusterIP   10.100.49.56    <none>        80/TCP     6m33s
webapp1-clusterip-targetport-svc   ClusterIP   10.99.164.105   <none>        8080/TCP   2s
controlplane $ kubectl describe svc/webapp1-clusterip-targetport-svc
Name:              webapp1-clusterip-targetport-svc
Namespace:         default
Labels:            app=webapp1-clusterip-targetport
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-clusterip-targetport"},"name":"webapp1-clusterip...
Selector:          app=webapp1-clusterip-targetport
Type:              ClusterIP
IP:                10.99.164.105
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.32.0.7:80,10.32.0.8:80
Session Affinity:  None
Events:            <none>
controlplane $ export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-targetport-svc -o go-template='{{(index .spec.clusterIP)}}')
controlplane $ echo CLUSTER_IP=$CLUSTER_IP
CLUSTER_IP=10.99.164.105
controlplane $ curl $CLUSTER_IP:8080
<h1>This request was processed by host: webapp1-clusterip-targetport-deployment-5599945ff4-9n89k</h1>
controlplane $ curl $CLUSTER_IP:8080
<h1>This request was processed by host: webapp1-clusterip-targetport-deployment-5599945ff4-9n89k</h1>
controlplane $ curl $CLUSTER_IP:8080

Once the service and pods are deployed, the application can be accessed via the cluster IP as before, but this time on the defined port 8080. The application itself is still configured to listen on port 80; the Kubernetes service manages the translation between the two.
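
The port translation can be seen by comparing the service's port with its endpoints: the service listens on 8080 while the endpoints are the pod IPs on port 80 (an illustrative check, not from the original scenario):

controlplane $ kubectl get endpoints webapp1-clusterip-targetport-svc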

2.3 NodePort

While TargetPort and ClusterIP make a service available inside the cluster, a NodePort exposes the service on each node's IP via a defined static port. No matter which node within the cluster is accessed, the service is reachable on the defined port number.

controlplane $ cat nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-nodeport-svc
  labels:
    app: webapp1-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: webapp1-nodeport
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-nodeport-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-nodeport
    spec:
      containers:
      - name: webapp1-nodeport-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f nodeport.yaml
service/webapp1-nodeport-svc created
deployment.extensions/webapp1-nodeport-deployment created
controlplane $ kubectl get svc
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP        14m
webapp1-clusterip-svc              ClusterIP   10.100.49.56     <none>        80/TCP         9m39s
webapp1-clusterip-targetport-svc   ClusterIP   10.99.164.105    <none>        8080/TCP       3m8s
webapp1-nodeport-svc               NodePort    10.111.226.228   <none>        80:30080/TCP   48s
controlplane $ kubectl describe svc/webapp1-nodeport-svc
Name:                     webapp1-nodeport-svc
Namespace:                default
Labels:                   app=webapp1-nodeport
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-nodeport"},"name":"webapp1-nodeport-svc","namesp...
Selector:                 app=webapp1-nodeport
Type:                     NodePort
IP:                       10.111.226.228
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.32.0.10:80,10.32.0.9:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
controlplane $ curl 172.17.0.66:30080
<h1>This request was processed by host: webapp1-nodeport-deployment-677bd89b96-hqdbb</h1>
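
Since a NodePort is opened on every node, the same service should also answer on node01. A hedged way to check (assuming the node's InternalIP is reachable from the controlplane) is:

controlplane $ NODE_IP=$(kubectl get node node01 -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
controlplane $ curl $NODE_IP:30080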

2.4 External IPs

Another approach to making a service available outside the cluster is via an external IP address.

Update the definition below with the current cluster's IP address:
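
The scenario hardcodes the address with sed further below; a hedged alternative is to look it up with kubectl instead (assuming the controlplane's InternalIP is the address you want to expose):

controlplane $ HOSTIP=$(kubectl get node controlplane -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
controlplane $ sed -i "s/HOSTIP/$HOSTIP/g" externalip.yaml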

controlplane $ cat externalip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-externalip-svc
  labels:
    app: webapp1-externalip
spec:
  ports:
  - port: 80
  externalIPs:
  - HOSTIP
  selector:
    app: webapp1-externalip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-externalip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-externalip
    spec:
      containers:
      - name: webapp1-externalip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ sed -i 's/HOSTIP/172.17.0.66/g' externalip.yaml
controlplane $ cat externalip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-externalip-svc
  labels:
    app: webapp1-externalip
spec:
  ports:
  - port: 80
  externalIPs:
  - 172.17.0.66
  selector:
    app: webapp1-externalip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-externalip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-externalip
    spec:
      containers:
      - name: webapp1-externalip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f externalip.yaml
service/webapp1-externalip-svc created
deployment.extensions/webapp1-externalip-deployment created
controlplane $ kubectl get svc
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP        16m
webapp1-clusterip-svc              ClusterIP   10.100.49.56     <none>        80/TCP         11m
webapp1-clusterip-targetport-svc   ClusterIP   10.99.164.105    <none>        8080/TCP       5m15s
webapp1-externalip-svc             ClusterIP   10.101.221.229   172.17.0.66   80/TCP         2s
webapp1-nodeport-svc               NodePort    10.111.226.228   <none>        80:30080/TCP   2m55s
controlplane $ kubectl describe svc/webapp1-externalip-svc
Name:              webapp1-externalip-svc
Namespace:         default
Labels:            app=webapp1-externalip
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-externalip"},"name":"webapp1-externalip-svc","na...
Selector:          app=webapp1-externalip
Type:              ClusterIP
IP:                10.101.221.229
External IPs:      172.17.0.66
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.32.0.11:80
Session Affinity:  None
Events:            <none>
controlplane $ curl 172.17.0.66
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-tjrpt</h1>
controlplane $ curl 172.17.0.66
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-tjrpt</h1>

2.5 Load Balancer

When running in a cloud, such as EC2 or Azure, it is possible to configure and assign a public IP address issued via the cloud provider. This is issued via a load balancer such as an ELB, and it allows additional public IP addresses to be allocated to a Kubernetes cluster without interacting directly with the cloud provider.

Since Katacoda is not a cloud provider, it is still possible to dynamically allocate IP addresses to services of type LoadBalancer. This is done by deploying the keepalived-based cloud provider shown below:

controlplane $ cat cloudprovider.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-keepalived-vip
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: kube-keepalived-vip
    spec:
      hostNetwork: true
      containers:
        - image: gcr.io/google_containers/kube-keepalived-vip:0.9
          name: kube-keepalived-vip
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /lib/modules
              name: modules
              readOnly: true
            - mountPath: /dev
              name: dev
          # use downward API
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # to use unicast
          args:
          - --services-configmap=kube-system/vip-configmap
          # unicast uses the ip of the nodes instead of multicast
          # this is useful if running in cloud providers (like AWS)
          #- --use-unicast=true
      volumes:
        - name: modules
          hostPath:
            path: /lib/modules
        - name: dev
          hostPath:
            path: /dev
      nodeSelector:
        # type: worker # adjust this to match your worker nodes
---
## We also create an empty ConfigMap to hold our config
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
  namespace: kube-system
data:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: keepalived-cloud-provider
  name: keepalived-cloud-provider
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: keepalived-cloud-provider
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
      labels:
        app: keepalived-cloud-provider
    spec:
      containers:
      - name: keepalived-cloud-provider
        image: quay.io/munnerz/keepalived-cloud-provider:0.0.1
        imagePullPolicy: IfNotPresent
        env:
        - name: KEEPALIVED_NAMESPACE
          value: kube-system
        - name: KEEPALIVED_CONFIG_MAP
          value: vip-configmap
        - name: KEEPALIVED_SERVICE_CIDR
          value: 10.10.0.0/26 # pick a CIDR that is explicitly reserved for keepalived
        volumeMounts:
        - name: certs
          mountPath: /etc/ssl/certs
        resources:
          requests:
            cpu: 200m
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10252
            host: 127.0.0.1
          initialDelaySeconds: 15
          timeoutSeconds: 15
          failureThreshold: 8
      volumes:
      - name: certs
        hostPath:
          path: /etc/ssl/certs
controlplane $ kubectl apply -f cloudprovider.yaml
daemonset.extensions/kube-keepalived-vip configured
configmap/vip-configmap configured
deployment.apps/keepalived-cloud-provider created
controlplane $ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-9hrwv                     1/1     Running   0          21m
coredns-fb8b8dccf-skwkj                     1/1     Running   0          21m
etcd-controlplane                           1/1     Running   0          20m
katacoda-cloud-provider-558d5c854b-6h955    1/1     Running   0          21m
keepalived-cloud-provider-78fc4468b-lpg9s   1/1     Running   0          2m41s
kube-apiserver-controlplane                 1/1     Running   0          20m
kube-controller-manager-controlplane        1/1     Running   0          20m
kube-keepalived-vip-hq7hk                   1/1     Running   0          21m
kube-proxy-468j8                            1/1     Running   0          21m
kube-scheduler-controlplane                 1/1     Running   0          20m
weave-net-w5zff                             2/2     Running   1          21m

The service is then configured with type LoadBalancer:

controlplane $ cat loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-loadbalancer-svc
  labels:
    app: webapp1-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: webapp1-loadbalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-loadbalancer-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-loadbalancer
    spec:
      containers:
      - name: webapp1-loadbalancer-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl get svc
NAME                               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                         ClusterIP      10.96.0.1        <none>        443/TCP        23m
webapp1-clusterip-svc              ClusterIP      10.100.49.56     <none>        80/TCP         19m
webapp1-clusterip-targetport-svc   ClusterIP      10.99.164.105    <none>        8080/TCP       12m
webapp1-externalip-svc             ClusterIP      10.101.221.229   172.17.0.66   80/TCP         7m22s
webapp1-loadbalancer-svc           LoadBalancer   10.104.93.133    172.17.0.66   80:31232/TCP   97s
webapp1-nodeport-svc               NodePort       10.111.226.228   <none>        80:30080/TCP   10m
controlplane $ kubectl describe svc/webapp1-loadbalancer-svc
Name:                     webapp1-loadbalancer-svc
Namespace:                default
Labels:                   app=webapp1-loadbalancer
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-loadbalancer"},"name":"webapp1-loadbalancer-svc"...
Selector:                 app=webapp1-loadbalancer
Type:                     LoadBalancer
IP:                       10.104.93.133
LoadBalancer Ingress:     172.17.0.66
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31232/TCP
Endpoints:                10.32.0.14:80,10.32.0.15:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  CreatingLoadBalancer  99s   service-controller  Creating load balancer
  Normal  CreatedLoadBalancer   99s   service-controller  Created load balancer

The service can now be accessed via the assigned IP address, in this case one from the 10.10.0.0/26 range.

controlplane $ export LoadBalancerIP=$(kubectl get services/webapp1-loadbalancer-svc -o go-template='{{(index .status.loadBalancer.ingress 0).ip}}')
controlplane $ echo LoadBalancerIP=$LoadBalancerIP
LoadBalancerIP=172.17.0.66
controlplane $ curl $LoadBalancerIP
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-xt4nh</h1>
controlplane $ curl $LoadBalancerIP
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-xt4nh</h1>


