Table of Contents
1. Deploying the Guestbook Example on Kubernetes
1.1 Check the Cluster
1.2 Create the Replication Controller
1.3 Redis Master Service
1.4 Redis Slave Replication Controller
1.5 Redis Slave Service
1.6 Frontend Replication Controller
1.7 Guestbook Frontend Service
1.8 Access Guestbook Frontend
2. Networking Introduction
2.1 Cluster IP
2.2 Target Port
2.3 Node Port
2.4 External IPs
2.5 Load Balancer
3. Create Ingress Routing
3.1 Create HTTP Deployment
3.2 Deploy Ingress
3.3 Deploy Ingress Rules
3.4 Test
4. Liveness and Readiness Healthchecks
4.1 Create HTTP Application
4.2 Readiness Probe
4.3 Liveness Probe
Kubernetes Quick Learning Handbook
1. Deploying the Guestbook Example on Kubernetes
This scenario shows how to launch a simple multi-tier web application with Kubernetes and Docker. The guestbook example application stores visitors' notes in Redis via JavaScript API calls. Redis consists of a master (used for storage) and a set of replicated Redis 'slaves'.
1.1 Check the Cluster
controlplane $ kubectl cluster-info
Kubernetes master is running at https://172.17.0.29:6443
KubeDNS is running at https://172.17.0.29:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
controlplane $ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
controlplane   Ready    master   2m57s   v1.14.0
node01         Ready    <none>   2m31s   v1.14.0
1.2 Create the Replication Controller
controlplane $ cat redis-master-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: redis:3.0.7-alpine
        ports:
        - containerPort: 6379
Create it:
controlplane $ kubectl create -f redis-master-controller.yaml
replicationcontroller/redis-master created
controlplane $ kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         0       2s
controlplane $ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
redis-master-2j4qm   1/1     Running   0          4s
1.3 Redis Master Service
The second part is the service. A Kubernetes service is a named load balancer that proxies traffic to one or more containers. The proxy works even when the containers are on different nodes.

Services proxy communication within the cluster and rarely expose ports to an external interface.

When you launch a service, it may seem as if you cannot connect to it with curl or netcat unless you start it as part of Kubernetes. The recommended approach for external communication is a LoadBalancer service.
controlplane $ cat redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master
Create it:
controlplane $ kubectl create -f redis-master-service.yaml
service/redis-master created
controlplane $ kubectl get services
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP    6m58s
redis-master   ClusterIP   10.111.64.45   <none>        6379/TCP   1s
controlplane $ kubectl describe services redis-master
Name:              redis-master
Namespace:         default
Labels:            name=redis-master
Annotations:       <none>
Selector:          name=redis-master
Type:              ClusterIP
IP:                10.111.64.45
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.32.0.193:6379
Session Affinity:  None
Events:            <none>
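Because the service has only a cluster IP, it is reachable only from inside the cluster, as noted above. A quick sanity check is to connect to Redis through the service name from the master pod itself — a minimal sketch, assuming the pod name shown above and that the redis:3.0.7-alpine image ships redis-cli:

# Connect through the service DNS name; kube-dns resolves redis-master
# to the ClusterIP (10.111.64.45 above). A working service replies: PONG
controlplane $ kubectl exec redis-master-2j4qm -- redis-cli -h redis-master ping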
1.4 Redis Slave Replication Controller
In this example we run Redis slaves that replicate data from the master. For more details on Redis replication, visit http://redis.io/topics/replication

As mentioned before, the controller defines how the service runs. In this example we need to determine how the service discovers other pods. The YAML sets the GET_HOSTS_FROM property to dns. You could change it to use environment variables instead, but that introduces a creation-order dependency: the services would have to be running before the environment variables could be defined.

In this case we launch two instances of the pod using the image gcr.io/google_samples/gb-redisslave:v1. They link to redis-master via DNS.
controlplane $ cat redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: worker
        image: gcr.io/google_samples/gb-redisslave:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379
Run it:
controlplane $ kubectl create -f redis-slave-controller.yaml
replicationcontroller/redis-slave created
controlplane $ kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         1       4m29s
redis-slave    2         2         2       3s
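To confirm that replication is actually established, you can query the replication state from one of the slave pods — a sketch, assuming the pod name shown later in this walkthrough and that the gb-redisslave image includes redis-cli:

# 'role:slave' and 'master_host:redis-master' in the output indicate
# the slave has attached to the master.
controlplane $ kubectl exec redis-slave-79w2b -- redis-cli info replication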
1.5 Redis Slave Service
As before, we need to make our slaves accessible to incoming requests. This is done by starting a service that knows how to communicate with redis-slave.

Because we have two replicated pods, the service also provides load balancing across the two replicas.
controlplane $ cat redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    name: redis-slave
Run it:
controlplane $ kubectl create -f redis-slave-service.yaml
controlplane $ kubectl get services
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    14m
redis-master   ClusterIP   10.111.64.45    <none>        6379/TCP   7m13s
redis-slave    ClusterIP   10.109.135.21   <none>        6379/TCP   41s
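Because GET_HOSTS_FROM is set to dns, the pods find these services purely by name. You can verify the records that kube-dns serves from inside any pod — a sketch, assuming the Alpine-based redis image provides BusyBox's nslookup:

# Both names should resolve to the ClusterIPs shown above
# (10.111.64.45 and 10.109.135.21).
controlplane $ kubectl exec redis-master-2j4qm -- nslookup redis-master
controlplane $ kubectl exec redis-master-2j4qm -- nslookup redis-slave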
1.6 Frontend Replication Controller
With the data services started, we can now deploy the web application. The pattern is the same as for the pods we deployed before. The YAML defines a replication controller named frontend that uses the image gcr.io/google_samples/gb-frontend:v3. The replication controller ensures that three pods always exist.
controlplane $ cat frontend-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
Run it:
controlplane $ kubectl create -f frontend-controller.yaml
replicationcontroller/frontend created
controlplane $ kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
frontend       3         3         1       2s
redis-master   1         1         1       20m
redis-slave    2         2         2       15m
controlplane $ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
frontend-bkcsj       1/1     Running   0          3s
frontend-ftjrk       1/1     Running   0          3s
frontend-jnckp       1/1     Running   0          3s
redis-master-2j4qm   1/1     Running   0          20m
redis-slave-79w2b    1/1     Running   0          15m
redis-slave-j8zqj    1/1     Running   0          15m
The PHP code communicates with Redis over HTTP and JSON. When a value is set, the request goes to redis-master, while reads come from the redis-slave nodes.
1.7 Guestbook Frontend Service
To make the frontend accessible, we need to start a service to configure the proxy.

The YAML defines the service as a NodePort. A NodePort allows you to set a well-known port that is shared across the entire cluster, much like -p 80:80 in Docker.

In this case, the web application runs on port 80 inside the pods, but we expose the service on port 30080.
controlplane $ cat frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: NodePort
  ports:
    # the port that this service should serve on
  - port: 80
    nodePort: 30080
  selector:
    name: frontend
controlplane $ kubectl create -f frontend-service.yaml
service/frontend created
controlplane $ kubectl get services
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend       NodePort    10.105.214.152   <none>        80:30080/TCP   2s
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        28m
redis-master   ClusterIP   10.111.64.45     <none>        6379/TCP       21m
redis-slave    ClusterIP   10.109.135.21    <none>        6379/TCP       15m
1.8 Access Guestbook Frontend
With all the controllers and services defined, Kubernetes starts launching them as pods. A pod can report different states depending on what is happening. For example, if the Docker image is still downloading, the pod is Pending because it cannot start. Once ready, the status changes to Running.

Check the pod status:
controlplane $ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
frontend-bkcsj       1/1     Running   0          4m55s
frontend-ftjrk       1/1     Running   0          4m55s
frontend-jnckp       1/1     Running   0          4m55s
redis-master-2j4qm   1/1     Running   0          24m
redis-slave-79w2b    1/1     Running   0          20m
redis-slave-j8zqj    1/1     Running   0          20m
Find the node port:
controlplane $ kubectl describe service frontend | grep NodePort
Type:                     NodePort
NodePort:                 <unset>  30080/TCP
View the UI:
Once the pods are running, the UI is available via port 30080. Use the URL to view the page: https://2886795293-30080-elsy05.environments.katacoda.com
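Since the frontend is a NodePort service, the same application also answers on port 30080 of any node, and its JSON API can be exercised directly with curl — a sketch, assuming the gb-frontend image serves guestbook.php with cmd/key/value parameters as in the upstream guestbook example:

# Write an entry (the PHP code sends this to redis-master)...
controlplane $ curl "http://localhost:30080/guestbook.php?cmd=set&key=messages&value=hello"
# ...and read it back (served from a redis-slave).
controlplane $ curl "http://localhost:30080/guestbook.php?cmd=get&key=messages"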
Under the covers, the PHP service discovers the Redis instances via DNS. You have now deployed a working multi-tier application on Kubernetes.
2. Networking Introduction
Kubernetes has advanced networking capabilities that allow pods and services to communicate inside and outside the cluster network.

In this scenario you will learn about the following types of Kubernetes services:

Cluster IP
Target Port
Node Port
External IPs
Load Balancer

A Kubernetes service is an abstraction that defines a policy and approach for accessing a set of pods. The set of pods a service reaches is determined by a label selector.
2.1 Cluster IP
Cluster IP is the default type when creating a Kubernetes service. The service is allocated an internal IP that other components can use to access the pods.

With a single IP address in front of them, the service can load-balance across multiple pods.
controlplane $ cat clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-clusterip-svc
  labels:
    app: webapp1-clusterip
spec:
  ports:
  - port: 80
  selector:
    app: webapp1-clusterip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-clusterip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-clusterip
    spec:
      containers:
      - name: webapp1-clusterip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
controlplane $ kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
webapp1-clusterip-deployment-669c7c65c4-gqlkc   1/1     Running   0          112s
webapp1-clusterip-deployment-669c7c65c4-hwkrl   1/1     Running   0          112s
controlplane $ kubectl get svc
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes              ClusterIP   10.96.0.1      <none>        443/TCP   6m28s
webapp1-clusterip-svc   ClusterIP   10.100.49.56   <none>        80/TCP    116s
controlplane $ kubectl describe svc/webapp1-clusterip-svc
Name:              webapp1-clusterip-svc
Namespace:         default
Labels:            app=webapp1-clusterip
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-clusterip"},"name":"webapp1-clusterip-svc","name...
Selector:          app=webapp1-clusterip
Type:              ClusterIP
IP:                10.100.49.56
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.32.0.5:80,10.32.0.6:80
Session Affinity:  None
Events:            <none>
controlplane $ export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-svc -o go-template='{{(index .spec.clusterIP)}}')
controlplane $ echo CLUSTER_IP=$CLUSTER_IP
CLUSTER_IP=10.100.49.56
controlplane $ curl $CLUSTER_IP:80
<h1>This request was processed by host: webapp1-clusterip-deployment-669c7c65c4-gqlkc</h1>
controlplane $ curl $CLUSTER_IP:80
<h1>This request was processed by host: webapp1-clusterip-deployment-669c7c65c4-gqlkc</h1>
Issuing multiple requests demonstrates the service load-balancing across the pods that share the common label selector.
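The load balancing is driven entirely by the label selector: the service keeps an endpoint list in sync with whatever pods match app=webapp1-clusterip. The two views can be compared directly:

# Pod IPs selected by the label...
controlplane $ kubectl get pods -l app=webapp1-clusterip -o wide
# ...should match the service's endpoint list (10.32.0.5:80,10.32.0.6:80 above).
controlplane $ kubectl get endpoints webapp1-clusterip-svc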
2.2 Target Port
A target port allows us to separate the port the service is available on from the port the application is listening on. TargetPort is the port the application is configured to listen on; Port is how the application is accessed from the outside.
controlplane $ cat clusterip-target.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-clusterip-targetport-svc
  labels:
    app: webapp1-clusterip-targetport
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: webapp1-clusterip-targetport
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-clusterip-targetport-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-clusterip-targetport
    spec:
      containers:
      - name: webapp1-clusterip-targetport-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f clusterip-target.yaml
service/webapp1-clusterip-targetport-svc created
deployment.extensions/webapp1-clusterip-targetport-deployment created
controlplane $ kubectl get svc
NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes                         ClusterIP   10.96.0.1       <none>        443/TCP    11m
webapp1-clusterip-svc              ClusterIP   10.100.49.56    <none>        80/TCP     6m33s
webapp1-clusterip-targetport-svc   ClusterIP   10.99.164.105   <none>        8080/TCP   2s
controlplane $ kubectl describe svc/webapp1-clusterip-targetport-svc
Name:              webapp1-clusterip-targetport-svc
Namespace:         default
Labels:            app=webapp1-clusterip-targetport
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-clusterip-targetport"},"name":"webapp1-clusterip...
Selector:          app=webapp1-clusterip-targetport
Type:              ClusterIP
IP:                10.99.164.105
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.32.0.7:80,10.32.0.8:80
Session Affinity:  None
Events:            <none>
controlplane $ export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-targetport-svc -o go-template='{{(index .spec.clusterIP)}}')
controlplane $ echo CLUSTER_IP=$CLUSTER_IP
CLUSTER_IP=10.99.164.105
controlplane $ curl $CLUSTER_IP:8080
<h1>This request was processed by host: webapp1-clusterip-targetport-deployment-5599945ff4-9n89k</h1>
controlplane $ curl $CLUSTER_IP:8080
<h1>This request was processed by host: webapp1-clusterip-targetport-deployment-5599945ff4-9n89k</h1>
controlplane $ curl $CLUSTER_IP:8080
With the service and pods deployed, the application can be accessed via the cluster IP as before, but this time on the defined port 8080. The application itself is still configured to listen on port 80; the Kubernetes service manages the translation between the two.
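For ad-hoc debugging there is also kubectl port-forward, which tunnels a local port through the service to a backing pod without changing the service type — a sketch; the local port 9090 is an arbitrary choice:

# Forward local port 9090 to the service port 8080; the service then
# translates to the pods' targetPort 80 as before.
controlplane $ kubectl port-forward svc/webapp1-clusterip-targetport-svc 9090:8080 &
controlplane $ curl localhost:9090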
2.3 Node Port
While TargetPort and ClusterIP make the service available inside the cluster, a NodePort exposes the service on a static port on every node's IP. No matter which node in the cluster you reach, the service answers on the defined port number.
controlplane $ cat nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-nodeport-svc
  labels:
    app: webapp1-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: webapp1-nodeport
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-nodeport-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-nodeport
    spec:
      containers:
      - name: webapp1-nodeport-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f nodeport.yaml
service/webapp1-nodeport-svc created
deployment.extensions/webapp1-nodeport-deployment created
controlplane $ kubectl get svc
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP        14m
webapp1-clusterip-svc              ClusterIP   10.100.49.56     <none>        80/TCP         9m39s
webapp1-clusterip-targetport-svc   ClusterIP   10.99.164.105    <none>        8080/TCP       3m8s
webapp1-nodeport-svc               NodePort    10.111.226.228   <none>        80:30080/TCP   48s
controlplane $ kubectl describe svc/webapp1-nodeport-svc
Name:                     webapp1-nodeport-svc
Namespace:                default
Labels:                   app=webapp1-nodeport
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-nodeport"},"name":"webapp1-nodeport-svc","namesp...
Selector:                 app=webapp1-nodeport
Type:                     NodePort
IP:                       10.111.226.228
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.32.0.10:80,10.32.0.9:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
controlplane $ curl 172.17.0.66:30080
<h1>This request was processed by host: webapp1-nodeport-deployment-677bd89b96-hqdbb</h1>
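Since the NodePort is opened on every node, the same request succeeds against any node IP in the cluster, not only the 172.17.0.66 address used above — a sketch, with the second node's IP left as a placeholder to look up:

# Find each node's INTERNAL-IP, then hit port 30080 on it.
controlplane $ kubectl get nodes -o wide
controlplane $ curl 172.17.0.66:30080            # controlplane, as above
controlplane $ curl <node01-internal-ip>:30080   # substitute node01's IP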
2.4 External IPs
Another approach to making a service available outside the cluster is via an external IP address.

Update the definition below to use the current cluster's IP address (the sed command substitutes the HOSTIP placeholder):
controlplane $ cat externalip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-externalip-svc
  labels:
    app: webapp1-externalip
spec:
  ports:
  - port: 80
  externalIPs:
  - HOSTIP
  selector:
    app: webapp1-externalip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-externalip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-externalip
    spec:
      containers:
      - name: webapp1-externalip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ sed -i 's/HOSTIP/172.17.0.66/g' externalip.yaml
controlplane $ cat externalip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-externalip-svc
  labels:
    app: webapp1-externalip
spec:
  ports:
  - port: 80
  externalIPs:
  - 172.17.0.66
  selector:
    app: webapp1-externalip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-externalip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-externalip
    spec:
      containers:
      - name: webapp1-externalip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f externalip.yaml
service/webapp1-externalip-svc created
deployment.extensions/webapp1-externalip-deployment created
controlplane $ kubectl get svc
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP        16m
webapp1-clusterip-svc              ClusterIP   10.100.49.56     <none>        80/TCP         11m
webapp1-clusterip-targetport-svc   ClusterIP   10.99.164.105    <none>        8080/TCP       5m15s
webapp1-externalip-svc             ClusterIP   10.101.221.229   172.17.0.66   80/TCP         2s
webapp1-nodeport-svc               NodePort    10.111.226.228   <none>        80:30080/TCP   2m55s
controlplane $ kubectl describe svc/webapp1-externalip-svc
Name:              webapp1-externalip-svc
Namespace:         default
Labels:            app=webapp1-externalip
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-externalip"},"name":"webapp1-externalip-svc","na...
Selector:          app=webapp1-externalip
Type:              ClusterIP
IP:                10.101.221.229
External IPs:      172.17.0.66
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.32.0.11:80
Session Affinity:  None
Events:            <none>
controlplane $ curl 172.17.0.66
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-tjrpt</h1>
controlplane $ curl 172.17.0.66
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-tjrpt</h1>
2.5 Load Balancer
When running in a cloud such as EC2 or Azure, a public IP address can be configured and issued via the cloud provider's load balancer, such as an ELB. This allows additional public IP addresses to be allocated to a Kubernetes cluster without interacting directly with the cloud provider.

Since Katacoda is not a cloud provider, it is still possible to dynamically allocate IP addresses to LoadBalancer-type services. This is done by deploying kube-keepalived-vip together with keepalived-cloud-provider:
controlplane $ cat cloudprovider.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-keepalived-vip
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: kube-keepalived-vip
    spec:
      hostNetwork: true
      containers:
      - image: gcr.io/google_containers/kube-keepalived-vip:0.9
        name: kube-keepalived-vip
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /lib/modules
          name: modules
          readOnly: true
        - mountPath: /dev
          name: dev
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # to use unicast
        args:
        - --services-configmap=kube-system/vip-configmap
        # unicast uses the ip of the nodes instead of multicast
        # this is useful if running in cloud providers (like AWS)
        #- --use-unicast=true
      volumes:
      - name: modules
        hostPath:
          path: /lib/modules
      - name: dev
        hostPath:
          path: /dev
      nodeSelector:
        # type: worker # adjust this to match your worker nodes
---
## We also create an empty ConfigMap to hold our config
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
  namespace: kube-system
data:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: keepalived-cloud-provider
  name: keepalived-cloud-provider
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: keepalived-cloud-provider
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
      labels:
        app: keepalived-cloud-provider
    spec:
      containers:
      - name: keepalived-cloud-provider
        image: quay.io/munnerz/keepalived-cloud-provider:0.0.1
        imagePullPolicy: IfNotPresent
        env:
        - name: KEEPALIVED_NAMESPACE
          value: kube-system
        - name: KEEPALIVED_CONFIG_MAP
          value: vip-configmap
        - name: KEEPALIVED_SERVICE_CIDR
          value: 10.10.0.0/26 # pick a CIDR that is explicitly reserved for keepalived
        volumeMounts:
        - name: certs
          mountPath: /etc/ssl/certs
        resources:
          requests:
            cpu: 200m
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10252
            host: 127.0.0.1
          initialDelaySeconds: 15
          timeoutSeconds: 15
          failureThreshold: 8
      volumes:
      - name: certs
        hostPath:
          path: /etc/ssl/certs
controlplane $ kubectl apply -f cloudprovider.yaml
daemonset.extensions/kube-keepalived-vip configured
configmap/vip-configmap configured
deployment.apps/keepalived-cloud-provider created
controlplane $ kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-9hrwv                     1/1     Running   0          21m
coredns-fb8b8dccf-skwkj                     1/1     Running   0          21m
etcd-controlplane                           1/1     Running   0          20m
katacoda-cloud-provider-558d5c854b-6h955    1/1     Running   0          21m
keepalived-cloud-provider-78fc4468b-lpg9s   1/1     Running   0          2m41s
kube-apiserver-controlplane                 1/1     Running   0          20m
kube-controller-manager-controlplane        1/1     Running   0          20m
kube-keepalived-vip-hq7hk                   1/1     Running   0          21m
kube-proxy-468j8                            1/1     Running   0          21m
kube-scheduler-controlplane                 1/1     Running   0          20m
weave-net-w5zff                             2/2     Running   1          21m
The service is then deployed with type LoadBalancer:
controlplane $ cat loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-loadbalancer-svc
  labels:
    app: webapp1-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: webapp1-loadbalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-loadbalancer-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-loadbalancer
    spec:
      containers:
      - name: webapp1-loadbalancer-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl get svc
NAME                               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                         ClusterIP      10.96.0.1        <none>        443/TCP        23m
webapp1-clusterip-svc              ClusterIP      10.100.49.56     <none>        80/TCP         19m
webapp1-clusterip-targetport-svc   ClusterIP      10.99.164.105    <none>        8080/TCP       12m
webapp1-externalip-svc             ClusterIP      10.101.221.229   172.17.0.66   80/TCP         7m22s
webapp1-loadbalancer-svc           LoadBalancer   10.104.93.133    172.17.0.66   80:31232/TCP   97s
webapp1-nodeport-svc               NodePort       10.111.226.228   <none>        80:30080/TCP   10m
controlplane $ kubectl describe svc/webapp1-loadbalancer-svc
Name:                     webapp1-loadbalancer-svc
Namespace:                default
Labels:                   app=webapp1-loadbalancer
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-loadbalancer"},"name":"webapp1-loadbalancer-svc"...
Selector:                 app=webapp1-loadbalancer
Type:                     LoadBalancer
IP:                       10.104.93.133
LoadBalancer Ingress:     172.17.0.66
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31232/TCP
Endpoints:                10.32.0.14:80,10.32.0.15:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  CreatingLoadBalancer  99s   service-controller  Creating load balancer
  Normal  CreatedLoadBalancer   99s   service-controller  Created load balancer
The service can now be accessed via the allocated IP address, in this case one from the 10.10.0.0/26 range.
controlplane $ export LoadBalancerIP=$(kubectl get services/webapp1-loadbalancer-svc -o go-template='{{(index .status.loadBalancer.ingress 0).ip}}')
controlplane $ echo LoadBalancerIP=$LoadBalancerIP
LoadBalancerIP=172.17.0.66
controlplane $ curl $LoadBalancerIP
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-xt4nh</h1>
controlplane $ curl $LoadBalancerIP
<h1>This request was processed by host: webapp1-externalip-deployment-6446b488f8-xt4nh</h1>
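Behind the scenes, keepalived-cloud-provider records each allocated VIP in the ConfigMap it was configured with — a sketch for inspecting that mapping, assuming the IP-to-service data layout used by kube-keepalived-vip:

# Each allocated load-balancer IP should appear as a key mapped to the
# service it fronts, e.g. default/webapp1-loadbalancer-svc.
controlplane $ kubectl get configmap vip-configmap -n kube-system -o yaml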