3. Create Ingress Routing
Kubernetes has advanced networking capabilities that allow Pods and Services to communicate inside the cluster network. An Ingress enables inbound connections to the cluster, allowing external traffic to reach the correct Pod.
Ingress enables externally reachable URLs, load-balanced traffic, SSL termination, and name-based virtual hosting for a Kubernetes cluster.
In this scenario you will learn how to deploy and configure Ingress rules to manage incoming HTTP requests.
3.1 Create HTTP Deployments
First, deploy a sample HTTP server that will be the target of our requests. The definition contains three Deployments, one called webapp1, a second called webapp2, and a third called webapp3, each with a matching Service.
controlplane $ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp1
  template:
    metadata:
      labels:
        app: webapp1
    spec:
      containers:
      - name: webapp1
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp2
  template:
    metadata:
      labels:
        app: webapp2
    spec:
      containers:
      - name: webapp2
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp3
  template:
    metadata:
      labels:
        app: webapp3
    spec:
      containers:
      - name: webapp3
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
  labels:
    app: webapp1
spec:
  ports:
  - port: 80
  selector:
    app: webapp1
---
apiVersion: v1
kind: Service
metadata:
  name: webapp2-svc
  labels:
    app: webapp2
spec:
  ports:
  - port: 80
  selector:
    app: webapp2
---
apiVersion: v1
kind: Service
metadata:
  name: webapp3-svc
  labels:
    app: webapp3
spec:
  ports:
  - port: 80
  selector:
    app: webapp3
controlplane $ kubectl apply -f deployment.yaml
deployment.apps/webapp1 created
deployment.apps/webapp2 created
deployment.apps/webapp3 created
service/webapp1-svc created
service/webapp2-svc created
service/webapp3-svc created
controlplane $ kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
webapp1   0/1     1            0           4s
webapp2   0/1     1            0           4s
webapp3   0/1     1            0           4s
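The Deployments report 0/1 READY immediately after creation because the image is still being pulled. If you want to block until all three become available before continuing (an optional convenience step, not part of the original scenario output), kubectl wait can be used:

controlplane $ kubectl wait --for=condition=Available --timeout=60s deployment/webapp1 deployment/webapp2 deployment/webapp3
controlplane $ kubectl get deployment,service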
3.2 Deploy Ingress
The YAML file ingress.yaml defines an Nginx-based Ingress controller together with a Service that makes it available on port 80 for external connections via ExternalIPs. If the Kubernetes cluster were running on a cloud provider, it would instead use a LoadBalancer Service type.
The ServiceAccount defines an account with a set of permissions controlling how the controller may access the cluster to read the defined Ingress rules. The default server secret is a self-signed certificate that Nginx uses for SSL connections to its default server, and it is required by the controller's default configuration.
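The certificate embedded in the manifest below is only an example. If you wanted to generate your own self-signed certificate and create the secret yourself (an optional step, not part of this scenario, and assuming the nginx-ingress namespace already exists), it could be done roughly like this:

controlplane $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=NGINXIngressController"
controlplane $ kubectl create secret tls default-server-secret -n nginx-ingress --cert=tls.crt --key=tls.key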
controlplane $ cat ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
---
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA4aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
---
# Described at: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
# Source from: https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/common/ingress-class.yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
  # annotations:
  #   ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: nginx.org/ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:edge
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
  externalIPs:
  - 172.17.0.88
The Ingress controller is deployed in the same familiar way as other Kubernetes objects:
controlplane $ kubectl create -f ingress.yaml
namespace/nginx-ingress created
secret/default-server-secret created
serviceaccount/nginx-ingress created
configmap/nginx-config created
ingressclass.networking.k8s.io/nginx created
deployment.apps/nginx-ingress created
service/nginx-ingress created
controlplane $ kubectl get deployment -n nginx-ingress
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress   0/1     1            0           4s
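The controller may take a moment to pull its image and start. Waiting for the rollout (an optional check, not shown in the original output) confirms it is ready before Ingress rules are applied:

controlplane $ kubectl -n nginx-ingress rollout status deployment/nginx-ingress
controlplane $ kubectl -n nginx-ingress get pods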
3.3 Deploy Ingress Rules
Ingress rules are a Kubernetes object type. Rules can be based on the host of the request (domain), the path of the request, or a combination of both.
controlplane $ cat ingress-rules.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: my.kubernetes.example
    http:
      paths:
      - path: /webapp1
        backend:
          serviceName: webapp1-svc
          servicePort: 80
      - path: /webapp2
        backend:
          serviceName: webapp2-svc
          servicePort: 80
      - backend:
          serviceName: webapp3-svc
          servicePort: 80
The important parts of the rules are described below.
The rules apply to requests for the host my.kubernetes.example. Two rules are defined based on the request path, plus one catch-all rule. Requests to the path /webapp1 are forwarded to the service webapp1-svc. Likewise, requests to /webapp2 are forwarded to webapp2-svc. If no rule applies, webapp3-svc is used.
This demonstrates how an application's URL structure can be kept independent of how the application is deployed.
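Note that the manifest above uses the deprecated extensions/v1beta1 API, which matches the cluster version used in this scenario. On clusters running Kubernetes 1.22 or later, where that API has been removed, an equivalent set of rules would look roughly like the sketch below (field names follow the networking.k8s.io/v1 schema; the catch-all is expressed as a / prefix, and the routing behaviour is the same):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: my.kubernetes.example
    http:
      paths:
      - path: /webapp1
        pathType: Prefix
        backend:
          service:
            name: webapp1-svc
            port:
              number: 80
      - path: /webapp2
        pathType: Prefix
        backend:
          service:
            name: webapp2-svc
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp3-svc
            port:
              number: 80

The rest of the scenario continues with the extensions/v1beta1 manifest shown above.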
controlplane $ kubectl create -f ingress-rules.yaml
ingress.extensions/webapp-ingress created
controlplane $ kubectl get ing
NAME             CLASS   HOSTS                   ADDRESS   PORTS   AGE
webapp-ingress   nginx   my.kubernetes.example             80      2s
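To confirm that the controller has picked up the backend Services for each path (an optional check, with output omitted here), the Ingress can be inspected:

controlplane $ kubectl describe ingress webapp-ingress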
3.4 Testing
Once the Ingress rules have been applied, traffic is routed to the defined locations.
The first request is handled by the webapp1 deployment:
curl -H "Host: my.kubernetes.example" 172.17.0.88/webapp1
The second request is handled by the webapp2 deployment:
curl -H "Host: my.kubernetes.example" 172.17.0.88/webapp2
Finally, all other requests are handled by the webapp3 deployment:
curl -H "Host: my.kubernetes.example" 172.17.0.88
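The -H "Host: ..." header is needed only because my.kubernetes.example does not resolve on the host. As a convenience (an optional step, not part of the original scenario), the hostname can be mapped to the ingress IP in /etc/hosts so plain URLs work:

controlplane $ echo "172.17.0.88 my.kubernetes.example" >> /etc/hosts
controlplane $ curl my.kubernetes.example/webapp1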
4. Liveness and Readiness Healthchecks
In this scenario you will learn how Kubernetes uses Readiness and Liveness Probes to check container health.
Readiness Probes check whether an application is ready to start handling traffic. They address the situation where a container has started but the process is still warming up and configuring itself, meaning it is not yet ready to receive traffic.
Liveness Probes ensure that the application remains healthy and able to handle requests.
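Both probe types share the same configuration format; what differs is what Kubernetes does when they fail. A minimal, generic sketch (the endpoint and values are illustrative, not taken from this scenario's images) showing the two side by side:

readinessProbe:          # on failure: the Pod is marked NotReady and removed from Service endpoints
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 1
  timeoutSeconds: 1
livenessProbe:           # on failure: the container is killed and restarted
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 1
  timeoutSeconds: 1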
4.1 Create an HTTP Application
controlplane $ cat deploy.yaml
kind: List
apiVersion: v1
items:
- kind: ReplicationController
  apiVersion: v1
  metadata:
    name: frontend
    labels:
      name: frontend
  spec:
    replicas: 1
    selector:
      name: frontend
    template:
      metadata:
        labels:
          name: frontend
      spec:
        containers:
        - name: frontend
          image: katacoda/docker-http-server:health
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
- kind: ReplicationController
  apiVersion: v1
  metadata:
    name: bad-frontend
    labels:
      name: bad-frontend
  spec:
    replicas: 1
    selector:
      name: bad-frontend
    template:
      metadata:
        labels:
          name: bad-frontend
      spec:
        containers:
        - name: bad-frontend
          image: katacoda/docker-http-server:unhealthy
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: frontend
      kubernetes.io/cluster-service: "true"
    name: frontend
  spec:
    type: NodePort
    ports:
    - port: 80
      nodePort: 30080
    selector:
      app: frontend
controlplane $ kubectl apply -f deploy.yaml
replicationcontroller/frontend created
replicationcontroller/bad-frontend created
service/frontend created
4.2 Readiness Probe
When the cluster was deployed, two Pods were also deployed to demonstrate the health checks; they are defined in the deploy.yaml shown above.
When the Replication Controllers were deployed, each Pod was given both a Readiness and a Liveness check. Each check uses the following format to perform an HTTP health check:
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 1
  timeoutSeconds: 1
The settings can be changed to call a different endpoint, such as /ping, depending on your application.
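For example, a probe pointed at a dedicated health endpoint, with a slightly more forgiving schedule, could look like the sketch below (/ping and the timing values are illustrative and not exposed by the images used in this scenario):

livenessProbe:
  httpGet:
    path: /ping            # hypothetical health endpoint exposed by the application
    port: 80
  initialDelaySeconds: 5   # give the process time to start
  periodSeconds: 10        # probe every 10 seconds
  timeoutSeconds: 1
  failureThreshold: 3      # restart only after 3 consecutive failures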
The first Pod, bad-frontend, is an HTTP service that always returns a 500 error, indicating that it has not started correctly. You can view the Pod's status with the following command:
controlplane $ kubectl get pods --selector="name=bad-frontend"
NAME                 READY   STATUS             RESTARTS   AGE
bad-frontend-5p4k6   0/1     CrashLoopBackOff   4          2m55s
Kubectl returns the Pods deployed with our specific label. Because the health check is failing, it reports that zero containers are ready. It also shows the number of restart attempts for the container.
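The next command describes the Pod by name; here $pod is assumed to hold the bad-frontend Pod's name, captured with the same jsonpath pattern used later in this scenario:

controlplane $ pod=$(kubectl get pods --selector="name=bad-frontend" --output=jsonpath={.items..metadata.name})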
controlplane $ kubectl describe pod $pod
Name:               bad-frontend-5p4k6
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               controlplane/172.17.0.44
Start Time:         Tue, 09 Nov 2021 15:44:19 +0000
Labels:             name=bad-frontend
Annotations:        <none>
Status:             Running
IP:                 10.32.0.6
Controlled By:      ReplicationController/bad-frontend
Containers:
  bad-frontend:
    Container ID:   docker://ae3c84bfdaa178fe2976e8b075e4e98da95df06b6f5bd85ef2eb5f92466c5f5d
    Image:          katacoda/docker-http-server:unhealthy
    Image ID:       docker-pullable://katacoda/docker-http-server@sha256:bea95c69c299c690103c39ebb3159c39c5061fee1dad13aa1b0625e0c6b52f22
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 09 Nov 2021 15:47:44 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 09 Nov 2021 15:46:34 +0000
      Finished:     Tue, 09 Nov 2021 15:47:01 +0000
    Ready:          False
    Restart Count:  5
    Liveness:       http-get http://:80/ delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:80/ delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h7qch (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-h7qch:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h7qch
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                    Message
  ----     ------     ----                   ----                    -------
  Normal   Scheduled  3m31s                  default-scheduler       Successfully assigned default/bad-frontend-5p4k6 to controlplane
  Normal   Pulling    3m21s                  kubelet, controlplane   Pulling image "katacoda/docker-http-server:unhealthy"
  Normal   Pulled     3m11s                  kubelet, controlplane   Successfully pulled image "katacoda/docker-http-server:unhealthy"
  Normal   Killing    2m19s (x2 over 2m49s)  kubelet, controlplane   Container bad-frontend failed liveness probe, will be restarted
  Normal   Created    2m18s (x3 over 3m11s)  kubelet, controlplane   Created container bad-frontend
  Normal   Pulled     2m18s (x2 over 2m48s)  kubelet, controlplane   Container image "katacoda/docker-http-server:unhealthy" already present on machine
  Normal   Started    2m16s (x3 over 3m10s)  kubelet, controlplane   Started container bad-frontend
  Warning  Unhealthy  2m5s (x5 over 3m5s)    kubelet, controlplane   Readiness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy  119s (x8 over 3m9s)    kubelet, controlplane   Liveness probe failed: HTTP probe failed with statuscode: 500
Our second Pod, frontend, returns an OK status when it starts.
controlplane $ kubectl get pods --selector="name=frontend"
NAME             READY   STATUS    RESTARTS   AGE
frontend-d29h8   1/1     Running   0          4m3s
4.3 Liveness Probe
Because our second Pod is currently healthy, we can simulate a failure occurring.
At this point, no crashes should have occurred.
controlplane $ kubectl get pods --selector="name=frontend"
NAME             READY   STATUS    RESTARTS   AGE
frontend-d29h8   1/1     Running   0          4m35s
Crash the Service
The HTTP server has an extra endpoint that causes it to start returning 500 errors. The endpoint can be called using kubectl exec:
controlplane $ pod=$(kubectl get pods --selector="name=frontend" --output=jsonpath={.items..metadata.name})
controlplane $ kubectl exec $pod -- /usr/bin/curl -s localhost/unhealthy
Kubernetes executes the Liveness Probe according to its configuration. If the probe fails, Kubernetes destroys and recreates the failing container. Run the commands above to crash the service and watch Kubernetes automatically recover it.
controlplane $ kubectl get pods --selector="name=frontend"
NAME             READY   STATUS    RESTARTS   AGE
frontend-d29h8   1/1     Running   1          5m56s
It may take a moment for the checks to detect the failure and for the restart to appear.
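If the RESTARTS counter has not changed yet, watching the Pod and filtering the probe events makes the recovery visible as it happens (optional commands, not part of the original scenario):

controlplane $ kubectl get pods --selector="name=frontend" --watch
controlplane $ kubectl describe pod $pod | grep -i -A 3 unhealthy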