Kubernetes----Dynamic Elastic Scaling with the HPA Controller

1. Introduction to the HPA Controller

The HPA (Horizontal Pod Autoscaler) controller collects pod utilization metrics, compares them against the targets defined in the HPA, computes the required scaling amount, and adjusts the number of pods accordingly. Like a Deployment, an HPA is itself a Kubernetes object: it tracks and analyzes the load of the target pods to decide whether their replica count needs to be adjusted.
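The scaling rule the controller applies can be sketched as follows. This is a simplified illustration of the documented HPA formula, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), with a tolerance band inside which no scaling happens; the function name is my own, and the 10% default tolerance corresponds to the controller's `--horizontal-pod-autoscaler-tolerance` flag:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     tolerance: float = 0.1) -> int:
    """Simplified HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric).
    If the metric is within the tolerance band around the target,
    the replica count is left unchanged."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to the target: no scaling
    return math.ceil(current_replicas * ratio)

# e.g. 1 replica at 3% CPU with a 1% target -> 3 replicas,
# matching the jump from 1 to 3 observed in section 2.4 below
```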


2. Environment Preparation for the HPA Controller

2.1 Installing metrics-server

(1) First install the git tool (see the earlier article on installing Git on CentOS)
(2) Download metrics-server:

git clone -b v0.3.6 https://github.com/kubernetes-incubator/metrics-server

(3) Edit the YAML configuration

cd metrics-server/deploy/1.8+/
vi metrics-server-deployment.yaml

Then edit the file so that it contains the following settings:

hostNetwork: true
image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
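For orientation, these settings sit in the metrics-server Deployment manifest roughly as follows (a trimmed sketch showing only where the edited lines belong, not the complete file):

```yaml
spec:
  template:
    spec:
      hostNetwork: true    # use the host network so the kubelets are directly reachable
      containers:
      - name: metrics-server
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
        args:
        - --kubelet-insecure-tls    # skip kubelet TLS verification (test clusters only)
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
```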

(4) Deploy with the following command:

[root@master 1.8+]# kubectl apply -f ./
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master 1.8+]#

Query the pods as follows; the presence of metrics-server-669dfc56ff-v6drv shows that the deployment succeeded:

[root@master 1.8+]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-7vbmq          1/1     Running   0          12d
coredns-558bd4d5db-sps22          1/1     Running   0          12d
etcd-master                       1/1     Running   0          12d
kube-apiserver-master             1/1     Running   0          12d
kube-controller-manager-master    1/1     Running   0          12d
kube-flannel-ds-cd9qk             1/1     Running   0          12d
kube-flannel-ds-gg4jq             1/1     Running   0          12d
kube-flannel-ds-n76xj             1/1     Running   0          12d
kube-proxy-g4j5g                  1/1     Running   0          12d
kube-proxy-h27ms                  1/1     Running   0          12d
kube-proxy-tqzjl                  1/1     Running   0          12d
kube-scheduler-master             1/1     Running   0          12d
metrics-server-669dfc56ff-v6drv   1/1     Running   0          56s
[root@master 1.8+]#

(5) Node resource usage can now be queried with the following command:

[root@master 1.8+]# kubectl top node
W0327 11:53:58.289701    9379 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   71m          0%     1268Mi          3%
node1    24m          0%     687Mi           2%
node2    23m          0%     721Mi           2%
[root@master 1.8+]#

(6) Pod resource usage can be viewed with the following command:

[root@master 1.8+]# kubectl top  pod -n kube-system
W0327 11:55:35.833234   10195 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME                              CPU(cores)   MEMORY(bytes)
coredns-558bd4d5db-7vbmq          1m           13Mi
coredns-558bd4d5db-sps22          1m           13Mi
etcd-master                       9m           26Mi
kube-apiserver-master             22m          297Mi
kube-controller-manager-master    6m           53Mi
kube-flannel-ds-cd9qk             2m           15Mi
kube-flannel-ds-gg4jq             2m           15Mi
kube-flannel-ds-n76xj             1m           16Mi
kube-proxy-g4j5g                  1m           19Mi
kube-proxy-h27ms                  1m           19Mi
kube-proxy-tqzjl                  1m           19Mi
kube-scheduler-master             2m           21Mi
metrics-server-669dfc56ff-v6drv   1m           15Mi
[root@master 1.8+]#

2.2 Creating the Deployment and Service

Edit a service_deployment.yaml file with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: dev

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          requests:
            cpu: 100m   # CPU requests are required for the HPA to compute CPU utilization

---

apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: dev
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30030
  selector:
    run: nginx
  type: NodePort

Create the resources with the following command:

[root@master pod_controller]# kubectl apply -f service_deployment.yaml
namespace/dev created
deployment.apps/deploy-nginx created
service/service-nginx created
[root@master pod_controller]#

View the created resources:

[root@master pod_controller]# kubectl get deployment,pod,service -n dev
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deploy-nginx   1/1     1            1           5m6s

NAME                                READY   STATUS    RESTARTS   AGE
pod/deploy-nginx-66ffc897cf-jhc9w   1/1     Running   0          5m6s

NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/service-nginx   NodePort   10.106.128.172   <none>        80:30030/TCP   5m6s
[root@master pod_controller]#

2.3 Deploying the HPA

Edit a pc_hpa.yaml file with the following content. Note that the CPU utilization target is set to 1% purely for testing; in a real deployment it should be chosen according to the actual workload.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pc-hpa
  namespace: dev
spec:
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deploy-nginx

Then create it with the following command:

[root@master pod_controller]# kubectl apply -f pc_hpa.yaml
horizontalpodautoscaler.autoscaling/pc-hpa created
[root@master pod_controller]#

View the HPA as follows. TARGETS may show &lt;unknown&gt; at first, until metrics-server reports the first samples (it will stay &lt;unknown&gt; if the target pods define no CPU requests):

[root@master pod_controller]# kubectl get hpa -n dev
NAME     REFERENCE                 TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/deploy-nginx   <unknown>/1%   1         10        1          5m18s
[root@master pod_controller]#

2.4 Observing the HPA Controller Under Load

Open three terminal windows to watch the pods, the deployment, and the HPA, using the following commands respectively:

kubectl get pod -n dev -w

kubectl get deploy -n dev -w

kubectl get hpa -n dev -w

Write a test script, test.sh, that sends GET requests to the service:

#!/bin/bash

for ((i = 1; i < 1000000; i++)); do
    curl http://192.168.16.40:30030
done

Run the test script, then watch how the number of pods changes.

Observed output for the deployment:

[root@master ~]# kubectl get deploy -n dev -w
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
deploy-nginx   1/1     1            1           82m
deploy-nginx   1/3     1            1           84m
deploy-nginx   1/3     1            1           84m
deploy-nginx   1/3     1            1           84m
deploy-nginx   1/3     3            1           84m
deploy-nginx   2/3     3            2           84m
deploy-nginx   3/3     3            3           84m

Observed output for the HPA:

[root@master ~]# kubectl get hpa -n dev -w
NAME     REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
pc-hpa   Deployment/deploy-nginx   0%/1%     1         10        1          63m
pc-hpa   Deployment/deploy-nginx   3%/1%     1         10        1          64m
pc-hpa   Deployment/deploy-nginx   3%/1%     1         10        3          64m
pc-hpa   Deployment/deploy-nginx   1%/1%     1         10        3          65m
pc-hpa   Deployment/deploy-nginx   1%/1%     1         10        3          66m

As the outputs show, the HPA does scale the pods dynamically in response to load.
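Scale-down is deliberately slower than scale-up: by default the controller uses the highest recommendation from the last 5 minutes (the downscale stabilization window, `--horizontal-pod-autoscaler-downscale-stabilization`), so replicas are only removed once load has stayed low for the whole window. A minimal sketch of that idea (the function name and data layout are my own, not the controller's actual code):

```python
def stabilized_desired(history, now, window=300):
    """history: list of (timestamp_seconds, desired_replicas) recommendations.
    Scale-down stabilization: pick the max recommendation seen inside the
    window, so a brief dip in load does not immediately remove replicas."""
    recent = [desired for t, desired in history if now - t <= window]
    return max(recent)

# A spike wanted 3 replicas 2 minutes ago; current demand is 1 replica.
# The HPA keeps 3 until the spike ages out of the 5-minute window.
```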
