K8S(5)HPA



The complete metrics-server YAML file is included below.


1. HPA Overview


  • HPA stands for Horizontal Pod Autoscaler. It automatically increases or decreases the number of Pods based on observed CPU or memory utilization, or on custom metrics. HPA does not apply to objects that cannot be scaled, such as a DaemonSet; it is usually used with a Deployment.
  • The HPA controller periodically adjusts the replica count of an RC or Deployment so that the number of Pods matches the rule defined by the user.
  • Since Pods are scaled out and in based on metrics such as CPU and memory, HPA needs a component that can monitor these resources. There are several choices, such as metrics-server and Heapster; metrics-server is used here.


metrics-server collects CPU and memory usage metrics from the kubelet on each node and exposes them through the Kubernetes API server (the Metrics API), where the HPA controller can query them.
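Once metrics-server is deployed (see below), the Metrics API it serves can be queried directly to confirm that data is flowing; a quick sanity check (output omitted, values vary per cluster):

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes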


2. HPA Versions


  • View all available HPA API versions


[root@master C]# kubectl api-versions |grep autoscaling
autoscaling/v1      #only supports CPU as the basis for changing the Pod replica count
autoscaling/v2beta1   #supports CPU, memory, connection counts, or custom metrics as the basis
autoscaling/v2beta2   #much the same as v2beta1
  • View the current default version
[root@master C]# kubectl explain hpa
KIND:     HorizontalPodAutoscaler
VERSION:  autoscaling/v1  #the default version in use is v1
DESCRIPTION:
     configuration of a horizontal pod autoscaler.
FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
   metadata     <Object>
     Standard object metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
   spec <Object>
     behaviour of autoscaler. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
   status       <Object>
     current information about the autoscaler.
  • Specify a version explicitly. This is not a modification; it simply tells this one command which API version to describe.


[root@master C]# kubectl explain hpa --api-version=autoscaling/v2beta1
KIND:     HorizontalPodAutoscaler
VERSION:  autoscaling/v2beta1
DESCRIPTION:
     HorizontalPodAutoscaler is the configuration for a horizontal pod
     autoscaler, which automatically manages the replica count of any resource
     implementing the scale subresource based on the metrics specified.
FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
   metadata     <Object>
     metadata is the standard object metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
   spec <Object>
     spec is the specification for the behaviour of the autoscaler. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
   status       <Object>
     status is the current information about the autoscaler.


3. HPA Deployment


(1) Deploy metrics-server

[root@master kube-system]# kubectl top nodes  #check node metrics; this errors out because metrics-server is not installed yet
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

Write the YAML file, paying attention to the port and the image.
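The manifest below follows the upstream metrics-server v0.5.0 components.yaml, with the image pointed at an Aliyun mirror and --kubelet-insecure-tls added. If the cluster can reach GitHub directly, the original manifest can be downloaded instead (URL assumed from the metrics-server release page):

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml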

[root@master kube-system]# vim components-v0.5.0.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
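        # Note: --kubelet-insecure-tls skips verification of the kubelet serving
        # certificate (convenient in a lab, not recommended in production);
        # --metric-resolution=15s controls how often kubelet metrics are scraped.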
        image: registry.cn-shenzhen.aliyuncs.com/zengfengjin/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
  • Deploy it
[root@master kube-system]# kubectl apply -f components-v0.5.0.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
#check the created pod
[root@master kube-system]# kubectl get pods -n kube-system| egrep 'NAME|metrics-server'
NAME                              READY   STATUS              RESTARTS   AGE
metrics-server-5944675dfb-q6cdd   0/1     ContainerCreating   0          6s
#check the logs
[root@master kube-system]# kubectl logs metrics-server-5944675dfb-q6cdd  -n kube-system 
I0718 03:06:39.064633       1 serving.go:341] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0718 03:06:39.870097       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0718 03:06:39.870122       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0718 03:06:39.870159       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0718 03:06:39.870160       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0718 03:06:39.870105       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0718 03:06:39.871166       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0718 03:06:39.872804       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0718 03:06:39.875741       1 secure_serving.go:197] Serving securely on [::]:4443
I0718 03:06:39.876050       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0718 03:06:39.970469       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0718 03:06:39.970575       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0718 03:06:39.971610       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
#if errors occur, you can modify the apiserver manifest (this is the Kubernetes static pod YAML)
[root@master kube-system]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --enable-aggregator-routing=true     #add this line
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
    imagePullPolicy: IfNotPresent
#save and quit
[root@master kube-system]# systemctl restart kubelet  #restart kubelet after the change
#check node metrics again
[root@master kube-system]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   327m         4%     3909Mi          23%
node     148m         1%     1327Mi          8%
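Two extra checks are often useful at this point; once metrics-server is serving, the APIService should report Available=True (commands shown without output):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top pods -n kube-system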


(2) Create a Deployment


  • Create an nginx Deployment here
[root@master test]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:  #a requests declaration is required for HPA to work
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
  selector:
    run: nginx
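Apply the manifest (file name as in the listing above):

kubectl apply -f nginx.yaml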
  • Access test
[root@master test]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE   NOMINATED NODE   READINESS GATES
nginx-9cb8d65b5-tq9v4   1/1     Running   0          14m   10.244.1.22   node   <none>           <none>
[root@master test]# kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   172.16.169.27   <none>        80/TCP    15m
[root@master test]# kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            run=nginx
Annotations:       <none>
Selector:          run=nginx
Type:              ClusterIP
IP:                172.16.169.27
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.22:80
Session Affinity:  None
Events:            <none>
[root@node test]# curl 172.16.169.27  #access succeeded
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
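Before creating the HPA, it helps to confirm that metrics-server is already reporting usage for the new pod (a quick check; values will differ):

kubectl top pod -l run=nginx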

(3) Create an HPA based on CPU

#create an HPA targeting 20% CPU utilization, with at most 10 and at least 1 pod; no version is specified here, so the default v1 is used, and v1 can only scale on CPU (an equivalent declarative manifest is sketched at the end of this subsection)
[root@master test]# kubectl autoscale deployment nginx --cpu-percent=20 --min=1 --max=10
horizontalpodautoscaler.autoscaling/nginx autoscaled
#the TARGETS column shows current/target utilization
[root@master test]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   0%/20%    1         10        1          86s
#create a test pod to generate load; the target address must be the nginx pod or Service created above (the pod IP 10.244.1.22 is used here)
[root@master ~]# kubectl  run busybox -it --image=busybox -- /bin/sh -c 'while true; do wget -q -O- http://10.244.1.22; done'
#after about a minute, check the HPA utilization; REPLICAS is the current number of pods
[root@master test]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   27%/20%   1         10        5          54m
[root@master test]# kubectl get pods   #check the pod count again; it has grown to 5
NAME                    READY   STATUS    RESTARTS   AGE
busybox                 1/1     Running   0          119s
nginx-9cb8d65b5-24dg2   1/1     Running   0          57s
nginx-9cb8d65b5-c6n98   1/1     Running   0          87s
nginx-9cb8d65b5-ksjzv   1/1     Running   0          57s
nginx-9cb8d65b5-n77fm   1/1     Running   0          87s
nginx-9cb8d65b5-tq9v4   1/1     Running   0          84m
[root@master test]# kubectl get deployments.apps
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   5/5     5            5           84m
#now stop the load test; after several minutes, check the pod count and utilization again
[root@master test]# kubectl delete pod busybox  #after stopping the loop, delete the pod
[root@master test]# kubectl get hpa  #utilization has already dropped to 0%, but REPLICAS is still 5; it scales back down after a stabilization period
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   0%/20%    1         10        5          58m
#a few minutes later, the pod count is back to 1
[root@master test]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   0%/20%    1         10        1          64m  
[root@master test]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-9cb8d65b5-tq9v4   1/1     Running   0          95m
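For reference, the imperative kubectl autoscale command above corresponds roughly to this autoscaling/v1 manifest (a sketch mirroring the flags used; the HPA name simply reuses the Deployment name):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20

The delayed scale-down observed above is expected: the controller waits for a stabilization window (5 minutes by default, set by --horizontal-pod-autoscaler-downscale-stabilization on kube-controller-manager) before removing replicas.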


(4) Create an HPA based on memory

#first delete the resources created above
[root@master test]# kubectl delete horizontalpodautoscalers.autoscaling  nginx
horizontalpodautoscaler.autoscaling "nginx" deleted
[root@master test]# kubectl delete -f nginx.yaml
deployment.apps "nginx" deleted
service "nginx" deleted
  • Rewrite the YAML file
[root@master test]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
            memory: 60Mi
          requests:
            cpu: 200m
            memory: 25Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
  selector:
    run: nginx
[root@master test]# kubectl apply -f nginx.yaml
deployment.apps/nginx created
service/nginx created
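With requests.memory: 25Mi and a 50% target, the HPA aims for roughly 12.5Mi of average usage per pod. The controller's documented scaling rule is

desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue)

so the 137%/50% reading seen during the test below works out to ceil(1 × 137 / 50) = 3 replicas, which matches the scale-up that follows.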
  • Create the HPA
[root@master test]# vim hpa-nginx.yaml
apiVersion: autoscaling/v2beta1  #as noted in the HPA versions section above, a memory-based HPA needs a newer API version
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  maxReplicas: 10  #the pod count is limited to 1-10
  minReplicas: 1
  scaleTargetRef:               #the resource object this HPA scales; the apiVersion, kind and name must match the Deployment created above
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50   #target 50% average memory utilization
[root@master test]# kubectl apply -f hpa-nginx.yaml
horizontalpodautoscaler.autoscaling/nginx-hpa created
[root@master test]# kubectl get hpa
NAME        REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   7%/50%    1         10        1          59s
  • Switch to another terminal for testing


#run a command inside the pod to increase memory load
[root@master ~]# kubectl exec -it nginx-78f4944bb8-2rz7j -- /bin/sh -c 'dd if=/dev/zero of=/tmp/file1'
  • Wait for the load to rise, then check the pod count and memory utilization


[root@master test]# kubectl get hpa
NAME        REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   137%/50%   1         10        1          12m
[root@master test]# kubectl get hpa
NAME        REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx   14%/50%   1         10        3          12m
[root@master test]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-78f4944bb8-2rz7j   1/1     Running   0          21m
nginx-78f4944bb8-bxh78   1/1     Running   0          34s
nginx-78f4944bb8-g8w2h   1/1     Running   0          34s
#just as with CPU, pods are created automatically when memory usage rises
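On clusters that serve autoscaling/v2beta2 (or autoscaling/v2 on newer releases), the same HPA is written with a nested target block instead of targetAverageUtilization; a sketch of the equivalent spec:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 50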

