I. ResourceQuota Overview
1. Resource quota overview
When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool that helps administrators address this concern.
A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the total number of objects of a given type in a namespace, as well as the total amount of compute resources that Pods in that namespace may consume.
Reference:
https://kubernetes.io/zh-cn/docs/concepts/policy/resource-quotas/
2. How ResourceQuota works
- Different teams work in different namespaces. This can be enforced with RBAC.
- A cluster administrator creates one or more ResourceQuota objects for each namespace.
- When a user creates resources (Pods, Services, etc.) in a namespace, Kubernetes' quota system tracks the cluster's resource usage to ensure it does not exceed the hard limits defined in the ResourceQuota.
- If creating or updating a resource would violate a quota constraint, the request fails (HTTP 403 FORBIDDEN) with a message explaining the constraint that would have been violated.
- If quotas are enabled for compute resources (such as cpu and memory) in a namespace, users must set requests and limits for those resources; otherwise the quota system rejects Pod creation. Tip: use the LimitRanger admission controller to apply default compute-resource values to Pods that do not set them.
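The accounting rule described above can be sketched in a few lines of Python. This is a simplified model of what quota admission enforces, not the actual kube-apiserver code: add the new Pod's values to the namespace's current usage and reject if any hard limit would be exceeded.

```python
# Simplified model of ResourceQuota admission (CPU in millicores).
# Not real kube-apiserver code; it only illustrates the accounting rule.

def admit(pod_values: dict, used: dict, hard: dict):
    """Return (allowed, reason). Reject if any quota-tracked resource
    would exceed its hard limit after adding the Pod's values."""
    for resource, hard_limit in hard.items():
        if resource not in pod_values:
            # Quota is enabled for this resource but the Pod does not
            # declare it -> rejected (hence the LimitRanger tip above).
            return False, f"must specify {resource}"
        if used.get(resource, 0) + pod_values[resource] > hard_limit:
            return False, f"exceeded quota: {resource}"
    return True, "ok"

hard = {"requests.cpu": 1000, "limits.cpu": 2000}
used = {"requests.cpu": 500, "limits.cpu": 1000}

print(admit({"requests.cpu": 400, "limits.cpu": 800}, used, hard))  # allowed
print(admit({"requests.cpu": 600, "limits.cpu": 800}, used, hard))  # rejected
print(admit({"limits.cpu": 800}, used, hard))                       # rejected: no requests.cpu
```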
3. Compute resource quota
- limits.cpu: Across all Pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
- limits.memory: Across all Pods in a non-terminal state, the sum of memory limits cannot exceed this value.
- requests.cpu: Across all Pods in a non-terminal state, the sum of CPU requests cannot exceed this value.
- requests.memory: Across all Pods in a non-terminal state, the sum of memory requests cannot exceed this value.
- hugepages-<size>: Across all Pods in a non-terminal state, the number of huge page requests of the specified size cannot exceed this value.
- cpu: Same as requests.cpu.
- memory: Same as requests.memory.
Reference:
https://kubernetes.io/zh-cn/docs/concepts/policy/resource-quotas/#compute-resource-quota
4. Storage resource quota
- requests.storage: Across all PVCs, the sum of storage requests cannot exceed this value.
- persistentvolumeclaims: The total number of PVCs allowed in the namespace.
- <storage-class-name>.storageclass.storage.k8s.io/requests.storage: Across all PVCs associated with <storage-class-name>, the sum of storage requests cannot exceed this value.
- <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims: The total number of PVCs associated with <storage-class-name> that can exist in the namespace.
Reference:
https://kubernetes.io/zh-cn/docs/concepts/policy/resource-quotas/#storage-resource-quota
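The two <storage-class-name>-scoped entries above are not exercised in the case studies below. A minimal sketch, assuming a StorageClass named nfs-csi exists in the cluster (the class name is hypothetical), would look like:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-per-class
  namespace: kube-public
spec:
  hard:
    # Total storage requested across PVCs of the (hypothetical) nfs-csi class
    nfs-csi.storageclass.storage.k8s.io/requests.storage: 5Gi
    # Number of PVCs of that class allowed in the namespace
    nfs-csi.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
```

PVCs using other storage classes are unaffected by this quota; only claims that reference nfs-csi count against it.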
5. Object count quota
- configmaps: The maximum number of ConfigMaps allowed in the namespace.
- persistentvolumeclaims: The maximum number of PVCs allowed in the namespace.
- pods: The maximum number of Pods in a non-terminal state allowed in the namespace. A Pod is in a terminal state when .status.phase in (Failed, Succeeded) is true.
- replicationcontrollers: The maximum number of ReplicationControllers allowed in the namespace.
- resourcequotas: The maximum number of ResourceQuotas allowed in the namespace.
- services: The maximum number of Services allowed in the namespace.
- services.loadbalancers: The maximum number of Services of type LoadBalancer allowed in the namespace.
- services.nodeports: The maximum number of Services of type NodePort allowed in the namespace.
- secrets: The maximum number of Secrets allowed in the namespace.
You can set quotas on any standard, namespaced resource type using the following syntax:
count/<resource>.<group>: for resources in non-core groups
count/<resource>: for resources in the core group
Here is an example set of resources users may want to place under object count quotas:
count/persistentvolumeclaims
count/services
count/secrets
count/configmaps
count/replicationcontrollers
count/deployments.apps
count/replicasets.apps
count/statefulsets.apps
count/jobs.batch
count/cronjobs.batch
Recommended reading:
https://kubernetes.io/zh-cn/docs/concepts/policy/resource-quotas/#object-count-quota
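The count/ syntax can be mixed with the built-in names in a single ResourceQuota. A sketch (the quota name and values are illustrative) that caps batch workloads alongside a core-group resource:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: batch-object-counts   # illustrative name
  namespace: kube-public
spec:
  hard:
    count/jobs.batch: "5"     # Jobs live in the batch (non-core) group
    count/cronjobs.batch: "2"
    count/configmaps: "10"    # ConfigMaps are in the core group
```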
II. ResourceQuota Examples
1. Compute resource quota example
1.1 Create the compute resource quota
[root@master231 01-ResourceQuota]# cat 01-compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: kube-public
spec:
  # Define the hard limits
  hard:
    # CPU-related settings
    requests.cpu: "1"
    limits.cpu: "2"
    # Memory-related settings
    requests.memory: 2Gi
    limits.memory: 3Gi
    # GPU-related settings
    # requests.nvidia.com/gpu: 4
[root@master231 01-ResourceQuota]# kubectl apply -f 01-compute-resources.yaml
resourcequota/compute-resources created
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl get quota -n kube-public
NAME AGE REQUEST LIMIT
compute-resources 13s requests.cpu: 0/1, requests.memory: 0/2Gi limits.cpu: 0/2, limits.memory: 0/3Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl get pods -n kube-public
No resources found in kube-public namespace.
[root@master231 01-ResourceQuota]#
1.2 Verify the compute resource quota
[root@master231 01-ResourceQuota]# cat 02-pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pods-nginx
  namespace: kube-public
spec:
  containers:
  - name: web
    image: nginx:1.20.1-alpine
    resources:
      requests:
        cpu: 0.5
        memory: 1Gi
      limits:
        cpu: 1
        memory: 2Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl apply -f 02-pods.yaml
pod/pods-nginx created
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl get pods -n kube-public
NAME READY STATUS RESTARTS AGE
pods-nginx 1/1 Running 0 7s
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl get quota -n kube-public
NAME AGE REQUEST LIMIT
compute-resources 14m requests.cpu: 500m/1, requests.memory: 1Gi/2Gi limits.cpu: 1/2, limits.memory: 2Gi/3Gi
[root@master231 01-ResourceQuota]#
1.3 Exceed the compute quota
[root@master231 01-ResourceQuota]# kubectl get quota -n kube-public
NAME AGE REQUEST LIMIT
compute-resources 17m requests.cpu: 500m/1, requests.memory: 1Gi/2Gi limits.cpu: 1/2, limits.memory: 2Gi/3Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# cat 03-pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pods-alpine
  namespace: kube-public
spec:
  containers:
  - name: c1
    image: alpine
    resources:
      requests:
        cpu: 1.5
        memory: 2Gi
      limits:
        cpu: 2
        memory: 4Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl apply -f 03-pods.yaml
Error from server (Forbidden): error when creating "03-pods.yaml": pods "pods-alpine" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=2,limits.memory=4Gi,requests.cpu=1500m,requests.memory=2Gi, used: limits.cpu=1,limits.memory=2Gi,requests.cpu=500m,requests.memory=1Gi, limited: limits.cpu=2,limits.memory=3Gi,requests.cpu=1,requests.memory=2Gi
[root@master231 01-ResourceQuota]#
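The arithmetic behind the 403 above can be checked by hand: admission adds the new Pod's values to the amounts already used and compares each sum against the hard limit. A quick check in Python, using the numbers taken from the Forbidden error message (CPU in millicores, memory in Gi):

```python
# Numbers copied from the Forbidden error above.
used      = {"requests.cpu": 500,  "limits.cpu": 1000, "requests.memory": 1, "limits.memory": 2}
requested = {"requests.cpu": 1500, "limits.cpu": 2000, "requests.memory": 2, "limits.memory": 4}
hard      = {"requests.cpu": 1000, "limits.cpu": 2000, "requests.memory": 2, "limits.memory": 3}

# Every resource whose (used + requested) sum exceeds the hard limit:
violations = [r for r in hard if used[r] + requested[r] > hard[r]]
print(violations)  # all four tracked resources would be exceeded
```

Because every tracked resource would go over its limit, the Pod is rejected even though, for example, limits.cpu alone (1000 + 2000 > 2000) would already have been enough.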
2. Storage resource quota example
2.1 Create the storage resource quota
[root@master231 01-ResourceQuota]# cat 04-storage-reources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-resources
  namespace: kube-public
spec:
  hard:
    requests.storage: "10Gi"
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl apply -f 04-storage-reources.yaml
resourcequota/storage-resources created
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl get quota -n kube-public storage-resources
NAME AGE REQUEST LIMIT
storage-resources 7s requests.storage: 0/10Gi
[root@master231 01-ResourceQuota]#
2.2 Verify the storage resource quota
[root@master231 01-ResourceQuota]# kubectl get quota -n kube-public storage-resources
NAME AGE REQUEST LIMIT
storage-resources 71s requests.storage: 0/10Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# cat 05-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl apply -f 05-pvc.yaml
persistentvolumeclaim/pvc-demo created
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl get quota -n kube-public storage-resources
NAME AGE REQUEST LIMIT
storage-resources 83s requests.storage: 8Gi/10Gi
[root@master231 01-ResourceQuota]#
2.3 Exceed the storage quota
[root@master231 01-ResourceQuota]# kubectl get quota -n kube-public storage-resources
NAME AGE REQUEST LIMIT
storage-resources 83s requests.storage: 8Gi/10Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# cat 06-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-02
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl apply -f 06-pvc.yaml
Error from server (Forbidden): error when creating "06-pvc.yaml": persistentvolumeclaims "pvc-demo-02" is forbidden: exceeded quota: storage-resources, requested: requests.storage=3Gi, used: requests.storage=8Gi, limited: requests.storage=10Gi
[root@master231 01-ResourceQuota]#
3. Object count quota example
3.1 Create the object count quota
[root@master231 01-ResourceQuota]# cat 07-object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: kube-public
spec:
  hard:
    pods: "10"
    count/deployments.apps: "3"
    count/services: "3"
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl apply -f 07-object-counts.yaml
resourcequota/object-counts created
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl -n kube-public get quota object-counts
NAME AGE REQUEST LIMIT
object-counts 30s count/deployments.apps: 0/3, count/services: 0/3, pods: 1/10
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl get pods -n kube-public
NAME READY STATUS RESTARTS AGE
pods-nginx 1/1 Running 0 33m
[root@master231 01-ResourceQuota]#
3.2 Verify the object count quota
[root@master231 01-ResourceQuota]# kubectl -n kube-public get pods,deployment
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-55fd5fd97c-4fbnb 1/1 Running 0 2m36s
pod/nginx-deployment-55fd5fd97c-6flm6 1/1 Running 0 2m36s
pod/nginx-deployment-55fd5fd97c-8cxn9 1/1 Running 0 2m36s
pod/pods-nginx 1/1 Running 0 51m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 2m36s
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl -n kube-public get quota
NAME AGE REQUEST LIMIT
compute-resources 65m requests.cpu: 800m/1, requests.memory: 1324Mi/2Gi limits.cpu: 1600m/2, limits.memory: 2648Mi/3Gi
object-counts 18m count/deployments.apps: 1/3, count/services: 0/3, pods: 4/10
storage-resources 25m requests.storage: 8Gi/10Gi
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl -n kube-public get quota object-counts
NAME AGE REQUEST LIMIT
object-counts 18m count/deployments.apps: 1/3, count/services: 0/3, pods: 4/10
[root@master231 01-ResourceQuota]#
3.3 Exceed the object count quota
After scaling the Deployment up to 10 replicas, the Pod count cannot reach 10, because the limits.cpu of the "compute-resources" quota has already hit its ceiling:
[root@master231 01-ResourceQuota]# kubectl -n kube-public get quota
NAME AGE REQUEST LIMIT
compute-resources 66m requests.cpu: 1/1, requests.memory: 1524Mi/2Gi limits.cpu: 2/2, limits.memory: 3048Mi/3Gi
object-counts 19m count/deployments.apps: 1/3, count/services: 0/3, pods: 6/10
storage-resources 25m requests.storage: 8Gi/10Gi
[root@master231 01-ResourceQuota]#
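The stall can be derived from the quota output: each nginx-deployment replica appears to carry a 200m CPU limit (inferred from the deltas in the quota figures above, so the per-replica values are an assumption), while the standalone pods-nginx Pod already holds limits.cpu: 1. With limits.cpu hard-capped at 2, only 5 replicas fit:

```python
# Per-replica value inferred from the quota deltas above (assumption).
replica_limit_cpu  = 200    # millicores per nginx-deployment replica
hard_limits_cpu    = 2000   # limits.cpu: "2" from compute-resources
used_by_pods_nginx = 1000   # the standalone pods-nginx Pod

# Replicas that fit under the remaining limits.cpu budget:
max_replicas = (hard_limits_cpu - used_by_pods_nginx) // replica_limit_cpu
print(max_replicas)  # -> 5, matching pods: 6/10 (5 replicas + pods-nginx)
```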
If the other two quotas are not a concern here, you can simply delete them:
[root@master231 01-ResourceQuota]# kubectl -n kube-public delete quota compute-resources storage-resources
resourcequota "compute-resources" deleted
resourcequota "storage-resources" deleted
[root@master231 01-ResourceQuota]#
[root@master231 01-ResourceQuota]# kubectl -n kube-public get quota
NAME AGE REQUEST LIMIT
object-counts 22m count/deployments.apps: 1/3, count/services: 0/3, pods: 6/10
[root@master231 01-ResourceQuota]#
Testing again shows the effect below: the Deployment created only 9 of its 10 replicas, because together with pods-nginx the namespace has reached its pods: 10/10 cap.
[root@master231 01-ResourceQuota]# kubectl -n kube-public get quota,deploy,pods
NAME AGE REQUEST LIMIT
resourcequota/object-counts 25m count/deployments.apps: 1/3, count/services: 0/3, pods: 10/10
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 9/10 9 9 9m54s
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-55fd5fd97c-4fbnb 1/1 Running 0 9m54s
pod/nginx-deployment-55fd5fd97c-6flm6 1/1 Running 0 9m54s
pod/nginx-deployment-55fd5fd97c-8cxn9 1/1 Running 0 9m54s
pod/nginx-deployment-55fd5fd97c-8fwvt 1/1 Running 0 6m37s
pod/nginx-deployment-55fd5fd97c-l844s 1/1 Running 0 2m32s
pod/nginx-deployment-55fd5fd97c-qkn2x 1/1 Running 0 2m32s
pod/nginx-deployment-55fd5fd97c-tpcgt 1/1 Running 0 2m32s
pod/nginx-deployment-55fd5fd97c-wg4fq 1/1 Running 0 2m32s
pod/nginx-deployment-55fd5fd97c-xvhrs 1/1 Running 0 6m36s
pod/pods-nginx 1/1 Running 0 58m
[root@master231 01-ResourceQuota]#