1. Background: While setting up Prometheus monitoring in the cluster today, kubectl apply -f <filename> failed with an error. I went through a series of checks on cluster resources, namespaces, permissions, and so on, and even on a brand-new cluster with nothing else deployed, the same deployment still failed.
The first run of kubectl apply -f <filename> reported the error below.
# kubectl apply -f setup/
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
namespace/monitoring created
Error from server (Invalid): error when creating "setup/0alertmanagerCustomResourceDefinition.yaml": CustomResourceDefinition.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
Error from server (Invalid): error when creating "setup/0prometheusCustomResourceDefinition.yaml": CustomResourceDefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
Error from server (Invalid): error when creating "setup/0prometheusagentCustomResourceDefinition.yaml": CustomResourceDefinition.apiextensions.k8s.io "prometheusagents.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
Error from server (Invalid): error when creating "setup/0thanosrulerCustomResourceDefinition.yaml": CustomResourceDefinition.apiextensions.k8s.io "thanosrulers.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes
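Looking at the error, the limit of 262144 bytes (256 KiB) applies to an annotation that apply wants to write, and presumably only the largest CRD manifests trip it. As a quick sanity check (assuming the same setup/ directory from kube-prometheus), the manifest sizes can be compared against that limit:

# Size in bytes of each CRD manifest; files near or above 262144 bytes
# are the ones kubectl apply rejects.
wc -c setup/*CustomResourceDefinition.yaml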
Then kubectl delete -f setup/ --grace-period=0 --force was used to force-delete everything created from those YAML files.
kubectl delete -f setup/ --grace-period=0 --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
customresourcedefinition.apiextensions.k8s.io "alertmanagerconfigs.monitoring.coreos.com" force deleted
customresourcedefinition.apiextensions.k8s.io "podmonitors.monitoring.coreos.com" force deleted
customresourcedefinition.apiextensions.k8s.io "probes.monitoring.coreos.com" force deleted
customresourcedefinition.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" force deleted
customresourcedefinition.apiextensions.k8s.io "scrapeconfigs.monitoring.coreos.com" force deleted
customresourcedefinition.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" force deleted
namespace "monitoring" force deleted
Error from server (NotFound): error when deleting "setup/0alertmanagerCustomResourceDefinition.yaml": customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" not found
Error from server (NotFound): error when deleting "setup/0prometheusCustomResourceDefinition.yaml": customresourcedefinitions.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" not found
Error from server (NotFound): error when deleting "setup/0prometheusagentCustomResourceDefinition.yaml": customresourcedefinitions.apiextensions.k8s.io "prometheusagents.monitoring.coreos.com" not found
Error from server (NotFound): error when deleting "setup/0thanosrulerCustomResourceDefinition.yaml": customresourcedefinitions.apiextensions.k8s.io "thanosrulers.monitoring.coreos.com" not found
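Before retrying, I wanted to be sure the partially created objects were really gone (the NotFound errors above only concern the four CRDs that were never created in the first place). A minimal check, where the grep pattern is simply the API group used by these CRDs:

# The CRD list should show no monitoring.coreos.com entries,
# and the namespace lookup should report NotFound.
kubectl get crd | grep monitoring.coreos.com
kubectl get namespace monitoring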
Re-running the creation with kubectl create -f setup/ succeeded.
# kubectl create -f setup/
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
namespace/monitoring created
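Before deploying anything that depends on these CRDs, it may be worth waiting until the API server reports them as Established. This is plain kubectl functionality, shown here with the broad --all selector rather than naming each CRD:

# Block until every CRD in the cluster reaches the Established condition
# (or the timeout expires).
kubectl wait --for=condition=Established crd --all --timeout=120s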
2. After searching through various related material, I found an article on 知识星球 (Zhishi Xingqiu) that explains the significant difference between create and apply when creating resources from YAML.
1. kubectl create simply creates the resource object. It does not inspect or generate any extra annotation data, so it does not fail because the annotations are too large.
2. kubectl apply does more work on the resource object. When it creates or updates a resource, it stores the object's entire configuration (including annotation data) in the kubectl.kubernetes.io/last-applied-configuration annotation (illustrated in the sketch below). On later apply operations, kubectl compares the configuration saved in that annotation with the current configuration to decide which fields need updating and which do not. For this reason, if the configuration is too large and exceeds the Kubernetes limit on annotation size, kubectl apply reports an error.
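The annotation behaviour is easy to see on a small object. A minimal sketch, using a hypothetical throwaway namespace called demo: apply it, then read back the annotation that apply wrote, which contains the full applied configuration of the object.

# Apply a tiny object, then print its last-applied-configuration annotation.
kubectl create namespace demo --dry-run=client -o yaml | kubectl apply -f -
kubectl get namespace demo \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
# Remove the throwaway namespace afterwards.
kubectl delete namespace demo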
To summarize: if your resource's configuration is too large for the annotation, you can fall back to kubectl create to create it, but keep in mind that you will no longer be able to update it with kubectl apply, since apply will keep failing on the oversized annotation. Use another command instead, such as kubectl edit or kubectl patch, to update the resource object.
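As a rough sketch of the patch route: kubectl patch changes individual fields directly on the live object and does not write the last-applied-configuration annotation. The label key and value below are purely illustrative, not something the Prometheus manifests require.

# Merge-patch a single field on one of the large CRDs created above.
kubectl patch crd prometheuses.monitoring.coreos.com \
  --type=merge -p '{"metadata":{"labels":{"managed-by":"manual"}}}'
# kubectl edit crd prometheuses.monitoring.coreos.com opens the same object
# in an editor for interactive changes.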