A step-by-step guide to deleting a namespace stuck in the "Terminating" state in a Kubernetes cluster.

Summary: a step-by-step guide to deleting a namespace stuck in the "Terminating" state in a Kubernetes cluster.
1. Background

In a Kubernetes cluster you sometimes want to delete a long-unused namespace to free up resources, only to find that it will not go away: repeated delete attempts leave the namespace stuck in the Terminating state, and even a force delete does not help. The author ran into exactly this situation; here is how the namespace was finally removed.

#kubectl get ns
NAME                   STATUS        AGE
default                Active        44d
kube-node-lease        Active        44d
kube-public            Active        44d
kube-system            Active        44d
kubernetes-dashboard   Active        44d
rook-ceph              Terminating   15d
2. Deleting with "kubectl delete ns rook-ceph"

The namespace is not actually removed: the command just hangs......, and in the end it has to be interrupted with Ctrl+C.

#kubectl delete ns rook-ceph 
namespace "rook-ceph" deleted
......
3. Force-deleting with "kubectl delete ns rook-ceph --grace-period=0 --force"
#kubectl delete ns rook-ceph --grace-period=0 --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
namespace "rook-ceph" force deleted
...............

Again the namespace is not removed and the command just hangs......, until it is interrupted with Ctrl+C. Note that --force only skips waiting for graceful termination; it does not clear the finalizers that are actually blocking the deletion.
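Before reaching for the API directly, it is worth checking what is still left inside the stuck namespace. The following one-liner is a commonly used diagnostic (a sketch, not from the original article; it needs a working kubeconfig, so run it against your own cluster):

```shell
# List every namespaced resource type the server knows about, then query
# each type inside the stuck namespace. Anything printed here is what is
# still blocking the namespace deletion.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph
```

In this case the output would show the remaining Rook CRs (cephclusters, cephblockpools, ...) whose own finalizers are holding the namespace open.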

4. Deleting through the api-server finalize endpoint (this works)

4.1 Export the namespace to be deleted with "kubectl get ns rook-ceph -o json > rook-ceph.json", which dumps it as JSON into rook-ceph.json in the current directory.

#kubectl get ns
NAME                   STATUS        AGE
default                Active        44d
kube-node-lease        Active        44d
kube-public            Active        44d
kube-system            Active        44d
kubernetes-dashboard   Active        44d
rook-ceph              Terminating   15d
#kubectl get ns rook-ceph -o json > rook-ceph.json
#cat rook-ceph.json
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"rook-ceph\"}}\n"
        },
        "creationTimestamp": "2024-03-23T10:01:56Z",
        "deletionTimestamp": "2024-04-07T13:15:10Z",
        "labels": {
            "kubernetes.io/metadata.name": "rook-ceph"
        },
        "name": "rook-ceph",
        "resourceVersion": "658949",
        "uid": "66c4eb8a-33ac-4cd3-a7b6-89398b739d6b"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "conditions": [
            {
                "lastTransitionTime": "2024-04-08T03:30:14Z",
                "message": "Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1",
                "reason": "DiscoveryFailed",
                "status": "True",
                "type": "NamespaceDeletionDiscoveryFailure"
            },
            {
                "lastTransitionTime": "2024-04-07T13:15:41Z",
                "message": "All legacy kube types successfully parsed",
                "reason": "ParsedGroupVersions",
                "status": "False",
                "type": "NamespaceDeletionGroupVersionParsingFailure"
            },
            {
                "lastTransitionTime": "2024-04-08T02:31:11Z",
                "message": "All content successfully deleted, may be waiting on finalization",
                "reason": "ContentDeleted",
                "status": "False",
                "type": "NamespaceDeletionContentFailure"
            },
            {
                "lastTransitionTime": "2024-04-07T13:15:41Z",
                "message": "Some resources are remaining: cephblockpools.ceph.rook.io has 1 resource instances, cephclusters.ceph.rook.io has 1 resource instances, cephfilesystems.ceph.rook.io has 1 resource instances, cephfilesystemsubvolumegroups.ceph.rook.io has 1 resource instances, configmaps. has 1 resource instances, secrets. has 1 resource instances",
                "reason": "SomeResourcesRemain",
                "status": "True",
                "type": "NamespaceContentRemaining"
            },
            {
                "lastTransitionTime": "2024-04-07T13:15:41Z",
                "message": "Some content in the namespace has finalizers remaining: ceph.rook.io/disaster-protection in 2 resource instances, cephblockpool.ceph.rook.io in 1 resource instances, cephcluster.ceph.rook.io in 1 resource instances, cephfilesystem.ceph.rook.io in 1 resource instances, cephfilesystemsubvolumegroup.ceph.rook.io in 1 resource instances",
                "reason": "SomeFinalizersRemain",
                "status": "True",
                "type": "NamespaceFinalizersRemaining"
            }
        ],
        "phase": "Terminating"
    }
}
[root@k8s-master01 ~]#
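The status.conditions array in the dump above already tells you exactly what is blocking deletion. A small jq filter pulls out just the failing conditions (a sketch, assuming jq is installed; the cut-down stand-in file below lets the command run anywhere, but against a real cluster you would point jq at the rook-ceph.json saved above):

```shell
# Stand-in for rook-ceph.json from step 4.1, trimmed to the fields the filter reads.
cat > ns-sample.json <<'EOF'
{"status":{"conditions":[
  {"type":"NamespaceDeletionDiscoveryFailure","status":"True","reason":"DiscoveryFailed"},
  {"type":"NamespaceDeletionGroupVersionParsingFailure","status":"False","reason":"ParsedGroupVersions"},
  {"type":"NamespaceFinalizersRemaining","status":"True","reason":"SomeFinalizersRemain"}
]}}
EOF

# Print only the conditions whose status is "True", i.e. the ones blocking deletion.
jq -r '.status.conditions[] | select(.status == "True") | "\(.type): \(.reason)"' ns-sample.json
# → NamespaceDeletionDiscoveryFailure: DiscoveryFailed
# → NamespaceFinalizersRemaining: SomeFinalizersRemain
```

Here the filter surfaces the two real blockers: stale API discovery for metrics.k8s.io and the remaining Rook finalizers.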

4.2 Edit rook-ceph.json and set spec.finalizers to an empty array.

# Before: remove the "kubernetes" entry from spec.finalizers in rook-ceph.json, then save.
"spec": {
    "finalizers": [
        "kubernetes"
    ]
},
# After the edit:
"spec": {
    "finalizers": []
},
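If you would rather not edit the file by hand, the same change can be scripted (a sketch, assuming jq is installed; the trimmed stand-in file below lets the commands run without a cluster — against a real cluster you would run jq on the rook-ceph.json from step 4.1):

```shell
# Stand-in for rook-ceph.json from step 4.1, trimmed to the relevant fields.
cat > rook-ceph-sample.json <<'EOF'
{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"rook-ceph"},"spec":{"finalizers":["kubernetes"]}}
EOF

# Empty out spec.finalizers — equivalent to the manual edit above.
jq '.spec.finalizers = []' rook-ceph-sample.json > rook-ceph-edited.json

jq -c '.spec' rook-ceph-edited.json   # prints {"finalizers":[]}
```

The edited file is what gets sent to the finalize endpoint in step 4.4.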

4.3 Run kubectl proxy to start a local proxy to the kube-apiserver; keep it running until the delete call below has finished, then stop it.

#kubectl proxy 
Starting to serve on 127.0.0.1:8001

4.4 In another terminal, run curl -k -H "Content-Type: application/json" -X PUT --data-binary @rook-ceph.json http://127.0.0.1:8001/api/v1/namespaces/rook-ceph/finalize to delete the Terminating namespace through the API.

Note: replace rook-ceph.json with your own JSON file from step 4.1, 127.0.0.1:8001 with the address of your own kubectl proxy from step 4.3, and rook-ceph with the name of the namespace you want to delete.

[root@k8s-master01 ~]#curl -k -H "Content-Type: application/json" -X PUT --data-binary @rook-ceph.json http://127.0.0.1:8001/api/v1/namespaces/rook-ceph/finalize
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "rook-ceph",
    "uid": "66c4eb8a-33ac-4cd3-a7b6-89398b739d6b",
    "resourceVersion": "658949",
    "creationTimestamp": "2024-03-23T10:01:56Z",
    "deletionTimestamp": "2024-04-07T13:15:10Z",
    "labels": {
      "kubernetes.io/metadata.name": "rook-ceph"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"rook-ceph\"}}\n"
    },
    "managedFields": [
      {
        "manager": "kubectl-create",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2024-03-23T10:01:56Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:labels": {
              ".": {},
              "f:kubernetes.io/metadata.name": {}
            }
          }
        }
      },
      {
        "manager": "kubectl-client-side-apply",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2024-03-23T11:01:37Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:kubectl.kubernetes.io/last-applied-configuration": {}
            }
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2024-04-08T03:30:14Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:status": {
            "f:conditions": {
              ".": {},
              "k:{\"type\":\"NamespaceContentRemaining\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceDeletionContentFailure\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceDeletionDiscoveryFailure\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceDeletionGroupVersionParsingFailure\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceFinalizersRemaining\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              }
            }
          }
        },
        "subresource": "status"
      }
    ]
  },
  "spec": {},
  "status": {
    "phase": "Terminating",
    "conditions": [
      {
        "type": "NamespaceDeletionDiscoveryFailure",
        "status": "True",
        "lastTransitionTime": "2024-04-08T03:30:14Z",
        "reason": "DiscoveryFailed",
        "message": "Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1"
      },
      {
        "type": "NamespaceDeletionGroupVersionParsingFailure",
        "status": "False",
        "lastTransitionTime": "2024-04-07T13:15:41Z",
        "reason": "ParsedGroupVersions",
        "message": "All legacy kube types successfully parsed"
      },
      {
        "type": "NamespaceDeletionContentFailure",
        "status": "False",
        "lastTransitionTime": "2024-04-08T02:31:11Z",
        "reason": "ContentDeleted",
        "message": "All content successfully deleted, may be waiting on finalization"
      },
      {
        "type": "NamespaceContentRemaining",
        "status": "True",
        "lastTransitionTime": "2024-04-07T13:15:41Z",
        "reason": "SomeResourcesRemain",
        "message": "Some resources are remaining: cephblockpools.ceph.rook.io has 1 resource instances, cephclusters.ceph.rook.io has 1 resource instances, cephfilesystems.ceph.rook.io has 1 resource instances, cephfilesystemsubvolumegroups.ceph.rook.io has 1 resource instances, configmaps. has 1 resource instances, secrets. has 1 resource instances"
      },
      {
        "type": "NamespaceFinalizersRemaining",
        "status": "True",
        "lastTransitionTime": "2024-04-07T13:15:41Z",
        "reason": "SomeFinalizersRemain",
        "message": "Some content in the namespace has finalizers remaining: ceph.rook.io/disaster-protection in 2 resource instances, cephblockpool.ceph.rook.io in 1 resource instances, cephcluster.ceph.rook.io in 1 resource instances, cephfilesystem.ceph.rook.io in 1 resource instances, cephfilesystemsubvolumegroup.ceph.rook.io in 1 resource instances"
      }
    ]
  }
}
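As a side note, recent kubectl versions can talk to the finalize endpoint directly via kubectl replace --raw, which collapses steps 4.1 through 4.4 into a single pipeline and skips kubectl proxy entirely. This is a sketch, not the method used above: it assumes jq is installed and your kubeconfig points at the cluster, so verify it against your kubectl version before relying on it.

```shell
# Fetch the namespace, blank its finalizers, and PUT the result to the
# finalize subresource in one pipeline (no kubectl proxy needed).
kubectl get ns rook-ceph -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/rook-ceph/finalize" -f -
```

The proxy-plus-curl route shown above remains useful when you want to inspect and archive the namespace JSON before touching it.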

4.5 After the call completes, you will be pleasantly surprised to find that the Terminating namespace has been deleted.

#kubectl get ns
NAME                   STATUS   AGE
default                Active   44d
kube-node-lease        Active   44d
kube-public            Active   44d
kube-system            Active   44d
kubernetes-dashboard   Active   44d

5. May you all be able to delete namespaces stuck in the Terminating state with ease.
