A step-by-step guide to deleting a namespace stuck in the "Terminating" state in a Kubernetes cluster.
1. Background

In a Kubernetes cluster, you sometimes want to delete a long-unused namespace to free its resources, only to find that it refuses to go away: the namespace sits in the Terminating state, and even a forced delete makes no difference. I ran into exactly this situation; here is how I finally removed it.

#kubectl get ns
NAME                   STATUS        AGE
default                Active        44d
kube-node-lease        Active        44d
kube-public            Active        44d
kube-system            Active        44d
kubernetes-dashboard   Active        44d
rook-ceph              Terminating   15d
2. Deleting with "kubectl delete ns rook-ceph"

The command never completes; it simply hangs, and I eventually had to abort it with Ctrl+C.

#kubectl delete ns rook-ceph 
namespace "rook-ceph" deleted
......
3. Forcing deletion with "kubectl delete ns rook-ceph --grace-period=0 --force"
#kubectl delete ns rook-ceph --grace-period=0 --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
namespace "rook-ceph" force deleted
...............

Again, the namespace could not be deleted; the command hung until I aborted it with Ctrl+C.
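
Before resorting to the API server, it can help to see which objects are still holding the namespace open. A commonly used one-liner for this (assuming the stuck namespace is rook-ceph) is:

```shell
# List every namespaced resource type still present in the stuck namespace.
# The objects listed here usually carry the finalizers that block deletion.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph
```

In this case the output would have pointed at the remaining ceph.rook.io custom resources, which matches the namespace's status conditions shown in step 4.1.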

4. Deleting via the API server's finalize endpoint (this succeeds)

4.1 Dump the namespace with "kubectl get ns rook-ceph -o json > rook-ceph.json". This converts the object to JSON and saves it in the current directory.

#kubectl get ns
NAME                   STATUS        AGE
default                Active        44d
kube-node-lease        Active        44d
kube-public            Active        44d
kube-system            Active        44d
kubernetes-dashboard   Active        44d
rook-ceph              Terminating   15d
#kubectl get ns rook-ceph -o json > rook-ceph.json
#cat rook-ceph.json
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"rook-ceph\"}}\n"
        },
        "creationTimestamp": "2024-03-23T10:01:56Z",
        "deletionTimestamp": "2024-04-07T13:15:10Z",
        "labels": {
            "kubernetes.io/metadata.name": "rook-ceph"
        },
        "name": "rook-ceph",
        "resourceVersion": "658949",
        "uid": "66c4eb8a-33ac-4cd3-a7b6-89398b739d6b"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "conditions": [
            {
                "lastTransitionTime": "2024-04-08T03:30:14Z",
                "message": "Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1",
                "reason": "DiscoveryFailed",
                "status": "True",
                "type": "NamespaceDeletionDiscoveryFailure"
            },
            {
                "lastTransitionTime": "2024-04-07T13:15:41Z",
                "message": "All legacy kube types successfully parsed",
                "reason": "ParsedGroupVersions",
                "status": "False",
                "type": "NamespaceDeletionGroupVersionParsingFailure"
            },
            {
                "lastTransitionTime": "2024-04-08T02:31:11Z",
                "message": "All content successfully deleted, may be waiting on finalization",
                "reason": "ContentDeleted",
                "status": "False",
                "type": "NamespaceDeletionContentFailure"
            },
            {
                "lastTransitionTime": "2024-04-07T13:15:41Z",
                "message": "Some resources are remaining: cephblockpools.ceph.rook.io has 1 resource instances, cephclusters.ceph.rook.io has 1 resource instances, cephfilesystems.ceph.rook.io has 1 resource instances, cephfilesystemsubvolumegroups.ceph.rook.io has 1 resource instances, configmaps. has 1 resource instances, secrets. has 1 resource instances",
                "reason": "SomeResourcesRemain",
                "status": "True",
                "type": "NamespaceContentRemaining"
            },
            {
                "lastTransitionTime": "2024-04-07T13:15:41Z",
                "message": "Some content in the namespace has finalizers remaining: ceph.rook.io/disaster-protection in 2 resource instances, cephblockpool.ceph.rook.io in 1 resource instances, cephcluster.ceph.rook.io in 1 resource instances, cephfilesystem.ceph.rook.io in 1 resource instances, cephfilesystemsubvolumegroup.ceph.rook.io in 1 resource instances",
                "reason": "SomeFinalizersRemain",
                "status": "True",
                "type": "NamespaceFinalizersRemaining"
            }
        ],
        "phase": "Terminating"
    }
}
[root@k8s-master01 ~]#

4.2 Edit rook-ceph.json and set spec.finalizers to an empty array.

#Remove "kubernetes" from the "spec" finalizers in rook-ceph.json, then save.
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
#After editing:
    "spec": {
        "finalizers": []
    },
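
Hand-editing JSON is error-prone. As an alternative sketch (assuming python3 is available where you run kubectl), the same edit can be scripted:

```shell
# Clear spec.finalizers in rook-ceph.json in place, instead of editing by hand.
python3 - rook-ceph.json <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    ns = json.load(f)

# Same effect as deleting the "kubernetes" entry manually.
ns["spec"]["finalizers"] = []

with open(path, "w") as f:
    json.dump(ns, f, indent=2)
EOF
```

Tools such as jq can do the same in one line; python3 is used here only because it is preinstalled on most distributions.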

4.3 Run "kubectl proxy" to start a local proxy to the kube-apiserver; keep it running until the delete call below has completed, then stop it.

#kubectl proxy 
Starting to serve on 127.0.0.1:8001

4.4 In another terminal, call the finalize endpoint to delete the Terminating namespace: "curl -k -H "Content-Type: application/json" -X PUT --data-binary @rook-ceph.json http://127.0.0.1:8001/api/v1/namespaces/rook-ceph/finalize".

Note: replace rook-ceph.json with your own JSON file from step 4.1, 127.0.0.1:8001 with the address of your own kubectl proxy from step 4.3, and rook-ceph with the name of the namespace you want to delete.

[root@k8s-master01 ~]#curl -k -H "Content-Type: application/json" -X PUT --data-binary @rook-ceph.json http://127.0.0.1:8001/api/v1/namespaces/rook-ceph/finalize
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "rook-ceph",
    "uid": "66c4eb8a-33ac-4cd3-a7b6-89398b739d6b",
    "resourceVersion": "658949",
    "creationTimestamp": "2024-03-23T10:01:56Z",
    "deletionTimestamp": "2024-04-07T13:15:10Z",
    "labels": {
      "kubernetes.io/metadata.name": "rook-ceph"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"rook-ceph\"}}\n"
    },
    "managedFields": [
      {
        "manager": "kubectl-create",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2024-03-23T10:01:56Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:labels": {
              ".": {},
              "f:kubernetes.io/metadata.name": {}
            }
          }
        }
      },
      {
        "manager": "kubectl-client-side-apply",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2024-03-23T11:01:37Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:kubectl.kubernetes.io/last-applied-configuration": {}
            }
          }
        }
      },
      {
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2024-04-08T03:30:14Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:status": {
            "f:conditions": {
              ".": {},
              "k:{\"type\":\"NamespaceContentRemaining\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceDeletionContentFailure\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceDeletionDiscoveryFailure\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceDeletionGroupVersionParsingFailure\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              },
              "k:{\"type\":\"NamespaceFinalizersRemaining\"}": {
                ".": {},
                "f:lastTransitionTime": {},
                "f:message": {},
                "f:reason": {},
                "f:status": {},
                "f:type": {}
              }
            }
          }
        },
        "subresource": "status"
      }
    ]
  },
  "spec": {},
  "status": {
    "phase": "Terminating",
    "conditions": [
      {
        "type": "NamespaceDeletionDiscoveryFailure",
        "status": "True",
        "lastTransitionTime": "2024-04-08T03:30:14Z",
        "reason": "DiscoveryFailed",
        "message": "Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1"
      },
      {
        "type": "NamespaceDeletionGroupVersionParsingFailure",
        "status": "False",
        "lastTransitionTime": "2024-04-07T13:15:41Z",
        "reason": "ParsedGroupVersions",
        "message": "All legacy kube types successfully parsed"
      },
      {
        "type": "NamespaceDeletionContentFailure",
        "status": "False",
        "lastTransitionTime": "2024-04-08T02:31:11Z",
        "reason": "ContentDeleted",
        "message": "All content successfully deleted, may be waiting on finalization"
      },
      {
        "type": "NamespaceContentRemaining",
        "status": "True",
        "lastTransitionTime": "2024-04-07T13:15:41Z",
        "reason": "SomeResourcesRemain",
        "message": "Some resources are remaining: cephblockpools.ceph.rook.io has 1 resource instances, cephclusters.ceph.rook.io has 1 resource instances, cephfilesystems.ceph.rook.io has 1 resource instances, cephfilesystemsubvolumegroups.ceph.rook.io has 1 resource instances, configmaps. has 1 resource instances, secrets. has 1 resource instances"
      },
      {
        "type": "NamespaceFinalizersRemaining",
        "status": "True",
        "lastTransitionTime": "2024-04-07T13:15:41Z",
        "reason": "SomeFinalizersRemain",
        "message": "Some content in the namespace has finalizers remaining: ceph.rook.io/disaster-protection in 2 resource instances, cephblockpool.ceph.rook.io in 1 resource instances, cephcluster.ceph.rook.io in 1 resource instances, cephfilesystem.ceph.rook.io in 1 resource instances, cephfilesystemsubvolumegroup.ceph.rook.io in 1 resource instances"
      }
    ]
  }
}

4.5 Check again afterwards: to your pleasant surprise, the Terminating namespace is now gone.

#kubectl get ns
NAME                   STATUS   AGE
default                Active   44d
kube-node-lease        Active   44d
kube-public            Active   44d
kube-system            Active   44d
kubernetes-dashboard   Active   44d
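
For reference, steps 4.1 through 4.4 can also be collapsed into a single pipeline that reaches the finalize subresource through kubectl itself, so no separate proxy process is needed. This is a sketch, assuming python3 is available and the stuck namespace is rook-ceph:

```shell
# Fetch the namespace, clear spec.finalizers, and PUT the result to the
# finalize subresource in one go; kubectl handles API server authentication,
# so kubectl proxy is not required.
kubectl get ns rook-ceph -o json \
  | python3 -c 'import json,sys; d=json.load(sys.stdin); d["spec"]["finalizers"]=[]; print(json.dumps(d))' \
  | kubectl replace --raw "/api/v1/namespaces/rook-ceph/finalize" -f -
```

Note that clearing the namespace finalizer does not delete the remaining ceph.rook.io objects themselves; their backing resources may need separate cleanup.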

5. Wishing you painless deletion of namespaces stuck in the Terminating state.
