Cloud Native | Kubernetes | Kubernetes Cluster Upgrade + Certificate Renewal (Ubuntu 18.04 + kubeadm)


Preface:

Kubernetes clusters are generally divided into two kinds according to how they were deployed: clusters built with kubeadm and clusters built from binaries. For a binary cluster, upgrades and certificate renewal are entirely manual work, whereas a kubeadm cluster can be upgraded (with a few manual steps, but not many) and have its certificates renewed largely automatically.

Purpose and significance of upgrading a Kubernetes cluster:

1.

Some versions have security vulnerabilities, and the cluster must be upgraded to get rid of them.

For example, Kubernetes CVE-2022-3172, CVE-2019-11247, CVE-2018-1002105, CVE-2020-8559, and so on.

These can be looked up on Alibaba Cloud, at the following link:

[CVE Security] Vulnerability fix announcements - Container Service for Kubernetes - Alibaba Cloud Help Center

Generally speaking, whether it is a Kubernetes cluster or any other kind of cluster, software such as Tomcat, Elasticsearch or SSH, or an operating system such as CentOS 7, Ubuntu or Debian, mitigating security vulnerabilities is basically done by upgrading to a newer version.

One more remark: from the software maker's point of view, the incentives to release new versions are the pressure of security vulnerabilities on the one hand, and more and newer features plus a more attractive interface on the other. From the user's point of view, the first incentive to upgrade is security vulnerabilities, then the newer features, and only last the nicer interface.

2.

More powerful features

For example, Kubernetes has evolved from the early 1.1 release to today's 1.26, and its functionality has undoubtedly grown; some versions are simply easier and more pleasant to use and can raise productivity. Many new features can only be obtained by upgrading the cluster.

3.

A better-looking interface

For example, the Kubernetes Dashboard component looks different in every release. Although the overall style stays similar, a particular web UI may win some people over, and getting it also requires upgrading the version (here, the component's version).

4.

For a Kubernetes cluster, an upgrade also refreshes the validity period of the certificates inside the cluster, so at times this particular mechanism is itself a small extra motivation to upgrade.




OK, the above is a brief summary of why a Kubernetes cluster is upgraded and what the upgrade is for. Below, a kubernetes-1.22.0 cluster deployed on Ubuntu 18.04 is upgraded step by step (first to 1.22.2, then to 1.22.10) to show how to upgrade a kubeadm cluster and renew all of its certificates.

I.

Environment overview

Operating system: Ubuntu 18.04

Cluster architecture: three nodes, one master and two workers; initial Kubernetes version 1.22.0, deployed with kubeadm

IP addresses: 192.168.123.150, 192.168.123.151, 192.168.123.152

Cluster status: all three nodes are down because the certificates have expired

II.

The relationship between a Kubernetes cluster and its certificates

Out of security considerations, Kubernetes has been built around RBAC (role-based access control) from early on; since roughly version 1.16 or 1.17 (I don't remember exactly which) it has effectively been the default, which means communication between the cluster's components is certificate-based. Take the kubelet configuration file as an example (an arbitrary kubelet.conf):

 cat /etc/kubernetes/kubelet.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.123.150:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:k8s-master
  name: system:node:k8s-master@kubernetes
current-context: system:node:k8s-master@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:k8s-master
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKVENDQWcyZ0F3SUJBZ0lJS0h2czRzK3FaR0l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTURneApGVEFUQmdOVkJBb1RESE41YzNSbGJUcHViMlJsY3pFZk1CMEdBMVVFQXhNV2MzbHpkR1Z0T201dlpHVTZhemh6CkxXMWhjM1JsY2pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHRQVkZIdWJ1KzgKNkJCZ0I4UGJadzV0OGF2S3VxeklQMHRPN2tRbVN0ZkpTbjNxdjFHS0xwL2U4UllLbkpuUFRzaytYKzRPcitxSwoyeDhXQWxKUGtjQUFXNEJxUlZOT3R5WDhaZHRYc3J2d3VVaGRoZEZkU3dFV3ZzaWRVOEJJamRwQktKVkN0dHR0CkJPQ0hobXBSY0VyQ1JqYkt2Zy90WE9YcDE1cmx0aS92ZHJHaXpKRUt2cUJQZElpVU05UUh3WEZqdmFqMXFnT2YKaUpraTRlZDBnZ3AwSmtxM0grSXp6MFZNUG9YdTJXdnBTTU81dmtGbi9Lay92bUxhaHRwaTlJeCtPN1VMYUxTNQpsdEVGaVJXQnEyT1R2N3YzMndEVUJUcFdwN25wWlU0WE9ra3ltaW5VT3E3ZGhVZWxrQTMwWVhucjB0MklwOWI4ClgxRVNuSEtKMHMwQ0F3RUFBYU5XTUZRd0RnWURWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0cKQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVV2eG8wcGdUZHpuLzZQMzJPdjliSwo2bm5XK3Rnd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFORk5MVXA5SzRMM0s0NTRXbmp2OHprNVVhREU0ZGovCnpmRXBRdHRWSHRLV3VreHZTRWRRTG15RGZURDFoNm8veGFqNkc3Wk1YbmFWUTQ1WmkzZ3F5ck1ibTdETHhxWDYKV0pLdkZkNUJNY2F4YW16dWhoN0I4R2xrMkNsNUZsK3Z0QnUxREtya293blpydFBTeGFaVjhsUmo2bmFHU1k4RQo4RVVqUWN5VXF3Z2duZWwwanNoaFVKOGdKMHV0MXN5UVAxWEJJcEpsTEZ5b0dDQmNuWkFvdE9oWnFWTWwxcTdOClQ5aVZEVy9IZ2xPbll0WktTbXREN2JvMk4rSDZxNmhaUVFJWmVzWVJxNUoxV0IwclR5SkkzbHJEV3J2QWRrcDEKTUZrekJJK3d2WHF2MEdtTFNYNzRudU4wZnY2K0VvQkdUakFVbkNIdUxvQ0RQNmQxTVBYL3ZZcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdTA5VVVlNXU3N3pvRUdBSHc5dG5EbTN4cThxNnJNZy9TMDd1UkNaSzE4bEtmZXEvClVZb3VuOTd4RmdxY21jOU95VDVmN2c2djZvcmJIeFlDVWsrUndBQmJnR3BGVTA2M0pmeGwyMWV5dS9DNVNGMkYKMFYxTEFSYSt5SjFUd0VpTjJrRW9sVUsyMjIwRTRJZUdhbEZ3U3NKR05zcStEKzFjNWVuWG11VzJMKzkyc2FMTQprUXErb0U5MGlKUXoxQWZCY1dPOXFQV3FBNStJbVNMaDUzU0NDblFtU3JjZjRqUFBSVXcraGU3WmErbEl3N20rClFXZjhxVCsrWXRxRzJtTDBqSDQ3dFF0b3RMbVcwUVdKRllHclk1Ty91L2ZiQU5RRk9sYW51ZWxsVGhjNlNUS2EKS2RRNnJ0MkZSNldRRGZSaGVldlMzWWluMXZ4ZlVSS2Njb25TelFJREFRQUJBb0lCQVFDcHNWUFZxaW9zM1RwcwpZMk9GaDhhVXB2d3p3OFZjOVVtS1EyYk9yTlpQS2hobmZQMTR0TFJLdCtJNE1zTHZBWVlDQVpWTkNWZE1LQ0lkCnhvV3g1azVINE1zRXlzSWxtQUdLMDErLzJIS2ZtNVZ3UHZJVjIrd3dmMWUyVGZuckVKQWFzNzg5Z2lSQkpFSXYKMi9mbGFBUlFaakxRUHRyemVQb1pmTUdNbmlGd3lIVVJVQWZkL0U0QlFGNS8zUWVFeEpEVkNtaEZEOG5YRXlqRwpLSG5CSlI1TGFCeFpPcmh0bDVqckRZRGxtaWcyZGNPY21TOU5xN2xCUVExb3NmR1FhbjlTODQ1VnlFdjQvcWZrCjAwZkpJN0JpSStDcWNtM0lHRmhNMG5lNXlvVE8zSEo4Sy94bTdpRmxIcVg1cXZaYjJnc2hTRzZIR1E4YjZPV2UKT09id0xxZnRBb0dCQU1VNisvaEdsaVJrNGd4S3NmUE12dERsV3hGTFVWaTdlK1BGQlJ6Zm5FRlh3U055dlI4ZgpYTmlWZStUOFQvcTgwUnZrTmQ1QVNnd1NBTkwraUZnN0R4SXIzUC9GSkc5SjYwcURYMDZzZi9tKzM5c2VzZFp0Ck9Fanl6UXBmNnVtRGVKMHhvNDlrVWtiTkNvQUNPamE4QndDeFdRcjJHb0ZyL0ptZGEra0JteWRUQW9HQkFQTWYKbDkzbXo1QUpiVkE3cVRWckRiVDNDWEsxRjQyV0RuQ0FDRU5kcE8vZlhOZWtuWEhWcjZiRUl6UTluelQ4R2hmOApUUlBtb1VJTVNVVGpHZmdUMTlkbTliODdZb2NkMENCd0pZejlwcmtRaUpMdm13QS82cytQalkvVUwvSHdrT05QCllvcm9YclB5WVpIeHNjMW54di9lRWJic0UyWjdVSXpXbGpxK0lIbGZBb0dBYlN1OUZTeGRKMEFBTDdXWTBzNWUKUU5yemtac1RKLzUvRVJDWlIrWXVZNnpqWjIrM1oyYkF5ZEhVaG1kekRlTStEQ1pCK3dleTlRTnlHVmh5dUFQWQp6OElmemlPZGkweHJSUTk2emQyRjZRUFNmVU44Uktpb0l4amlqZitSMURmRnA1MDJYOFMwRmlTZ3owSnNYcWV0CmFLRENIT01rd01hNVIzNXZvTVlXejZrQ2dZRUE1TWdiR2ZhRDVjL3BMUEluaFp3SzF2c015Z045ZVgvMmNJa2EKdllIV250ODZ0N1l4YnBpZDVUbDJ3MGNsbFMrU3duVnFkc3ExZnJpZkRoTURNZjVDUTNHZzJXWmhqakpRMHVXVgpnSHFFdEd2SmlUT3VVV3JVWktONm5Ca1pVUHVHN0ZDY3M0aDg3YXF0aEMvRG1ENEs5bVlibDEzSjE4czgvbnRECi9WMUNvOU1DZ1lFQWlCb2tPTEFBNlFDbHNYOXcwNlMrZ1pXT0FpbDlRRXhHalhVR2o0ZEtielpibmgrWnkwbUoKNW9LRHpydTFKeGdoa1JtQ0ZiZmx6aHpMMktWU2xIbXNPYWZDSU1JSHJSZTdic0gvdjg2SVpkYXlnTWxLckJ2LwpXUVMxdmJoQjVvNGdwRkE5aG0rMFcwTW9ZOUVsaERPUG52ajFxT2lTRVArN0dXdDAxOW5HdWdzPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

As you can see, this configuration file carries a CA certificate, a client certificate and a client key for authenticating to the API server. In other words, if the file is missing these credentials or the certificates have expired, the kubelet service will not start successfully.
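To see exactly when the client certificate embedded in such a kubeconfig expires, the base64 field can be decoded and inspected with openssl. A minimal sketch, using the client-certificate-data field visible in the file above:

# decode the embedded client certificate and print its expiry date
grep 'client-certificate-data' /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate

An expired notAfter date here is exactly why the kubelet refuses to start.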

Likewise, the controller-manager carries certificates to talk to the API server. If the certificates in its configuration file are wrong or expired, the controller-manager will not start either, even though it runs as a static pod.

 cat /etc/kubernetes/controller-manager.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.123.150:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lJVXJjNDVZUVF1NE13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUtkU2d1d1NDMVhVSmlvRjIvbFhaNHQwWjBQRVEvc0QKWk9ZTzdPdlUwbHVSYngzcW1VaEN6NjlPbFBtY0h0OUllRHl2a09qYzFBZ2FlZWwzR3VnSjlKZ2xFM0RmZm5hegpZVVBrOXVzTVBIQjZtMUhNR2ZjUVloZkhLUG9TNmk5bEVWZUhKOXEvOTVaSUdGKzVMR0F2eXVBUWg1NmZYT1hrCjRZRTRsSmYzRGhzdGRNeEtDNXVZTXpxazR3RmlNVkNxYkRhdlVqc3VmZzhhYlFQQWFVQ3NieWFFMm04RVllOGgKb2VkTkdVdml5WkxrQUM2ckM4bGVIeGNBbjB2ZlVvYXU3UzJFdWpNK01jV3RLZ1o4NmVVdjJaU2dXemFCa1VWZAo4QlRxK0VDOUtlb1pWRHJnUlBpejVub3FsVDBYTTJacUVJK01Ud1lTRWt3aTJ5WlpqNS9yWFQ4Q0F3RUFBYU5XCk1GUXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIKL3dRQ01BQXdId1lEVlIwakJCZ3dGb0FVdnhvMHBnVGR6bi82UDMyT3Y5Yks2bm5XK3Rnd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR0d3NGxaaTZDSzB5YWpsOGNlUEg3a20vY1VLcGFFTld1aWJyVTQyNEVDM1A0UkFZZlJXCmRranZjNm9JbVhzR2V2T2Y0ZWlJU0dxaWNOb254N2RkWUxLY2tDaWxLQmcwS2hyZGFRT3o3N3ZCQitvamczbmgKMHByb05oYW12dkVpc0lUY212cmdzNTZqMk1Id2lUK3ZHeXFHbWxPOG9TRHZmWVFnMUVqTkRxWlVEd0g3OFlHYwowT0h5cXU3SW1hYngvKzdWOGcvMmlBS3NEVVVja3I3UHVMWWI3RlA0ZlZvVjlDWkIzVHI3bXFRQ2FrUmJmMnF1CjUvd3pEMG9lYjFBeHV0aUFSVjBlM2JBZUxXV0tqckEyNW9ISVBCRW1zTEFQSmtlMDVlRk9LK05ZUHBMdjBNU04KVnlzaXEzcVl0RkxZSzRaN0kyaGgrKzc5MXM4Y2g1TDNFeGM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcDFLQzdCSUxWZFFtS2dYYitWZG5pM1JuUThSRCt3Tms1ZzdzNjlUU1c1RnZIZXFaClNFTFByMDZVK1p3ZTMwaDRQSytRNk56VUNCcDU2WGNhNkFuMG1DVVRjTjkrZHJOaFErVDI2d3c4Y0hxYlVjd1oKOXhCaUY4Y28raExxTDJVUlY0Y24yci8zbGtnWVg3a3NZQy9LNEJDSG5wOWM1ZVRoZ1RpVWwvY09HeTEwekVvTAptNWd6T3FUakFXSXhVS3BzTnE5U095NStEeHB0QThCcFFLeHZKb1RhYndSaDd5R2g1MDBaUytMSmt1UUFMcXNMCnlWNGZGd0NmUzk5U2hxN3RMWVM2TXo0eHhhMHFCbnpwNVMvWmxLQmJOb0dSUlYzd0ZPcjRRTDBwNmhsVU91QkUKK0xQbWVpcVZQUmN6Wm1vUWo0eFBCaElTVENMYkpsbVBuK3RkUHdJREFRQUJBb0lCQUZ0Vm9QMjRBOVFBRUMwVQpNYlZ6enFQREVMTmZLVFNWNzdmZElkckJ1Mm9jZ3lremJDU1R3OGFRQUtZWVlJbkZoMHlwRVZMcmFCcGNTWHYxCmRneC9rcktTV29CY255MndVVUc4ZEVSdDAzZ2FsVG9iVFhrZHlrM3NleU8ydTNyUGtwM1N1eUNmZFVqbFpkaXEKdmR4cmVqVEJFU2EzR3dDcTVhV2grd3JRNHpSVnc5eERTVGF0MXA1cS82UFRPcHhkWElJRWhHQVd4SEE2RnVnSgpuR2RrMWFxT1ZweFFSZVZ1QVluNVlYemZ0MXByU0wwdWQwOFRNd3FaRUowa0dabUdIODJDV0tTRnQ4cEs1MnJ2CnpURDFnWE5hZzhrNGF3czhrQXpnT1lWek8wMHR2SjdVQjNFcVBSSThKQVJucWpyTGpNaTRqeDVMZGFQZkc2VUQKMERwNFdsRUNnWUVBelRYT2FJdUQ2QUhkaTNPaFlpQ2puOUwwM3ZOUGY1c1JtRmJzclc2UFFxdEVTTit2aVFxMApFMVREU2REM3BpSndqSC9zQTNhRW5ISStRZVYxT1ZFbVR3YkxpaFhOVWtYTWo2OWRmVHdxYTd1MnJEWm0zQnF6CkVrYUI3N0FHb3BvR2hRajBHcGMzTnY5MDZwN3orekVEKzFSajFaeGtsaXVXQ2NDMzViVlZqejBDZ1lFQTBMd2QKa1VYdGxhK0hWb2dicFJsRlE2dTVtbVFzRXpMYWdrQkFoRDFqYkpORWtVL1o0RjJGNkwwVTN0SWtWNWRyRG9XRQowbUo5TmdELzhPWVdXbTZSdVo1bVhPZlRSOVdkc2k5MGpUNU9NRkJHbHVMdlUyNllhTEhUdmMwbHdiQlVPelJECjJvb2ZEZURjU3Y2d1oxSVU3QlQ2YTA4dWtON0FpNVR6ZWhrQ1ppc0NnWUVBdWVNeHRKWWN5TDlYMW9qSitiK2oKT0pXNTUzUHo0WjJ3bEpTNUZHbUFNRjVBSHRzeGdTeEc3dlByYXlSMkVQSkZqYUFiUlEvSkZJYVFTdFQyR1JPZgpaaHE3cWJ3U0g2TEdxS21zUUZPT0FjVXF0bGtaVit4L3BlQmt0NkIyZ2ppUUMxYU8rTDlkN3QzOUpNTVVNOGkwCjJLZ2JQMWJKN3haUWRVa3p6RXMwMCtrQ2dZQlJraUlQNG43dEx4STVpNmthQk4wZmk5MVZhMjRaOXBhVHJoNUkKVDJFcVRnYk9ycURiWUZEeldlanRCcnd6Q3JaSWozOFBaSFBBQmZYL0l6dDdEWmlmTERxZWRlNElOWCtSNFordgpqcmlwZ3NXRE01NEpRY0FIc2U2b1RxSkJwZkhVelNEekoyVHBYSVZhUFZ1Y2xPUWVPamgrZFF3aWl4bzlzZkRRCk56UEx6d0tCZ1FDaVU2QUZ4NjdPdW1jRGtwK21tRXpYNWpsQmtlZk9XakJlNlZ4ZGwzNjA1TjJ6YS9CbDlINzEKZ3pjdlZhQTY5RC9uVG50bE1xaWhnUm1NTGdEMGovOWkrV0VkMzR2Y0JTOHI2M1VZZVRvUWRoakJCZC8yeUM5NAppQ0ZiNlZ0dGRrSmg5SEhkV1pTRkxwQ0FmTnVlMVRxclA5L1RobTNRaTFjM2lZZVJUblh0YkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

III.

Error messages

The cluster is obviously not running:

root@k8s-master:~# kubectl get no
The connection to the server 192.168.123.150:6443 was refused - did you specify the right host or port?

The system log reports the following errors:

The certificates expired on December 8.

Dec 12 23:11:18 k8s-master kubelet[2750]: I1212 23:11:18.900395    2750 server.go:868] "Client rotation is on, will bootstrap in background"
Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.903330    2750 bootstrap.go:265] part of the existing bootstrap client certificate in /etc/kubernetes/kubelet.conf is expired: 2022-12-08 06:32:35 +0000 UTC
Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.905482    2750 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or

Check the certificate expiry times to confirm once more:

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 08, 2022 06:32 UTC   <invalid>                               no      
apiserver                  Dec 08, 2022 06:32 UTC   <invalid>       ca                      no      
apiserver-etcd-client      Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
apiserver-kubelet-client   Dec 08, 2022 06:32 UTC   <invalid>       ca                      no      
controller-manager.conf    Dec 08, 2022 06:32 UTC   <invalid>                               no      
etcd-healthcheck-client    Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
etcd-peer                  Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
etcd-server                Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
front-proxy-client         Dec 08, 2022 06:32 UTC   <invalid>       front-proxy-ca          no      
scheduler.conf             Dec 08, 2022 06:32 UTC   <invalid>                               no      
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no      
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no      
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no 
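Since the API server itself is down, the same check can also be made directly against the certificate files on disk; for example, for the apiserver serving certificate of a kubeadm cluster:

openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt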

IV.

Upgrading the Kubernetes cluster

1.

Add the Alibaba Cloud apt sources:

cat >/etc/apt/sources.list.d/kubernetes.list <<EOF
# 阿里源
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
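If apt-get update later complains about a missing GPG key (NO_PUBKEY) for the Kubernetes repository, the mirror's key has to be imported first. A sketch, assuming the Aliyun mirror publishes its key at the conventional doc/apt-key.gpg path:

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -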

Update the apt package index:

sudo apt-get update

The output is as follows:

There are warnings, which can either be ignored or fixed. They say that the same package source is configured in two files and the entries conflict; deleting the /etc/apt/sources.list file (its Ubuntu entries were just duplicated into kubernetes.list) is enough to resolve it.

root@k8s-master:~# sudo apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease                 
Hit:2 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease                                                                  
Hit:3 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease                                                         
Hit:4 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease                                       
Hit:5 http://mirrors.aliyun.com/ubuntu bionic-security InRelease                                                                  
Hit:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease                           
Get:7 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease [242 kB]                             
Get:8 http://mirrors.aliyun.com/ubuntu bionic/universe Sources [9,051 kB]       
Get:9 http://mirrors.aliyun.com/ubuntu bionic/restricted Sources [5,324 B]                                                                                                                                                                 
Get:10 http://mirrors.aliyun.com/ubuntu bionic/main Sources [829 kB]                                                                                                                                                                       
Get:11 http://mirrors.aliyun.com/ubuntu bionic/multiverse Sources [181 kB]                                                                                                                                                                 
Get:12 http://mirrors.aliyun.com/ubuntu bionic-updates/restricted Sources [33.1 kB]                                                                                                                                                        
Get:13 http://mirrors.aliyun.com/ubuntu bionic-updates/universe Sources [486 kB]                                                                                                                                                           
Get:14 http://mirrors.aliyun.com/ubuntu bionic-updates/multiverse Sources [17.2 kB]                                                                                                                                                        
Get:15 http://mirrors.aliyun.com/ubuntu bionic-updates/main Sources [537 kB]                                                                                                                                                               
Get:16 http://mirrors.aliyun.com/ubuntu bionic-backports/universe Sources [6,600 B]                                                                                                                                                        
Get:17 http://mirrors.aliyun.com/ubuntu bionic-backports/main Sources [10.5 kB]                                                                                                                                                            
Get:18 http://mirrors.aliyun.com/ubuntu bionic-security/restricted Sources [30.2 kB]                                                                                                                                                       
Get:19 http://mirrors.aliyun.com/ubuntu bionic-security/multiverse Sources [10.6 kB]                                                                                                                                                       
Get:20 http://mirrors.aliyun.com/ubuntu bionic-security/main Sources [288 kB]                                                                                                                                                              
Get:21 http://mirrors.aliyun.com/ubuntu bionic-security/universe Sources [309 kB]                                                                                                                                                          
Get:22 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Sources [8,164 B]                                                                                                                                                       
Get:23 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Sources [9,428 B]                                                                                                                                                         
Get:24 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Sources [75.6 kB]                                                                                                                                                             
Get:25 http://mirrors.aliyun.com/ubuntu bionic-proposed/main amd64 Packages [145 kB]                                                                                                                                                       
Get:26 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Translation-en [32.2 kB]                                                                                                                                                      
Get:27 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted amd64 Packages [132 kB]                                                                                                                                                 
Get:28 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Translation-en [18.5 kB]                                                                                                                                                
Get:29 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe amd64 Packages [11.0 kB]                                                                                                                                                  
Get:30 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Translation-en [6,676 B]                                                                                                                                                  
Fetched 12.5 MB in 14s (878 kB/s)                                                                                                                                                                                                          
Reading package lists... Done
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2
W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2
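The warning can also be cleared without deleting /etc/apt/sources.list: the Ubuntu entries written into kubernetes.list above duplicate the ones already in sources.list, so dropping them from kubernetes.list works just as well. A minimal sketch:

# remove the duplicated Ubuntu lines, keeping only the Kubernetes repository
sudo sed -i '/mirrors.aliyun.com\/ubuntu/d' /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update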

2.

Reinstall kubelet, kubeadm and kubectl

sudo apt-get install kubelet=1.22.2-00  kubeadm=1.22.2-00 kubectl=1.22.2-00  -y
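To keep apt from pulling these packages to an unintended version during a routine system update, it is common to pin them after installing; a sketch (this step is not part of the original procedure):

sudo apt-mark hold kubelet kubeadm kubectl
# unpin again right before the next planned upgrade:
# sudo apt-mark unhold kubelet kubeadm kubectl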

3.

Start the upgrade

First renew the certificates so that the kubelet can start.
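Before renewing anything, it is prudent to back up the current PKI directory and kubeconfig files so the old state can be restored if something goes wrong; a minimal sketch (an extra safety step, not shown in the original):

# copy the whole /etc/kubernetes tree, certificates included, with a date suffix
cp -a /etc/kubernetes /etc/kubernetes.bak-$(date +%F)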

kubeadm  certs renew all

The output is as follows:

[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

The end of the output says that kube-apiserver, kube-controller-manager, kube-scheduler and etcd must be restarted so they can use the new certificates. Before doing that, check whether the certificates have really been renewed:

They have been renewed, but the kubelet configuration file and the other kubeconfigs still carry the old certificates, so kubelet and the related services still cannot start.

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2023 15:29 UTC   364d                                    no      
apiserver                  Dec 12, 2023 15:29 UTC   364d            ca                      no      
apiserver-etcd-client      Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Dec 12, 2023 15:29 UTC   364d            ca                      no      
controller-manager.conf    Dec 12, 2023 15:29 UTC   364d                                    no      
etcd-healthcheck-client    Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
etcd-peer                  Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
etcd-server                Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
front-proxy-client         Dec 12, 2023 15:29 UTC   364d            front-proxy-ca          no      
scheduler.conf             Dec 12, 2023 15:29 UTC   364d                                    no      
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no      
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no      
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no      

Therefore, delete these configuration files now and regenerate them with kubeadm:

root@k8s-master:~# rm -rf /etc/kubernetes/*.conf
root@k8s-master:~# kubeadm  init phase kubeconfig all
I1212 23:35:49.775848   19629 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.22
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

As shown above, the new kubeconfig files have been written and now embed the renewed certificates, so the kubelet can be restarted.

root@k8s-master:~# systemctl restart kubelet
root@k8s-master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2022-12-12 23:36:57 CST; 2s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 21307 (kubelet)
    Tasks: 20 (limit: 2210)
   CGroup: /system.slice/kubelet.service
           ├─21307 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.
           └─21589 /opt/cni/bin/calico
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316164   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a5
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316204   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316279   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzwxm\" (UniqueName: \"kubernetes.io/projected/8ad3a63e-e
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316323   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316364   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6q6\" (UniqueName: \"kubernetes.io/projected/5ef5e743-e
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316407   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1f1b0
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316447   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c8
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316497   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-local-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316540   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316562   21307 reconciler.go:157] "Reconciler: start to sync state"

At this point, querying the node status still fails, because kubectl uses the config file under the .kube directory, which still contains the old certificate. Delete it, re-export the KUBECONFIG variable and overwrite the old config (the exact commands appear further down in the transcript):

root@k8s-master:~# kubectl get no 
error: You must be logged in to the server (Unauthorized)
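The fix, which appears again further down in the transcript, is simply to point kubectl at the freshly generated admin.conf and then overwrite the stale ~/.kube/config with it:

export KUBECONFIG=/etc/kubernetes/admin.conf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config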

OK, the master node's certificates are now renewed. What about the certificates on the worker nodes?

The solution: since the whole cluster was built with kubeadm and etcd exists as a static pod on the master node, once the master is recovered and etcd is confirmed healthy, the worker nodes simply rejoin the cluster.

Delete the worker nodes:

root@k8s-master:~# kubectl delete nodes k8s-node1 
node "k8s-node1" deleted
root@k8s-master:~# kubectl delete nodes k8s-node2
node "k8s-node2" deleted

Generate the join command:

root@k8s-master:~# kubeadm token create --print-join-command
kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923

On the worker nodes, reset and rejoin the cluster (run on the 151 and 152 nodes):

root@k8s-node1:~# kubeadm reset -f
[preflight] Running pre-flight checks
W1212 23:52:30.989714   84567 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@k8s-node1:~# kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@k8s-master:~# export KUBECONFIG=/etc/kubernetes/admin.conf 
root@k8s-master:~# kubectl get no 
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   Ready      control-plane,master   369d   v1.22.2
k8s-node1    NotReady   <none>                 369d   v1.22.0
k8s-node2    NotReady   <none>                 369d   v1.22.0
root@k8s-master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y

Now, back on the master node, check the node status again; the cluster has recovered:

root@k8s-master:~# kubectl get no
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   369d    v1.22.2
k8s-node1    Ready    <none>                 2m18s   v1.22.0
k8s-node2    Ready    <none>                 12s     v1.22.0

Check the pods; they are all in a normal state as well:

root@k8s-master:~# kubectl get po -A
NAMESPACE       NAME                                       READY   STATUS    RESTARTS        AGE
default         front-end-6f94965fd9-dq7t8                 1/1     Running   0               27m
default         guestbook-86bb8f5bc9-mcdvg                 1/1     Running   0               27m
default         guestbook-86bb8f5bc9-zh7zq                 1/1     Running   0               27m
default         nfs-client-provisioner-56dd5765dc-gp6mz    1/1     Running   0               27m

V.

Cluster upgrade

First upgrade kubeadm to 1.22.10:

root@k8s-master:~# apt-get install kubeadm=1.22.10-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
Need to get 26.7 MB of archives.
After this operation, 19.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.25.0-00 [17.9 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.22.10-00 [8,728 kB]                                                                                                                           
Fetched 26.7 MB in 51s (522 kB/s)                                                                                                                                                                                                          
(Reading database ... 67719 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.25.0-00_amd64.deb ...
Unpacking cri-tools (1.25.0-00) over (1.19.0-00) ...
Preparing to unpack .../kubeadm_1.22.10-00_amd64.deb ...
Unpacking kubeadm (1.22.10-00) over (1.22.2-00) ...
Setting up cri-tools (1.25.0-00) ...
Setting up kubeadm (1.22.10-00) ...
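Before applying the upgrade it is usually worth running kubeadm upgrade plan, which verifies cluster health and lists the target versions that can be upgraded to; the original transcript skips this step:

kubeadm upgrade plan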

Then upgrade the Kubernetes cluster:

kubeadm upgrade apply v1.22.10

The output is as follows:

root@k8s-master:~# kubeadm upgrade apply v1.22.10
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.10"
[upgrade/versions] Cluster version: v1.22.0
[upgrade/versions] kubeadm version: v1.22.10
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.10"...
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
Static pod: etcd-k8s-master hash: ee7d79d2b2967f03af72732ecda2b44f
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests149634972"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: d2601c13ace3af023db083125c56d47b
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
Static pod: kube-controller-manager-k8s-master hash: 648269e02b16780e315b096eec7eaa5d
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
Static pod: kube-scheduler-k8s-master hash: ec4c9f7722e075d30583bde88d591749
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.10". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

The output also reminds us that the kubelet should be upgraded:

root@k8s-master:~# apt-get install kubelet=1.22.10-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubelet
1 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
Need to get 19.2 MB of archives.
After this operation, 32.1 MB disk space will be freed.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.22.10-00 [19.2 MB]
Fetched 19.2 MB in 42s (453 kB/s)                                                                                                                                                                                                          
(Reading database ... 67719 files and directories currently installed.)
Preparing to unpack .../kubelet_1.22.10-00_amd64.deb ...
Unpacking kubelet (1.22.10-00) over (1.22.2-00) ...
Setting up kubelet (1.22.10-00) ...
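After the new kubelet package is installed on the control-plane node, the service still has to be reloaded and restarted for the node to actually run 1.22.10 (the transcript does not show this step):

sudo systemctl daemon-reload
sudo systemctl restart kubelet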

The certificate expiry times have been refreshed as well:

(Why not upgrade the cluster directly at the very beginning? Because an upgrade requires the cluster to be running normally, but at that point the certificates had already expired and the cluster was down, so an upgrade was not possible.)

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2023 16:11 UTC   364d            ca                      no      
apiserver                  Dec 12, 2023 16:11 UTC   364d            ca                      no      
apiserver-etcd-client      Dec 12, 2023 16:11 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Dec 12, 2023 16:11 UTC   364d            ca                      no      
controller-manager.conf    Dec 12, 2023 16:11 UTC   364d            ca                      no      
etcd-healthcheck-client    Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no      
etcd-peer                  Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no      
etcd-server                Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no      
front-proxy-client         Dec 12, 2023 16:11 UTC   364d            front-proxy-ca          no      
scheduler.conf             Dec 12, 2023 16:11 UTC   364d            ca                      no      
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no      
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no      
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no      
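One loose end: the kubectl get no output earlier still shows the two worker nodes at v1.22.0. Bringing them to the same version follows the standard kubeadm pattern of drain, upgrade, restart, uncordon. A hedged sketch for k8s-node1 (not part of the original article; repeat for k8s-node2):

# on the master: move workloads off the node
kubectl drain k8s-node1 --ignore-daemonsets --delete-emptydir-data
# on the worker node itself:
apt-get install -y kubeadm=1.22.10-00
kubeadm upgrade node
apt-get install -y kubelet=1.22.10-00 kubectl=1.22.10-00
systemctl daemon-reload && systemctl restart kubelet
# back on the master: allow scheduling again
kubectl uncordon k8s-node1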

