Cloud Native | Kubernetes | Kubernetes Cluster Upgrade + Certificate Renewal (Ubuntu 18.04 + kubeadm)

Summary: Cloud Native | Kubernetes | Kubernetes cluster upgrade + certificate renewal (Ubuntu 18.04 + kubeadm)

Preface:

Kubernetes clusters can generally be divided into two kinds by how they were deployed: clusters built with kubeadm and clusters built from binaries. For a binary cluster, upgrading and renewing certificates is entirely manual work, whereas a kubeadm cluster can be upgraded largely automatically (a few manual steps remain, but not many) and its certificates renewed the same way.

Why upgrade a Kubernetes cluster:

1.

Some versions have security vulnerabilities, and the cluster needs to be upgraded to address them.

For example, Kubernetes CVE-2022-3172, CVE-2019-11247, CVE-2018-1002105, CVE-2020-8559, and so on.

These advisories can be found on Alibaba Cloud; the link is:

[CVE Security] Vulnerability Fix Announcements - Container Service for Kubernetes - Alibaba Cloud Help Center

Generally speaking, whether it is a Kubernetes cluster, some other kind of cluster, software such as Tomcat, Elasticsearch, or SSH, or an operating system such as CentOS 7, Ubuntu, or Debian, avoiding the various security vulnerabilities is almost always done by upgrading to a newer version.

One more note: from the software maker's point of view, the motivation to release new versions is partly the pressure of security vulnerabilities and partly the desire for more and newer features and a prettier interface. From the user's point of view, the first reason to upgrade is security vulnerabilities, then the newer features, and only last the prettier interface.

2.

Newer versions bring more powerful features.

For example, Kubernetes has evolved from the early 1.1 release to today's 1.26. It has clearly gained far more functionality, and some versions are simply easier and more pleasant to use, which raises productivity. Many of those new features can only be had by upgrading the cluster.

3.

Newer versions bring a nicer interface.

For example, the Dashboard component of Kubernetes looks different in every release. The overall style stays similar, but a particular web UI will always win someone over, and getting it likewise requires upgrading (here, the component's version).

4.

For a Kubernetes cluster, an upgrade also refreshes the expiration dates of the certificates inside the cluster. At times this rather specific behavior is itself a small extra motivation to upgrade.




OK, that is a short summary of why clusters get upgraded and what the upgrade is good for. Below, a kubernetes-1.22.0 cluster deployed on Ubuntu 18.04 is given a small upgrade, first to 1.22.2 and then on to 1.22.10, to show how to upgrade a kubeadm cluster and renew all of its certificates.

I.

Environment

Operating system: Ubuntu 18.04

Cluster layout: three nodes (one master, two workers); initial Kubernetes version 1.22.0; the cluster was deployed with kubeadm

IP addresses: 192.168.123.150, 192.168.123.151, 192.168.123.152

Cluster state: all three nodes are down because the certificates have expired

II.

The relationship between a Kubernetes cluster and its certificates

Out of security considerations, Kubernetes has been built around RBAC (role-based access control) from early on, and by roughly version 1.16 or 1.17 (I do not remember exactly which) clusters basically used RBAC by default. This means that communication between the components in the cluster is certificate-based. Take the kubelet's kubeconfig as an example (an arbitrary kubelet.conf picked from a node):

 cat /etc/kubernetes/kubelet.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.123.150:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:k8s-master
  name: system:node:k8s-master@kubernetes
current-context: system:node:k8s-master@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:k8s-master
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKVENDQWcyZ0F3SUJBZ0lJS0h2czRzK3FaR0l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTURneApGVEFUQmdOVkJBb1RESE41YzNSbGJUcHViMlJsY3pFZk1CMEdBMVVFQXhNV2MzbHpkR1Z0T201dlpHVTZhemh6CkxXMWhjM1JsY2pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHRQVkZIdWJ1KzgKNkJCZ0I4UGJadzV0OGF2S3VxeklQMHRPN2tRbVN0ZkpTbjNxdjFHS0xwL2U4UllLbkpuUFRzaytYKzRPcitxSwoyeDhXQWxKUGtjQUFXNEJxUlZOT3R5WDhaZHRYc3J2d3VVaGRoZEZkU3dFV3ZzaWRVOEJJamRwQktKVkN0dHR0CkJPQ0hobXBSY0VyQ1JqYkt2Zy90WE9YcDE1cmx0aS92ZHJHaXpKRUt2cUJQZElpVU05UUh3WEZqdmFqMXFnT2YKaUpraTRlZDBnZ3AwSmtxM0grSXp6MFZNUG9YdTJXdnBTTU81dmtGbi9Lay92bUxhaHRwaTlJeCtPN1VMYUxTNQpsdEVGaVJXQnEyT1R2N3YzMndEVUJUcFdwN25wWlU0WE9ra3ltaW5VT3E3ZGhVZWxrQTMwWVhucjB0MklwOWI4ClgxRVNuSEtKMHMwQ0F3RUFBYU5XTUZRd0RnWURWUjBQQVFIL0JBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0cKQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0h3WURWUjBqQkJnd0ZvQVV2eG8wcGdUZHpuLzZQMzJPdjliSwo2bm5XK3Rnd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFORk5MVXA5SzRMM0s0NTRXbmp2OHprNVVhREU0ZGovCnpmRXBRdHRWSHRLV3VreHZTRWRRTG15RGZURDFoNm8veGFqNkc3Wk1YbmFWUTQ1WmkzZ3F5ck1ibTdETHhxWDYKV0pLdkZkNUJNY2F4YW16dWhoN0I4R2xrMkNsNUZsK3Z0QnUxREtya293blpydFBTeGFaVjhsUmo2bmFHU1k4RQo4RVVqUWN5VXF3Z2duZWwwanNoaFVKOGdKMHV0MXN5UVAxWEJJcEpsTEZ5b0dDQmNuWkFvdE9oWnFWTWwxcTdOClQ5aVZEVy9IZ2xPbll0WktTbXREN2JvMk4rSDZxNmhaUVFJWmVzWVJxNUoxV0IwclR5SkkzbHJEV3J2QWRrcDEKTUZrekJJK3d2WHF2MEdtTFNYNzRudU4wZnY2K0VvQkdUakFVbkNIdUxvQ0RQNmQxTVBYL3ZZcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdTA5VVVlNXU3N3pvRUdBSHc5dG5EbTN4cThxNnJNZy9TMDd1UkNaSzE4bEtmZXEvClVZb3VuOTd4RmdxY21jOU95VDVmN2c2djZvcmJIeFlDVWsrUndBQmJnR3BGVTA2M0pmeGwyMWV5dS9DNVNGMkYKMFYxTEFSYSt5SjFUd0VpTjJrRW9sVUsyMjIwRTRJZUdhbEZ3U3NKR05zcStEKzFjNWVuWG11VzJMKzkyc2FMTQprUXErb0U5MGlKUXoxQWZCY1dPOXFQV3FBNStJbVNMaDUzU0NDblFtU3JjZjRqUFBSVXcraGU3WmErbEl3N20rClFXZjhxVCsrWXRxRzJtTDBqSDQ3dFF0b3RMbVcwUVdKRllHclk1Ty91L2ZiQU5RRk9sYW51ZWxsVGhjNlNUS2EKS2RRNnJ0MkZSNldRRGZSaGVldlMzWWluMXZ4ZlVSS2Njb25TelFJREFRQUJBb0lCQVFDcHNWUFZxaW9zM1RwcwpZMk9GaDhhVXB2d3p3OFZjOVVtS1EyYk9yTlpQS2hobmZQMTR0TFJLdCtJNE1zTHZBWVlDQVpWTkNWZE1LQ0lkCnhvV3g1azVINE1zRXlzSWxtQUdLMDErLzJIS2ZtNVZ3UHZJVjIrd3dmMWUyVGZuckVKQWFzNzg5Z2lSQkpFSXYKMi9mbGFBUlFaakxRUHRyemVQb1pmTUdNbmlGd3lIVVJVQWZkL0U0QlFGNS8zUWVFeEpEVkNtaEZEOG5YRXlqRwpLSG5CSlI1TGFCeFpPcmh0bDVqckRZRGxtaWcyZGNPY21TOU5xN2xCUVExb3NmR1FhbjlTODQ1VnlFdjQvcWZrCjAwZkpJN0JpSStDcWNtM0lHRmhNMG5lNXlvVE8zSEo4Sy94bTdpRmxIcVg1cXZaYjJnc2hTRzZIR1E4YjZPV2UKT09id0xxZnRBb0dCQU1VNisvaEdsaVJrNGd4S3NmUE12dERsV3hGTFVWaTdlK1BGQlJ6Zm5FRlh3U055dlI4ZgpYTmlWZStUOFQvcTgwUnZrTmQ1QVNnd1NBTkwraUZnN0R4SXIzUC9GSkc5SjYwcURYMDZzZi9tKzM5c2VzZFp0Ck9Fanl6UXBmNnVtRGVKMHhvNDlrVWtiTkNvQUNPamE4QndDeFdRcjJHb0ZyL0ptZGEra0JteWRUQW9HQkFQTWYKbDkzbXo1QUpiVkE3cVRWckRiVDNDWEsxRjQyV0RuQ0FDRU5kcE8vZlhOZWtuWEhWcjZiRUl6UTluelQ4R2hmOApUUlBtb1VJTVNVVGpHZmdUMTlkbTliODdZb2NkMENCd0pZejlwcmtRaUpMdm13QS82cytQalkvVUwvSHdrT05QCllvcm9YclB5WVpIeHNjMW54di9lRWJic0UyWjdVSXpXbGpxK0lIbGZBb0dBYlN1OUZTeGRKMEFBTDdXWTBzNWUKUU5yemtac1RKLzUvRVJDWlIrWXVZNnpqWjIrM1oyYkF5ZEhVaG1kekRlTStEQ1pCK3dleTlRTnlHVmh5dUFQWQp6OElmemlPZGkweHJSUTk2emQyRjZRUFNmVU44Uktpb0l4amlqZitSMURmRnA1MDJYOFMwRmlTZ3owSnNYcWV0CmFLRENIT01rd01hNVIzNXZvTVlXejZrQ2dZRUE1TWdiR2ZhRDVjL3BMUEluaFp3SzF2c015Z045ZVgvMmNJa2EKdllIV250ODZ0N1l4YnBpZDVUbDJ3MGNsbFMrU3duVnFkc3ExZnJpZkRoTURNZjVDUTNHZzJXWmhqakpRMHVXVgpnSHFFdEd2SmlUT3VVV3JVWktONm5Ca1pVUHVHN0ZDY3M0aDg3YXF0aEMvRG1ENEs5bVlibDEzSjE4czgvbnRECi9WMUNvOU1DZ1lFQWlCb2tPTEFBNlFDbHNYOXcwNlMrZ1pXT0FpbDlRRXhHalhVR2o0ZEtielpibmgrWnkwbUoKNW9LRHpydTFKeGdoa1JtQ0ZiZmx6aHpMMktWU2xIbXNPYWZDSU1JSHJSZTdic0gvdjg2SVpkYXlnTWxLckJ2LwpXUVMxdmJoQjVvNGdwRkE5aG0rMFcwTW9ZOUVsaERPUG52ajFxT2lTRVArN0dXdDAxOW5HdWdzPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

As you can see, this kubeconfig embeds three certificate/key blobs that the kubelet uses to authenticate its communication with the api-server. In other words, if the file carries no certificates, or the certificates have expired, the kubelet service will not start successfully.

Likewise, the controller-manager carries certificates to talk to the api-server. If the certificates in its kubeconfig are wrong or expired, the controller-manager will not start successfully either, even though it runs inside a static pod.

 cat /etc/kubernetes/controller-manager.conf 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdPREEyTXpJek1Wb1hEVE14TVRJd05qQTJNekl6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0N4Ck14ekVGcTZEeVZBTjNkelU0MUFUVU94VEgvd1dlUmZkS2F2L0lDSmFjR3VQQXZQOG9LUTc3L1BucDlnMHdsa2YKZFVTWW1WclpEVFR0dHN6YWRPdDlkWWFRTHI0MkM1MlNWWEU4eEl2MSt3MXo3QURYek04N2FISXlCZXVqbm1INwptS3lYdFlyR3I0UmxIM1d4TGU1YmRCYk03QkMrSTRndUZmNThHVFJ3N1QrclpJYXpqcDRPd1pVeFZGRm0rd0Y4ClZTZ2s1VVZXMGxtZ05mamt4WjZPbk1EcDBBREdDZ2JUZkVkazdmdlpGTVFkUkFMU2dmVmNGdGtWcG5xWjBjYVMKRDZaVHBwTWNiNkVqV2JNd1dnQ2F1eFRmNTF5dkhFdTFRTXdXa2Y1V1NZWEsyMmRoN3VjbHlMOGRra1NYaERWKwpEWjR2cmlhR3JZR21UMFQzbTRVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMOGFOS1lFM2M1LytqOTlqci9XeXVwNTF2cllNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBTGZCakdEUEo1Q1BVUUlRTkZmOApxMzgwU3hWWjBVMERZWXNGdFg0SW5SbmxCWGFtNkF5VUNMUGJYQkxnNDdCMThhZWN4aTZzVmpFeVZDdFpWUU9ZClZZd3Ftd1RkZ3VlK01sL1hKTGcyZDFVYjZBSVhSVnF2VExVTGt2ck56NVh6RHRjZElCdUlwUXRRUFBVS013NXoKYmZQanhxNndNeks1Z1htWUVONnI1ZzJTR1lNdEc5UGlhMFppcHJFY2lLaUNybW5TN3plaHBVOS9taUFEbWZzaQpld2E2RyszbHlBc3JISFRraTZWMUtNVVJUN3BWWnpFWFJUNElmK25kRDd5N2FwcS9lN3FLMkwwVDd1NnA0WVVuCmowQkdLcWNscGkwRzZzVU54NGxKdkN3aDZJbkh6UEN3OC9BRk9yTzdXRTRWVmM1N0JJVjZrQlFzVmJmaEVnN2kKM0JzPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.123.150:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lJVXJjNDVZUVF1NE13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1EZ3dOak15TXpGYUZ3MHlNekV5TVRJeE1EQXdNamRhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUtkU2d1d1NDMVhVSmlvRjIvbFhaNHQwWjBQRVEvc0QKWk9ZTzdPdlUwbHVSYngzcW1VaEN6NjlPbFBtY0h0OUllRHl2a09qYzFBZ2FlZWwzR3VnSjlKZ2xFM0RmZm5hegpZVVBrOXVzTVBIQjZtMUhNR2ZjUVloZkhLUG9TNmk5bEVWZUhKOXEvOTVaSUdGKzVMR0F2eXVBUWg1NmZYT1hrCjRZRTRsSmYzRGhzdGRNeEtDNXVZTXpxazR3RmlNVkNxYkRhdlVqc3VmZzhhYlFQQWFVQ3NieWFFMm04RVllOGgKb2VkTkdVdml5WkxrQUM2ckM4bGVIeGNBbjB2ZlVvYXU3UzJFdWpNK01jV3RLZ1o4NmVVdjJaU2dXemFCa1VWZAo4QlRxK0VDOUtlb1pWRHJnUlBpejVub3FsVDBYTTJacUVJK01Ud1lTRWt3aTJ5WlpqNS9yWFQ4Q0F3RUFBYU5XCk1GUXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIKL3dRQ01BQXdId1lEVlIwakJCZ3dGb0FVdnhvMHBnVGR6bi82UDMyT3Y5Yks2bm5XK3Rnd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR0d3NGxaaTZDSzB5YWpsOGNlUEg3a20vY1VLcGFFTld1aWJyVTQyNEVDM1A0UkFZZlJXCmRranZjNm9JbVhzR2V2T2Y0ZWlJU0dxaWNOb254N2RkWUxLY2tDaWxLQmcwS2hyZGFRT3o3N3ZCQitvamczbmgKMHByb05oYW12dkVpc0lUY212cmdzNTZqMk1Id2lUK3ZHeXFHbWxPOG9TRHZmWVFnMUVqTkRxWlVEd0g3OFlHYwowT0h5cXU3SW1hYngvKzdWOGcvMmlBS3NEVVVja3I3UHVMWWI3RlA0ZlZvVjlDWkIzVHI3bXFRQ2FrUmJmMnF1CjUvd3pEMG9lYjFBeHV0aUFSVjBlM2JBZUxXV0tqckEyNW9ISVBCRW1zTEFQSmtlMDVlRk9LK05ZUHBMdjBNU04KVnlzaXEzcVl0RkxZSzRaN0kyaGgrKzc5MXM4Y2g1TDNFeGM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcDFLQzdCSUxWZFFtS2dYYitWZG5pM1JuUThSRCt3Tms1ZzdzNjlUU1c1RnZIZXFaClNFTFByMDZVK1p3ZTMwaDRQSytRNk56VUNCcDU2WGNhNkFuMG1DVVRjTjkrZHJOaFErVDI2d3c4Y0hxYlVjd1oKOXhCaUY4Y28raExxTDJVUlY0Y24yci8zbGtnWVg3a3NZQy9LNEJDSG5wOWM1ZVRoZ1RpVWwvY09HeTEwekVvTAptNWd6T3FUakFXSXhVS3BzTnE5U095NStEeHB0QThCcFFLeHZKb1RhYndSaDd5R2g1MDBaUytMSmt1UUFMcXNMCnlWNGZGd0NmUzk5U2hxN3RMWVM2TXo0eHhhMHFCbnpwNVMvWmxLQmJOb0dSUlYzd0ZPcjRRTDBwNmhsVU91QkUKK0xQbWVpcVZQUmN6Wm1vUWo0eFBCaElTVENMYkpsbVBuK3RkUHdJREFRQUJBb0lCQUZ0Vm9QMjRBOVFBRUMwVQpNYlZ6enFQREVMTmZLVFNWNzdmZElkckJ1Mm9jZ3lremJDU1R3OGFRQUtZWVlJbkZoMHlwRVZMcmFCcGNTWHYxCmRneC9rcktTV29CY255MndVVUc4ZEVSdDAzZ2FsVG9iVFhrZHlrM3NleU8ydTNyUGtwM1N1eUNmZFVqbFpkaXEKdmR4cmVqVEJFU2EzR3dDcTVhV2grd3JRNHpSVnc5eERTVGF0MXA1cS82UFRPcHhkWElJRWhHQVd4SEE2RnVnSgpuR2RrMWFxT1ZweFFSZVZ1QVluNVlYemZ0MXByU0wwdWQwOFRNd3FaRUowa0dabUdIODJDV0tTRnQ4cEs1MnJ2CnpURDFnWE5hZzhrNGF3czhrQXpnT1lWek8wMHR2SjdVQjNFcVBSSThKQVJucWpyTGpNaTRqeDVMZGFQZkc2VUQKMERwNFdsRUNnWUVBelRYT2FJdUQ2QUhkaTNPaFlpQ2puOUwwM3ZOUGY1c1JtRmJzclc2UFFxdEVTTit2aVFxMApFMVREU2REM3BpSndqSC9zQTNhRW5ISStRZVYxT1ZFbVR3YkxpaFhOVWtYTWo2OWRmVHdxYTd1MnJEWm0zQnF6CkVrYUI3N0FHb3BvR2hRajBHcGMzTnY5MDZwN3orekVEKzFSajFaeGtsaXVXQ2NDMzViVlZqejBDZ1lFQTBMd2QKa1VYdGxhK0hWb2dicFJsRlE2dTVtbVFzRXpMYWdrQkFoRDFqYkpORWtVL1o0RjJGNkwwVTN0SWtWNWRyRG9XRQowbUo5TmdELzhPWVdXbTZSdVo1bVhPZlRSOVdkc2k5MGpUNU9NRkJHbHVMdlUyNllhTEhUdmMwbHdiQlVPelJECjJvb2ZEZURjU3Y2d1oxSVU3QlQ2YTA4dWtON0FpNVR6ZWhrQ1ppc0NnWUVBdWVNeHRKWWN5TDlYMW9qSitiK2oKT0pXNTUzUHo0WjJ3bEpTNUZHbUFNRjVBSHRzeGdTeEc3dlByYXlSMkVQSkZqYUFiUlEvSkZJYVFTdFQyR1JPZgpaaHE3cWJ3U0g2TEdxS21zUUZPT0FjVXF0bGtaVit4L3BlQmt0NkIyZ2ppUUMxYU8rTDlkN3QzOUpNTVVNOGkwCjJLZ2JQMWJKN3haUWRVa3p6RXMwMCtrQ2dZQlJraUlQNG43dEx4STVpNmthQk4wZmk5MVZhMjRaOXBhVHJoNUkKVDJFcVRnYk9ycURiWUZEeldlanRCcnd6Q3JaSWozOFBaSFBBQmZYL0l6dDdEWmlmTERxZWRlNElOWCtSNFordgpqcmlwZ3NXRE01NEpRY0FIc2U2b1RxSkJwZkhVelNEekoyVHBYSVZhUFZ1Y2xPUWVPamgrZFF3aWl4bzlzZkRRCk56UEx6d0tCZ1FDaVU2QUZ4NjdPdW1jRGtwK21tRXpYNWpsQmtlZk9XakJlNlZ4ZGwzNjA1TjJ6YS9CbDlINzEKZ3pjdlZhQTY5RC9uVG50bE1xaWhnUm1NTGdEMGovOWkrV0VkMzR2Y0JTOHI2M1VZZVRvUWRoakJCZC8yeUM5NAppQ0ZiNlZ0dGRrSmg5SEhkV1pTRkxwQ0FmTnVlMVRxclA5L1RobTNRaTFjM2lZZVJUblh0YkE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
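
For a quick manual check, the expiry of the client certificate embedded in either of these kubeconfig files can be read directly with base64 and openssl. A minimal sketch using standard tools (the path is the kubelet.conf shown above):

# Decode the embedded client certificate and print its validity window
grep client-certificate-data /etc/kubernetes/kubelet.conf | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -dates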

III.

Error messages

The cluster is clearly not up:

root@k8s-master:~# kubectl get no
The connection to the server 192.168.123.150:6443 was refused - did you specify the right host or port?

The system log shows the following errors:

The certificates expired back on December 8:

Dec 12 23:11:18 k8s-master kubelet[2750]: I1212 23:11:18.900395    2750 server.go:868] "Client rotation is on, will bootstrap in background"
Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.903330    2750 bootstrap.go:265] part of the existing bootstrap client certificate in /etc/kubernetes/kubelet.conf is expired: 2022-12-08 06:32:35 +0000 UTC
Dec 12 23:11:18 k8s-master kubelet[2750]: E1212 23:11:18.905482    2750 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or
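
These messages come from the kubelet unit; on Ubuntu 18.04 they can also be pulled straight from the journal (a minimal sketch, nothing cluster-specific assumed):

# Show the most recent kubelet log entries to see why the service keeps failing
journalctl -u kubelet -n 50 --no-pager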

Check the certificate expiration times to confirm this once more:

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 08, 2022 06:32 UTC   <invalid>                               no      
apiserver                  Dec 08, 2022 06:32 UTC   <invalid>       ca                      no      
apiserver-etcd-client      Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
apiserver-kubelet-client   Dec 08, 2022 06:32 UTC   <invalid>       ca                      no      
controller-manager.conf    Dec 08, 2022 06:32 UTC   <invalid>                               no      
etcd-healthcheck-client    Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
etcd-peer                  Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
etcd-server                Dec 08, 2022 06:32 UTC   <invalid>       etcd-ca                 no      
front-proxy-client         Dec 08, 2022 06:32 UTC   <invalid>       front-proxy-ca          no      
scheduler.conf             Dec 08, 2022 06:32 UTC   <invalid>                               no      
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no      
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no      
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no 

IV.

Certificate renewal and cluster recovery (preparing for the upgrade)

1.

Add the Alibaba Cloud apt sources:

cat >/etc/apt/sources.list.d/kubernetes.list <<EOF
# Alibaba Cloud mirrors
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
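
Note: on a machine that has never used this Kubernetes repository, its signing key also needs to be imported first, otherwise apt-get update will complain about missing signatures. A hedged sketch (the key URL below is the one published alongside the Aliyun Kubernetes mirror; skip this if the key is already installed):

# Import the signing key of the Aliyun Kubernetes apt repository
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -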

Refresh the apt indexes:

sudo apt-get update

The output is as follows:

There is a warning. It can be left alone or dealt with; it says the same source is configured in two files, which conflict. Removing the /etc/apt/sources.list file resolves it (a sketch of the fix follows the output below).

root@k8s-master:~# sudo apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease                 
Hit:2 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease                                                                  
Hit:3 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease                                                         
Hit:4 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease                                       
Hit:5 http://mirrors.aliyun.com/ubuntu bionic-security InRelease                                                                  
Hit:6 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease                           
Get:7 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease [242 kB]                             
Get:8 http://mirrors.aliyun.com/ubuntu bionic/universe Sources [9,051 kB]       
Get:9 http://mirrors.aliyun.com/ubuntu bionic/restricted Sources [5,324 B]                                                                                                                                                                 
Get:10 http://mirrors.aliyun.com/ubuntu bionic/main Sources [829 kB]                                                                                                                                                                       
Get:11 http://mirrors.aliyun.com/ubuntu bionic/multiverse Sources [181 kB]                                                                                                                                                                 
Get:12 http://mirrors.aliyun.com/ubuntu bionic-updates/restricted Sources [33.1 kB]                                                                                                                                                        
Get:13 http://mirrors.aliyun.com/ubuntu bionic-updates/universe Sources [486 kB]                                                                                                                                                           
Get:14 http://mirrors.aliyun.com/ubuntu bionic-updates/multiverse Sources [17.2 kB]                                                                                                                                                        
Get:15 http://mirrors.aliyun.com/ubuntu bionic-updates/main Sources [537 kB]                                                                                                                                                               
Get:16 http://mirrors.aliyun.com/ubuntu bionic-backports/universe Sources [6,600 B]                                                                                                                                                        
Get:17 http://mirrors.aliyun.com/ubuntu bionic-backports/main Sources [10.5 kB]                                                                                                                                                            
Get:18 http://mirrors.aliyun.com/ubuntu bionic-security/restricted Sources [30.2 kB]                                                                                                                                                       
Get:19 http://mirrors.aliyun.com/ubuntu bionic-security/multiverse Sources [10.6 kB]                                                                                                                                                       
Get:20 http://mirrors.aliyun.com/ubuntu bionic-security/main Sources [288 kB]                                                                                                                                                              
Get:21 http://mirrors.aliyun.com/ubuntu bionic-security/universe Sources [309 kB]                                                                                                                                                          
Get:22 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Sources [8,164 B]                                                                                                                                                       
Get:23 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Sources [9,428 B]                                                                                                                                                         
Get:24 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Sources [75.6 kB]                                                                                                                                                             
Get:25 http://mirrors.aliyun.com/ubuntu bionic-proposed/main amd64 Packages [145 kB]                                                                                                                                                       
Get:26 http://mirrors.aliyun.com/ubuntu bionic-proposed/main Translation-en [32.2 kB]                                                                                                                                                      
Get:27 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted amd64 Packages [132 kB]                                                                                                                                                 
Get:28 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Translation-en [18.5 kB]                                                                                                                                                
Get:29 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe amd64 Packages [11.0 kB]                                                                                                                                                  
Get:30 http://mirrors.aliyun.com/ubuntu bionic-proposed/universe Translation-en [6,676 B]                                                                                                                                                  
Fetched 12.5 MB in 14s (878 kB/s)                                                                                                                                                                                                          
Reading package lists... Done
W: Target Packages (main/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2
W: Target Packages (main/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:3 and /etc/apt/sources.list.d/kubernetes.list:2
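
As mentioned above, the warning can be cleared by removing /etc/apt/sources.list, since its Ubuntu entries are now duplicated in kubernetes.list. A small sketch following the author's suggestion, with a backup added here as a precaution:

# The Ubuntu entries are duplicated in kubernetes.list, so the stock list can be removed
cp /etc/apt/sources.list /etc/apt/sources.list.bak
rm /etc/apt/sources.list
apt-get update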

2.

Reinstall kubelet, kubeadm, and kubectl (at 1.22.2):

sudo apt-get install kubelet=1.22.2-00  kubeadm=1.22.2-00 kubectl=1.22.2-00  -y
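
A quick sanity check of the freshly installed tool versions can be done before touching the cluster (standard commands, output omitted):

# Confirm that kubeadm, kubectl and kubelet are now at 1.22.2
kubeadm version -o short
kubectl version --client --short
kubelet --version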

3.

Start the upgrade

First, renew the certificates so that the kubelet can be started:

kubeadm  certs renew all

The output is as follows:

[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

The last line of the output says the kube-apiserver, kube-controller-manager, kube-scheduler and etcd must be restarted. Before doing that, check whether the certificates really were renewed:

They have been renewed, but the kubelet's kubeconfig and the other kubeconfig files still embed the old certificates, so at this point the kubelet and the other services still cannot start.

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2023 15:29 UTC   364d                                    no      
apiserver                  Dec 12, 2023 15:29 UTC   364d            ca                      no      
apiserver-etcd-client      Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Dec 12, 2023 15:29 UTC   364d            ca                      no      
controller-manager.conf    Dec 12, 2023 15:29 UTC   364d                                    no      
etcd-healthcheck-client    Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
etcd-peer                  Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
etcd-server                Dec 12, 2023 15:29 UTC   364d            etcd-ca                 no      
front-proxy-client         Dec 12, 2023 15:29 UTC   364d            front-proxy-ca          no      
scheduler.conf             Dec 12, 2023 15:29 UTC   364d                                    no      
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no      
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no      
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no      

Therefore, the next step is to delete these kubeconfig files and have kubeadm regenerate them:

root@k8s-master:~# rm -rf /etc/kubernetes/*.conf
root@k8s-master:~# kubeadm  init phase kubeconfig all
I1212 23:35:49.775848   19629 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.22
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

As shown above, the regenerated kubeconfig files now use the new certificates, and the kubelet can be restarted.

root@k8s-master:~# systemctl restart kubelet
root@k8s-master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2022-12-12 23:36:57 CST; 2s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 21307 (kubelet)
    Tasks: 20 (limit: 2210)
   CGroup: /system.slice/kubelet.service
           ├─21307 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.
           └─21589 /opt/cni/bin/calico
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316164   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a5
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316204   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316279   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzwxm\" (UniqueName: \"kubernetes.io/projected/8ad3a63e-e
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316323   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316364   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6q6\" (UniqueName: \"kubernetes.io/projected/5ef5e743-e
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316407   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4992d6a9ff2341f1f1b0
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316447   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c8
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316497   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-local-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316540   21307 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5e743-ee71-4c80-a
Dec 12 23:36:59 k8s-master kubelet[21307]: I1212 23:36:59.316562   21307 reconciler.go:157] "Reconciler: start to sync state"
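
The renew output earlier also asked for the kube-apiserver, kube-controller-manager, kube-scheduler and etcd to be restarted. In this recovery the control plane came back as the cluster was brought up again (and the later upgrade rewrites the manifests anyway), but if those static pods ever keep serving with old certificates, one common approach (not shown in the author's transcript; the paths are the kubeadm defaults) is to move their manifests aside briefly so the kubelet recreates the pods:

# Move the static pod manifests out so the kubelet stops the control-plane pods,
# wait for them to terminate, then move the manifests back to recreate them
mkdir -p /etc/kubernetes/manifests-backup
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-backup/
sleep 30
mv /etc/kubernetes/manifests-backup/*.yaml /etc/kubernetes/manifests/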

Checking the node status at this point still fails, because kubectl is using the config under the .kube directory, which still carries the old certificate. So delete (or overwrite) it, re-export the KUBECONFIG variable, and copy the new admin.conf over the old config; the commands are shown below and appear again in the transcript further down:

root@k8s-master:~# kubectl get no 
error: You must be logged in to the server (Unauthorized)
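
The fix uses the same commands that appear later in the transcript: point kubectl at the regenerated admin.conf and overwrite the stale copy under ~/.kube.

# Use the new admin kubeconfig in this shell and replace the stale ~/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config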

OK, the master node's certificates are now fully renewed. What about the certificates on the worker nodes?

The solution: since the whole cluster was built with kubeadm and etcd lives on the master as a static pod, once the master is recovered and etcd is confirmed healthy, the worker nodes simply rejoin the cluster:

Delete the worker nodes:

root@k8s-master:~# kubectl delete nodes k8s-node1 
node "k8s-node1" deleted
root@k8s-master:~# kubectl delete nodes k8s-node2
node "k8s-node2" deleted

Generate the join command:

root@k8s-master:~# kubeadm token create --print-join-command
kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923

On the worker nodes, reset and rejoin the cluster (run this on the 151 and 152 nodes):

root@k8s-node1:~# kubeadm reset -f
[preflight] Running pre-flight checks
W1212 23:52:30.989714   84567 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@k8s-node1:~# kubeadm join 192.168.123.150:6443 --token 692e4m.o8njp7guix9w5jne --discovery-token-ca-cert-hash sha256:fb346dffae444c802ffeaee5269375b3727c05d92a4365231772de414cbd6923
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@k8s-master:~# export KUBECONFIG=/etc/kubernetes/admin.conf 
root@k8s-master:~# kubectl get no 
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   Ready      control-plane,master   369d   v1.22.2
k8s-node1    NotReady   <none>                 369d   v1.22.0
k8s-node2    NotReady   <none>                 369d   v1.22.0
root@k8s-master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y

Now, back on the master node, checking the node status shows that everything has returned to normal:

root@k8s-master:~# kubectl get no
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   369d    v1.22.2
k8s-node1    Ready    <none>                 2m18s   v1.22.0
k8s-node2    Ready    <none>                 12s     v1.22.0

The pods are also all in a normal state again:

root@k8s-master:~# kubectl get po -A
NAMESPACE       NAME                                       READY   STATUS    RESTARTS        AGE
default         front-end-6f94965fd9-dq7t8                 1/1     Running   0               27m
default         guestbook-86bb8f5bc9-mcdvg                 1/1     Running   0               27m
default         guestbook-86bb8f5bc9-zh7zq                 1/1     Running   0               27m
default         nfs-client-provisioner-56dd5765dc-gp6mz    1/1     Running   0               27m

V.

Cluster upgrade

First upgrade kubeadm to 1.22.10:

root@k8s-master:~# apt-get install kubeadm=1.22.10-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
Need to get 26.7 MB of archives.
After this operation, 19.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.25.0-00 [17.9 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.22.10-00 [8,728 kB]                                                                                                                           
Fetched 26.7 MB in 51s (522 kB/s)                                                                                                                                                                                                          
(Reading database ... 67719 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.25.0-00_amd64.deb ...
Unpacking cri-tools (1.25.0-00) over (1.19.0-00) ...
Preparing to unpack .../kubeadm_1.22.10-00_amd64.deb ...
Unpacking kubeadm (1.22.10-00) over (1.22.2-00) ...
Setting up cri-tools (1.25.0-00) ...
Setting up kubeadm (1.22.10-00) ...
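
Before applying the upgrade, kubeadm can preview what it would do; running the plan subcommand first is a standard precaution (output omitted here):

# Preview the upgrade path and component versions before applying
kubeadm upgrade plan v1.22.10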

Then upgrade the Kubernetes cluster itself:

kubeadm upgrade apply v1.22.10

The output is as follows:

root@k8s-master:~# kubeadm upgrade apply v1.22.10
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.10"
[upgrade/versions] Cluster version: v1.22.0
[upgrade/versions] kubeadm version: v1.22.10
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.10"...
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
Static pod: etcd-k8s-master hash: 942d6def80e349c32e518bf1fb533795
Static pod: etcd-k8s-master hash: ee7d79d2b2967f03af72732ecda2b44f
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests149634972"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: 2761ce4ea551870599dbc22df4805251
Static pod: kube-apiserver-k8s-master hash: d2601c13ace3af023db083125c56d47b
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: c4992d6a9ff2341f1f1b0d3058a62049
Static pod: kube-controller-manager-k8s-master hash: 648269e02b16780e315b096eec7eaa5d
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-12-13-00-10-26/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: 938652c36b8ab3b7a6345373ea6e1ded
Static pod: kube-scheduler-k8s-master hash: ec4c9f7722e075d30583bde88d591749
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.10". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

The output reminds us that the kubelet should be upgraded as well:

root@k8s-master:~# apt-get install kubelet=1.22.10-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubelet
1 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
Need to get 19.2 MB of archives.
After this operation, 32.1 MB disk space will be freed.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.22.10-00 [19.2 MB]
Fetched 19.2 MB in 42s (453 kB/s)                                                                                                                                                                                                          
(Reading database ... 67719 files and directories currently installed.)
Preparing to unpack .../kubelet_1.22.10-00_amd64.deb ...
Unpacking kubelet (1.22.10-00) over (1.22.2-00) ...
Setting up kubelet (1.22.10-00) ...
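
After the kubelet package is upgraded, reload systemd and restart the service so the 1.22.10 binary actually takes effect (standard systemd commands; the same applies on any node whose kubelet gets upgraded):

# Restart the kubelet so the upgraded binary is picked up
systemctl daemon-reload
systemctl restart kubelet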

The certificate expiration dates have been refreshed as well:

(Why not run the upgrade directly at the very beginning? Because an upgrade requires the cluster to be running normally, and earlier the certificates had already expired and the cluster was down, so an upgrade was simply not possible.)

root@k8s-master:~# kubeadm  certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2023 16:11 UTC   364d            ca                      no      
apiserver                  Dec 12, 2023 16:11 UTC   364d            ca                      no      
apiserver-etcd-client      Dec 12, 2023 16:11 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Dec 12, 2023 16:11 UTC   364d            ca                      no      
controller-manager.conf    Dec 12, 2023 16:11 UTC   364d            ca                      no      
etcd-healthcheck-client    Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no      
etcd-peer                  Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no      
etcd-server                Dec 12, 2023 16:10 UTC   364d            etcd-ca                 no      
front-proxy-client         Dec 12, 2023 16:11 UTC   364d            front-proxy-ca          no      
scheduler.conf             Dec 12, 2023 16:11 UTC   364d            ca                      no      
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 06, 2031 06:32 UTC   8y              no      
etcd-ca                 Dec 06, 2031 06:32 UTC   8y              no      
front-proxy-ca          Dec 06, 2031 06:32 UTC   8y              no      

