Detailed Guide to Binary Deployment of K8S: Deploy a Highly Available K8S Cluster in One Article (Part 1) https://developer.aliyun.com/article/1496870
5.4 Deploy the kube-controller-manager component
#Create the CSR request file
[root@master01 work ]#cat kube-controller-manager-csr.json { "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "hosts": [ "127.0.0.1", "10.10.0.10", "10.10.0.11", "10.10.0.12", "10.10.0.100" ], "names": [ { "C": "CN", "ST": "Hubei", "L": "Wuhan", "O": "system:kube-controller-manager", "OU": "system" } ] }
Note: the hosts list contains the IPs of all kube-controller-manager nodes;
CN is system:kube-controller-manager,
O is system:kube-controller-manager,
and the built-in Kubernetes ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to do its work.
#Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager 2022/10/26 14:27:54 [INFO] generate received request 2022/10/26 14:27:54 [INFO] received CSR 2022/10/26 14:27:54 [INFO] generating key: rsa-2048 2022/10/26 14:27:54 [INFO] encoded CSR 2022/10/26 14:27:54 [INFO] signed certificate with serial number 721895207641224154550279977497269955483545721536 2022/10/26 14:27:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
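As a quick sanity check (a minimal sketch, assuming openssl is available and kubectl is already configured against the apiserver set up in the earlier sections), confirm that the signed certificate really carries the expected CN/O and that the built-in ClusterRoleBinding exists:
# The subject must contain O = system:kube-controller-manager and CN = system:kube-controller-manager
openssl x509 -in kube-controller-manager.pem -noout -subject
# The built-in binding that grants the controller-manager its permissions
kubectl get clusterrolebinding system:kube-controller-manager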
#Create the kubeconfig for kube-controller-manager
1. Set the cluster parameters
[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-controller-manager.kubeconfig Cluster "kubernetes" set. [root@master01 work ]#cat kube-controller-manager.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://10.10.0.10:6443 name: kubernetes contexts: null current-context: "" kind: Config preferences: {} users: null
2. Set the client authentication parameters
The user system:kube-controller-manager is the value set as CN in kube-controller-manager-csr.json.
[root@master01 work ]#kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.
[root@master01 work ]#cat kube-controller-manager.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://10.10.0.10:6443 name: kubernetes contexts: null current-context: "" kind: Config preferences: {} users: - name: system:kube-controller-manager user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVZm5MbWtpdDdmZEJuY2lmdE5pQ01FdjZ0V3NBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURZeU16QXdXaGNOTXpJeE1ESXpNRFl5TXpBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS01RQWFjZElxUEZ3K29LckR1aTdQcmxnaThpT1hKMworTTBTZUNDaFlMZjd5dU9VWlRZV0wyZk81UmZMQ1RRODFCWXVQOUxrY0k4aFRIRHZMZkQ3ZmdWcmZIOVMyaUZ3CllHSGNCc2ZjNFZGVVg5YUlDU0VlM0RRY3NDWG55VnJEVkQzUVUvY3VZQVRuZEMxdFMvZU1pNHFEVFVKTU1maFUKQnR1L3NwUmY4bGpkOWVtdk1ZQzZkQ3dIT1FsZmF0WThLcHJxZjdYTUtKSWJRdnhsSEhlL3RaYW1RWGo0Si9tagpqNHRCYW1uckQrNEp2dXpDY3ZOaEhtc09PbXA5Mkkxd3lNNlNWQUpjM2ZPTFRrakFGZGQ3MnJRUHo4elhhK3NkClpzVWR3SXNZbGp5bW5NcUdaWlZuU1dYL2RxQ2VxQzRxcnZybmo4dmN2NlE4OVVBTjNSU2pmdTBDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTktGSjhqQXIvanQyUmN4MWNNbStFNUJVRTNsCk1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRFhPcGVETVVUTUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJBb0tBQXFIQkFvS0FBdUhCQW9LQUF5SEJBb0tBR1F3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUFkcQo4cE4rOXdKdUJFRVpDT2FEN2MrZ0tMK3ZtL2lJTzdQL3ZlbFgrNzd0cVgrTTBqaUwwa2IvTmczRWNuVUhieDVECi9haUJLZU1rRVltUG5rc2hHWng1eGhmN0oraFlkOWsrNm5wOGRmUk80bHlpeENCMWNYODBoMFhZbjBmZ2N0eE8Kd0RCaVhNcUI2OEo4WnkvYVBoSVY0OGpCaUZZZlJMcUpxZWZEMEx2NFh2RVpISkt6bVpZdnlWV3dYRWFzRFI0bQo4cGRSWEtOakRXTXkyTHI1WGNRLzNKaFRCZ2g4cEV4czBFaUQwTUxDUmNwaGdjK3p0UndXZmR0T2FQR0tjNUpDCis3aEVjQmZJZ1MwNTE5bmt5Yjg0dEJpWmRhK21PQVB4RTNqVTRLK01hRWxZK3pFNHR6a1pMOGJkbEV0Vm9Jb2YKSVF2bEl6akJxSzE4VG9tUmNmQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb3hBQnB4MGlvOFhENmdxc082THMrdVdDTHlJNWNuZjR6Uko0SUtGZ3Qvdks0NVJsCk5oWXZaODdsRjhzSk5EelVGaTQvMHVSd2p5Rk1jTzh0OFB0K0JXdDhmMUxhSVhCZ1lkd0d4OXpoVVZSZjFvZ0oKSVI3Y05CeXdKZWZKV3NOVVBkQlQ5eTVnQk9kMExXMUw5NHlMaW9OTlFrd3grRlFHMjcreWxGL3lXTjMxNmE4eApnTHAwTEFjNUNWOXExandxbXVwL3Rjd29raHRDL0dVY2Q3KzFscVpCZVBnbithT1BpMEZxYWVzUDdnbSs3TUp5CjgyRWVhdzQ2YW4zWWpYREl6cEpVQWx6ZDg0dE9TTUFWMTN2YXRBL1B6TmRyNngxbXhSM0FpeGlXUEthY3lvWmwKbFdkSlpmOTJvSjZvTGlxdSt1ZVB5OXkvcER6MVFBM2RGS04rN1FJREFRQUJBb0lCQUc4WFU1anZ2NDdHQ0lCbAp2d3R1SjNlVGJ3ci9qUlhRYUlBR0tqTkkzcVRaOVZMdzRiZGtpKzEwUmgzY3BLdWpHWGIzRVdKelljQVJsb3VHClY4MUsrWU5seEU3V09tZjNzS0piRFgrU215c1dpYWlWeTJwMkpOMllBZVlCTU93V0VVbC9xZ1RING9EVTB4Q3oKMnNLUFRPNFVJRW1mc1plV1g0bk00elEwM2QzdVc4Qi90R3BtcHhvNk1Da2FPNW9yVVRyUFYvUHpTdnA5R3l1VwpzUnE3b3J0QjdNZnRjTXlWdUUzWWVXYlJYbWRFSlk3cWpBWW9qVXVjdEJzNnVlSHhUbGRKNWNWdzQ2aXBPRzE4CnhBZE5xLzNNRVl2b21TbzhJQXJUWTBIc0o4djlXQ0VhUnF5ZVAwTHB5eDdIamxDb0ZhRnhTUTdxTCt0VjlPUnEKZmxIVnlXRUNnWUVBMGdlK3h0amtMOXdFVkpkZHBrektFWGJTc09TaHhya0V6blhNY0ZTNThkQ0hJS0tYalZIagpUaDV4K2EzWkpucTduRDJNUVRZeWcxYmNERUhQMW0vNGdrQnRJS0ZvOFgvcjdEM1c2OHFRNEZJTlZjRnVOZk9vCmFmc3RNQkF4ZnJQaGVyYlRVdUtJTTRvOUxzT3lvN1lHcTBFTTcxWmRYZDg5QVlSVUxibElxRGtDZ1lFQXhzQ1oKN210TG5wZzhaZlJZeWRhVEh4dmtBWTFWNVI3S2JmQ0pmMDhhb0c2N1c0UWF3TXVNZHVrZ3M1QmVwTUNidkRVTQovNmJnQUVncjFENnJTTDVlL3BiYjZkQ0diYVRhSVFsZmdqNlZpakpDZzNLazBTZEw2L2Q1UEdsL3N3ZHB5azljCmMyL1NzaHdoUFpjK1Evd3FqNGw5MGRkblFBWTBBWmhVWExSNHhGVUNnWUVBdGZncDdVU0xaMy9iYktMOGE1SUsKWE5rek1EbldoRU5YQzczNkU3VUVxYUwvQUdKK3BkMDE4RC9taGVsK3c1MEFvUnllUVAzQkJCUWtjS1l3ZVZ6bgoxWW9XUW5nMllVNXd6R3pEb2VVT1lwd1VtNkVNYU1nanVUYjY3cktJLzNyQU43N2hGdVhZRmJlR3pOYVhGc29sCnV3aVFPV2o5V2RDSm5aL1dBd3VPRE5rQ2dZQTdNUlVtK25GMDlDWFl2MkxLQ2N1YkVqVmZlUFpCM0YreFNsZkkKd0loUGkycmxJSHpQT2svRkFqMG8vVEFTcFFJOGxSZ2Y4MVQzQUlkOUdJVHVqek8vWXJKditoaHZBdytya3gwTQpyeExlSzRXL25COFY0endyTkhLNDJUcWMyUEphdkRQdWRUa3NybEFBQmRFWGNqeENyMUgzY3MxZk5mbTdGK0RZCkV5OThXUUtCZ0JycnFMZjc4WE91SWN5K0RnYWkwQ3FKdmg3RU9pMFBPN3BDWGtvUHFQM2Q4VlUrcFFyV0QyTG8KQndOaVh6dzVUQ0Y5UXQ3blJxUW1BRXJjdmJBcHg4UEk5TE9HZ3hYbmpPTHVNWmRWQ2h6Mjg3T0tEbitTL0JhLwppQVJjZ0R6YXlTUllUUCs5Z3lBQm4rNGV0TS9PWFFVYWw0RisyYU5wOG1OV0d2TVhYdCsxCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
3. Set the context parameters
[root@master01 work ]#kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig Context "system:kube-controller-manager" created.
4. Set the current context
[root@master01 work ]#kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig Switched to context "system:kube-controller-manager".
[root@master01 work ]#cat kube-controller-manager.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://10.10.0.10:6443 name: kubernetes contexts: - context: cluster: kubernetes user: system:kube-controller-manager name: system:kube-controller-manager current-context: system:kube-controller-manager kind: Config preferences: {} users: - name: system:kube-controller-manager user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVZm5MbWtpdDdmZEJuY2lmdE5pQ01FdjZ0V3NBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURZeU16QXdXaGNOTXpJeE1ESXpNRFl5TXpBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS01RQWFjZElxUEZ3K29LckR1aTdQcmxnaThpT1hKMworTTBTZUNDaFlMZjd5dU9VWlRZV0wyZk81UmZMQ1RRODFCWXVQOUxrY0k4aFRIRHZMZkQ3ZmdWcmZIOVMyaUZ3CllHSGNCc2ZjNFZGVVg5YUlDU0VlM0RRY3NDWG55VnJEVkQzUVUvY3VZQVRuZEMxdFMvZU1pNHFEVFVKTU1maFUKQnR1L3NwUmY4bGpkOWVtdk1ZQzZkQ3dIT1FsZmF0WThLcHJxZjdYTUtKSWJRdnhsSEhlL3RaYW1RWGo0Si9tagpqNHRCYW1uckQrNEp2dXpDY3ZOaEhtc09PbXA5Mkkxd3lNNlNWQUpjM2ZPTFRrakFGZGQ3MnJRUHo4elhhK3NkClpzVWR3SXNZbGp5bW5NcUdaWlZuU1dYL2RxQ2VxQzRxcnZybmo4dmN2NlE4OVVBTjNSU2pmdTBDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTktGSjhqQXIvanQyUmN4MWNNbStFNUJVRTNsCk1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRFhPcGVETVVUTUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJBb0tBQXFIQkFvS0FBdUhCQW9LQUF5SEJBb0tBR1F3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUFkcQo4cE4rOXdKdUJFRVpDT2FEN2MrZ0tMK3ZtL2lJTzdQL3ZlbFgrNzd0cVgrTTBqaUwwa2IvTmczRWNuVUhieDVECi9haUJLZU1rRVltUG5rc2hHWng1eGhmN0oraFlkOWsrNm5wOGRmUk80bHlpeENCMWNYODBoMFhZbjBmZ2N0eE8Kd0RCaVhNcUI2OEo4WnkvYVBoSVY0OGpCaUZZZlJMcUpxZWZEMEx2NFh2RVpISkt6bVpZdnlWV3dYRWFzRFI0bQo4cGRSWEtOakRXTXkyTHI1WGNRLzNKaFRCZ2g4cEV4czBFaUQwTUxDUmNwaGdjK3p0UndXZmR0T2FQR0tjNUpDCis3aEVjQmZJZ1MwNTE5bmt5Yjg0dEJpWmRhK21PQVB4RTNqVTRLK01hRWxZK3pFNHR6a1pMOGJkbEV0Vm9Jb2YKSVF2bEl6akJxSzE4VG9tUmNmQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb3hBQnB4MGlvOFhENmdxc082THMrdVdDTHlJNWNuZjR6Uko0SUtGZ3Qvdks0NVJsCk5oWXZaODdsRjhzSk5EelVGaTQvMHVSd2p5Rk1jTzh0OFB0K0JXdDhmMUxhSVhCZ1lkd0d4OXpoVVZSZjFvZ0oKSVI3Y05CeXdKZWZKV3NOVVBkQlQ5eTVnQk9kMExXMUw5NHlMaW9OTlFrd3grRlFHMjcreWxGL3lXTjMxNmE4eApnTHAwTEFjNUNWOXExandxbXVwL3Rjd29raHRDL0dVY2Q3KzFscVpCZVBnbithT1BpMEZxYWVzUDdnbSs3TUp5CjgyRWVhdzQ2YW4zWWpYREl6cEpVQWx6ZDg0dE9TTUFWMTN2YXRBL1B6TmRyNngxbXhSM0FpeGlXUEthY3lvWmwKbFdkSlpmOTJvSjZvTGlxdSt1ZVB5OXkvcER6MVFBM2RGS04rN1FJREFRQUJBb0lCQUc4WFU1anZ2NDdHQ0lCbAp2d3R1SjNlVGJ3ci9qUlhRYUlBR0tqTkkzcVRaOVZMdzRiZGtpKzEwUmgzY3BLdWpHWGIzRVdKelljQVJsb3VHClY4MUsrWU5seEU3V09tZjNzS0piRFgrU215c1dpYWlWeTJwMkpOMllBZVlCTU93V0VVbC9xZ1RING9EVTB4Q3oKMnNLUFRPNFVJRW1mc1plV1g0bk00elEwM2QzdVc4Qi90R3BtcHhvNk1Da2FPNW9yVVRyUFYvUHpTdnA5R3l1VwpzUnE3b3J0QjdNZnRjTXlWdUUzWWVXYlJYbWRFSlk3cWpBWW9qVXVjdEJzNnVlSHhUbGRKNWNWdzQ2aXBPRzE4CnhBZE5xLzNNRVl2b21TbzhJQXJUWTBIc0o4djlXQ0VhUnF5ZVAwTHB5eDdIamxDb0ZhRnhTUTdxTCt0VjlPUnEKZmxIVnlXRUNnWUVBMGdlK3h0amtMOXdFVkpkZHBrektFWGJTc09TaHhya0V6blhNY0ZTNThkQ0hJS0tYalZIagpUaDV4K2EzWkpucTduRDJNUVRZeWcxYmNERUhQMW0vNGdrQnRJS0ZvOFgvcjdEM1c2OHFRNEZJTlZjRnVOZk9vCmFmc3RNQkF4ZnJQaGVyYlRVdUtJTTRvOUxzT3lvN1lHcTBFTTcxWmRYZDg5QVlSVUxibElxRGtDZ1lFQXhzQ1oKN210TG5wZzhaZlJZeWRhVEh4dmtBWTFWNVI3S2JmQ0pmMDhhb0c2N1c0UWF3TXVNZHVrZ3M1QmVwTUNidkRVTQovNmJnQUVncjFENnJTTDVlL3BiYjZkQ0diYVRhSVFsZmdqNlZpakpDZzNLazBTZEw2L2Q1UEdsL3N3ZHB5azljCmMyL1NzaHdoUFpjK1Evd3FqNGw5MGRkblFBWTBBWmhVWExSNHhGVUNnWUVBdGZncDdVU0xaMy9iYktMOGE1SUsKWE5rek1EbldoRU5YQzczNkU3VUVxYUwvQUdKK3BkMDE4RC9taGVsK3c1MEFvUnllUVAzQkJCUWtjS1l3ZVZ6bgoxWW9XUW5nMllVNXd6R3pEb2VVT1lwd1VtNkVNYU1nanVUYjY3cktJLzNyQU43N2hGdVhZRmJlR3pOYVhGc29sCnV3aVFPV2o5V2RDSm5aL1dBd3VPRE5rQ2dZQTdNUlVtK25GMDlDWFl2MkxLQ2N1YkVqVmZlUFpCM0YreFNsZkkKd0loUGkycmxJSHpQT2svRkFqMG8vVEFTcFFJOGxSZ2Y4MVQzQUlkOUdJVHVqek8vWXJKditoaHZBdytya3gwTQpyeExlSzRXL25COFY0endyTkhLNDJUcWMyUEphdkRQdWRUa3NybEFBQmRFWGNqeENyMUgzY3MxZk5mbTdGK0RZCkV5OThXUUtCZ0JycnFMZjc4WE91SWN5K0RnYWkwQ3FKdmg3RU9pMFBPN3BDWGtvUHFQM2Q4VlUrcFFyV0QyTG8KQndOaVh6dzVUQ0Y5UXQ3blJxUW1BRXJjdmJBcHg4UEk5TE9HZ3hYbmpPTHVNWmRWQ2h6Mjg3T0tEbitTL0JhLwppQVJjZ0R6YXlTUllUUCs5Z3lBQm4rNGV0TS9PWFFVYWw0RisyYU5wOG1OV0d2TVhYdCsxCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
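Before moving on, the kubeconfig can be checked without dumping the embedded certificates (a small sketch using standard kubectl options):
kubectl config view --kubeconfig=kube-controller-manager.kubeconfig --minify
kubectl config current-context --kubeconfig=kube-controller-manager.kubeconfig
# Expect cluster "kubernetes", user "system:kube-controller-manager" and current context "system:kube-controller-manager"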
#Create the configuration file kube-controller-manager.conf
[root@master01 work ]#cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \   #binding to 127.0.0.1 is the official recommendation; it cannot be reached from outside
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
#Create the systemd unit file
[root@master01 work ]#cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
#Start the service
[root@master01 work ]#cp kube-controller-manager*.pem /etc/kubernetes/ssl/ [root@master01 work ]#cp kube-controller-manager.kubeconfig /etc/kubernetes/ [root@master01 work ]#cp kube-controller-manager.conf /etc/kubernetes/ [root@master01 work ]#cp kube-controller-manager.service /usr/lib/systemd/system/ [root@master01 work ]# [root@master01 work ]#rsync -vaz kube-controller-manager*.pem master02:/etc/kubernetes/ssl/ sending incremental file list kube-controller-manager-key.pem kube-controller-manager.pem sent 2,498 bytes received 54 bytes 5,104.00 bytes/sec total size is 3,180 speedup is 1.25 [root@master01 work ]#rsync -vaz kube-controller-manager*.pem master03:/etc/kubernetes/ssl/ sending incremental file list kube-controller-manager-key.pem kube-controller-manager.pem sent 2,498 bytes received 54 bytes 1,701.33 bytes/sec total size is 3,180 speedup is 1.25 [root@master01 work ]#rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master02:/etc/kubernetes/ sending incremental file list kube-controller-manager.conf kube-controller-manager.kubeconfig sent 4,869 bytes received 54 bytes 9,846.00 bytes/sec total size is 7,575 speedup is 1.54 [root@master01 work ]#rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master03:/etc/kubernetes/ sending incremental file list kube-controller-manager.conf kube-controller-manager.kubeconfig sent 4,869 bytes received 54 bytes 3,282.00 bytes/sec total size is 7,575 speedup is 1.54 [root@master01 work ]#rsync -vaz kube-controller-manager.service master02:/usr/lib/systemd/system/ sending incremental file list kube-controller-manager.service sent 328 bytes received 35 bytes 726.00 bytes/sec total size is 325 speedup is 0.90 [root@master01 work ]#rsync -vaz kube-controller-manager.service master03:/usr/lib/systemd/system/ sending incremental file list kube-controller-manager.service sent 328 bytes received 35 bytes 242.00 bytes/sec total size is 325 speedup is 0.90
[root@master01 work ]#systemctl daemon-reload [root@master01 work ]# [root@master01 work ]#systemctl enable kube-controller-manager.service --now Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service. [root@master02 ~ ]#systemctl daemon-reload [root@master02 ~ ]#systemctl enable kube-controller-manager.service --now Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service. [root@master03 kubernetes ]#systemctl daemon-reload [root@master03 kubernetes ]#systemctl enable kube-controller-manager.service --now Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
If the port is listening, the component is running normally.
[root@master02 kubernetes ]#ss -lanptu|grep 10252 tcp LISTEN 0 16384 127.0.0.1:10252
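Because the controller-manager only binds to 127.0.0.1, a local probe is the simplest liveness check (a sketch; with --secure-port=10252 as configured above, /healthz is served over HTTPS, so -k skips verification of the serving certificate; if anonymous requests are rejected in your setup, check the journal instead):
curl -sk https://127.0.0.1:10252/healthz                 # expected output: ok
journalctl -u kube-controller-manager | grep -i leader   # one of the three instances should report that it became the leader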
5.5 Deploy the kube-scheduler component
#Create the CSR request
[root@master01 work ]#cat kube-scheduler-csr.json { "CN": "system:kube-scheduler", "hosts": [ "127.0.0.1", "10.10.0.10", "10.10.0.11", "10.10.0.12", "10.10.0.100" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Hubei", "L": "Wuhan", "O": "system:kube-scheduler", "OU": "system" } ] }
Note: the hosts list contains the IPs of all kube-scheduler nodes;
CN is system:kube-scheduler,
O is system:kube-scheduler, and the built-in Kubernetes ClusterRoleBinding system:kube-scheduler
grants kube-scheduler the permissions it needs to do its work.
#Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler 2022/10/26 14:45:07 [INFO] generate received request 2022/10/26 14:45:07 [INFO] received CSR 2022/10/26 14:45:07 [INFO] generating key: rsa-2048 2022/10/26 14:45:07 [INFO] encoded CSR 2022/10/26 14:45:07 [INFO] signed certificate with serial number 635930628159443813223981479189831176983910270655 2022/10/26 14:45:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
#Create the kubeconfig for kube-scheduler
1. Set the cluster parameters
[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-scheduler.kubeconfig Cluster "kubernetes" set.
2. Set the client authentication parameters
The user system:kube-scheduler is the CN set in kube-scheduler-csr.json.
[root@master01 work ]#kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig User "system:kube-scheduler" set.
3. Set the context parameters
[root@master01 work ]#kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig Context "system:kube-scheduler" created.
4. Set the current context
[root@master01 work ]#kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig Switched to context "system:kube-scheduler".
#Create the configuration file kube-scheduler.conf
[root@master01 work ]#cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
#Create the service unit file
[root@master01 work ]#cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
#Start the service
[root@master01 work ]#cp kube-scheduler*.pem /etc/kubernetes/ssl/ [root@master01 work ]#cp kube-scheduler.kubeconfig /etc/kubernetes/ [root@master01 work ]#cp kube-scheduler.conf /etc/kubernetes/ [root@master01 work ]#cp kube-scheduler.service /usr/lib/systemd/system/ [root@master01 work ]#rsync -vaz kube-scheduler*.pem master02:/etc/kubernetes/ssl/ sending incremental file list kube-scheduler-key.pem kube-scheduler.pem sent 2,531 bytes received 54 bytes 1,723.33 bytes/sec total size is 3,159 speedup is 1.22 [root@master01 work ]#rsync -vaz kube-scheduler*.pem master03:/etc/kubernetes/ssl/ sending incremental file list kube-scheduler-key.pem kube-scheduler.pem sent 2,531 bytes received 54 bytes 1,723.33 bytes/sec total size is 3,159 speedup is 1.22 [root@master01 work ]#rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master02:/etc/kubernetes/ sending incremental file list kube-scheduler.conf kube-scheduler.kubeconfig sent 4,495 bytes received 54 bytes 9,098.00 bytes/sec total size is 6,617 speedup is 1.45 [root@master01 work ]#rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master03:/etc/kubernetes/ sending incremental file list kube-scheduler.conf kube-scheduler.kubeconfig sent 4,495 bytes received 54 bytes 9,098.00 bytes/sec total size is 6,617 speedup is 1.45 [root@master01 work ]#rsync -vaz kube-scheduler.service master02:/usr/lib/systemd/system/ sending incremental file list kube-scheduler.service sent 301 bytes received 35 bytes 672.00 bytes/sec total size is 293 speedup is 0.87 [root@master01 work ]#rsync -vaz kube-scheduler.service master03:/usr/lib/systemd/system/ sending incremental file list kube-scheduler.service sent 301 bytes received 35 bytes 224.00 bytes/sec total size is 293 speedup is 0.87 [root@master01 work ]#systemctl daemon-reload [root@master01 work ]#systemctl enable kube-scheduler.service --now [root@master02 kubernetes ]#systemctl daemon-reload [root@master02 kubernetes ]#systemctl enable kube-scheduler.service --now [root@master03 kubernetes ]#systemctl daemon-reload [root@master03 kubernetes ]#systemctl enable kube-scheduler.service --now
If the port is listening, the component is running normally.
[root@master01 kubernetes ]#ss -lanptu|grep 10251 tcp LISTEN 0 16384 127.0.0.1:10251 *:* users:(("kube-scheduler",pid=823,fd=8))
A kubeadm installation also binds the scheduler to 127.0.0.1.
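With --address=127.0.0.1 the scheduler keeps its default insecure port 10251, so a plain HTTP probe works (a minimal sketch of a local check):
curl -s http://127.0.0.1:10251/healthz   # expected output: ok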
5.6 Import the offline image tarballs
#Upload pause-cordns.tar.gz to the node01 node and load it manually
[root@node01 ~ ]#docker load -i pause-cordns.tar.gz 225df95e717c: Loading layer [==================================================>] 336.4kB/336.4kB 96d17b0b58a7: Loading layer [==================================================>] 45.02MB/45.02MB Loaded image: k8s.gcr.io/coredns:1.7.0 ba0dae6243cc: Loading layer [==================================================>] 684.5kB/684.5kB Loaded image: k8s.gcr.io/pause:3.2 [root@node01 ~ ]# [root@node01 ~ ]# [root@node01 ~ ]#docker images REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 2 years ago 45.2MB k8s.gcr.io/pause 3.2 80d28bedfe5d 2 years ago 683kB
5.7 Deploy the kubelet component
kubelet: the kubelet on each Node periodically calls the API Server's REST interface to report its own status,
and the API Server receives this information and stores the node status in etcd.
The kubelet also watches Pod information through the API Server and manages the Pods on its node accordingly, e.g. creating, deleting and updating them.
The control-plane nodes are not allowed to schedule pods; pods are scheduled onto the worker nodes, so the kubelet only needs to be deployed and started on the worker nodes.
Perform the following operations on master01.
Create kubelet-bootstrap.kubeconfig
[root@master01 kubernetes ]#cd /data/work/
[root@master01 work ]#BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
[root@master01 work ]#rm -r kubelet-bootstrap.kubeconfig
rm: cannot remove 'kubelet-bootstrap.kubeconfig': No such file or directory
1. Set the cluster parameters
[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kubelet-bootstrap.kubeconfig Cluster "kubernetes" set.
2. Set the client authentication parameters
[root@master01 work ]#kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig User "kubelet-bootstrap" set.
3. Set the context parameters
[root@master01 work ]#kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig Context "default" created.
4. Set the current context
[root@master01 work ]#kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig Switched to context "default".
[root@master01 work ]#cat kubelet-bootstrap.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://10.10.0.10:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubelet-bootstrap name: default current-context: default kind: Config preferences: {} users: - name: kubelet-bootstrap user: {}
Grant permissions:
[root@master01 work ]#kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
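The token embedded in the bootstrap kubeconfig must match the one the apiserver reads from token.csv, and the kubelet-bootstrap user must be bound to system:node-bootstrapper; a minimal sketch for double-checking both (token.csv path as used above):
grep token kubelet-bootstrap.kubeconfig                 # token stored in the kubeconfig
awk -F "," '{print $1}' /etc/kubernetes/token.csv       # token the apiserver knows about; the two must match
kubectl describe clusterrolebinding kubelet-bootstrap   # should reference ClusterRole system:node-bootstrapper and user kubelet-bootstrap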
#Create the configuration file kubelet.json
“cgroupDriver”: "systemd"要和docker的驱动一致。
address替换为自己node01的IP地址。
[root@master01 work ]#cat kubelet.json { "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", "authentication": { "x509": { "clientCAFile": "/etc/kubernetes/ssl/ca.pem" }, "webhook": { "enabled": true, "cacheTTL": "2m0s" }, "anonymous": { "enabled": false } }, "authorization": { "mode": "Webhook", "webhook": { "cacheAuthorizedTTL": "5m0s", "cacheUnauthorizedTTL": "30s" } }, "address": "10.10.0.14", "port": 10250, "readOnlyPort": 10255, "cgroupDriver": "systemd", "hairpinMode": "promiscuous-bridge", "serializeImagePulls": false, "featureGates": { "RotateKubeletClientCertificate": true, "RotateKubeletServerCertificate": true }, "clusterDomain": "cluster.local.", "clusterDNS": ["10.255.0.2"] }
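A quick way to confirm that the two cgroup drivers agree before the kubelet is started (a sketch; the wording of the docker info output may vary slightly between Docker versions):
docker info 2>/dev/null | grep -i "cgroup driver"   # run on node01, should report: systemd
grep cgroupDriver kubelet.json                      # run on master01 in /data/work, should report: "systemd"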
[root@master01 work ]#cat kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \   #used only for the first start; afterwards --kubeconfig is used
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \   #this file is generated automatically
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
#Notes: --hostname-override: the display name, must be unique within the cluster
--network-plugin: enables CNI
--kubeconfig: an empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: the configuration parameter file
--cert-dir: the directory where kubelet certificates are generated
--pod-infra-container-image: the image for the Pod network (pause) container
#Note: change address in kubelet.json to each node's own IP address, then start the service on each worker node
[root@node01 ~ ]#mkdir /etc/kubernetes/ssl -p [root@master01 work ]#scp kubelet-bootstrap.kubeconfig kubelet.json node01:/etc/kubernetes/ kubelet-bootstrap.kubeconfig 100% 2107 3.7MB/s 00:00 kubelet.json
[root@master01 work ]#scp ca.pem node01:/etc/kubernetes/ssl/
ca.pem
[root@master01 work ]#scp kubelet.service node01:/usr/lib/systemd/system/
kubelet.service
#Start the kubelet service
It is best to delete the /var/lib/kubelet directory first if it already exists; leftovers from a previous installation will cause conflicts.
[root@node01 ~ ]#mkdir /var/lib/kubelet
[root@node01 ~ ]#mkdir /var/log/kubernetes
[root@node01 system ]#systemctl daemon-reload [root@node01 system ]# [root@node01 system ]# [root@node01 system ]# [root@node01 system ]#systemctl enable kubelet.service --now Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
After confirming that the kubelet service has started successfully, go to master01 and approve the bootstrap request.
Running the following command on a master node shows that a worker node has sent a CSR request:
[root@master01 work ]#kubectl get csr NAME AGE SIGNERNAME REQUESTOR CONDITION node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM 32s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
Approve the request:
[root@master01 work ]#kubectl certificate approve node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM certificatesigningrequest.certificates.k8s.io/node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM approved
[root@master01 work ]#kubectl get csr NAME AGE SIGNERNAME REQUESTOR CONDITION node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM 11m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
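When several worker nodes bootstrap at the same time, approving every CSR by name becomes tedious; a one-liner that approves only the requests still in Pending state (a sketch built from standard kubectl output; use it only when every pending CSR is expected):
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve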
[root@master01 work ]#kubectl get nodes NAME STATUS ROLES AGE VERSION node01 NotReady <none> 34s v1.20.7
#Note: STATUS NotReady means the network plugin has not been installed yet
With a binary installation, kubectl get nodes does not show the control-plane nodes, only the worker nodes.
By default the master nodes do not run a kubelet, so they cannot have workloads scheduled onto them;
this is how it is usually done in production. If you want the master nodes to be schedulable,
deploy the kubelet-related services on them just as on a worker node, and they will then also show up in kubectl get nodes.
The node dynamically generates some certificates:
[root@node01 ~ ]#cd /etc/kubernetes/ [root@node01 kubernetes ]#ll total 12 -rw------- 1 root root 2148 Oct 27 08:44 kubelet-bootstrap.kubeconfig -rw-r--r-- 1 root root 800 Oct 27 08:44 kubelet.json -rw------- 1 root root 2277 Oct 27 08:49 kubelet.kubeconfig drwxr-xr-x 2 root root 138 Oct 27 08:49 ssl [root@node01 kubernetes ]#cd ssl/ [root@node01 ssl ]#ll total 16 -rw-r--r-- 1 root root 1346 Oct 27 08:44 ca.pem -rw------- 1 root root 1212 Oct 27 08:49 kubelet-client-2022-10-27-08-49-42.pem lrwxrwxrwx 1 root root 58 Oct 27 08:49 kubelet-client-current.pem -> /etc/kubernetes/ssl/kubelet-client-2022-10-27-08-49-42.pem -rw-r--r-- 1 root root 2237 Oct 27 08:47 kubelet.crt -rw------- 1 root root 1675 Oct 27 08:47 kubelet.key
5.8 Deploy the kube-proxy component
[root@master01 work ]#kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 22h
10.255.0.1 is not reachable from outside the cluster; it only exists in the iptables/ipvs rules, and kube-proxy is the component that generates those rules.
#Create the CSR request
[root@master01 work ]#cat kube-proxy-csr.json { "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Hubei", "L": "Wuhan", "O": "k8s", "OU": "system" } ] }
Generate the certificate
[root@master01 work ]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy 2022/10/27 08:52:02 [INFO] generate received request 2022/10/27 08:52:02 [INFO] received CSR 2022/10/27 08:52:02 [INFO] generating key: rsa-2048 2022/10/27 08:52:02 [INFO] encoded CSR 2022/10/27 08:52:02 [INFO] signed certificate with serial number 621761669559249301102184867130469483006128723356 2022/10/27 08:52:02 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@master01 work ]#ll kube-proxy* -rw-r--r-- 1 root root 1005 Oct 27 08:52 kube-proxy.csr -rw-r--r-- 1 root root 212 Oct 25 15:39 kube-proxy-csr.json -rw------- 1 root root 1679 Oct 27 08:52 kube-proxy-key.pem -rw------- 1 root root 6238 Oct 27 08:54 kube-proxy.kubeconfig -rw-r--r-- 1 root root 1391 Oct 27 08:52 kube-proxy.pem -rw-r--r-- 1 root root 297 Oct 25 15:39 kube-proxy.yaml
#Create the kubeconfig file, i.e. the security context that determines which cluster kube-proxy talks to
Set the cluster parameters
[root@master01 work ]#kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-proxy.kubeconfig Cluster "kubernetes" set.
Set the client authentication parameters
[root@master01 work ]#kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig User "kube-proxy" set.
Set the context parameters
[root@master01 work ]#kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig Context "default" created.
--user=kube-proxy is the CN user set in kube-proxy-csr.json.
Since OU there is set to system, the bare user name can be used for --user; the effect is the same as prefixing it with system:.
Set the current context
[root@master01 work ]#kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig Switched to context "default".
#Create the kube-proxy configuration file
[root@master01 work ]#cat kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.10.0.14                 #node IP
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.10.0.0/24               #use the physical machine network segment here
healthzBindAddress: 10.10.0.14:10256    #node IP
kind: KubeProxyConfiguration
metricsBindAddress: 10.10.0.14:10249    #node IP, for metrics collection
mode: "ipvs"                            #use the ipvs forwarding mode
#Create the service unit file
[root@master01 work ]#cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@master01 work ]#scp kube-proxy.kubeconfig kube-proxy.yaml node01:/etc/kubernetes/ kube-proxy.kubeconfig 100% 6238 5.9MB/s 00:00 kube-proxy.yaml 100% 282 418.8KB/s 00:00 [root@master01 work ]#scp kube-proxy.service node01:/usr/lib/systemd/system/ kube-proxy.service
#Start the service; first create the working directory on the node
[root@node01 kubernetes ]#mkdir -p /var/lib/kube-proxy [root@node01 kubernetes ]#systemctl daemon-reload [root@node01 kubernetes ]#systemctl enable kube-proxy.service --now Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 kubernetes ]#systemctl status kube-proxy.service ● kube-proxy.service - Kubernetes Kube-Proxy Server Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2022-10-27 09:39:33 CST; 1min 22s ago Docs: https://github.com/kubernetes/kubernetes Main PID: 12737 (kube-proxy) Tasks: 7 Memory: 57.5M CGroup: /system.slice/kube-proxy.service └─12737 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml --alsologtostderr=true --logtostderr=false --log-dir=/var/log/ku... Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136193 12737 shared_informer.go:240] Waiting for caches to sync for endpoint slice config Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136212 12737 config.go:315] Starting service config controller Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136214 12737 shared_informer.go:240] Waiting for caches to sync for service config Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136432 12737 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) f...ry.go:134 Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136441 12737 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/...ry.go:134 Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.172077 12737 service.go:275] Service default/kubernetes updated: 1 ports Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236331 12737 shared_informer.go:247] Caches are synced for service config Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236505 12737 shared_informer.go:247] Caches are synced for endpoint slice config Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236532 12737 proxier.go:1036] Not syncing ipvs rules until Services and Endpoints ...om master Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236928 12737 service.go:390] Adding new service port "default/kubernetes:https" at...1:443/TCP Hint: Some lines were ellipsized, use -l to show in full.
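With mode: "ipvs", kube-proxy turns every Service into an ipvs virtual server, so the ClusterIP mentioned at the beginning of this section becomes visible in the rules. A quick check on node01 (a sketch, assuming the ipvsadm package is installed):
ipvsadm -Ln
# Expect an entry for the kubernetes Service along the lines of:
# TCP  10.255.0.1:443 rr
#   -> 10.10.0.10:6443    Masq ...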
5.9 Deploy the calico component
#Load the offline image tarball
#Upload calico.tar.gz to the node01 node and load it manually
[root@node01 ~ ]#docker load -i calico.tar.gz
#Upload the calico.yaml file to the /data/work directory on master01
Installing the Calico component
Calico:https://www.projectcalico.org/
https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
curl https://docs.projectcalico.org/manifests/calico.yaml -O (old version)
Click Manifest to download:
curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O (new version)
Click requirements to see which Kubernetes versions Calico supports.
Modify the following part of calico.yaml: uncomment it and set it to the podSubnet segment you configured:
- name: CALICO_IPV4POOL_CIDR
  value: "10.0.0.0/16"
For a binary installation this must be changed, otherwise the pods will get IPs from the default 192.168.0.0/16 segment.
With the default value:
[root@master01 work ]#vim calico.yaml
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
[root@master01 ~ ]#kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx 1/1 Running 0 55s 192.168.196.130 node01
After the change:
[root@master01 work ]#vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.0.0.0/16"
[root@master01 work ]#kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx 1/1 Running 0 2m9s 10.0.196.130 node01
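If you prefer not to edit the file by hand, the same change can be made non-interactively (a sketch against the stock manifest; the exact comment layout differs between calico.yaml versions, so verify the result with grep before applying):
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.0.0.0/16"|' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml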
[root@master01 work ]#kubectl apply -f calico.yaml [root@master01 work ]#kubectl get pods -n kube-system -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-6949477b58-dpwmn 1/1 Running 0 24s 10.0.196.129 node01 <none> <none> calico-node-rmkbm 1/1 Running 0 24s 10.10.0.14 node01 <none> <none>
calico-node-rmkbm allocates the pod IPs,
calico-kube-controllers-6949477b58-dpwmn implements network policy.
[root@master01 work ]#kubectl get nodes NAME STATUS ROLES AGE VERSION node01 Ready <none> 62m v1.20.7
Troubleshooting a failure:
[root@master03 kubernetes ]#kubectl get pods -n kube-system -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-6949477b58-cr5lh 1/1 Running 0 5h44m 10.0.196.129 node01 <none> <none> calico-node-49kj4 1/1 Running 6 35m 10.10.0.11 master02 <none> <none> calico-node-bgv4n 1/1 Running 5 35m 10.10.0.10 master01 <none> <none> calico-node-bjsmm 1/1 Running 0 5h44m 10.10.0.14 node01 <none> <none> calico-node-t2ljz 0/1 Running 1 8m46s 10.10.0.12 master03 <none> <none> coredns-7bf4bd64bd-572z9 1/1 Running 0 5h31m 10.0.196.131 node01 <none> <none>
The calico component on master03 is not ready.
The Calico network component of the K8S cluster reports a BIRD error:
calico/node is not ready: BIRD is not ready: BGP not established with
Warning Unhealthy 8s kubelet Readiness probe failed: 2022-10-27 08:00:12.256 [INFO][374] confd/health.go 180: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.10.0.10,10.10.0.11,10.10.0.14
This error is caused by a conflicting network interface on the node. After the offending interface is deleted, the Calico pod starts normally. The problematic interfaces are all named with a br prefix.
[root@master03 kubernetes ]#ip link 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:77:30:7c brd ff:ff:ff:ff:ff:ff 3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/ipip 0.0.0.0 brd 0.0.0.0 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default link/ether 02:42:c2:48:1f:bc brd ff:ff:ff:ff:ff:ff 5: br-8763acf3655c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether 02:42:da:e8:e8:55 brd ff:ff:ff:ff:ff:ff
Here it is br-8763acf3655c.
Delete the br-prefixed interface:
[root@master03 kubernetes ]#ip link delete br-8763acf3655c
Recreate the calico pod; it now succeeds:
[root@master03 kubernetes ]#kubectl get pods -n kube-system -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-6949477b58-5rqgc 1/1 Running 2 20m 10.0.241.64 master01 <none> <none> calico-node-g48ng 1/1 Running 0 7m53s 10.10.0.12 master03 <none> <none> calico-node-nb4g4 1/1 Running 0 20m 10.10.0.11 master02 <none> <none> calico-node-ppcqd 1/1 Running 0 20m 10.10.0.10 master01 <none> <none> calico-node-tsrd4 1/1 Running 0 20m 10.10.0.14 node01 <none> <none> coredns-7bf4bd64bd-572z9 1/1 Running 0 5h57m 10.0.196.131 node01 <none> <none>
Alternatively:
The likely cause is that Calico did not detect the real network interface.
Solution:
/* Adjust the NIC discovery mechanism of the calico network plugin by changing the value of IP_AUTODETECTION_METHOD.
In the official yaml the IP detection strategy (IP_AUTODETECTION_METHOD) is not configured, so it defaults to first-found; this can cause the IP of a broken interface to be registered as the nodeIP,
which breaks the node-to-node mesh. Change it to the can-reach or interface strategy, which tries to reach a Ready node's IP and thereby selects the correct IP. */
// Add the following two lines to calico.yaml
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"      # set "ens" according to the actual interface name prefix
// The resulting configuration
- name: CLUSTER_TYPE
  value: "k8s,bgp"
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"      # or: value: "interface=ens160"
# Auto-detect the BGP IP address.
- name: IP
  value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
5.10 Deploy the coredns component
CoreDNS is the cluster DNS server; it resolves domain names to IPs.
For every Service it creates an FQDN (fully qualified domain name) of the form svcname.namespace.svc.cluster.local.
The image is the one loaded earlier from pause-cordns.tar.gz.
[root@master01 work ]#cat coredns.yaml apiVersion: v1 kind: ServiceAccount metadata: name: coredns namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch - apiGroups: - discovery.k8s.io resources: - endpointslices verbs: - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:coredns subjects: - kind: ServiceAccount name: coredns namespace: kube-system --- apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } --- apiVersion: apps/v1 kind: Deployment metadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS" spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: coredns/coredns:1.7.0 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.255.0.2 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCP
[root@master01 work ]#kubectl apply -f coredns.yaml serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created configmap/coredns created deployment.apps/coredns created service/kube-dns created
[root@master01 work ]#kubectl get svc -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.255.0.2 <none> 53/UDP,53/TCP,9153/TCP 2m6s
Both calico and coredns run as pods.
4. Check the cluster status
[root@master01 work ]#kubectl get nodes NAME STATUS ROLES AGE VERSION node01 Ready <none> 102m v1.20.7
5. Test the k8s cluster by deploying a tomcat service
#Upload tomcat.tar.gz and busybox-1-28.tar.gz to node01 and load them manually
[root@node01 ~ ]#docker load -i busybox-1-28.tar.gz 432b65032b94: Loading layer [==================================================>] 1.36MB/1.36MB Loaded image: busybox:1.28 [root@node01 ~ ]#docker load -i tomcat.tar.gz f1b5933fe4b5: Loading layer [==================================================>] 5.796MB/5.796MB 9b9b7f3d56a0: Loading layer [==================================================>] 3.584kB/3.584kB edd61588d126: Loading layer [==================================================>] 80.28MB/80.28MB 48988bb7b861: Loading layer [==================================================>] 2.56kB/2.56kB 8e0feedfd296: Loading layer [==================================================>] 24.06MB/24.06MB aac21c2169ae: Loading layer [==================================================>] 2.048kB/2.048kB Loaded image: tomcat:8.5-jre8-alpine
Run on master01:
[root@master01 work ]#cat tomcat.yaml
apiVersion: v1                     #Pod belongs to the k8s core group v1
kind: Pod                          #the resource being created is a Pod
metadata:                          #metadata
  name: demo-pod                   #pod name
  namespace: default               #namespace the pod belongs to
  labels:
    app: myapp                     #label carried by the pod
    env: dev                       #label carried by the pod
spec:
  containers:                      #define containers; this is a list of objects, so several name entries are allowed
  - name: tomcat-pod-java          #container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine  #image used by the container
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:                       #command is a list; each entry below is prefixed with a dash
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@master01 work ]#kubectl apply -f tomcat.yaml
pod/demo-pod created
[root@master01 work ]#kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES demo-pod 2/2 Running 0 17s 10.0.196.132 node01 <none> <none>
Create a Service to expose the pod:
[root@master01 work ]#cat tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
    env: dev
[root@master01 work ]#kubectl apply -f tomcat-service.yaml
service/tomcat created
[root@master01 work ]#kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 23h tomcat NodePort 10.255.199.247 <none> 8080:30080/TCP 12s
Open http://<node01 IP>:30080 in a browser.
Verify that the page is displayed correctly.
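The same check can be scripted from any machine that can reach node01 (a sketch; 10.10.0.14 is node01's IP in this environment):
curl -sI http://10.10.0.14:30080 | head -n1
# Any HTTP response here confirms that the NodePort path (node port -> Service -> Pod) works end to end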
6. Verify that coredns works correctly
[root@master01 work ]#kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh If you don't see a command prompt, try pressing enter. / # ping www.baidu.com PING www.baidu.com (14.215.177.39): 56 data bytes 64 bytes from 14.215.177.39: seq=0 ttl=127 time=5.189 ms 64 bytes from 14.215.177.39: seq=1 ttl=127 time=17.240 ms ^C --- www.baidu.com ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 5.189/11.214/17.240 ms / # nslookup kubernetes.default.svc.cluster.local Server: 10.255.0.2 Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local Name: kubernetes.default.svc.cluster.local Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local / # nslookup tomcat.default.svc.cluster.local Server: 10.255.0.2 Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local Name: tomcat.default.svc.cluster.local Address 1: 10.255.199.247 tomcat.default.svc.cluster.local
#Note:
busybox must be the pinned 1.28 version; with the latest version nslookup cannot resolve the DNS name and IP, and fails as follows:
/ # nslookup kubernetes.default.svc.cluster.local Server: 10.255.0.2 Address: 10.255.0.2:53 *** Can't find kubernetes.default.svc.cluster.local: No answer *** Can't find kubernetes.default.svc.cluster.local: No answer
10.255.0.2 is the clusterIP of our coreDNS service, which shows that coreDNS is configured correctly.
Internal Service names are resolved through coreDNS.
6. Install keepalived + nginx to make the k8s apiserver highly available
Upload epel.repo to the /etc/yum.repos.d directory on master01 so that keepalived and nginx can be installed;
copy epel.repo to master02, master03 and node01 as well.
In short, install nginx and keepalived on every node.
On the three master nodes, install directly from the epel repository:
yum install nginx keepalived -y
Configure nginx on all Master nodes (see the nginx documentation for details; the nginx configuration is the same on all Master nodes). Add the stream block below outside the http block; the stream block is a sibling of the http block.
[root@master01 work ]#cat /etc/nginx/nginx.conf # For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. include /usr/share/nginx/modules/*.conf; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 4096; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. include /etc/nginx/conf.d/*.conf; server { listen 80; listen [::]:80; server_name _; root /usr/share/nginx/html; # Load configuration files for the default server block. include /etc/nginx/default.d/*.conf; error_page 404 /404.html; location = /404.html { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } # Settings for a TLS enabled server. # # server { # listen 443 ssl http2; # listen [::]:443 ssl http2; # server_name _; # root /usr/share/nginx/html; # # ssl_certificate "/etc/pki/nginx/server.crt"; # ssl_certificate_key "/etc/pki/nginx/private/server.key"; # ssl_session_cache shared:SSL:1m; # ssl_session_timeout 10m; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # # # Load configuration files for the default server block. # include /etc/nginx/default.d/*.conf; # # error_page 404 /404.html; # location = /40x.html { # } # # error_page 500 502 503 504 /50x.html; # location = /50x.html { # } # } } stream { upstream apiserver { server 10.10.0.10:6443 max_fails=3 fail_timeout=30s; server 10.10.0.11:6443 max_fails=3 fail_timeout=30s; server 10.10.0.12:6443 max_fails=3 fail_timeout=30s; } server { listen 7443; proxy_connect_timeout 2s; proxy_timeout 900s; proxy_pass apiserver; } }
If nginx fails to start with the following error:
[root@master01 work ]#systemctl status nginx.service ● nginx.service - The nginx HTTP and reverse proxy server Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Thu 2022-10-27 10:55:16 CST; 12s ago Process: 3019 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE) Process: 3017 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS) Oct 27 10:55:16 master01 systemd[1]: Starting The nginx HTTP and reverse proxy server... Oct 27 10:55:16 master01 nginx[3019]: nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:37 Oct 27 10:55:16 master01 nginx[3019]: nginx: configuration file /etc/nginx/nginx.conf test failed Oct 27 10:55:16 master01 systemd[1]: nginx.service: control process exited, code=exited status=1 Oct 27 10:55:16 master01 systemd[1]: Failed to start The nginx HTTP and reverse proxy server. Oct 27 10:55:16 master01 systemd[1]: Unit nginx.service entered failed state. Oct 27 10:55:16 master01 systemd[1]: nginx.service failed.
then the stream module is missing and has to be installed. Not every build ships a prebuilt stream module (if yours does not, you have to compile it yourself), but the nginx 1.20 package currently in the epel repository does:
yum install nginx-mod-stream -y
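Before enabling the service, it is worth a quick sanity check that the module was picked up and the configuration now parses. This is an optional sketch, not part of the original walkthrough; the exact module file name the epel package installs is an assumption.
# Optional sanity check (sketch): the epel package should drop a stream-module
# loader into the modules directory that nginx.conf already includes.
ls /usr/share/nginx/modules/   # expect to see something like mod-stream.conf
nginx -t                       # expect: "syntax is ok" / "test is successful"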
[root@master01 work ]#systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@master02 yum.repos.d ]#systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@master03 yum.repos.d ]#systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
keepalived configuration:
Configure keepalived on all Master nodes. The configuration is not identical across nodes, so keep them straight. Note that public clouds generally do not support keepalived (VRRP).
Three settings differ between the primary and the backups:
router_id
state
priority
Everything else is identical (the primary additionally sets nopreempt).
Primary -- keepalived on master01:
[root@master01 keepalived ]#cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 10.10.0.10
}
vrrp_script chk_nginx {
   script "/etc/keepalived/check_nginx.sh"
   interval 2
   weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33          # change to the actual NIC name
    virtual_router_id 51     # VRRP router ID; must be unique per instance -- do not reuse it if several keepalived clusters share the network
    priority 100             # priority; set 90 on the backup servers
    advert_int 1             # interval between VRRP advertisements (heartbeats), default 1 second
    nopreempt                # the default is preemptive mode; making the primary non-preemptive keeps the VIP from flapping back and forth
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.0.100
    }
    track_script {
        chk_nginx
    }
}
If nginx dies, client requests fail, but keepalived will not fail over on its own. We therefore need a script that checks whether nginx is alive;
once nginx is confirmed dead, the script stops keepalived on this host so that keepalived can fail the VIP over to a healthy node.
[root@master01 keepalived ]#cat check_nginx.sh
#!/bin/bash
while true; do
    nginxpid=$(ps -C nginx --no-header | wc -l) &> /dev/null
    if [ $nginxpid -eq 0 ]; then
        # nginx is down: try to restart it once
        systemctl start nginx
        echo "nginx has been restarted!"
        sleep 5
        nginxpid=$(ps -C nginx --no-header | wc -l) &> /dev/null
        if [ $nginxpid -eq 0 ]; then
            # still down: stop keepalived so the VIP fails over to another master
            systemctl stop keepalived
            echo "this host has failed!"
            exit 1
        fi
    fi
    sleep 5
    echo "running normally!"
done
Make the script executable:
[root@master01 keepalived ]#chmod +x check_nginx.sh
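The backup configurations below also reference /etc/keepalived/check_nginx.sh, so the script presumably has to exist on master02 and master03 as well. A minimal sketch, assuming password-less ssh/scp between the masters is already set up (this step is not shown in the original):
# Sketch: distribute the health-check script to the other masters.
for host in master02 master03; do
    scp /etc/keepalived/check_nginx.sh $host:/etc/keepalived/check_nginx.sh
    ssh $host "chmod +x /etc/keepalived/check_nginx.sh"
done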
Backup -- master02 configuration:
[root@master02 keepalived ]#cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 10.10.0.11
}
vrrp_script chk_nginx {
   script "/etc/keepalived/check_nginx.sh"
   interval 2
   weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.0.100
    }
    track_script {
        chk_nginx
    }
}
Backup -- master03 configuration:
[root@master03 keepalived ]#cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 10.10.0.12
}
vrrp_script chk_nginx {
   script "/etc/keepalived/check_nginx.sh"
   interval 2
   weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.0.100
    }
    track_script {
        chk_nginx
    }
}
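The start command is not shown in the original, but keepalived has to be running on all three masters before the failover test below. A minimal sketch:
# Run on master01, master02 and master03 (assumption: start step not shown in the original).
systemctl enable keepalived --now
systemctl status keepalived --no-pager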
Test keepalived
Stop nginx on master01; the VIP should float over to master02 or master03.
Tested: the VIP does fail over.
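One quick way to see which node currently holds the VIP (a sketch; ens33 is the interface name used in the configs above):
# On each master: the node that owns the VIP will show 10.10.0.100 on ens33.
ip addr show ens33 | grep 10.10.0.100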
Point the worker nodes at the VIP
At the moment every Worker Node component still connects to master01 directly; unless they are switched to the VIP behind the load balancer, the control plane remains a single point of failure.
So the next step is to edit the component config files on every Worker Node (the nodes shown by kubectl get node),
changing the endpoint from 10.10.0.10:6443 to 10.10.0.100:7443 (the VIP). Run on every Worker Node:
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kubelet.kubeconfig
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kube-proxy.yaml
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kube-proxy.kubeconfig
[root@node01 ~]# systemctl restart kubelet kube-proxy
And on each master node:
[root@master02 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master03 keepalived ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master01 work ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master03 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master03 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig
[root@master02 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master02 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig
[root@master01 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master01 ~ ]#sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig
[root@master03 ~ ]#systemctl restart kube-scheduler.service kube-controller-manager.service
[root@master02 ~ ]#systemctl restart kube-scheduler.service kube-controller-manager.service
[root@master01 ~ ]#systemctl restart kube-scheduler.service kube-controller-manager.service
[root@master03 ~ ]#kubectl cluster-info
Kubernetes control plane is running at https://10.10.0.100:7443
CoreDNS is running at https://10.10.0.100:7443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
With that, the highly available cluster is in place.
At this point kubectl get nodes only shows the worker nodes. If you also want the control-plane nodes to appear, repeat the kubelet and kube-proxy deployment steps on each control-plane node,
remembering to adjust the IPs in kubelet.json, kubelet-bootstrap.kubeconfig, kube-proxy.yaml and kube-proxy.kubeconfig, as sketched below.
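The following is only a hedged sketch of that adjustment (the exact edits depend on how those files were generated earlier in this series): after copying the kubelet/kube-proxy files onto a master, replace the worker's own address with that master's address and point the API endpoint at the VIP. <NODE01_IP> and <THIS_MASTER_IP> are placeholders, not values from the original.
# Placeholders: <NODE01_IP> = address the copied files were generated for,
#               <THIS_MASTER_IP> = address of the master being added.
sed -i 's/<NODE01_IP>/<THIS_MASTER_IP>/g' /etc/kubernetes/kubelet.json /etc/kubernetes/kube-proxy.yaml
sed -i 's#10.10.0.10:6443#10.10.0.100:7443#g' /etc/kubernetes/kubelet-bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig
# assumes the kubelet/kube-proxy unit files were copied over as well
systemctl enable kubelet kube-proxy --now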
Label the master nodes:
[root@master01 work ]#kubectl label node master01 node-role.kubernetes.io/controlplane=true
node/master01 labeled
[root@master01 work ]#kubectl label node master02 node-role.kubernetes.io/controlplane=true
node/master02 labeled
[root@master01 work ]#kubectl label node master03 node-role.kubernetes.io/controlplane=true
node/master03 labeled
[root@master01 work ]#kubectl label node master01 node-role.kubernetes.io/etcd=true
node/master01 labeled
[root@master01 work ]#kubectl label node master02 node-role.kubernetes.io/etcd=true
node/master02 labeled
[root@master01 work ]#kubectl label node master03 node-role.kubernetes.io/etcd=true
Label the worker node as well:
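The exact command is not shown in this section; judging by the ROLES column in the output below, it was presumably something along these lines (the node-role.kubernetes.io/worker label key and its value are assumptions):
# Assumed label; the role shown by kubectl comes from the label key suffix.
kubectl label node node01 node-role.kubernetes.io/worker=true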
[root@master01 work ]#kubectl get nodes
NAME       STATUS   ROLES               AGE     VERSION
master01   Ready    controlplane,etcd   67m     v1.20.7
master02   Ready    controlplane,etcd   67m     v1.20.7
master03   Ready    controlplane,etcd   67m     v1.20.7
node01     Ready    worker              7h42m   v1.20.7
7. Taint the master nodes so that ordinary workloads are not scheduled onto them
[root@master01 work ]#kubectl taint node master01 master01=null:NoSchedule
node/master01 tainted
[root@master01 work ]#kubectl taint node master02 master02=null:NoSchedule
node/master02 tainted
[root@master01 work ]#kubectl taint node master03 master03=null:NoSchedule
node/master03 tainted
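A quick way to confirm the taints are in place (an optional check, not part of the original steps):
# Each master should now report its NoSchedule taint.
kubectl describe node master01 | grep -i taints
kubectl describe node master02 | grep -i taints
kubectl describe node master03 | grep -i taints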
And that is the complete procedure for deploying a highly available K8S cluster from binary files. I hope it helps you in your study and work. ღ( ´・ᴗ・` )