Cloud Native | Kubernetes | Kubernetes 1.18 binary installation tutorial, single master (other versions work much the same), Part 1: https://developer.aliyun.com/article/1399626
Generate the kube-proxy.kubeconfig file
vim /opt/kubernetes/ssl/kube-proxy-csr.json
{"CN": "system:kube-proxy", "hosts": [], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" } ] }
Generate the certificate:
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes \
  /opt/kubernetes/ssl/kube-proxy-csr.json | cfssljson -bare kube-proxy
Copy the certificates:
cp kube-proxy-key.pem kube-proxy.pem /opt/kubernetes/ssl/
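Before wiring the new certificate into a kubeconfig, a quick sanity check can save trouble later. This optional step is not part of the original instructions and assumes openssl is available on the master:
# Inspect the subject and validity period of the freshly issued kube-proxy client certificate
openssl x509 -in /opt/kubernetes/ssl/kube-proxy.pem -noout -subject -dates
# The subject should contain CN=system:kube-proxy, matching the CSR above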
Generate the kubeconfig file:
KUBE_APISERVER="https://192.168.217.16:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
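Optionally, confirm the generated kube-proxy.kubeconfig looks sane before distributing it (a quick check, not from the original article):
# The cluster, user and context should all be present, and the current context should print "default"
kubectl config view --kubeconfig=kube-proxy.kubeconfig
kubectl config current-context --kubeconfig=kube-proxy.kubeconfig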
The systemd service unit:
[root@master bin]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Start the service and enable it at boot:
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
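If anything goes wrong, a quick status check (an optional step, not in the original text) usually points at the problem:
systemctl status kube-proxy --no-pager
journalctl -u kube-proxy --no-pager | tail -n 20    # last few log lines from kube-proxy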
(14)
Worker node deployment (building on what was already set up on the master node)
Copy the relevant files to node1 (configuration files, certificates, binaries, and the service unit files; node1 is used for the demonstration, and node2 gets exactly the same treatment):
First create the directories on the node1 server:
mkdir -p /opt/kubernetes/{cfg,bin,ssl,logs}
scp /opt/kubernetes/bin/{kubelet,kube-proxy} k8s-node1:/opt/kubernetes/bin/
scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} k8s-node1:/usr/lib/systemd/system/
scp /opt/kubernetes/cfg/{kubelet.conf,kube-proxy.conf,kube-proxy-config.yml,kubelet-config.yml,bootstrap.kubeconfig,kube-proxy.kubeconfig} k8s-node1:/opt/kubernetes/cfg/
scp /opt/kubernetes/ssl/{ca-key.pem,ca.pem,kubelet.crt,kubelet.key,kube-proxy-key.pem,kube-proxy.pem,server-key.pem,server.pem} k8s-node1:/opt/kubernetes/ssl/
Update the hostname in two files:
In kube-proxy-config.yml and kubelet.conf, change the hostname-override value to the current host's name. For example, when editing on node1, set it to k8s-node1, as in the sketch below.
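As a sketch, the edit on node1 could be scripted as follows. This assumes the files copied from the master still carry the hostname k8s-master and that kube-proxy-config.yml uses the hostnameOverride field while kubelet.conf uses the --hostname-override flag; verify the patterns against your own files before running it:
# Run on k8s-node1; adjust the patterns if your configs differ
sed -i 's/--hostname-override=k8s-master/--hostname-override=k8s-node1/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: k8s-master/hostnameOverride: k8s-node1/' /opt/kubernetes/cfg/kube-proxy-config.yml
grep -n 'k8s-node1' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml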
Nothing else needs to change. Start the services, check their status, and once they are running, approve the nodes on the master server (node1 is shown as the example):
[root@master ssl]# k get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U   51s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
[root@master ssl]# kubectl certificate approve node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U
certificatesigningrequest.certificates.k8s.io/node-csr-6peyUWAChHCuvf5bO75sb0SRB5xVxlnMpH1F1UKbc2U approved
Check the result on the master and verify the nodes registered:
[root@master cfg]# k get no
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   <none>   4h40m   v1.18.3
k8s-node1    NotReady   <none>   34m     v1.18.3
k8s-node2    NotReady   <none>   33m     v1.18.3
The nodes are NotReady at this point because the CNI plugin configuration has not been initialized, as the kubelet log shows:
Aug 27 15:58:56 master kubelet: E0827 15:58:56.093236 14623 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
(15)
Install the network plugin (flannel)
Load the Docker image package quay.io_coreos_flannel_v0.13.0.tar on every node, that is: docker load < quay.io_coreos_flannel_v0.13.0.tar
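A quick way to confirm the image really landed on each node (optional, not in the original steps):
# Run on every node after the docker load
docker images | grep flannel    # should list quay.io/coreos/flannel with tag v0.13.0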
[root@master cfg]# cat ../bin/kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
kubectl apply -f kube-flannel.yml — applying the manifest above installs the network plugin.
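Once applied, the flannel DaemonSet should start one pod per node and the nodes should flip to Ready. The following checks are not from the original text and rely on the app=flannel label from the manifest above:
kubectl -n kube-system get pods -l app=flannel -o wide   # one kube-flannel-ds pod per node, all Running
kubectl get nodes                                        # all nodes should report Ready shortly afterwards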
(16) Installing CoreDNS
There are five manifest files in total, followed by a DNS test at the end:
The ServiceAccount:
cat << EOF > coredns-ns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
EOF
The CoreDNS ConfigMap:
cat << EOF > coredns-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
EOF
The CoreDNS RBAC manifest:
cat << EOF > coredns-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
EOF
The CoreDNS Deployment manifest:
cat << EOF > coredns-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: k8s-app
                operator: In
                values: ["kube-dns"]
            topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.aliyuncs.com/google_containers/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
EOF
The CoreDNS Service manifest. Note that the clusterIP here must match the DNS address configured in kubelet-config.yml.
cat << EOF > coredns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
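The article does not spell out the apply step, so here is a minimal sketch that applies the five files and verifies the result; the grep assumes kubelet-config.yml uses the standard clusterDNS field, so adjust it to your file if needed:
kubectl apply -f coredns-ns.yaml -f coredns-cm.yaml -f coredns-rbac.yaml -f coredns-deploy.yaml -f coredns-svc.yaml
kubectl -n kube-system get pods -l k8s-app=kube-dns        # the coredns pod should reach Running
grep -A1 clusterDNS /opt/kubernetes/cfg/kubelet-config.yml  # should list 10.0.0.2, the clusterIP above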
Testing CoreDNS:
Start a temporary pod to run the test commands:
kubectl run -i --tty --image busybox:1.28.3 -n web dns-test --restart=Never --rm
Both in-cluster and external names resolve without problems; output like the following means CoreDNS is working correctly.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup kubernetes.default
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup baidu.com
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      baidu.com
Address 1: 110.242.68.66
Address 2: 39.156.66.10
/ # cat /etc/resolv.conf
nameserver 10.0.0.2
search web.svc.cluster.local svc.cluster.local cluster.local localdomain default.svc.cluster.local
options ndots:5
(17)
Installing the dashboard
Import kubernetesui_metrics-scraper_v1.0.6.tar and dashboard.tar (the two files from the cloud drive) with docker load on all three nodes, because the dashboard is deployed as a Deployment and there is no telling which node it will be scheduled to.
Create the cluster role binding:
kubectl create clusterrolebinding default --clusterrole=cluster-admin --serviceaccount=kube-system:default --namespace=kube-system
The deployment YAML is shown below (it is quite long; apply it with kubectl apply -f dashboard.yml):
[root@master ~]# cat dashboard.yml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30008
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
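After applying dashboard.yml, a quick check (optional, not in the original text) confirms the pods came up and the NodePort is exposed:
kubectl -n kubernetes-dashboard get pods,svc
# kubernetes-dashboard and dashboard-metrics-scraper should be Running,
# and the kubernetes-dashboard service should show 443:30008/TCP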
Get the login token:
kubectl describe secrets $(kubectl describe sa default -n kube-system | grep Mountable | awk 'NR == 2 {next} {print $3}') -n kube-system
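If that pipeline feels opaque, an equivalent alternative for 1.18-era clusters (where the default service account still gets an auto-created default-token-xxxxx secret) is the following sketch:
kubectl -n kube-system get secret | grep default-token        # find the secret name
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^default-token/{print $1}') | awk '/^token:/{print $2}'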
Login address: https://nodeip:30008
What about logging in with a kubeconfig file? The file to use is bootstrap.kubeconfig: copy it to your desktop and select it on the login page. Its content should look like this:
[root@master ~]# cat /opt/kubernetes/cfg/bootstrap.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVRDYzSGpYeFRiM3EzdGZUeEM4QjZwalUzYUVRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl5TURneU56QXhNVFV3TUZvWERUSTNNRGd5TmpBeE1UVXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeitLb3pMQlVEYXNQTmxKc2lMSXoKZHRUR0M4a2tvNnVJZjVUSUNkY3pPemJyaks1TVJ4UzYrd2ZwVzNHOGFtN2J1QlcvRE1hcW15dGNlbEwxd0VpMwoxUGZTNE9oWXRlczUwWU4wMkZrZ2JCYmVBSVN3NnJ5cnRadGFxdWhZeHUwQjlOLzVuZGhETUx2ZFhFV1NYYWZrCmtWQXNnTFZ0dmNPMCtKVUt3OGE5eFJSRTkyWThZYXZ0azN4M3VBU2hTejUrS3FQZ1V6Q2x2a2N4UUpXVFBiTkUKOEpERXlaY0I0ay8za0NuOGtsREc3Um9Wck1hcHJ6Z3lNQkNVOEgzS1hsM0FJdkFYNGVQQTFOVGJzbUhWaDdCcgpmeWdLT0x5RHA3OUswbkp1c0NtY1JmRGJ4TWJMVCtNeU01Y0NFcm1LMkVnSTRuYXIyMndQQU5kemRTb1dIbDljCkp3SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVZG1mUXNkMy85czVoKzl0V1dDMHhBL1htSENZd0h3WURWUjBqQkJnd0ZvQVVkbWZRc2QzLwo5czVoKzl0V1dDMHhBL1htSENZd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJR0lPa0xDMlpxL3Y4dStNelVyCkpVRUY4SFV2QzdldkdXV3ZMMmlqd3dxTnJMWE1XYWd6UWJxcDM5UTVlRnRxdngzSWEveFZFZ0ZTTnBSRjNMNGYKN0VYUlYxRXpJVlUxeThaSnZzVXl1aFQyVENCQ3UwMkdvSWc1VGlJNzMwak5xQllMRGh6SVJKLzBadEtTQlcwaApIUEo3eGRybnZSdnY3WG9uT1dCbldBTUhJY0N0TzNLYlovbXY1VHBoTnkzWHJMSTdRaFVvWVlQSXN5N1BvUjhVCm9WVm80RkRRUDFPYXhGSzljSE1DNWNuLzFNSnRZUGpVRzg5RldEc01HbWVVZVZ1cnhsVStkVlFUMUZzOWJxanoKaDJWaHNtanFCK3RCbjVGdENOaEY5STVxYlJuMWJmTGRpQzl2QzJ3U00xSDZQVWRxeHB6ZlRaVHhSbEptdmtjZAo1ZTQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.217.16:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubelet-bootstrap name: default current-context: default kind: Config preferences: {} users: - name: kubelet-bootstrap user: token: c47ffb939f5ca36231d9e3121a252940
Two things deserve special attention: the token, which is the one defined in token.csv, and the kubelet-bootstrap user. Logging in with this config file may fail with an insufficient-permissions error;
in that case, delete the user's existing role binding and rebind it to cluster-admin:
kubectl delete clusterrolebindings kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=cluster-admin --user=kubelet-bootstrap
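To confirm the rebinding took effect before retrying the dashboard login, an impersonation check such as the following can help (optional):
kubectl auth can-i '*' '*' --as=kubelet-bootstrap    # should print "yes" once the user is bound to cluster-admin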