12. Follow-up steps
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf    #on both master hosts, uncomment the last few lines of the keepalived config
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    mcast_src_ip 192.168.100.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.100.204
    }
    track_script {        #uncomment this block
        chk_apiserver
    }
}
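The config above tracks the API server through /etc/keepalived/check_apiserver.sh, which is not shown in this section. A minimal sketch of such a check script (an assumption about its contents, not the exact file from this setup): if the kube-apiserver process stays missing across a few retries, it stops keepalived so the VIP 192.168.100.204 fails over to the other master.

#!/bin/bash
# /etc/keepalived/check_apiserver.sh -- hypothetical sketch
# Retry a few times so a brief blip does not trigger a failover.
err=0
for i in $(seq 1 3); do
    if pgrep kube-apiserver > /dev/null; then
        err=0
        break
    fi
    err=$((err + 1))
    sleep 1
done

if [ "$err" -ne 0 ]; then
    # apiserver is down: stop keepalived so the VIP moves to the backup master
    systemctl stop keepalived
    exit 1
fi
exit 0

The script must be executable (chmod +x /etc/keepalived/check_apiserver.sh) on both masters.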
II. Deploying Metrics and the Dashboard
About the Dashboard:
The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and to execute commands inside containers. It is a web UI.
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot those applications, and manage cluster resources. Dashboard gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, initiate a rolling update, restart a Pod, or deploy a new application with a wizard.
About Metrics:
In recent versions of Kubernetes, system resource metrics are collected by Metrics Server, which gathers memory, disk, CPU, and network usage for nodes and Pods.
Early versions of Kubernetes relied on Heapster for performance data collection and monitoring. Starting with version 1.8, performance data is exposed through the standardized Metrics API, and from version 1.10 Heapster was replaced by Metrics Server. In the current Kubernetes monitoring architecture, Metrics Server provides the core metrics: CPU and memory usage for Nodes and Pods.
Other, custom metrics (Custom Metrics) are monitored by components such as Prometheus. A quick example of querying the Metrics API directly is shown below.
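The Metrics API endpoints are what kubectl top reads from. Once Metrics Server is installed (next step), you can query them directly; a short sanity check, assuming a working kubeconfig on the master:

# Raw Metrics API queries -- the same data kubectl top renders
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"

Both return JSON (NodeMetricsList / PodMetricsList) with per-object CPU and memory usage.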
Remember to upload metrics-server and metrics-scraper_v1.0.1 to all four nodes.
(1) Install Metrics first
#Upload metrics-server and metrics-scraper_v1.0.1 to all four nodes
[root@k8s-master01 ~]# ll        #uploaded on master01
total 1001808
......
-rw-r--r-- 1 root root  40124928 Jun 29  2020 metrics-scraper_v1.0.1.tar
-rw-r--r-- 1 root root  41199616 Jun 26  2020 metrics-server.tar.gz
......
[root@k8s-master01 ~]# scp metrics-* root@192.168.100.203:/root/        #copy to the other three servers
[root@k8s-master01 ~]# scp metrics-* root@192.168.100.205:/root/
[root@k8s-master01 ~]# scp metrics-* root@192.168.100.206:/root/

#Load the images on all four nodes
docker load -i metrics-scraper_v1.0.1.tar
docker load -i metrics-server.tar.gz

#Upload the components.yaml file on master01
[root@k8s-master01 ~]# ll | grep com
-rw-r--r-- 1 root root 3509 Jun 26  2020 components.yaml
[root@k8s-master01 ~]# kubectl apply -f components.yaml
[root@k8s-master01 ~]# kubectl -n kube-system get pods -l k8s-app=metrics-server        #both should be Running
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-7b97647899-bt7wx   1/1     Running   0          77s
metrics-server-7b97647899-j9m4b   1/1     Running   0          77s

#Check resource usage from master01
[root@k8s-master01 ~]# kubectl top nodes        #resource load of every node in the cluster
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   155m         7%     1053Mi          27%
k8s-master02   116m         5%     857Mi           22%
k8s-node01     15m          1%     362Mi           19%
k8s-node02     33m          3%     337Mi           17%
[root@k8s-master01 ~]# kubectl top pods --all-namespaces        #load of every pod
NAMESPACE              NAME                                       CPU(cores)   MEMORY(bytes)
kube-system            coredns-66bff467f8-l75z6                   4m           13Mi
kube-system            coredns-66bff467f8-x4crp                   4m           13Mi
kube-system            etcd-k8s-master01                          39m          84Mi
kube-system            etcd-k8s-master02                          35m          83Mi
kube-system            kube-apiserver-k8s-master01                51m          363Mi
kube-system            kube-apiserver-k8s-master02                39m          355Mi
kube-system            kube-controller-manager-k8s-master01       21m          43Mi
kube-system            kube-controller-manager-k8s-master02       2m           20Mi
kube-system            kube-flannel-ds-amd64-6rj6k                1m           11Mi
kube-system            kube-flannel-ds-amd64-dnv8s                2m           12Mi
kube-system            kube-flannel-ds-amd64-q8cgj                1m           11Mi
kube-system            kube-flannel-ds-amd64-wl4d6                6m           15Mi
kube-system            kube-proxy-66lsh                           1m           16Mi
kube-system            kube-proxy-lfcfw                           1m           16Mi
kube-system            kube-proxy-q7q45                           1m           17Mi
kube-system            kube-proxy-zwwkc                           1m           17Mi
kube-system            kube-scheduler-k8s-master01                2m           16Mi
kube-system            kube-scheduler-k8s-master02                5m           20Mi
kube-system            metrics-server-7b97647899-bt7wx            1m           11Mi
kube-system            metrics-server-7b97647899-j9m4b            1m           11Mi
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-h5m9x      1m           16Mi
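If kubectl top reports "metrics not available", the Metrics API is usually not registered yet. A quick check (the APIService name below is what the upstream components.yaml creates; verify it against your own file):

# The AVAILABLE column should show True once metrics-server is serving
kubectl get apiservice v1beta1.metrics.k8s.io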
(2) Install the Dashboard
#All of the following is done on master01 only
#Network download (this link may go stale; the dashboard.yaml used below has the same content as recommended.yaml, only the file name differs)
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
#Modify the file after downloading
[root@k8s-master01 ~]# ll
total 922376
-rw-------. 1 root root      1264 Jan 12  2021 anaconda-ks.cfg
-rw-r--r--  1 root root  43932160 Aug  4 13:21 coredns.tar.gz
-rw-r--r--  1 root root 290010624 Aug  4 13:22 etcd.tar.gz
-rw-r--r--  1 root root  55390720 Aug  4 13:22 flannel.tar.gz
-rw-r--r--  1 root root     14366 Aug  3 16:07 flannel.yml
-rw-r--r--  1 root root 174554624 Aug  4 13:22 kube-apiserver.tar.gz
-rw-r--r--  1 root root 163945984 Aug  4 13:22 kube-controller-manager.tar.gz
-rw-r--r--  1 root root 119103488 Aug  4 13:23 kube-proxy.tar.gz
-rw-r--r--  1 root root  96841216 Aug  4 13:23 kube-scheduler.tar.gz
-rw-r--r--  1 root root    692736 Aug  4 13:23 pause.tar.gz
-rw-r--r--  1 root root      7591 Aug  4 15:23 dashboard.yaml        #upload this file
[root@k8s-master01 ~]# vim dashboard.yaml
......
 37   name: kubernetes-dashboard
 38   namespace: kubernetes-dashboard
 39 spec:
 40   type: NodePort        #change the Service type
 41   ports:
 42     - port: 443
 43       targetPort: 8443
 44       nodePort: 30001   #set the node port to 30001
 45   selector:
 46     k8s-app: kubernetes-dashboard
 47
 48 ---
#save and quit
[root@k8s-master01 ~]# kubectl create -f dashboard.yaml
[root@k8s-master01 ~]# kubectl get pod -n kubernetes-dashboard        #both should be Running
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-mbfws   1/1     Running   0          46m
kubernetes-dashboard-7b544877d5-h5m9x        1/1     Running   0          46m
[root@k8s-master01 ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-6b4884c9d5-mbfws   1/1     Running   0          46m
pod/kubernetes-dashboard-7b544877d5-h5m9x        1/1     Running   0          46m

NAME                                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.1.45.12    <none>        8000/TCP        46m
service/kubernetes-dashboard        NodePort    10.1.58.130   <none>        443:30001/TCP   46m
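Before opening a browser you can confirm the NodePort answers over TLS from the master itself. The -k flag skips certificate verification because the Dashboard serves a self-signed certificate (192.168.100.204 is the keepalived VIP configured earlier):

# HTML output here means the Dashboard service is reachable on the VIP
curl -k https://192.168.100.204:30001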
(3) Create the ServiceAccount and ClusterRoleBinding resource YAML
#All of the following is done on master01 only. The Dashboard ships with minimal RBAC permissions by default; grant it cluster-admin so cluster resources can be managed from the Dashboard
[root@k8s-master01 ~]# vim adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
#save and quit
[root@k8s-master01 ~]# kubectl create -f adminuser.yaml        #this step completes the setup
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
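To verify the binding took effect, you can ask the API server on behalf of the service account; "yes" means admin-user now holds cluster-admin:

# Check whether the service account may perform any verb on any resource
kubectl auth can-i '*' '*' --as=system:serviceaccount:kubernetes-dashboard:admin-user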
III. Testing
(1) Access the Dashboard UI from a browser
- Open https://192.168.100.204:30001 in a browser. This is the virtual IP; port 30001 is the nodePort specified in dashboard.yaml.
#Get the login token on master01
[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-jgs6t
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 8e0b1c00-814e-4984-900b-be3938e33642

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjJzTEpWNDRoVkVrVHA0RExzUzFrdzk4ZmdSeUVqX0ZLRWNPWm10aUFKWWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpnczZ0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4ZTBiMWMwMC04MTRlLTQ5ODQtOTAwYi1iZTM5MzhlMzM2NDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.oelr-6lROucibVIcC3FNwI5ubm4FcrBT3BFRT2wDCSEmjqvOFvx_5KUrJjcsW6mHy7BGPHsHmeXZDgavOQKc9hB6cQOlI0BUFCP1FciCQw3rBXrOY2CYfapW8nztMaIzsCyZl0C0xO35jI0REyp9Gx7laoPb6-4-bFpWcQIR5WrQAoJ9sPuFbcYLWMLsdYVUdct8PKY4MzrYN-pEqteb-QNm96XfrUV98idyQ1bx2rvR8KyEfSvF8Glg2i627bD-GKkMsZuGRvlWs2cIw5CA0l1mkadiZgASpFK4CQaiPmxXK2W3fYBTmavaBWrmXhFV40cFgsPJccoWiH9V9Y__-Q
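If you only want the token string itself (for pasting into the login screen), a jsonpath one-liner avoids the describe output entirely:

# Print just the decoded token
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d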
Login successful!
(2) Follow-up steps
#Switch kube-proxy to ipvs mode; the ipvs configuration was commented out when the cluster was initialized, so it has to be changed by hand
[root@k8s-master01 ~]# kubectl edit cm kube-proxy -n kube-system
......
 41     kind: KubeProxyConfiguration
 42     metricsBindAddress: ""
 43     mode: "ipvs"        #change to ipvs
 44     nodePortAddresses: null
 45     oomScoreAdj:
......
#save and quit

#Roll the kube-proxy pods so they pick up the new mode
[root@k8s-master01 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

#Verify the kube-proxy mode
[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
ipvs        #"ipvs" here means the switch succeeded
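In ipvs mode, kube-proxy programs Service rules into the kernel's IP Virtual Server tables instead of long iptables chains. Assuming the ipvsadm package is installed, you can inspect those rules directly:

# List the ipvs virtual servers and their backends (numeric addresses, no DNS lookups)
ipvsadm -Ln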
IV. Steps to re-initialize the Kubernetes cluster
#First remove all hosts from the k8s cluster
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   19m   v1.18.4
[root@k8s-master01 ~]# kubectl delete node k8s-master01
node "k8s-master01" deleted
[root@k8s-master01 ~]# kubectl get nodes
No resources found in default namespace.

#On every worker node (i.e. a node that had joined the cluster and has now been removed from it), delete the working directory and reset kubeadm
[root@k8s-master01 ~]# rm -rf /etc/kubernetes/*
[root@k8s-master01 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y

#On the master01 node, delete the working directories and reset kubeadm, then re-create the cluster
[root@k8s-master01 ~]# rm -rf /etc/kubernetes/*
[root@k8s-master01 ~]# rm -rf ~/.kube/*
[root@k8s-master01 ~]# rm -rf /var/lib/etcd/*
[root@k8s-master01 ~]# kubeadm reset -f
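kubeadm reset itself warns that it does not clean up CNI configuration, iptables rules, or ipvs tables. Before re-running kubeadm init, an extra cleanup along these lines (ipvsadm assumed installed, as above) avoids stale networking state:

# Remove leftover CNI config and flush packet-filtering / ipvs state
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear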