1.6 Deploying the flannel network component (commands)
```shell
cd /opt
# Upload cni-plugins-linux-amd64-v0.8.6.tgz and flannel.tar to /opt
docker load -i flannel.tar
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

# On master01: upload kube-flannel.yml to /opt/kubernetes and deploy the CNI network
# (lines 39-44 of kube-flannel.yml may need adjusting)
cd /opt/kubernetes
kubectl apply -f kube-flannel.yml
# It may take a few seconds before the node shows Ready
kubectl get nodes
# Single-master k8s deployment ends here
```
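The "lines 39-44 may need adjusting" note above usually refers to the net-conf.json section of kube-flannel.yml, where the pod network CIDR is set. A minimal sketch of making that edit non-interactively with sed, assuming the file carries the upstream default `"Network": "10.244.0.0/16"`; the target CIDR below is a hypothetical value, and the exact line numbers depend on your kube-flannel.yml version (demonstrated here on a small sample of the net-conf.json section):

```shell
# Create a sample of the net-conf.json section for demonstration; in practice
# you would run the sed command against /opt/kubernetes/kube-flannel.yml.
cat > /tmp/kube-flannel-sample.yml <<'EOF'
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
EOF
NEW_CIDR=172.17.0.0/16   # hypothetical value; use your cluster's --cluster-cidr
sed -i "s!\"Network\": \"10.244.0.0/16\"!\"Network\": \"${NEW_CIDR}\"!" /tmp/kube-flannel-sample.yml
grep '"Network"' /tmp/kube-flannel-sample.yml   # confirm the change before kubectl apply
```

The pod CIDR here must match the `--cluster-cidr` used by kube-controller-manager and kube-proxy, otherwise pods on different nodes cannot reach each other.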
1.6 Deploying the flannel network component (screenshots)
1.7 Deploying the master02 node (commands)
```shell
# Multi-node deployment
# On master01: copy the files over to master02
scp -r /opt/etcd/ root@192.168.13.40:/opt/
scp -r /opt/kubernetes/ root@192.168.13.40:/opt/
cd /usr/lib/systemd/system
scp kube-apiserver.service kube-controller-manager.service kube-scheduler.service root@192.168.13.40:`pwd`
cd ~
scp -r .kube/ 192.168.13.40:/root

# On master02: set up the configuration
hostnamectl set-hostname master02
su
vim /opt/kubernetes/cfg/kube-apiserver
# Line 5: change the address to master02's address
# Line 7: change the address to master02's address

# Create command symlinks
ln -s /opt/kubernetes/bin/* /usr/local/bin

# Start the kube-apiserver, kube-controller-manager and kube-scheduler services
systemctl enable --now kube-apiserver.service
systemctl enable --now kube-controller-manager.service
systemctl enable --now kube-scheduler.service

# Check that the services started
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
```
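The line-5/line-7 edit above can also be sketched with sed. An assumption here: in a typical binary-install kube-apiserver config those lines are the `--bind-address` and `--advertise-address` options, both still carrying master01's IP; the flag names and line numbers in your generated file may differ, so check with vim first. Note that a blanket replace of 192.168.13.10 would be wrong, because `--etcd-servers` must keep pointing at the etcd cluster:

```shell
# Sample of the relevant config lines (hypothetical layout for demonstration;
# in practice run the sed against /opt/kubernetes/cfg/kube-apiserver).
cat > /tmp/kube-apiserver-sample <<'EOF'
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.13.10:2379,https://192.168.13.20:2379,https://192.168.13.30:2379 \
--bind-address=192.168.13.10 \
--secure-port=6443 \
--advertise-address=192.168.13.10 \
EOF
# Rewrite only the two address options, leaving --etcd-servers untouched
sed -i -e 's!--bind-address=192.168.13.10!--bind-address=192.168.13.40!' \
       -e 's!--advertise-address=192.168.13.10!--advertise-address=192.168.13.40!' \
       /tmp/kube-apiserver-sample
grep -E 'bind-address|advertise-address' /tmp/kube-apiserver-sample
```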
1.7 Deploying the master02 node (screenshots)
1.8 Deploying the nginx load-balancer nodes (commands)
```shell
# On all nginx load-balancer nodes
cat > /etc/yum.repos.d/nginx.repo << EOF
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/\$basearch/
gpgcheck=0
EOF
yum install -y nginx

vim /etc/nginx/nginx.conf
# Add the following (at the same level as the http block)
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiservers {
        server 192.168.13.10:6443;
        server 192.168.13.40:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiservers;
    }
}
# End of added content
nginx -t
systemctl enable --now nginx

yum install -y keepalived
vim /etc/keepalived/keepalived.conf
# Line 10: smtp_server 127.0.0.1
# Line 12: router_id NGINX_01
# Lines 13-16: delete
# Line 14: insert the periodic health-check script block
#   vrrp_script check_nginx {
#       script "/etc/nginx/check_nginx.sh"
#   }
# Line 21: interface ens33
# Line 30: 192.168.13.100/32
# Lines 31-32: delete the IP addresses
# Line 33: keep the two closing braces, delete everything below
# Second-to-last line: above the final closing brace, insert
#   track_script {
#       check_nginx
#   }

# Write the nginx health-check script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
# egrep -cv "grep|$$" filters out lines containing grep and the current shell's PID ($$)
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh
scp /etc/nginx/check_nginx.sh root@192.168.13.60:/etc/nginx/check_nginx.sh
systemctl enable --now keepalived.service
systemctl status keepalived.service
ip addr

# On the node nodes
vim /opt/kubernetes/cfg/bootstrap.kubeconfig
vim /opt/kubernetes/cfg/kubelet.kubeconfig
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
# In each file, line 5: server: https://192.168.13.100:6443
# Or use sed to make the change non-interactively:
#sed -i 's! server: https://192.168.13.10:6443! server: https://192.168.13.100:6443!' /opt/kubernetes/cfg/bootstrap.kubeconfig
#sed -i 's! server: https://192.168.13.10:6443! server: https://192.168.13.100:6443!' /opt/kubernetes/cfg/kubelet.kubeconfig
#sed -i 's! server: https://192.168.13.10:6443! server: https://192.168.13.100:6443!' /opt/kubernetes/cfg/kube-proxy.kubeconfig

# Restart the kubelet and kube-proxy services
systemctl restart kubelet.service
systemctl restart kube-proxy.service

# On master02, run the following to verify the deployment
kubectl get nodes
kubectl run nginx --image=nginx
# This takes ten seconds or so (image pull; READY shows 1/1 once pulled)
kubectl get pods -o wide
# Multi-node deployment ends here
```
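The line-number edits to keepalived.conf above can be hard to follow against the stock file, so here is a sketch of roughly what the finished file should look like, written out as a heredoc. Assumptions: the stock CentOS 7 keepalived template (the notification_email defaults, `virtual_router_id 51`, `priority 100`, and the PASS authentication block are that template's values, not settings from this guide); on the backup node the state should be BACKUP with a lower priority, and `router_id` should differ (e.g. NGINX_02):

```shell
# Sketch of the resulting master-node config; written to /tmp here, but in
# practice this is /etc/keepalived/keepalived.conf.
cat > /tmp/keepalived-sketch.conf <<'EOF'
global_defs {
   notification_email {
     acassen@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_01
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.13.100/32
    }
    track_script {
        check_nginx
    }
}
EOF
grep -n '192.168.13.100/32' /tmp/keepalived-sketch.conf
```

After `systemctl enable --now keepalived.service`, `ip addr` on the MASTER node should show 192.168.13.100 attached to ens33; stopping nginx there should move the VIP to the backup node.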
1.8 Deploying the nginx load-balancer nodes (screenshots)
II. Conclusion
- Follow the script and you will be fine; the one thing to watch out for is that there are a lot of steps, so be careful.
- Do it several times: if something breaks, troubleshoot it, and if you can't work it out, start over. Only by repeating the process a few times will you understand what each step is for.
- Error 1: single-node master01 cannot find the node nodes, reporting "No resources found" (fix: the node nodes had no docker engine installed; installing it resolved it). That was the cause in my case; a mistake in a configuration file can also produce this error.
- Error 2: etcd fails to start with no error message (cause: an earlier, broken version of the etcd script had already been run and left stale data behind; fix: remove the cached data with rm -rf /var/lib/etcd/*). It could also be that certificate generation failed or the certificates don't match.
- Tip: use kubectl describe pod to get the details of what is wrong with a failing pod.