1、Background
- kubeasz is a Kubernetes cluster deployment tool hosted on GitHub
- When deploying SUSE 12 with kubeasz, some preparation has to be done by hand in advance
- The chrony role currently supports only Debian, CentOS, and Red Hat, so on SUSE you must set up the chrony time-sync service yourself
- The iptables path in Docker's service unit file needs to be corrected
- The br_netfilter and ip_conntrack kernel modules need to be loaded
- ~/.bashrc has to be created manually
2、Environment preparation
2.1、Environment overview
| IP | HOSTNAME | SERVICE |
| --- | --- | --- |
| 192.168.10.175 | k8s-01 | master&node |
| 192.168.10.176 | k8s-02 | master&node |
| 192.168.10.177 | k8s-03 | master&node |
- Official recommendation: a master node needs at least 2 CPUs and 2 GB of RAM
- The Linux kernel must be 4.x or newer
- If you build the VMs yourself, give each disk 100 GB up front; VMware only consumes the space actually written, and running out of disk halfway through your experiments later is awkward
```bash
# Distribution
linux-oz6w:~ # cat /etc/os-release
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp3"
# Kernel
linux-oz6w:~ # uname -r
4.4.73-5-default
```
2.2、Configure a static network
```bash
# Back up first; the backup is your way out if something goes wrong
linux-oz6w:~ # cp /etc/sysconfig/network/ifcfg-eth0{,.bak}
```
```bash
# Configure the IP
linux-oz6w:~ # cat > /etc/sysconfig/network/ifcfg-eth0 <<EOF
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.10.175/24'
MTU=''
NAME=''
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
DHCLIENT_SET_DEFAULT_ROUTE='yes'
EOF
```
```bash
# Configure the default gateway
linux-oz6w:~ # cat > /etc/sysconfig/network/ifroute-eth0 <<EOF
default 192.168.10.2 - eth0
EOF
```
```bash
# Configure DNS
linux-oz6w:~ # cat >> /etc/resolv.conf <<EOF
nameserver 192.168.10.2
EOF
```
```bash
# Restart networking to apply the changes
linux-oz6w:~ # systemctl restart network && ping www.baidu.com -w 3
PING www.a.shifen.com (180.101.49.12) 56(84) bytes of data.
64 bytes from 180.101.49.12: icmp_seq=1 ttl=128 time=13.0 ms
64 bytes from 180.101.49.12: icmp_seq=2 ttl=128 time=11.8 ms
64 bytes from 180.101.49.12: icmp_seq=3 ttl=128 time=10.5 ms

--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 10.533/11.819/13.077/1.042 ms
```
```bash
# Add hosts entries
linux-oz6w:~ # cat >> /etc/hosts <<EOF
192.168.10.175 k8s-01
192.168.10.176 k8s-02
192.168.10.177 k8s-03
EOF
```
```bash
# Change the hostname; disconnect and reconnect the terminal and the new name shows up
linux-oz6w:~ # hostnamectl set-hostname --static k8s-01
```
- Configure the remaining two machines the same way
- Everything from here on only needs to be done on k8s-01
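Only two things differ from host to host in the steps above: the IP address and the hostname. A dry-run sketch that generates the changed pieces for the other two machines (the hostname-to-IP mapping comes from the table in 2.1; the scp/ssh commands are only echoed here so nothing touches the remote hosts, and the generated ifcfg files are trimmed to the essential fields):

```shell
#!/usr/bin/env bash
# host:ip pairs for the remaining machines (values from section 2.1)
for spec in k8s-02:192.168.10.176 k8s-03:192.168.10.177; do
  h=${spec%%:*}
  ip=${spec##*:}
  # Generate a minimal static-IP config for this host
  cat > "ifcfg-eth0.$h" <<EOF
BOOTPROTO='static'
IPADDR='${ip}/24'
STARTMODE='auto'
DHCLIENT_SET_DEFAULT_ROUTE='yes'
EOF
  # Dry run: print the commands you would run to apply it
  echo "scp ifcfg-eth0.$h root@$h:/etc/sysconfig/network/ifcfg-eth0"
  echo "ssh root@$h hostnamectl set-hostname --static $h"
done
```

The gateway, DNS, and /etc/hosts files are identical on all three machines, so those can be copied as-is.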
2.3、Configure passwordless SSH
```bash
#!/usr/bin/env bash
# The root password on my machines is 123.com; change it to match yours
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for host in k8s-01 k8s-02 k8s-03
do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
expect {
  \"*yes/no*\" {send \"yes\r\"; exp_continue}
  \"*Password*\" {send \"123.com\r\"; exp_continue}
  \"*Password*\" {send \"123.com\r\";}
}"
done
```
- Verify passwordless login and hosts resolution by running ssh root@k8s-01
2.4、Load kernel modules and create files in batch
```bash
#!/usr/bin/env bash
for host in k8s-01 k8s-02 k8s-03
do
  ssh root@${host} "modprobe br_netfilter"
  ssh root@${host} "modprobe ip_conntrack"
  ssh root@${host} "touch ~/.bashrc"
done
```
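Note that modprobe does not survive a reboot. On systemd-based distributions (SLES 12 included), modules listed in /etc/modules-load.d/*.conf are loaded at boot by systemd-modules-load. A sketch that writes such a file once and prints the copy commands; the file name k8s-modules.conf is my own choice rather than anything kubeasz requires, and the scp lines are echoed as a dry run:

```shell
#!/usr/bin/env bash
# Write the module list once; systemd-modules-load reads it at boot
printf 'br_netfilter\nip_conntrack\n' > k8s-modules.conf

# Dry run: print the command to push it to every node
for host in k8s-01 k8s-02 k8s-03
do
  echo "scp k8s-modules.conf root@${host}:/etc/modules-load.d/k8s-modules.conf"
done
```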
2.5、Install ansible
- ansible is installed via pip here
2.5.1、Install pip
```bash
k8s-01:~ # wget https://pypi.python.org/packages/source/s/setuptools/setuptools-11.3.tar.gz
k8s-01:~ # tar xf setuptools-11.3.tar.gz
k8s-01:~ # python setuptools-11.3/setup.py install
k8s-01:~ # easy_install https://mirrors.aliyun.com/pypi/packages/0b/f5/be8e741434a4bf4ce5dbc235aa28ed0666178ea8986ddc10d035023744e6/pip-20.2.4.tar.gz#sha256=85c99a857ea0fb0aedf23833d9be5c40cf253fe24443f0829c7b472e23c364a1
```
2.5.2、Install ansible
```bash
k8s-01:~ # pip install ansible -i https://mirrors.aliyun.com/pypi/simple/
```
2.6、Download kubeasz
- The kubeasz repo on GitHub has since been updated and no longer matches my earlier posts, so I kept a local copy of the version used here and uploaded it to Baidu Cloud. I have not properly explored the new release yet; given the version gap, it will probably still hit plenty of issues on SUSE
- Link: https://pan.baidu.com/s/1rFscCCLHhD4O3os_9yKqEQ
- Extraction code: o1bs
After downloading from Baidu Cloud, extract it locally and upload the entire contents of the kubeasz directory to /etc/ansible on the server, so it looks like this:

```bash
k8s-01:~ # mkdir /etc/ansible
k8s-01:~ # cd /etc/ansible/
k8s-01:/etc/ansible # ll
total 88
-rw-r--r-- 1 root root   414 Feb 12 16:13 .gitignore
-rw-r--r-- 1 root root   395 Feb 12 16:13 01.prepare.yml
-rw-r--r-- 1 root root    58 Feb 12 16:13 02.etcd.yml
-rw-r--r-- 1 root root   149 Feb 12 16:13 03.containerd.yml
-rw-r--r-- 1 root root   137 Feb 12 16:13 03.docker.yml
-rw-r--r-- 1 root root   470 Feb 12 16:13 04.kube-master.yml
-rw-r--r-- 1 root root   140 Feb 12 16:13 05.kube-node.yml
-rw-r--r-- 1 root root   408 Feb 12 16:13 06.network.yml
-rw-r--r-- 1 root root    77 Feb 12 16:13 07.cluster-addon.yml
-rw-r--r-- 1 root root  3686 Feb 12 16:13 11.harbor.yml
-rw-r--r-- 1 root root   431 Feb 12 16:13 22.upgrade.yml
-rw-r--r-- 1 root root  2119 Feb 12 16:13 23.backup.yml
-rw-r--r-- 1 root root   113 Feb 12 16:13 24.restore.yml
-rw-r--r-- 1 root root  1752 Feb 12 16:13 90.setup.yml
-rw-r--r-- 1 root root  1127 Feb 12 16:13 91.start.yml
-rw-r--r-- 1 root root  1120 Feb 12 16:13 92.stop.yml
-rw-r--r-- 1 root root   337 Feb 12 16:13 99.clean.yml
-rw-r--r-- 1 root root  5654 Feb 12 16:13 README.md
-rw-r--r-- 1 root root 10283 Feb 12 16:13 ansible.cfg
drwxr-xr-x 1 root root   534 Feb 12 16:13 bin
drwxr-xr-x 1 root root    18 Feb 12 16:13 dockerfiles
drwxr-xr-x 1 root root    76 Feb 12 16:12 docs
drwxr-xr-x 1 root root   432 Feb 12 16:13 down
drwxr-xr-x 1 root root    60 Feb 12 16:13 example
drwxr-xr-x 1 root root   232 Feb 12 16:12 manifests
drwxr-xr-x 1 root root   424 Feb 12 16:13 pics
drwxr-xr-x 1 root root   338 Feb 12 16:12 roles
drwxr-xr-x 1 root root   386 Feb 12 16:13 tools
```
2.7、Configure chrony time synchronization
```bash
k8s-01:~ # zypper in -y chrony
# More backups, fewer disasters
k8s-01:~ # cp /etc/chrony.conf{,.bak}
```
```bash
# Configuration on k8s-01
k8s-01:~ # vim /etc/chrony.conf
server ntp.aliyun.com iburst
server ntp1-7.aliyun.com iburst
makestep 1.0 3
rtcsync
allow 192.168.10.0/16
local stratum 10
k8s-01:~ # systemctl enable chronyd.service --now

# Configuration on k8s-02 and k8s-03
server 192.168.10.175 iburst
k8s-02:~ # systemctl enable chronyd.service --now
```
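The client configuration for k8s-02 and k8s-03 can be generated in one pass from k8s-01. A sketch, assuming the clients only need k8s-01 as their NTP source; the makestep and rtcsync lines mirror the server config and are my addition, and the scp commands are echoed as a dry run:

```shell
#!/usr/bin/env bash
for h in k8s-02 k8s-03; do
  # Clients sync from k8s-01 (192.168.10.175)
  cat > "chrony.conf.$h" <<EOF
server 192.168.10.175 iburst
makestep 1.0 3
rtcsync
EOF
  # Dry run: print the copy command for this client
  echo "scp chrony.conf.$h root@$h:/etc/chrony.conf"
done
```

After copying the files and enabling chronyd on each client, chronyc sources on a client should show k8s-01 as the selected source.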
2.8、Modify the docker.service.j2 file
```bash
k8s-01:~ # cd /etc/ansible/
k8s-01:/etc/ansible # vim roles/docker/templates/docker.service.j2
# Fix the path of the iptables binary (on SUSE it lives in /usr/sbin)
ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
```
2.9、Configure the ansible inventory file
```bash
# Mind the path: copy it into the /etc/ansible directory
k8s-01:/etc/ansible # cp example/hosts.multi-node ./hosts
```
```bash
k8s-01:/etc/ansible # vim hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
192.168.10.175 NODE_NAME=etcd1
192.168.10.176 NODE_NAME=etcd2
192.168.10.177 NODE_NAME=etcd3

# master node(s)
[kube-master]
192.168.10.175
192.168.10.176
192.168.10.177

# work node(s)
[kube-node]
192.168.10.175
192.168.10.176
192.168.10.177

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
# 'SELF_SIGNED_CERT': 'no' you need put files of certificates named harbor.pem and harbor-key.pem in directory 'down'
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes

# [optional] loadbalance for accessing k8s from outside
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="20000-40000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
```

- My machines are short on memory, so the three hosts double as 3 masters and 3 nodes, and only the Kubernetes cluster itself is deployed here; harbor and the LB are left out, so try them yourself if you are interested
- To add harbor, uncomment the line under [harbor] and set the IP and the domain you want to use
- For a highly available entry point, fill in the IPs under [ex-lb]
- Once the inventory is configured, verify that ansible can reach every node
```bash
k8s-01:/etc/ansible # ansible all -m ping
192.168.10.177 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.10.176 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.10.175 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
3、Install and verify the Kubernetes cluster
3.1、Install the Kubernetes cluster
- Every yml file in this directory can be run on its own. Running 90.setup.yml deploys everything according to the inventory file; if you later want to add harbor or ex-lb, just run the corresponding yml separately
```bash
01.prepare.yml      02.etcd.yml       03.containerd.yml  03.docker.yml
04.kube-master.yml  05.kube-node.yml  06.network.yml     07.cluster-addon.yml
11.harbor.yml       22.upgrade.yml    23.backup.yml      24.restore.yml
90.setup.yml        91.start.yml      92.stop.yml        99.clean.yml

k8s-01:/etc/ansible # ansible-playbook 90.setup.yml
```
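Instead of 90.setup.yml, the numbered stages can also be run one at a time, which makes a failure easier to localize. A dry-run sketch; the list reflects the order 90.setup.yml follows for a docker-based cluster (03.docker.yml rather than 03.containerd.yml, since the inventory sets CONTAINER_RUNTIME="docker"), and the echo keeps it from executing anything:

```shell
#!/usr/bin/env bash
# The stage list 90.setup.yml walks through for a docker runtime cluster
printf '%s\n' 01.prepare.yml 02.etcd.yml 03.docker.yml 04.kube-master.yml \
  05.kube-node.yml 06.network.yml 07.cluster-addon.yml > run-order.txt

# Dry run: print each command; drop the echo to execute from /etc/ansible
while read -r pb; do
  echo "ansible-playbook $pb"
done < run-order.txt
```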
3.2、Verify the Kubernetes cluster
- After the deployment finishes, disconnect and reconnect the terminal so kubectl completion takes effect
- Check that every node is Ready
```bash
k8s-01:~ # kubectl get nodes -o wide
NAME             STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                              KERNEL-VERSION     CONTAINER-RUNTIME
192.168.10.175   Ready    master   3m18s   v1.20.1   192.168.10.175   <none>        SUSE Linux Enterprise Server 12 SP3   4.4.73-5-default   docker://19.3.14
192.168.10.176   Ready    master   3m18s   v1.20.1   192.168.10.176   <none>        SUSE Linux Enterprise Server 12 SP3   4.4.73-5-default   docker://19.3.14
192.168.10.177   Ready    master   3m18s   v1.20.1   192.168.10.177   <none>        SUSE Linux Enterprise Server 12 SP3   4.4.73-5-default   docker://19.3.14
```
- Check which pods are in the cluster and on which node each one runs
```bash
k8s-01:~ # kubectl get pod -A -o wide
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
kube-system   coredns-5787695b7f-9wzv8                     1/1     Running   0          8m54s   172.20.1.2       192.168.10.176   <none>           <none>
kube-system   dashboard-metrics-scraper-79c5968bdc-r8md7   1/1     Running   0          8m16s   172.20.1.3       192.168.10.176   <none>           <none>
kube-system   kube-flannel-ds-amd64-j5dnz                  1/1     Running   0          9m35s   192.168.10.176   192.168.10.176   <none>           <none>
kube-system   kube-flannel-ds-amd64-r7kgh                  1/1     Running   0          9m35s   192.168.10.177   192.168.10.177   <none>           <none>
kube-system   kube-flannel-ds-amd64-vnnzc                  1/1     Running   0          9m35s   192.168.10.175   192.168.10.175   <none>           <none>
kube-system   kubernetes-dashboard-c4c6566d6-n8hs9         1/1     Running   0          8m16s   172.20.2.2       192.168.10.177   <none>           <none>
kube-system   metrics-server-8568cf894b-gxnvr              1/1     Running   0          8m22s   172.20.0.2       192.168.10.175   <none>           <none>
```
- Check all the services
```bash
# As you can see, the dashboard is deployed as well; access it at https://192.168.10.175:23292
k8s-01:~ # kubectl get svc -A
NAMESPACE     NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP   10.68.0.1       <none>        443/TCP                  6m28s
kube-system   dashboard-metrics-scraper   ClusterIP   10.68.114.225   <none>        8000/TCP                 2m54s
kube-system   kube-dns                    ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   3m33s
kube-system   kubernetes-dashboard        NodePort    10.68.146.89    <none>        443:23292/TCP            2m55s
kube-system   metrics-server              ClusterIP   10.68.18.87     <none>        443/TCP                  3m
```
To get the dashboard login token:
```bash
k8s-01:~ # kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-pw96q
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 7e8b9c7e-f1a1-4dc7-acb1-ec72ccfd2192

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlROSUt0NWV5Q045SlJ5WXdmSXZyRmRYU3RiZklLQkp5bEh6b2ZXYlRmTGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXB3OTZxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3ZThiOWM3ZS1mMWExLTRkYzctYWNiMS1lYzcyY2NmZDIxOTIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.HVBPYED-m12WOf_4G81JGeYXduWYF3j-94GLvgUCHMxcbnPDWX2WTvIrQp4tyDVCfge6HkgCFeIZoBLNa5Xc_rDRjwzqVp9VVcKEGK0i6aEPWz2dHCfzJ8XG_jC8J87nK4wG6ZT-N-VOF2kljdfBh2mS_nx7G9LEanJELcK65177MG-cWJ9RLiieOSBu4L0elCeuqzI5cdeq67YoQuJ_0LAHdix27oiHBBfi9GKauLQv9Po4QEjhtsHsOMKsYLM_pe1cvUwGtXAz46PeHdTvmrzbaACz6HKD2b3OTZ33633BGy7UgByGw9TNlXa81nGFRBwTg_nkijqhIYmZk8iBmg
```
- At this point the whole Kubernetes cluster is deployed and you can start learning