Preface:
Operating system preparation
Prepare five virtual machines, each configured as follows: CentOS 7.6, 4 cores / 8 GB RAM / 100 GB disk.
Special note
Because the WeChat official-account platform does not support uploading compressed files, leave a comment if you need any of the archives mentioned below.
1.1 Set the hostnames
On 199.174:
hostnamectl set-hostname k8s-etcd01
On 199.105:
hostnamectl set-hostname k8s-etcd02
On 199.230:
hostnamectl set-hostname k8s-etcd03
On 199.124:
hostnamectl set-hostname k8s-node01
On 199.107:
hostnamectl set-hostname k8s-node02
1.2 Edit the hosts file
The /etc/hosts file on all five hosts should contain the following:
192.168.199.174 k8s-etcd01.lucky.com k8s-master01.lucky.com k8s-etcd01 k8s-master01
192.168.199.105 k8s-etcd02.lucky.com k8s-master02.lucky.com k8s-etcd02 k8s-master02
192.168.199.230 k8s-etcd03.lucky.com k8s-master03.lucky.com k8s-etcd03 k8s-master03
192.168.199.124 k8s-node01.lucky.com k8s-node01
192.168.199.107 k8s-node02.lucky.com k8s-node02
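Rather than editing the file by hand on every machine, you can maintain it on one host and push it out. A minimal sketch, assuming root SSH access to the other four machines (enter the password for each, or set up key-based login first):
# Push the finished /etc/hosts from this host to the other four, by IP
for ip in 192.168.199.105 192.168.199.230 192.168.199.124 192.168.199.107; do
  scp /etc/hosts root@${ip}:/etc/hosts
done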
1.3 Initialize the machines (run on the three master nodes)
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server
1.4 Time synchronization
On 199.174:
ntpdate cn.pool.ntp.org
systemctl start ntpd && systemctl enable ntpd
On the other nodes:
ntpdate k8s-master01
Cron job on the other nodes (resync from k8s-master01 every hour):
0 */1 * * * /usr/sbin/ntpdate k8s-master01
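The cron entry can also be installed non-interactively instead of through crontab -e; a sketch:
# Append the hourly ntpdate job to root's crontab, keeping any existing entries
(crontab -l 2>/dev/null; echo '0 */1 * * * /usr/sbin/ntpdate k8s-master01') | crontab -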
1.5 Install the etcd cluster
On k8s-etcd01, k8s-etcd02, and k8s-etcd03:
yum install etcd-3.2.22 -y
1.6 Download the certificate files on the k8s-etcd01 node
Since the official-account platform does not support compressed files, leave a comment below to get the archive.
Download the archive to the root directory on k8s-etcd01 and unpack it:
unzip k8s-certs-generator-master.zip
cd k8s-certs-generator-master
(1) Generate the etcd certificates
bash gencerts.sh etcd — the following prompt appears:
Enter Domain Name [ilinux.io]: enter lucky.com here
tree etcd — shows the layout of the generated certificates
(2) Generate the k8s certificates
bash gencerts.sh k8s — the following prompts appear:
Enter Domain Name [ilinux.io]: enter lucky.com here
Enter Kubernetes Cluster Name [kubernetes]: enter mykube here
Enter the IP Address in default namespace of the Kubernetes API Server[10.96.0.1]: press Enter to accept the default
Enter Master servers name[master01 master02 master03]: enter k8s-master01 k8s-master02 k8s-master03
tree kubernetes/ — shows the generated Kubernetes certificates
(3) Copy the etcd certificates into place
cp -rp /root/k8s-certs-generator-master/etcd /etc/etcd/ssl
cp -rp /root/k8s-certs-generator-master/etcd/* /etc/etcd/
(4) On k8s-etcd01, copy the generated etcd certificates to the remote hosts
cd /etc/etcd
scp -r pki patches k8s-etcd02:/etc/etcd/
scp -r pki patches k8s-etcd03:/etc/etcd/
1.7 Download the configuration-file templates on the three master nodes (perform the steps below on all three master nodes)
Upload the archive to the root directory on each master node and unpack it:
unzip k8s-bin-inst-master.zip
(1) Copy etcd.conf from k8s-bin-inst into the etcd directory
cp /root/k8s-bin-inst-master/etcd/etcd.conf /etc/etcd/
(2) Edit the etcd configuration file
On k8s-etcd01:
grep -v '^#' /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.174:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.174:2379"
ETCD_NAME="k8s-etcd01.lucky.com"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://k8s-etcd01.lucky.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://k8s-etcd01.lucky.com:2379"
ETCD_INITIAL_CLUSTER="k8s-etcd01.lucky.com=https://k8s-etcd01.lucky.com:2380,k8s-etcd02.lucky.com=https://k8s-etcd02.lucky.com:2380,k8s-etcd03.lucky.com=https://k8s-etcd03.lucky.com:2380"
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_PEER_AUTO_TLS="false"
On k8s-etcd02:
grep -v '^#' /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.105:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.105:2379"
ETCD_NAME="k8s-etcd02.lucky.com"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://k8s-etcd02.lucky.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://k8s-etcd02.lucky.com:2379"
ETCD_INITIAL_CLUSTER="k8s-etcd01.lucky.com=https://k8s-etcd01.lucky.com:2380,k8s-etcd02.lucky.com=https://k8s-etcd02.lucky.com:2380,k8s-etcd03.lucky.com=https://k8s-etcd03.lucky.com:2380"
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_PEER_AUTO_TLS="false"
On k8s-etcd03:
grep -v '^#' /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.230:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.230:2379"
ETCD_NAME="k8s-etcd03.lucky.com"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://k8s-etcd03.lucky.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://k8s-etcd03.lucky.com:2379"
ETCD_INITIAL_CLUSTER="k8s-etcd01.lucky.com=https://k8s-etcd01.lucky.com:2380,k8s-etcd02.lucky.com=https://k8s-etcd02.lucky.com:2380,k8s-etcd03.lucky.com=https://k8s-etcd03.lucky.com:2380"
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_PEER_AUTO_TLS="false"
On k8s-etcd01:
systemctl start etcd && systemctl enable etcd && systemctl status etcd
On k8s-etcd02:
systemctl start etcd && systemctl enable etcd && systemctl status etcd
On k8s-etcd03:
systemctl start etcd && systemctl enable etcd && systemctl status etcd
(3) Verify that the etcd cluster is healthy
etcdctl --key-file=/etc/etcd/pki/client.key --cert-file=/etc/etcd/pki/client.crt --ca-file=/etc/etcd/pki/ca.crt --endpoints="https://k8s-etcd01.lucky.com:2379" cluster-health
If it prints "cluster is healthy", the cluster is working.
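Beyond cluster-health, it is worth confirming that all three members actually joined the cluster; the same client certificates work for the member list (v2 etcdctl syntax, matching the command above):
# All three etcd nodes should be listed with their peer and client URLs
etcdctl --key-file=/etc/etcd/pki/client.key --cert-file=/etc/etcd/pki/client.crt --ca-file=/etc/etcd/pki/ca.crt --endpoints="https://k8s-etcd01.lucky.com:2379" member list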
1.8 Upload the Kubernetes 1.11.1 binary package
Perform on the k8s-etcd01, k8s-etcd02, and k8s-etcd03 nodes.
Upload the package to the root directory on the three master nodes, then extract it under /usr/local:
tar zxvf kubernetes-server-linux-amd64.tar.gz -C /usr/local/
On k8s-etcd01, k8s-etcd02, and k8s-etcd03:
mkdir /etc/kubernetes
On k8s-etcd01:
cd /root/k8s-certs-generator-master/kubernetes
cp -rp k8s-master01/* /etc/kubernetes/
scp -r k8s-master02/* k8s-etcd02:/etc/kubernetes/
scp -r k8s-master03/* k8s-etcd03:/etc/kubernetes/
On k8s-etcd01, k8s-etcd02, and k8s-etcd03:
cat /etc/profile.d/k8s.sh
export PATH=$PATH:/usr/local/kubernetes/server/bin
source /etc/profile.d/k8s.sh
After these steps, the kubectl command is available on all three master nodes.
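A quick sanity check that the PATH change took effect (this does not contact any cluster):
# Should report the v1.11.1 client binary
kubectl version --client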
1.9 On k8s-etcd01, copy the apiserver configuration file and systemd unit file
(1) Copy the configuration file and unit file into place
cp /root/k8s-bin-inst-master/master/etc/kubernetes/* /etc/kubernetes/
cp /root/k8s-bin-inst-master/master/unit-files/kube-* /usr/lib/systemd/system/
systemctl daemon-reload
useradd -r kube
mkdir /var/run/kubernetes
chown kube.kube /var/run/kubernetes
grep -v "^#" /etc/kubernetes/apiserver |grep -v "^$" 显示如下
KUBE_API_ADDRESS="--advertise-address=0.0.0.0"
KUBE_API_PORT="--secure-port=6443 --insecure-port=0"
KUBE_ETCD_SERVERS="--etcd-servers=https://k8s-etcd01.lucky.com:2379,https://k8s-etcd02.lucky.com:2379,https://k8s-etcd03.lucky.com:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NodeRestriction"
KUBE_API_ARGS="--authorization-mode=Node,RBAC \
    --client-ca-file=/etc/kubernetes/pki/ca.crt \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/etc/etcd/pki/ca.crt \
    --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
    --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \
    --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
    --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
    --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \
    --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key \
    --requestheader-allowed-names=front-proxy-client \
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --service-account-key-file=/etc/kubernetes/pki/sa.pub \
    --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
    --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
    --token-auth-file=/etc/kubernetes/token.csv"
systemctl start kube-apiserver
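Before copying anything to the other masters, confirm the apiserver actually came up; a minimal check:
# The secure port 6443 should be listening (the insecure port was disabled above)
ss -tnlp | grep 6443
# If it is not, the service log usually names the missing file or bad flag
journalctl -u kube-apiserver --no-pager | tail -n 20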
(2) On k8s-etcd01, copy the apiserver configuration file and unit file to k8s-etcd02 and k8s-etcd03
scp /usr/lib/systemd/system/kube-apiserver.service k8s-etcd02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-apiserver.service k8s-etcd03:/usr/lib/systemd/system/
scp /etc/kubernetes/apiserver k8s-etcd02:/etc/kubernetes/
scp /etc/kubernetes/apiserver k8s-etcd03:/etc/kubernetes/
mkdir /root/.kube
cp /etc/kubernetes/auth/admin.conf /root/.kube/config
Verify that the apiserver can talk to etcd: run kubectl api-versions — if it prints the list of API groups, the two are communicating.
On k8s-etcd02:
systemctl daemon-reload
useradd -r kube
mkdir /var/run/kubernetes
chown kube.kube /var/run/kubernetes
systemctl start kube-apiserver
mkdir /root/.kube
cp /etc/kubernetes/auth/admin.conf /root/.kube/config
Verify again with kubectl api-versions — a list of API groups means the apiserver and etcd are communicating.
On k8s-etcd03:
systemctl daemon-reload
useradd -r kube
mkdir /var/run/kubernetes
chown kube.kube /var/run/kubernetes
systemctl start kube-apiserver
mkdir /root/.kube
cp /etc/kubernetes/auth/admin.conf /root/.kube/config
Verify again with kubectl api-versions — a list of API groups means the apiserver and etcd are communicating.
Run the following on k8s-etcd01:
kubectl create clusterrolebinding system:bootstrapper --user=system:bootstrapper --clusterrole=system:node-bootstrapper
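To confirm the binding was created as intended:
# Should show ClusterRole system:node-bootstrapper bound to user system:bootstrapper
kubectl describe clusterrolebinding system:bootstrapper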
1.10 On k8s-etcd01, copy the controller-manager and kube-scheduler configuration files and unit files to k8s-etcd02 and k8s-etcd03
(1) Copy the controller-manager files
scp /usr/lib/systemd/system/kube-controller-manager.service k8s-etcd02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-controller-manager.service k8s-etcd03:/usr/lib/systemd/system/
grep -v "^#" /etc/kubernetes/controller-manager 显示如下
KUBE_CONTROLLER_MANAGER_ARGS="--bind-address=127.0.0.1 \
    --allocate-node-cidrs=true \
    --cluster-cidr=10.244.0.0/16 \
    --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
    --cluster-signing-key-file=/etc/kubernetes/pki/ca.key \
    --controllers=*,bootstrapsigner,tokencleaner \
    --kubeconfig=/etc/kubernetes/auth/controller-manager.conf \
    --leader-elect=true \
    --node-cidr-mask-size=24 \
    --root-ca-file=/etc/kubernetes/pki/ca.crt \
    --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
    --use-service-account-credentials=true"
scp /etc/kubernetes/controller-manager k8s-etcd02:/etc/kubernetes/
scp /etc/kubernetes/controller-manager k8s-etcd03:/etc/kubernetes/
systemctl start kube-controller-manager
Run the following on k8s-etcd02 and k8s-etcd03:
systemctl daemon-reload
systemctl start kube-controller-manager
(2) Copy the scheduler files
On k8s-etcd01:
scp /usr/lib/systemd/system/kube-scheduler.service k8s-etcd02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-scheduler.service k8s-etcd03:/usr/lib/systemd/system/
scp /etc/kubernetes/scheduler k8s-etcd02:/etc/kubernetes/
scp /etc/kubernetes/scheduler k8s-etcd03:/etc/kubernetes/
systemctl start kube-scheduler
On k8s-etcd02 and k8s-etcd03:
systemctl daemon-reload
systemctl start kube-scheduler
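With the apiserver, controller-manager, and scheduler all running, each master can check the control plane locally (componentstatuses still works in Kubernetes 1.11):
# scheduler, controller-manager, and the three etcd members should report Healthy
kubectl get componentstatuses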
1.11 Steps on k8s-node01 and k8s-node02
(1) Install docker-18.06 (this assumes the docker-ce yum repo is available on the nodes; section 1.12 shows how it is added from the Aliyun mirror)
yum install docker-ce*18.06.0* -y
(2) Start the docker service
systemctl start docker
(3) Configure a docker registry mirror
vim /etc/docker/daemon.json — add the following line:
{"registry-mirrors": ["*******"] }
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
systemctl daemon-reload && systemctl restart docker
systemctl enable docker
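Note that the two echo commands above do not survive a reboot; if you want the bridge settings to persist, a sketch using a sysctl drop-in (the file name is my choice):
# Persist the bridge netfilter settings across reboots
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system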
(4) Pull the pause image
docker pull carlziess/pause-amd64-3.1
docker tag carlziess/pause-amd64-3.1 k8s.gcr.io/pause:3.1
(5) Configure kubelet and kube-proxy
(6) Copy the certificates generated on k8s-etcd01 to k8s-node01 and k8s-node02
Create the kubernetes directory on k8s-node01 and k8s-node02:
mkdir /etc/kubernetes
On k8s-etcd01, copy the certificate files to k8s-node01 and k8s-node02:
cd /root/k8s-certs-generator-master/kubernetes/kubelet
scp -rp * k8s-node01:/etc/kubernetes/
scp -rp * k8s-node02:/etc/kubernetes/
(7) On k8s-etcd01, copy the configuration files and unit files to k8s-node01 and k8s-node02
cd /root/k8s-bin-inst-master/nodes/etc/kubernetes
scp -rp * k8s-node01:/etc/kubernetes/
scp -rp * k8s-node02:/etc/kubernetes/
cd /root/k8s-bin-inst-master/nodes/unit-files
scp -rp * k8s-node01:/usr/lib/systemd/system/
scp -rp * k8s-node02:/usr/lib/systemd/system/
On k8s-node01 and k8s-node02:
systemctl daemon-reload
(8) On k8s-master01, copy the var directory to k8s-node01 and k8s-node02
cd /root/k8s-bin-inst-master/nodes
scp -rp ./var/lib/kube* k8s-node01:/var/lib/
scp -rp ./var/lib/kube* k8s-node02:/var/lib/
(9) Download the CNI network plugins on k8s-node01 and k8s-node02
wget https://github.com/containernetworking/plugins/releases/download/v0.7.4/cni-plugins-amd64-v0.7.4.tgz
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-amd64-v0.7.4.tgz -C /opt/cni/bin/
(10) Extract the node binary package on k8s-node01 and k8s-node02
tar zxvf kubernetes-node-linux-amd64.tar.gz -C /usr/local/
Edit the hosts file so it reads as follows (note the extra mykube-api.lucky.com alias on the first line):
192.168.199.174 k8s-etcd01.lucky.com k8s-master01.lucky.com k8s-etcd01 k8s-master01 mykube-api.lucky.com
192.168.199.105 k8s-etcd02.lucky.com k8s-master02.lucky.com k8s-etcd02 k8s-master02
192.168.199.230 k8s-etcd03.lucky.com k8s-master03.lucky.com k8s-etcd03 k8s-master03
192.168.199.124 k8s-node01.lucky.com k8s-node01
192.168.199.107 k8s-node02.lucky.com k8s-node02
Start kubelet:
systemctl start kubelet && systemctl status kubelet
systemctl enable kubelet
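If kubelet fails to start, its log usually names the missing file or unreachable endpoint; a quick look:
# Follow the kubelet log; you should see it submit a bootstrap CSR to the apiserver
journalctl -u kubelet -f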
(11) Sign the node certificates on k8s-etcd01
On k8s-etcd01:
kubectl get csr — the pending node CSRs are listed
Approve the node certificates:
kubectl certificate approve node-csr-9hc5lqIH5oY02pPaFg6sGgh2jImcUidRD54uL9WRZMw
kubectl certificate approve node-csr-x11qXmlcgnJDmeeMtgSONQSFYB2lzB86R9HGT2RoGlU
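The CSR names above are from this particular run; yours will differ. If the only pending CSRs are your two nodes, you can approve everything kubectl get csr returns in one go — a sketch:
# Approve every pending CSR (only safe when all of them are known to be your nodes)
kubectl get csr -o jsonpath='{.items[*].metadata.name}' | xargs kubectl certificate approve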
(12) Start kube-proxy (on k8s-node01 and k8s-node02)
Load the ipvs kernel modules:
vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $i &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $i
    fi
done
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep _vs — if the ip_vs modules are listed, they loaded successfully.
Start kube-proxy:
systemctl start kube-proxy && systemctl status kube-proxy
1.12 Install the flannel network plugin; perform the steps below on k8s-etcd01, k8s-etcd02, and k8s-etcd03
(1) Install docker-18.06
cd /etc/yum.repos.d/
wget "https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo"
vim kubernetes.repo
[kubernetes]
name=kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
yum repolist
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
yum install docker-ce*18.06.0* -y
(2) Start the docker service
systemctl start docker
(3) Configure a docker registry mirror
vim /etc/docker/daemon.json — add the following line:
{"registry-mirrors": ["******"]}
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
(4) Install the network plugin on the three master nodes
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
On success it prints the following:
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
docker images — the flannel image should now be listed
(alternatively, docker load -i flannel.tar.gz imports the flannel image directly from the archive)
kubectl get nodes — the STATUS column now shows Ready
kubectl get pods -n kube-system — lists all pods in the kube-system namespace; the kube-flannel pods should be running
(5) Install ipvsadm on k8s-node01 and k8s-node02
yum install ipvsadm -y
ipvsadm -Ln — if the service rules are listed, the ipvs rules are in place.
(6) Deploy coredns on k8s-etcd01, k8s-etcd02, and k8s-etcd03; perform on all three master nodes
The CoreDNS deployment repository:
https://github.com/coredns/deployment/tree/master/kubernetes
mkdir /root/coredns && cd /root/coredns
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
On k8s-etcd01, k8s-etcd02, and k8s-etcd03 run:
bash deploy.sh -i 10.96.0.10 -r "10.96.0.0/12" -s -t coredns.yaml.sed | kubectl apply -f -
kubectl get pods -n kube-system -o wide — if the coredns pods show Running, the deployment succeeded.
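A simple way to confirm that DNS actually resolves inside the cluster is a throwaway busybox pod (the pod name is arbitrary; busybox:1.28 is pinned because newer busybox builds have a known nslookup quirk):
# Should resolve kubernetes.default to the service IP 10.96.0.1
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default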
1.13 Make kube-apiserver highly available
First bind the anonymous user (presumably so that unauthenticated requests, such as the keepalived health-check curl below, get a normal response):
kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous
yum -y install keepalived
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
Configure keepalived on k8s-etcd01:
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        hdb@tzg.cn
    }
    notification_email_from admin@tzg.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id KUBE_APISERVER_HA
}
vrrp_script chk_kube_apiserver {
    script "curl -k https://127.0.0.1:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 111
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass heyjava
    }
    virtual_ipaddress {
        192.168.199.200
    }
    track_script {
        chk_kube_apiserver
    }
    notify_master "/etc/keepalived/notify.py -n master -a 192.168.199.200"
    notify_backup "/etc/keepalived/notify.py -n backup -a 192.168.199.200"
    notify_fault "/etc/keepalived/notify.py -n fault -a 192.168.199.200"
}
The priority value differs per master: 100 on k8s-etcd01, 99 on k8s-etcd02, and 98 on k8s-etcd03.
cat /etc/keepalived/notify.py
#!/usr/bin/python
# -*- coding:utf-8 -*-
'''
@file: notify.py
@author: Hu Dongbiao
@date: 2016/12/15 11:24
@version: 1.0
@email: hdb@tzg.cn
'''
import argparse
import sys
import smtplib
from email.mime.text import MIMEText

# Parse the arguments passed in by keepalived
parser = argparse.ArgumentParser(description=u"VRRP state-transition notification script")
parser.add_argument("-n", "--notify", choices=["master", "backup", "fault"],
                    help=u"the notification type, i.e. the role VRRP is switching to")
parser.add_argument("-a", "--address", help=u"the VIP address of the virtual router concerned")
args = parser.parse_args()
# notify is the current role: one of master, backup, fault
notify = args.notify
# address is the VRRP virtual address
address = args.address

# Send the alert mail
smtp_host = 'smtp.163.com'
smtp_user = 'xxx'
smtp_password = 'xxx'
mail_from = '150***@163.com'
mail_to = '19***7@qq.com'
mail_subject = u'[Monitoring] VRRP role change'
mail_body = '''
<p>Dear administrator:</p>
<p style="text-indent:2em;"><strong>Your HA address {vrrp_address} has switched to the {vrrp_role} role; please handle it promptly.</strong></p>
'''.format(vrrp_address=address, vrrp_role=notify)
msg = MIMEText(mail_body, 'html', 'utf-8')
msg['From'] = mail_from
msg['To'] = mail_to
msg['Subject'] = mail_subject
smtp = smtplib.SMTP()
smtp.connect(smtp_host)
smtp.login(smtp_user, smtp_password)
smtp.sendmail(mail_from, mail_to, msg.as_string())
smtp.quit()
chmod +x /etc/keepalived/notify.py
The notify.py file is identical on all three master nodes.
systemctl start keepalived
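Once keepalived is running on all three masters, the VIP should land on k8s-etcd01 (highest priority); a quick verification:
# On the current MASTER the VIP shows up on ens33
ip addr show ens33 | grep 192.168.199.200
# And the apiserver should answer on the VIP (anonymous access was bound above, so a plain curl gets a response)
curl -k https://192.168.199.200:6443/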