Installing a Highly Available Kubernetes Cluster from Source


Preface:

Operating system preparation

Prepare five virtual machines, each configured as follows: CentOS 7.6, 4 CPU cores / 8 GB RAM / 100 GB disk.

Special note

Because the WeChat official account platform does not support uploading compressed files, leave a comment to ask me for the archives referenced below.

1.1 Set the hostnames

On 199.174:

hostnamectl set-hostname k8s-etcd01

On 199.105:

hostnamectl set-hostname k8s-etcd02

On 199.230:

hostnamectl set-hostname k8s-etcd03

On 199.124:

hostnamectl set-hostname k8s-node01

On 199.107:

hostnamectl set-hostname k8s-node02

1.2 Edit the hosts file

On all five hosts, /etc/hosts should contain the following entries:

192.168.199.174 k8s-etcd01.lucky.com k8s-master01.lucky.com k8s-etcd01 k8s-master01

192.168.199.105 k8s-etcd02.lucky.com k8s-master02.lucky.com k8s-etcd02 k8s-master02

192.168.199.230 k8s-etcd03.lucky.com k8s-master03.lucky.com k8s-etcd03 k8s-master03

192.168.199.124 k8s-node01.lucky.com k8s-node01  

192.168.199.107 k8s-node02.lucky.com k8s-node02

1.3 Initialize the machines (run on the three etcd/master nodes)

yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server

1.4 Time synchronization

On 199.174 (k8s-master01):

ntpdate cn.pool.ntp.org

systemctl start ntpd && systemctl enable ntpd

On the other nodes:

ntpdate k8s-master01

Cron job on the other nodes (see the sketch below for installing it):

* */1 * * * /usr/sbin/ntpdate k8s-master01
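A minimal way to install this entry into root's crontab, assuming k8s-master01 is the NTP source as above:

(crontab -l 2>/dev/null; echo '* */1 * * * /usr/sbin/ntpdate k8s-master01') | crontab -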

1.5 Install the etcd cluster

On k8s-etcd01, k8s-etcd02, and k8s-etcd03:

yum install etcd-3.2.22 -y

1.6 Download the certificate generator on the k8s-etcd01 node

Because the platform does not support compressed attachments, leave a comment below to ask me for the archive.

Download the package to the root directory of k8s-etcd01 and unpack it:

unzip k8s-certs-generator-master.zip

cd k8s-certs-generator-master

(1) Generate certificates for etcd

bash gencerts.sh etcd    # the following prompt appears

Enter Domain Name [ilinux.io]:    enter lucky.com here

tree etcd    # shows the generated certificate layout

(2) Generate certificates for Kubernetes

bash gencerts.sh k8s    # the following prompts appear

Enter Domain Name [ilinux.io]:    enter lucky.com here

Enter Kubernetes Cluster Name [kubernetes]:    enter mykube here

Enter the IP Address in default namespace of the Kubernetes API Server [10.96.0.1]:    press Enter to accept the default

Enter Master servers name [master01 master02 master03]:    enter k8s-master01 k8s-master02 k8s-master03 here

tree kubernetes/    # shows the generated Kubernetes certificates

(3) Copy the etcd certificates to the target directories

cp -rp /root/k8s-certs-generator-master/etcd /etc/etcd/ssl

cp -rp /root/k8s-certs-generator-master/etcd/* /etc/etcd/

(4) On k8s-etcd01, copy the generated etcd certificates to the remote hosts

cd /etc/etcd

scp -r pki patches k8s-etcd02:/etc/etcd/

scp -r pki patches k8s-etcd03:/etc/etcd/

1.7 Download the configuration file templates on the three master nodes (perform the following steps on all three master nodes)

Upload the archive to the root directory of the three master nodes and unpack it:

unzip k8s-bin-inst-master.zip

(1) Copy etcd.conf from k8s-bin-inst to the etcd directory

cp /root/k8s-bin-inst-master/etcd/etcd.conf /etc/etcd/

(2) Edit the etcd configuration file

On k8s-etcd01:

grep -v '^#' /etc/etcd/etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.174:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.174:2379"
ETCD_NAME="k8s-etcd01.lucky.com"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://k8s-etcd01.lucky.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://k8s-etcd01.lucky.com:2379"
ETCD_INITIAL_CLUSTER="k8s-etcd01.lucky.com=https://k8s-etcd01.lucky.com:2380,k8s-etcd02.lucky.com=https://k8s-etcd02.lucky.com:2380,k8s-etcd03.lucky.com=https://k8s-etcd03.lucky.com:2380"
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_PEER_AUTO_TLS="false"

On k8s-etcd02:

grep -v '^#' /etc/etcd/etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.105:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.105:2379"
ETCD_NAME="k8s-etcd02.lucky.com"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://k8s-etcd02.lucky.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://k8s-etcd02.lucky.com:2379"
ETCD_INITIAL_CLUSTER="k8s-etcd01.lucky.com=https://k8s-etcd01.lucky.com:2380,k8s-etcd02.lucky.com=https://k8s-etcd02.lucky.com:2380,k8s-etcd03.lucky.com=https://k8s-etcd03.lucky.com:2380"
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_PEER_AUTO_TLS="false"

On k8s-etcd03:

grep -v '^#' /etc/etcd/etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/k8s.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.199.230:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.199.230:2379"
ETCD_NAME="k8s-etcd03.lucky.com"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://k8s-etcd03.lucky.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://k8s-etcd03.lucky.com:2379"
ETCD_INITIAL_CLUSTER="k8s-etcd01.lucky.com=https://k8s-etcd01.lucky.com:2380,k8s-etcd02.lucky.com=https://k8s-etcd02.lucky.com:2380,k8s-etcd03.lucky.com=https://k8s-etcd03.lucky.com:2380"
ETCD_CERT_FILE="/etc/etcd/pki/server.crt"
ETCD_KEY_FILE="/etc/etcd/pki/server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/pki/peer.crt"
ETCD_PEER_KEY_FILE="/etc/etcd/pki/peer.key"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/pki/ca.crt"
ETCD_PEER_AUTO_TLS="false"

On k8s-etcd01, k8s-etcd02, and k8s-etcd03, start and enable etcd:

systemctl start etcd && systemctl enable etcd && systemctl status etcd

(3) Verify that the etcd cluster is healthy

etcdctl --key-file=/etc/etcd/pki/client.key --cert-file=/etc/etcd/pki/client.crt --ca-file=/etc/etcd/pki/ca.crt --endpoints="https://k8s-etcd01.lucky.com:2379" cluster-health

If it reports "cluster is healthy", the cluster is working properly.
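If you also want to inspect the individual members, etcdctl can list them with the same TLS flags used above (a quick sketch against the same endpoint):

etcdctl --key-file=/etc/etcd/pki/client.key --cert-file=/etc/etcd/pki/client.crt --ca-file=/etc/etcd/pki/ca.crt --endpoints="https://k8s-etcd01.lucky.com:2379" member list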

1.8 Upload the Kubernetes 1.11.1 binary package

Perform this on k8s-etcd01, k8s-etcd02, and k8s-etcd03.

Upload the package to the root directory of the three master nodes, then extract it to /usr/local:

tar zxvf kubernetes-server-linux-amd64.tar.gz -C /usr/local/

On k8s-etcd01, k8s-etcd02, and k8s-etcd03:

mkdir /etc/kubernetes

On k8s-etcd01:

cd /root/k8s-certs-generator-master/kubernetes

cp -rp k8s-master01/* /etc/kubernetes/

scp -r k8s-master02/* k8s-etcd02:/etc/kubernetes/

scp -r k8s-master03/* k8s-etcd03:/etc/kubernetes/

On k8s-etcd01, k8s-etcd02, and k8s-etcd03, add the server binaries to PATH:

cat /etc/profile.d/k8s.sh

export PATH=$PATH:/usr/local/kubernetes/server/bin

source /etc/profile.d/k8s.sh

After the steps above, the kubectl command can be used on all three master nodes.
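As a quick sanity check that the binaries are on PATH (only client-side commands will work until the kubeconfig is copied in a later step), something like:

which kubectl
kubectl version --client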

1.9 On k8s-etcd01, copy the apiserver configuration file and systemd unit file

(1) Copy the configuration file and unit file to the target directories

cp /root/k8s-bin-inst-master/master/etc/kubernetes/* /etc/kubernetes/

cp /root/k8s-bin-inst-master/master/unit-files/kube-* /usr/lib/systemd/system/

systemctl daemon-reload

useradd -r kube

mkdir /var/run/kubernetes

chown kube.kube /var/run/kubernetes

grep -v "^#" /etc/kubernetes/apiserver |grep -v "^$"  显示如下

KUBE_API_ADDRESS="--advertise-address=0.0.0.0"
KUBE_API_PORT="--secure-port=6443 --insecure-port=0"
KUBE_ETCD_SERVERS="--etcd-servers=https://k8s-etcd01.lucky.com:2379,https://k8s-etcd02.lucky.com:2379,https://k8s-etcd03.lucky.com:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NodeRestriction"
KUBE_API_ARGS="--authorization-mode=Node,RBAC \
    --client-ca-file=/etc/kubernetes/pki/ca.crt \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/etc/etcd/pki/ca.crt \
    --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
    --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \
    --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
    --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
    --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
    --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt \
    --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key \
    --requestheader-allowed-names=front-proxy-client \
    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
    --requestheader-extra-headers-prefix=X-Remote-Extra- \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --service-account-key-file=/etc/kubernetes/pki/sa.pub \
    --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
    --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
    --token-auth-file=/etc/kubernetes/token.csv"

systemctl start kube-apiserver
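Before copying the configuration to the other masters, it is worth confirming that the apiserver actually came up and is listening on the secure port; a simple check could be:

systemctl status kube-apiserver
ss -tnlp | grep 6443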

(2) On k8s-etcd01, copy the apiserver configuration file and unit file to k8s-etcd02 and k8s-etcd03

scp /usr/lib/systemd/system/kube-apiserver.service k8s-etcd02:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/kube-apiserver.service k8s-etcd03:/usr/lib/systemd/system/

scp /etc/kubernetes/apiserver k8s-etcd02:/etc/kubernetes/

scp /etc/kubernetes/apiserver k8s-etcd03:/etc/kubernetes/

mkdir /root/.kube

cp /etc/kubernetes/auth/admin.conf /root/.kube/config

To verify that the apiserver can talk to etcd, run kubectl api-versions; if it prints the list of API groups, communication is working.

On k8s-etcd02:

systemctl daemon-reload

useradd -r kube

mkdir /var/run/kubernetes

chown kube.kube /var/run/kubernetes

systemctl start kube-apiserver

mkdir /root/.kube

cp /etc/kubernetes/auth/admin.conf /root/.kube/config

To verify that the apiserver can talk to etcd, run kubectl api-versions; if it prints the list of API groups, communication is working.

On k8s-etcd03:

systemctl daemon-reload

useradd -r kube

mkdir /var/run/kubernetes

chown kube.kube /var/run/kubernetes

systemctl start kube-apiserver

mkdir /root/.kube

cp /etc/kubernetes/auth/admin.conf /root/.kube/config

To verify that the apiserver can talk to etcd, run kubectl api-versions; if it prints the list of API groups, communication is working.

On k8s-etcd01, run the following:

kubectl create clusterrolebinding system:bootstrapper --user=system:bootstrapper --clusterrole=system:node-bootstrapper
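To confirm the binding was created (system:bootstrapper is the name used in the command above):

kubectl get clusterrolebinding system:bootstrapper
kubectl describe clusterrolebinding system:bootstrapper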

1.10 On k8s-etcd01, copy the controller-manager and kube-scheduler configuration files and unit files to k8s-etcd02 and k8s-etcd03

(1) Copy the controller-manager files

scp /usr/lib/systemd/system/kube-controller-manager.service k8s-etcd02:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/kube-controller-manager.service k8s-etcd03:/usr/lib/systemd/system/

grep -v "^#" /etc/kubernetes/controller-manager  显示如下

KUBE_CONTROLLER_MANAGER_ARGS="--bind-address=127.0.0.1 \
    --allocate-node-cidrs=true \
    --cluster-cidr=10.244.0.0/16 \
    --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
    --cluster-signing-key-file=/etc/kubernetes/pki/ca.key \
    --controllers=*,bootstrapsigner,tokencleaner \
    --kubeconfig=/etc/kubernetes/auth/controller-manager.conf \
    --leader-elect=true \
    --node-cidr-mask-size=24 \
    --root-ca-file=/etc/kubernetes/pki/ca.crt \
    --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
    --use-service-account-credentials=true"

scp /etc/kubernetes/controller-manager k8s-etcd02:/etc/kubernetes/

scp /etc/kubernetes/controller-manager k8s-etcd03:/etc/kubernetes/

systemctl start kube-controller-manager

On k8s-etcd02 and k8s-etcd03, run the following:

systemctl daemon-reload

systemctl start kube-controller-manager

(2) Copy the scheduler files

On k8s-etcd01:

scp /usr/lib/systemd/system/kube-scheduler.service k8s-etcd02:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/kube-scheduler.service k8s-etcd03:/usr/lib/systemd/system/

scp /etc/kubernetes/scheduler k8s-etcd02:/etc/kubernetes/

scp /etc/kubernetes/scheduler k8s-etcd03:/etc/kubernetes/

systemctl start kube-scheduler

On k8s-etcd02 and k8s-etcd03:

systemctl daemon-reload

systemctl start kube-scheduler
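Once the controller-manager and scheduler are running on all three masters, their health can be checked from any master; the following should report Healthy for the scheduler, the controller-manager, and the three etcd members:

kubectl get componentstatuses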

1.11 Operations on the k8s-node01 and k8s-node02 nodes

(1) Install docker 18.06

yum install docker-ce*18.06.0* -y

(2) Start the docker service

systemctl start docker

(3) Configure a docker registry mirror

vim /etc/docker/daemon.json    # add the following line

{"registry-mirrors": ["*******"]}

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
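To make these bridge settings survive a reboot, they can also be written to a sysctl drop-in file (a minimal sketch; the file name k8s.conf is arbitrary):

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system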

systemctl daemon-reload  && systemctl restart docker

systemctl enable docker

(4) Pull the pause image

docker pull carlziess/pause-amd64-3.1

docker tag carlziess/pause-amd64-3.1 k8s.gcr.io/pause:3.1

(5) Configure kubelet and kube-proxy

(6) Copy the certificates generated on k8s-etcd01 to k8s-node01 and k8s-node02

On k8s-node01 and k8s-node02, create the kubernetes directory:

mkdir /etc/kubernetes

On k8s-etcd01, copy the certificate files to k8s-node01 and k8s-node02:

cd /root/k8s-certs-generator-master/kubernetes/kubelet

scp -rp * k8s-node01:/etc/kubernetes/

scp -rp * k8s-node02:/etc/kubernetes/

(7) On k8s-etcd01, copy the configuration files and unit files to k8s-node01 and k8s-node02

cd  /root/k8s-bin-inst-master/nodes/etc/kubernetes

scp -rp * k8s-node01:/etc/kubernetes/

scp -rp * k8s-node02:/etc/kubernetes/

cd  /root/k8s-bin-inst-master/nodes/unit-files

scp -rp * k8s-node01:/usr/lib/systemd/system/

scp -rp * k8s-node02:/usr/lib/systemd/system/

On k8s-node01 and k8s-node02:

systemctl daemon-reload

(8) On k8s-master01, copy the var directory contents to k8s-node01 and k8s-node02

cd /root/k8s-bin-inst-master/nodes

scp -rp ./var/lib/kube* k8s-node01:/var/lib/

scp -rp ./var/lib/kube* k8s-node02:/var/lib/

(9) On k8s-node01 and k8s-node02, download the CNI network plugins

wget https://github.com/containernetworking/plugins/releases/download/v0.7.4/cni-plugins-amd64-v0.7.4.tgz

mkdir -p /opt/cni/bin

tar zxvf cni-plugins-amd64-v0.7.4.tgz -C /opt/cni/bin/

(10) Unpack the node binaries on k8s-node01 and k8s-node02

tar zxvf kubernetes-node-linux-amd64.tar.gz -C /usr/local/

Edit the hosts file as follows:

192.168.199.174 k8s-etcd01.lucky.com k8s-master01.lucky.com k8s-etcd01 k8s-master01 mykube-api.lucky.com
192.168.199.105 k8s-etcd02.lucky.com k8s-master02.lucky.com k8s-etcd02 k8s-master02
192.168.199.230 k8s-etcd03.lucky.com k8s-master03.lucky.com k8s-etcd03 k8s-master03
192.168.199.124 k8s-node01.lucky.com k8s-node01
192.168.199.107 k8s-node02.lucky.com k8s-node02

Start kubelet:

systemctl start kubelet  && systemctl status kubelet

systemctl enable kubelet
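If kubelet fails to start or keeps restarting, its journal usually explains why; a quick way to follow it:

journalctl -u kubelet -f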

(11) Approve the node certificates on k8s-etcd01

On k8s-etcd01:

kubectl get csr    # lists the pending node CSRs

Approve the node certificates:

kubectl certificate approve node-csr-9hc5lqIH5oY02pPaFg6sGgh2jImcUidRD54uL9WRZMw

kubectl certificate approve node-csr-x11qXmlcgnJDmeeMtgSONQSFYB2lzB86R9HGT2RoGlU
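The two CSR names above are specific to this cluster; on your own cluster, list the pending requests with kubectl get csr and approve whichever appear. A convenient sketch for approving everything at once (use with care) and then checking node registration:

kubectl get csr -o name | xargs kubectl certificate approve
kubectl get nodes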

(12) Start kube-proxy (on k8s-node01 and k8s-node02)

Load the ipvs kernel modules:

vim /etc/sysconfig/modules/ipvs.modules

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*");do
   /sbin/modinfo -F filename $i &> /dev/null
   if [ $? -eq 0 ];then
      /sbin/modprobe $i
   fi
done

chmod +x /etc/sysconfig/modules/ipvs.modules

bash /etc/sysconfig/modules/ipvs.modules
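Note that on CentOS 7 the /etc/sysconfig/modules directory is not guaranteed to be executed at boot, so to be safe the modules can also be declared for systemd-modules-load (a sketch listing the common ipvs modules; the file name ipvs.conf is arbitrary):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF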

lsmod | grep _vs    # if the ip_vs modules are listed, they loaded successfully

Start kube-proxy:

systemctl start kube-proxy  &&  systemctl status kube-proxy

1.12 Install the flannel network plugin (perform the following steps on k8s-etcd01, k8s-etcd02, and k8s-etcd03)

(1) Install docker 18.06

cd /etc/yum.repos.d/

wget "https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo"

vim kubernetes.repo

[kubernetes]
name=kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

yum repolist

wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

rpm --import yum-key.gpg

wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

rpm --import rpm-package-key.gpg

yum install  docker-ce*18.06.0*   -y

(2) Start the docker service

systemctl start docker

(3) Configure a docker registry mirror

vim /etc/docker/daemon.json    # add the following line

{"registry-mirrors": ["******"]}

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

systemctl daemon-reload  && systemctl restart docker  && systemctl enable docker

(4) Install the network plugin from the three master nodes

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

On success it prints the following:

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

docker images    # the flannel image should now be listed

docker load -i flannel.tar.gz    # alternatively, the flannel image can be imported directly from this archive

kubectl get nodes    # the nodes should now report a Ready status

kubectl get pods -n kube-system    # lists all pods in the kube-system namespace; the kube-flannel pods should be Running

(5) Install ipvsadm on k8s-node01 and k8s-node02

yum install ipvsadm -y

ipvsadm -Ln    # if the ipvs rules are listed, kube-proxy has programmed them correctly

(6) Deploy CoreDNS on k8s-etcd01, k8s-etcd02, and k8s-etcd03 (run on all three master nodes)

The CoreDNS deployment scripts are available at:

https://github.com/coredns/deployment/tree/master/kubernetes

mkdir /root/coredns && cd /root/coredns

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh

On k8s-etcd01, k8s-etcd02, and k8s-etcd03, run:

bash deploy.sh -i 10.96.0.10 -r "10.96.0.0/12" -s -t coredns.yaml.sed | kubectl apply -f -

kubectl get pods -n kube-system -o wide    # if the coredns pods are Running, the deployment succeeded
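To verify that in-cluster DNS actually resolves, a throwaway busybox pod can be used (a sketch; busybox:1.28 is chosen here only because its nslookup behaves reliably):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default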

1.13 Make kube-apiserver highly available

kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous    # note: this grants cluster-admin to anonymous users and is only acceptable in a lab environment

yum -y install keepalived

cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

Configure keepalived on k8s-etcd01:

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        hdb@tzg.cn
    }
    notification_email_from admin@tzg.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id KUBE_APISERVER_HA
}
vrrp_script chk_kube_apiserver {
    script "curl -k https://127.0.0.1:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 111
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass heyjava
    }
    virtual_ipaddress {
        192.168.199.200
    }
    track_script {
        chk_kube_apiserver
    }
    notify_master "/etc/keepalived/notify.py -n master -a 192.168.199.200"
    notify_backup "/etc/keepalived/notify.py -n backup -a 192.168.199.200"
    notify_fault "/etc/keepalived/notify.py -n fault -a 192.168.199.200"
}

The priority differs on each master node: 100 on k8s-etcd01, 99 on k8s-etcd02, and 98 on k8s-etcd03; everything else in the file is identical. A sketch of adjusting it is shown below.
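Assuming the file copied from k8s-etcd01 still says priority 100, one way to adjust it on the other two masters:

sed -i 's/priority 100/priority 99/' /etc/keepalived/keepalived.conf    # on k8s-etcd02
sed -i 's/priority 100/priority 98/' /etc/keepalived/keepalived.conf    # on k8s-etcd03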

cat  /etc/keepalived/notify.py

#!/usr/bin/python
#-*- coding:utf-8 -*-
'''
@file: notify.py
@author: Hu Dongbiao
@date: 2016/12/15 11:24
@version: 1.0
@email: hdb@tzg.cn
'''
import argparse
import sys
import smtplib
from email.mime.text import MIMEText
# Parse the arguments passed in by keepalived
parser = argparse.ArgumentParser(description=u"vrrp state transition notification script")
parser.add_argument("-n", "--notify", choices=["master", "backup", "fault"], help=u"type of notification, i.e. the target role of the vrrp transition")
parser.add_argument("-a", "--address", help=u"VIP address of the affected virtual router")
args = parser.parse_args()
# notify is the current role: one of master, backup, fault
notify = args.notify
# address is the vrrp virtual address
address = args.address
# Send the alert mail
smtp_host = 'smtp.163.com'
smtp_user = 'xxx'
smtp_password = 'xxx'
mail_from = '150***@163.com'
mail_to = '19***7@qq.com'
mail_subject = u'[Monitoring] VRRP role transition'
mail_body = '''
<p>Dear administrator:</p>
<p style="text-indent:2em;"><strong>The HA address {vrrp_address} has switched its role to {vrrp_role}; please handle it promptly.</strong></p>
'''.format(vrrp_address=address, vrrp_role=notify)
msg = MIMEText(mail_body, 'html', 'utf-8')
msg['From'] = mail_from
msg['To'] = mail_to
msg['Subject'] = mail_subject
smtp = smtplib.SMTP()
smtp.connect(smtp_host)
smtp.login(smtp_user, smtp_password)
smtp.sendmail(mail_from, mail_to, msg.as_string())
smtp.quit()

chmod +x /etc/keepalived/notify.py

The notify.py file is identical on all three master nodes.

systemctl start keepalived
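Once keepalived is running on all three masters, the VIP should appear on exactly one of them, and the apiserver should be reachable through it. A quick sketch of the check (ens33 and 192.168.199.200 are the interface and VIP configured above; the curl works here because of the anonymous binding created at the start of this section):

ip addr show ens33 | grep 192.168.199.200
curl -k https://192.168.199.200:6443/version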

