Introduction to Multi-Master (High Availability)
Suppose there are 3 node machines and 3 master nodes. Something has to decide whether node1 connects to master1, master2, or master3; that middleman is the load balancer. The load balancer plays two roles: first, it balances load; second, it checks master health. If master1 fails, it steers the nodes to master2 (or master3); while master1 is healthy it keeps serving traffic as usual. Nodes normally reach each other by IP, and the same applies here, except that everything connects through a VIP (virtual IP).
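Concretely, every node and every kubectl client ends up pointing at the VIP rather than at any single master. As a sketch (not executed in this section; the `kubeadm init` step comes later, and `/tmp/kubeadm-config.yaml` is a hypothetical path), the control-plane endpoint would be the VIP plus the haproxy port configured later in this guide:

```shell
# Sketch: the control-plane endpoint for kubeadm is VIP:haproxy-port,
# so API traffic is load-balanced across all three apiservers.
cat > /tmp/kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "192.168.2.203:16443"   # VIP : haproxy port
EOF
```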
Technologies Used in the HA Cluster
keepalived: checks whether the masters are running normally, and manages the virtual IP.
haproxy: the load-balancing server; does the actual load distribution.
Components on each master: apiserver, controller-manager, scheduler.
HA Cluster Architecture Diagram
Steps to Build the HA Kubernetes Cluster
Prerequisites:
① Fix the IPs of the four hosts (see the companion article on assigning a static IP to a virtual machine).
② Internet access is available.
| Host | IP |
| --- | --- |
| master1 | 192.168.2.200 |
| master2 | 192.168.2.201 |
| master3 | 192.168.2.204 |
| node1 | 192.168.2.202 |
| VIP (virtual IP) | 192.168.2.203 |
We use 3 masters and 1 node as the example. First prepare the 4 servers, then run the steps below on each of them.
1. Prepare the Environment - System Initialization
```shell
# Disable the firewall
systemctl disable firewalld   # permanent: removed from startup, stays off after reboot
systemctl stop firewalld      # temporary: takes effect immediately

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent: disabled after reboot
setenforce 0                                          # temporary: takes effect immediately

# Disable the swap partition
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent: comments out the swap entry in fstab
swapoff -a                            # temporary: takes effect immediately

# Set the hostname according to the plan
hostnamectl set-hostname <hostname>
bash   # reload the shell

# Add hosts entries; run on every master
cat >> /etc/hosts << EOF
192.168.2.203 k8s-vip
192.168.2.200 master1
192.168.2.201 master2
192.168.2.204 master3
192.168.2.202 node1
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
EOF

# Load the bridge netfilter module and apply the settings
modprobe br_netfilter
sysctl -p
sysctl --system

# Synchronize the clock
yum -y install ntpdate
ntpdate time.windows.com
```
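The `sed -ri 's/.*swap.*/#&/' /etc/fstab` line above is easy to misread. A minimal sketch of what it does, using a sample fstab in `/tmp` (the device paths here are hypothetical): `&` in the replacement re-emits the whole matched line, so `#&` simply prefixes any line containing "swap" with `#`, commenting it out.

```shell
# Sample fstab with one swap entry (hypothetical device paths)
cat > /tmp/fstab.sample << 'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF

# '#&' = the matched line, prefixed with '#': the swap entry gets commented out,
# the root entry is untouched
sed -ri 's/.*swap.*/#&/' /tmp/fstab.sample
grep swap /tmp/fstab.sample
```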
If anything goes wrong along the way, see the companion article, the error roundup.
2. Deploy keepalived + haproxy on All Master Nodes
2.1 Install keepalived
```shell
# Install dependencies
yum install -y conntrack-tools libseccomp libtool-ltdl
# Install keepalived
yum install -y keepalived
```
2.2 Configure the master nodes
- master1 configuration:
```shell
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"   # check whether a haproxy process exists
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER          # primary node
    interface ens33       # NIC name; find yours with ifconfig
    virtual_router_id 51
    priority 250          # priority; keepalived's MASTER must be higher than the BACKUP nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab   # secret; must be identical on MASTER and BACKUP
    }
    virtual_ipaddress {
        192.168.2.203     # virtual IP
    }
    track_script {
        check_haproxy
    }
}
EOF
```
- master2 and master3 configuration:
```shell
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP          # standby node
    interface ens33       # NIC name
    virtual_router_id 51
    priority 200          # must be lower than the MASTER priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab   # secret; must match the MASTER
    }
    virtual_ipaddress {
        192.168.2.203     # virtual IP
    }
    track_script {
        check_haproxy
    }
}
EOF
```
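The health check in both configs, `killall -0 haproxy`, is worth unpacking: signal 0 delivers nothing, so the command only reports via its exit code whether a process with that name exists. A small sketch of the same mechanism using `kill -0` on a PID (a stand-in `sleep` process, since haproxy isn't running yet at this point):

```shell
# Signal 0 = existence check only, nothing is delivered to the process
sleep 30 &
pid=$!
kill -0 "$pid" && echo "alive"   # exit 0: process exists, no priority change

kill "$pid"
wait "$pid" 2>/dev/null
kill -0 "$pid" 2>/dev/null || echo "gone"   # non-zero: keepalived applies weight -2
```

In the keepalived config this runs every `interval 3` seconds; after `fall 10` consecutive failures the node's effective priority drops, and a BACKUP with a higher effective priority takes over the VIP.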
- Start and verify
Run on all three masters:
```shell
# Start keepalived
systemctl start keepalived.service
# Enable at boot
systemctl enable keepalived.service
# Check status
systemctl status keepalived.service
```
After starting, check the NIC on each master. Because the virtual IP is assigned to master1, an extra address (the VIP) shows up on master1; master2 and master3 do not have it, and it only appears on one of them after master1 goes down.
```shell
ip a s ens33
```
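To check which node currently holds the VIP without eyeballing the full output, a small sketch (assumes the `ens33` NIC and VIP from the keepalived config above):

```shell
# Prints "VIP held" only on the node that currently owns 192.168.2.203;
# on the backups it prints "VIP not here" until a failover happens.
if ip -4 addr show ens33 2>/dev/null | grep -q 192.168.2.203; then
    echo "VIP held"
else
    echo "VIP not here"
fi
```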
2.3 Deploy haproxy
- Install haproxy on the 3 master nodes
```shell
# Install haproxy
yum install -y haproxy
```
- Configuration
The configuration is identical on all 3 masters. It declares the 3 master nodes as backend servers and sets haproxy to listen on port 16443, so port 16443 is the entry point of the cluster.
```shell
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode            tcp
    bind            *:16443   # listening port 16443
    option          tcplog
    default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode    tcp
    balance roundrobin
    server  master01.k8s.io 192.168.2.200:6443 check   # change to your IP
    server  master02.k8s.io 192.168.2.201:6443 check   # change to your IP
    server  master03.k8s.io 192.168.2.204:6443 check   # change to your IP
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind          *:1080
    stats auth    admin:awesomePassword
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /admin?stats
EOF
```
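Before starting the service, the file can be sanity-checked: `haproxy -c` parses the configuration and reports errors without actually running the proxy. A small sketch (guarded so it is a no-op on a machine where haproxy isn't installed yet):

```shell
# -c = check mode: parse /etc/haproxy/haproxy.cfg and report, don't start
if command -v haproxy >/dev/null 2>&1; then
    haproxy -c -f /etc/haproxy/haproxy.cfg
else
    echo "haproxy not installed on this machine"
fi
```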
- Start and verify
Start on all 3 masters:
```shell
# Start haproxy
systemctl start haproxy
# Enable at boot
systemctl enable haproxy
# Check status
systemctl status haproxy
```
After starting, check that the listening ports include 16443:
```shell
netstat -tunlp | grep haproxy
```
If anything goes wrong along the way, see the companion article, the error roundup.