Cloud Native | Kubernetes | Deploying a Highly Available Cluster with kubeadm (Part 2) --- kube-apiserver HA + External etcd Cluster + HAProxy + Keepalived


The tedious part of building a cluster is really the base environment setup, which is time-consuming; with a good start, though, the rest of the work goes much more smoothly.

3. Creating the Load Balancer (HAProxy + Keepalived)


With multiple control-plane nodes there are also multiple kube-apiservers. Tools such as Nginx + Keepalived or HAProxy + Keepalived can be used to load-balance them and make the apiserver endpoint highly available.

HAProxy + Keepalived is the recommended combination here: HAProxy provides high-performance layer-4 (TCP) load balancing, and it is also what most people choose.
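For orientation, the whole point of this load balancer is to give kubeadm a single, stable control-plane endpoint. A minimal sketch only (not the actual initialization command of this series, which comes in a later step), assuming the VIP and port configured below:

# hypothetical sketch: point kubeadm at the load-balanced VIP instead of a single apiserver
kubeadm init \
  --control-plane-endpoint "192.168.217.100:9443" \
  --upload-certs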

Load balancing architecture: clients connect to the VIP 192.168.217.100 on port 9443, and HAProxy on whichever node currently holds the VIP forwards the traffic to the three kube-apiservers on port 6443.

Install haproxy and keepalived on all three master nodes:

yum install haproxy keepalived -y

HAProxy configuration (it binds port 9443 and proxies to the three apiservers as a backend; this configuration file is later copied to the other two master nodes):

cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend apiserver
    bind *:9443
    mode tcp
    option tcplog
    default_backend apiserver
#---------------------------------------------------------------------
# backend: round-robin across the three kube-apiserver instances
#---------------------------------------------------------------------
backend apiserver
    balance     roundrobin
    server k8s-master1 192.168.217.19:6443 check
    server k8s-master2 192.168.217.20:6443 check
    server k8s-master3 192.168.217.21:6443 check
#---------------------------------------------------------------------
# HAProxy statistics page
#---------------------------------------------------------------------
listen admin_stats
    bind 0.0.0.0:9188           # address and port for the stats page
    mode http                   # the stats page is served over HTTP
    log 127.0.0.1 local0 err    # error-level logging
    stats refresh 30s
    stats uri /haproxy-status   # stats page URI: http://<node IP>:9188/haproxy-status
    stats realm welcome\ login\ Haproxy
    stats auth admin:admin123   # username and password for the web page
    stats hide-version
    stats admin if TRUE
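Before copying the file to the other masters, the syntax can be checked on master1 (haproxy's -c flag only validates the configuration and exits):

haproxy -c -f /etc/haproxy/haproxy.cfg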

Copy the haproxy configuration file to the other two masters:

scp /etc/haproxy/haproxy.cfg master2:/etc/haproxy/
scp /etc/haproxy/haproxy.cfg master3:/etc/haproxy/

Open any browser and visit <node IP>:9188/haproxy-status to reach the HAProxy web interface (username: admin, password: admin123).
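If no browser is at hand, the same page can be fetched from the command line, using the address and credentials above (any of the three masters works):

curl -u admin:admin123 http://192.168.217.19:9188/haproxy-status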


Keepalived configuration. The check_haproxy script exits non-zero when nothing is listening on port 9443; after three consecutive failures (fall 3) the node's priority is lowered by 2 (weight -2), which lets a backup node win the VRRP election and take over the VIP.

Keepalived configuration file on the master1 node:

[root@master1 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.217.19
}
vrrp_script check_haproxy {
    script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
    interval 3
    weight -2
    fall 3
    rise 3
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.217.100
    }
    track_script {
        check_haproxy
    }
}

Keepalived configuration file on the master2 node:

[root@master2 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.217.20
}
vrrp_script check_haproxy {
    script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
    interval 3
    weight -2
    fall 3
    rise 3
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 99
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.217.100
    }
    track_script {
        check_haproxy
    }
}

Keepalived configuration file on the master3 node:

[root@master3 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.217.21
}
vrrp_script check_haproxy {
    script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
    interval 3
    weight -2
    fall 3
    rise 3
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 98
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.217.100
    }
    track_script {
        check_haproxy
    }
}
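The only fields that differ between the three nodes are router_id, state, and priority (100 > 99 > 98, so master1 wins the initial election). A quick check on each node:

grep -E 'router_id|state|priority' /etc/keepalived/keepalived.conf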

Enable and start both services on all three masters:

systemctl enable keepalived haproxy && systemctl start keepalived haproxy
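A quick sanity check that both services are running on each master (prints "active" per unit):

systemctl is-active haproxy keepalived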

Testing the load balancer:

Check that the port is listening:

[root@master1 ~]# netstat -antup |grep 9443
tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN      1011/haproxy 

Check the VIP on master1 (why master1? because its priority, 100, is the highest):

[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5c:1e:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.19/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5c:1e66/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:f3:9c:2f:92 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Stop the haproxy service on master1 and check whether the VIP fails over to master2:

[root@master1 ~]# systemctl stop haproxy

Check on master2:

[root@master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:86:30:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.20/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe86:30e6/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:8a:27:b6:7f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

The failover succeeded. Make sure this step works before moving on.
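A small loop shows at a glance which node currently holds the VIP (assuming ssh access between the masters):

for node in master1 master2 master3; do
    echo -n "$node: "
    ssh "$node" "ip -4 addr show ens33 | grep 192.168.217.100 || echo 'no vip'"
done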

Restore the haproxy service on master1; the VIP comes back almost immediately:

[root@master1 ~]# systemctl start haproxy
[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5c:1e:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.19/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5c:1e66/64 scope link 
       valid_lft forever preferred_lft forever

Check the system log on master1 to see the VIP failback process:

Oct 27 19:01:28 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:31 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:34 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:37 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:38 master1 systemd: Started HAProxy Load Balancer.
Oct 27 19:01:38 master1 systemd: Starting HAProxy Load Balancer...
Oct 27 19:01:38 master1 haproxy-systemd-wrapper: [WARNING] 299/190138 (31426) : config : 'option forwardfor' ignored for frontend 'apiserver' as it requires HTTP mode.
Oct 27 19:01:46 master1 Keepalived_vrrp[1037]: VRRP_Script(check_haproxy) succeeded
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Changing effective priority from 98 to 100
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) forcing a new MASTER election
Oct 27 19:01:48 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:50 master1 ntpd[757]: Listen normally on 10 ens33 192.168.217.100 UDP 123

The election-related part of the log reads as follows:

  • The check script succeeds
  • Because the script succeeds, the effective priority jumps from 98 back to 100
  • A new MASTER election is forced
  • master1 wins the election and transitions to the MASTER state
  • master1's keepalived enters the MASTER state
Oct 27 19:01:46 master1 Keepalived_vrrp[1037]: VRRP_Script(check_haproxy) succeeded
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Changing effective priority from 98 to 100
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) forcing a new MASTER election
Oct 27 19:01:48 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Entering MASTER STATE

