Cloud Native | Kubernetes | Deploying a Highly Available Cluster with kubeadm (Part 2) --- kube-apiserver HA + external etcd cluster + haproxy + keepalived (2)


The tedious part of building a cluster is really the base environment setup, which takes time and effort; with a good start, though, the rest will be a bit easier.

3. Creating the Load Balancer (HAProxy + Keepalived)


When there are multiple control plane nodes there are also multiple kube-apiserver instances, and tools such as Nginx + Keepalived or HAProxy + Keepalived can be used to load-balance them and make them highly available.

The HAProxy + Keepalived combination is recommended here: HAProxy provides high-performance layer-4 load balancing, and it is also what most people choose.

Load balancer architecture diagram (image omitted).

Install haproxy and keepalived on all three master nodes:

yum install haproxy keepalived -y
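You can confirm that both packages landed before going further:

rpm -q haproxy keepalived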

HAProxy configuration (bind port 9443 and group the three apiservers into a backend; copy this file to the other two master nodes):

cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# main frontend which proxies to the backend
#---------------------------------------------------------------------
frontend apiserver
    bind *:9443
    mode tcp
    option tcplog
    default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing across the three kube-apiservers
#---------------------------------------------------------------------
backend apiserver
    balance     roundrobin
    server k8s-master1 192.168.217.19:6443 check
    server k8s-master2 192.168.217.20:6443 check
    server k8s-master3 192.168.217.21:6443 check
#---------------------------------------------------------------------
# stats web page for monitoring HAProxy
#---------------------------------------------------------------------
listen admin_stats
    bind 0.0.0.0:9188                  # address and port the stats page binds to
    mode http                          # the stats page is served over HTTP
    log 127.0.0.1 local0 err           # log errors only
    stats refresh 30s
    stats uri /haproxy-status          # stats page URL, i.e. http://IP:9188/haproxy-status
    stats realm welcome login\ Haproxy
    stats auth admin:admin123          # username and password for the web page
    stats hide-version
    stats admin if TRUE
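Before distributing the file, you can ask haproxy itself to validate the syntax (it exits non-zero on errors):

# check the configuration without starting the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg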

Copy the HAProxy configuration file to the other masters:

scp /etc/haproxy/haproxy.cfg master2:/etc/haproxy/
scp /etc/haproxy/haproxy.cfg master3:/etc/haproxy/

Open any browser and go to IP:9188 to reach the HAProxy web interface (username: admin, password: admin123).
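The stats page can also be checked from the command line; the URI and credentials below are the ones set in the listen admin_stats section above:

curl -u admin:admin123 http://192.168.217.19:9188/haproxy-status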


Keepalived configuration:

The keepalived configuration file on the master1 node:

[root@master1 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.217.19
}
vrrp_script check_haproxy {
    script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
    interval 3
    weight -2
    fall 3
    rise 3
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.217.100
    }
    track_script {
        check_haproxy
    }
}
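The check_haproxy script simply tests whether anything is listening on port 9443; while it fails, keepalived subtracts the weight (2) from this node's priority, and after three consecutive successes (rise 3) the penalty is removed. You can run the same check by hand to see what keepalived sees:

# exits 0 while haproxy is listening on 9443, 1 otherwise
bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'; echo $?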

The keepalived configuration file on the master2 node:

[root@master2 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.217.20
}
vrrp_script check_haproxy {
    script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
    interval 3
    weight -2
    fall 3
    rise 3
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 99
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.217.100
    }
    track_script {
        check_haproxy
    }
}

The keepalived configuration file on the master3 node:

[root@master3 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id 192.168.217.21
}
vrrp_script check_haproxy {
    script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
    interval 3
    weight -2
    fall 3
    rise 3
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 98
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.217.100
    }
    track_script {
        check_haproxy
    }
}
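Note that the three files differ only in router_id, state, and priority. If you prefer not to edit them by hand, a sketch like the following (assuming root SSH access to master2) derives master2's copy from master1's; master3 is analogous:

# only router_id, state, and priority change between nodes
sed -e 's/192.168.217.19/192.168.217.20/' \
    -e 's/state MASTER/state BACKUP/' \
    -e 's/priority 100/priority 99/' \
    /etc/keepalived/keepalived.conf | ssh master2 'cat > /etc/keepalived/keepalived.conf'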

Enable and start both services:

systemctl enable keepalived haproxy && systemctl start keepalived haproxy
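A quick way to confirm that both services actually came up:

systemctl is-active haproxy keepalived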

Testing the load balancer:

Check that the port is listening:

[root@master1 ~]# netstat -antup |grep 9443
tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN      1011/haproxy 

Check the VIP on master1 (why master1? because its priority, 100, is the highest):

[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5c:1e:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.19/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5c:1e66/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:f3:9c:2f:92 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
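If you only care about the VIP, a shorter check works too:

ip -4 addr show ens33 | grep 192.168.217.100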

Stop the haproxy service on master1 and check whether the VIP floats over to master2:

[root@master1 ~]# systemctl stop haproxy

Check on master2:

[root@master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:86:30:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.20/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe86:30e6/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:8a:27:b6:7f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

The VIP failed over successfully. This step must succeed before you move on.

Restore the haproxy service on master1; you can see that the VIP comes back very quickly:

[root@master1 ~]# systemctl start haproxy
[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5c:1e:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.19/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5c:1e66/64 scope link 
       valid_lft forever preferred_lft forever

Check master1's system log to see the VIP failover process:

Oct 27 19:01:28 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:31 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:34 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:37 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:38 master1 systemd: Started HAProxy Load Balancer.
Oct 27 19:01:38 master1 systemd: Starting HAProxy Load Balancer...
Oct 27 19:01:38 master1 haproxy-systemd-wrapper: [WARNING] 299/190138 (31426) : config : 'option forwardfor' ignored for frontend 'apiserver' as it requires HTTP mode.
Oct 27 19:01:46 master1 Keepalived_vrrp[1037]: VRRP_Script(check_haproxy) succeeded
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Changing effective priority from 98 to 100
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) forcing a new MASTER election
Oct 27 19:01:48 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:50 master1 ntpd[757]: Listen normally on 10 ens33 192.168.217.100 UDP 123

The election-related part goes like this:

  • the check script succeeds
  • because the script succeeded, the effective priority rises from 98 back to 100 (base priority 100, minus the weight of 2 applied while the check was failing)
  • a new MASTER election is forced
  • master1 wins the election
  • keepalived on master1 enters the MASTER state
Oct 27 19:01:46 master1 Keepalived_vrrp[1037]: VRRP_Script(check_haproxy) succeeded
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Changing effective priority from 98 to 100
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) forcing a new MASTER election
Oct 27 19:01:48 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Entering MASTER STATE
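Once the control plane is initialized in the next part, the VIP can be tested end to end. Until the kube-apiservers are actually listening on :6443, HAProxy has no healthy backend, so a request like the one below is expected to fail:

# expects the apiservers behind the VIP to be up; -k skips certificate verification
curl -k https://192.168.217.100:9443/version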

