Preface:
That Docker is the de facto container standard should be uncontroversial; even KVM proponents would hardly object.
keepalived + haproxy is a common high-availability load-balancing combination, practically a standard in itself. keepalived, however, is not very deployment-friendly: it ships only as rpm/deb packages or as source to be compiled, and for whatever reason the keepalived project provides no standalone binary releases.
This makes a self-built Docker image containing keepalived attractive in heterogeneous environments. Suppose server A runs Ubuntu, B runs Kylin, C runs CentOS 8, D runs CentOS 6, and E runs Debian; none of that matters. As long as every machine can run Docker, our image runs on all of them, and even on such a motley fleet we can still build a stable, usable cluster.
Docker's promise of "build once, run anywhere" is demonstrated nicely by this build example.
In this article we will build a keepalived image with a Dockerfile on CentOS 7, then use it to assemble a simple keepalived cluster on two servers; consider it a starting point.
Note:
Why not use the official keepalived Docker image?
Mainly because the official keepalived image carries quite a few restrictions: it bundles check scripts that constrain how the image can be used, for example assuming the usual ens33-style NIC naming, which not every server has. The official images are really geared toward keepalived + LVS load balancing; a self-built image is more broadly useful.
1. The Dockerfile
Note: this build is performed with network access, on top of the centos:7 base image. The keepalived source tarball is fetched from the official site, then unpacked and compiled inside the container.
One caveat: the keepalived version must not be too new, otherwise compilation fails because the gcc available in the centos:7 base image is too old, and the image cannot be built. Version 2.0.8 works well.
Place the Dockerfile in an otherwise empty directory (I used /media, which held nothing else). docker build sends the entire directory as the build context, so extra files both inflate the context (and, if copied, the image) and slow the build down.
The Dockerfile is mildly optimized: the RUN commands are merged with && into a single layer, and the yum cache is cleaned at the end of that layer.
cat >Dockerfile <<'EOF'
FROM centos:7
RUN yum install -y wget && \
    yum install -y gcc-c++ openssl-devel openssl && \
    yum install -y net-tools && \
    wget http://www.keepalived.org/software/keepalived-2.0.8.tar.gz --no-check-certificate && \
    tar zxvf keepalived-2.0.8.tar.gz && \
    cd keepalived-2.0.8 && \
    ./configure && make && make install && \
    yum clean all
CMD ["keepalived", "-n", "--all", "-d", "-D", "-f", "/etc/keepalived/keepalived.conf", "--log-console"]
EOF
The build command (the new image is tagged kp:v1.0):
docker build -t kp:v1.0 .
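Once the build succeeds, a quick sanity check (a suggested extra step, not part of the original walkthrough) is to ask the freshly built binary for its version:

```shell
# Should report Keepalived v2.0.8; the kp:v1.0 tag matches the build command above.
docker run --rm kp:v1.0 keepalived --version
```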
Partial build output:
The log confirms the optimization takes effect: the yum cache is cleaned ("Cleaning repos ...") before the layer is committed, and the intermediate containers are removed.
...
make[1]: Leaving directory `/keepalived-2.0.8'
Loaded plugins: fastestmirror, ovl
Cleaning repos: base extras updates
Cleaning up list of fastest mirrors
Removing intermediate container 957d916a624f
 ---> a531d31e8312
Step 3/3 : CMD ["keepalived", "-n","--all", "-d", "-D", "-f", "/etc/keepalived/keepalived.conf", "--log-console"]
 ---> Running in 6f37152ebc2d
Removing intermediate container 6f37152ebc2d
 ---> acfc159663e2
Successfully built acfc159663e2
Successfully tagged kp:v1.0
The centos base image is about 200 MB, and the built image is about 360 MB, which is in line with expectations.
[root@node3 media]# docker images
REPOSITORY   TAG    IMAGE ID       CREATED          SIZE
kp           v1.0   acfc159663e2   51 minutes ago   357MB
2. Running the image
There are several ways to run the image: orchestrated with docker-compose, or directly with docker run. This example uses docker run.
Since the whole point of our image is to run the keepalived service, the container must use host networking (--net=host). keepalived is driven by a configuration file, so we first need to write one and mount it into the container. Finally, keepalived needs elevated network privileges, so the container must be started with --cap-add NET_ADMIN.
First, check the host's network:
The primary NIC is ens33, with IP address 192.168.217.23:
[root@node3 media]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:70:12:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.23/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe70:1212/64 scope link
       valid_lft forever preferred_lft forever
The keepalived master configuration file:
Written to match the network layout above:
mkdir -p /opt/keepalived
cat >/opt/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id LVS_Master
}
vrrp_instance VI_1 {
    state MASTER            # initial state; the actual role is decided by priority (BACKUP on the other node)
    interface ens33         # NIC the virtual IP lives on
    virtual_router_id 51    # VRID; instances with the same VRID form one group and share a multicast MAC
    priority 100            # priority; set to 90 on the backup node
    advert_int 1            # advertisement interval
    authentication {
        auth_type PASS      # authentication type: PASS or AH
        auth_pass 1111      # authentication password
    }
    virtual_ipaddress {
        192.168.217.100/16  # the VIP
    }
}
EOF
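With the configuration in place, the master container can be started. The original walkthrough only shows the backup node's docker run command later on; this is the analogous master invocation (the container name kp_master matches the logs in the test section):

```shell
# --net=host          keepalived must manage the host's real NIC (ens33)
# --cap-add NET_ADMIN required so it can add/remove the VIP on the interface
# -v ...              mounts the config written above into the container
docker run -it -d --name kp_master --net=host --cap-add NET_ADMIN \
  -v /opt/keepalived/keepalived.conf:/etc/keepalived/keepalived.conf kp:v1.0
```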
Verification:
Check the VIP; it is now present on ens33:
[root@node3 media]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:70:12:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.23/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/16 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe70:1212/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:55:fc:14:b8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:55ff:fefc:14b8/64 scope link
       valid_lft forever preferred_lft forever
The keepalived backup node
Transfer the image built on 192.168.217.23 to server 192.168.217.24 (for example with docker save and docker load).
Check the network on server 24:
[root@node4 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5b:a4:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.24/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:a4eb/64 scope link
       valid_lft forever preferred_lft forever
On server 24, write the backup node's keepalived configuration file:
mkdir -p /opt/keepalived
cat >/opt/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id LVS_Backup
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.217.100/16
    }
}
EOF
Start the container on server 24:
docker run -it --name kp_backup --net=host --cap-add NET_ADMIN -v /opt/keepalived/keepalived.conf:/etc/keepalived/keepalived.conf -d kp:v1.0
3. Testing
On server 23, view the container logs (excerpt below). Server 23 is in MASTER state:
docker logs kp_master
Keepalived_vrrp[7]: (VI_1) Entering MASTER STATE
Wed Dec 7 13:33:54 2022: (VI_1) setting VIPs.
Wed Dec 7 13:33:54 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:54 2022: (VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.217.100
Wed Dec 7 13:33:54 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:54 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:54 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:54 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:59 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:59 2022: (VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.217.100
Wed Dec 7 13:33:59 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:59 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:59 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
Wed Dec 7 13:33:59 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
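The console log is verbose, since keepalived runs with -d -D --log-console. A small hypothetical helper (vrrp_states is a made-up name, not part of keepalived) can reduce output like the above to just the VRRP state transitions:

```shell
# Keep only the "Entering MASTER/BACKUP STATE" lines from keepalived output.
vrrp_states() {
  grep -E 'Entering (MASTER|BACKUP) STATE'
}

# On a live host you would feed it the container logs:
#   docker logs kp_master 2>&1 | vrrp_states
# Demo on two captured lines; prints only the "Entering MASTER STATE" line:
printf '%s\n' \
  'Keepalived_vrrp[7]: (VI_1) Entering MASTER STATE' \
  'Keepalived_vrrp[7]: Sending gratuitous ARP on ens33 for 192.168.217.100' \
  | vrrp_states
```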
On server 24, view the container logs (excerpt below):
docker logs kp_backup
Wed Dec 7 13:47:13 2022: HW Type = ETHERNET
Wed Dec 7 13:47:13 2022: NIC netlink status update
Wed Dec 7 13:47:13 2022: Reset promote_secondaries counter 0
Wed Dec 7 13:47:13 2022: Tracking VRRP instances = 0
Wed Dec 7 13:47:13 2022: (VI_1) Entering BACKUP STATE (init)
Wed Dec 7 13:47:13 2022: VRRP sockpool: [ifindex(2), family(IPv4), proto(112), unicast(0), fd(9,10)]
Stop the master container on server 23:
docker stop kp_master
On server 24, check whether the VIP has floated over:
[root@node4 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5b:a4:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.24/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/16 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5b:a4eb/64 scope link
       valid_lft forever preferred_lft forever
View the container logs on server 24. The election shows the backup becoming master:
docker logs kp_backup
Wed Dec 7 14:02:11 2022: (VI_1) Backup received priority 0 advertisement
Wed Dec 7 14:02:11 2022: (VI_1) Receive advertisement timeout
Wed Dec 7 14:02:11 2022: (VI_1) Entering MASTER STATE
Wed Dec 7 14:02:11 2022: (VI_1) setting VIPs.
Wed Dec 7 14:02:11 2022: Sending gratuitous ARP on ens33 for 192.168.217.100
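Transitions like the one above can also be acted on: keepalived's vrrp_instance supports a notify option that runs a script on every state change, passing it the instance type, instance name, and new state. This setup doesn't use one, but a minimal hypothetical sketch of such a handler (notify_state is a made-up name standing in for the script body) looks like this:

```shell
# Hypothetical notify handler. keepalived invokes the configured script as:
#   <script> <TYPE> <NAME> <STATE>
# where STATE is MASTER, BACKUP, or FAULT.
notify_state() {
  name=$2
  state=$3
  case "$state" in
    MASTER) echo "$name became MASTER" ;;
    BACKUP) echo "$name became BACKUP" ;;
    FAULT)  echo "$name entered FAULT" ;;
    *)      echo "$name: unexpected state $state" ;;
  esac
}

notify_state INSTANCE VI_1 MASTER   # prints: VI_1 became MASTER
```

In a real deployment the body would do something useful, such as reloading a service or sending an alert, instead of echoing.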
Now restart the container on server 23 and view its logs:
docker start kp_master
The election process as seen from server 23:
docker logs kp_master
Keepalived_vrrp[7]: (VI_1) Entering BACKUP STATE (init)
Wed Dec 7 14:06:16 2022: VRRP sockpool: [ifindex(2), family(IPv4), proto(112), unicast(0), fd(9,10)]
Wed Dec 7 14:06:16 2022: (VI_1) received lower priority (90) advert from 192.168.217.24 - discarding
Wed Dec 7 14:06:17 2022: (VI_1) received lower priority (90) advert from 192.168.217.24 - discarding
Wed Dec 7 14:06:18 2022: (VI_1) received lower priority (90) advert from 192.168.217.24 - discarding
Wed Dec 7 14:06:19 2022: (VI_1) received lower priority (90) advert from 192.168.217.24 - discarding
Wed Dec 7 14:06:20 2022: (VI_1) Receive advertisement timeout
Wed Dec 7 14:06:20 2022: (VI_1) Entering MASTER STATE
Wed Dec 7 14:06:20 2022: (VI_1) setting VIPs.
The VIP is back on server 23:
[root@node3 media]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:70:12:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.23/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/16 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe70:1212/64 scope link
       valid_lft forever preferred_lft forever
And of course server 24 dropped the VIP on its own: the higher-priority node preempted cleanly, showing that a keepalived cluster built this way exhibited no split-brain in this test and is entirely usable.
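To get a feel for how long the VIP is actually unreachable during such a failover, a simple client-side probe run from another machine on the subnet can help. A sketch (probe_vip is a made-up helper; the VIP is the one configured above):

```shell
# Ping the VIP a fixed number of times, one attempt per iteration,
# and report reachability for each attempt.
probe_vip() {
  vip=$1
  n=$2
  i=1
  while [ "$i" -le "$n" ]; do
    if ping -c1 -W1 "$vip" >/dev/null 2>&1; then
      echo "attempt $i: VIP reachable"
    else
      echo "attempt $i: VIP unreachable"
    fi
    i=$((i + 1))
  done
}

probe_vip 192.168.217.100 5
```

Running this in a loop while stopping kp_master shows roughly how many seconds of unreachability the VRRP takeover costs.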