I. Docker Networking: Understanding docker0
- Preparation: start from a clean environment
- Remove all of Docker's images and containers first, so nothing interferes (a cleanup sketch follows below).
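A minimal cleanup sketch, assuming you really do want to wipe everything on this host (both commands are destructive):

```bash
# Force-remove all containers, running or stopped
docker rm -f $(docker ps -aq)
# Force-remove all images
docker rmi -f $(docker images -aq)
```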
1. Check the host's network interfaces with ip addr
● Three interfaces are visible:
- lo: the local loopback address;
- ens: the VM's (or Alibaba Cloud server's) private network address;
- docker0: the Docker network address.
● Question: how does Docker handle network access for containers?
2. Check the network inside a started container with ip addr
● The container image does not ship the ip command, so exec ip addr fails:

```
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "ip": executable file not found in $PATH: unknown
```
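One way to fix this, assuming a Debian-based image (which the official tomcat images are), is to install the iproute2 package inside the container:

```bash
# Inside the container: iproute2 provides the `ip` command
apt-get update && apt-get install -y iproute2
```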
☺ With that in place, ip addr succeeds:

```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
```
3. The Docker container and the Linux host can ping each other
```
# The tomcat container pings the host (public IP 120.76.136.52, private IP 172.22.26.169)
root@f1cfb81dedfd:/usr/local/tomcat# ping 120.76.136.52
PING 120.76.136.52 (120.76.136.52) 56(84) bytes of data.
64 bytes from 120.76.136.52: icmp_seq=1 ttl=63 time=2.97 ms
64 bytes from 120.76.136.52: icmp_seq=2 ttl=63 time=2.89 ms
root@f1cfb81dedfd:/usr/local/tomcat# ping 172.22.26.169
PING 172.22.26.169 (172.22.26.169) 56(84) bytes of data.
64 bytes from 172.22.26.169: icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from 172.22.26.169: icmp_seq=2 ttl=64 time=0.072 ms
64 bytes from 172.22.26.169: icmp_seq=3 ttl=64 time=0.086 ms

# The host pings the tomcat container (container IP 172.17.0.2)
[root@iZwz9535z41cmgcpkm7i81Z ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.106 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.083 ms
```
4. How container interconnection works:
Every time Docker starts a container, it assigns the container an IP. As long as Docker is installed, the host has a docker0 interface working in bridge mode, implemented with the veth-pair technique.
- Inside the container, query IP info:
● The ping command was not found: bash: ping: command not found
- Fix: install iputils-ping (run apt-get update first if needed): apt -y install iputils-ping
- On the host, query IP info:
■ Start another container and check the host's interfaces again: yet another pair of interfaces has appeared
```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat02 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
```
The container-side interface and the host-side interface appear in pairs; this is the veth-pair technique.
- Every container Docker starts brings one of these interface pairs with it.
- A veth-pair is a pair of virtual device interfaces that always come in twos: one end plugs into the container's protocol stack, the other end sits on the host, and the two ends are wired to each other. Because of this property, veth-pairs act as the cable connecting all kinds of virtual network devices (a quick way to verify the pairing is sketched below).
- OpenStack, Docker container links, and OVS connections all use the veth-pair technique.
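A quick check (my addition, not in the original notes) that matches a container's eth0 to its host-side veth peer; the interface index 7 here is just an example taken from the output above:

```bash
# Inside the container: print the interface index of eth0's peer on the host
docker exec -it tomcat01 cat /sys/class/net/eth0/iflink   # e.g. 7, matching eth0@if7

# On the host: look up that index to find the matching veth interface
ip link | grep '^7:'                                      # e.g. 7: vethXXXXXXX@if6 ...
```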
5. Containers can ping each other, again thanks to veth-pairs:
```
# tomcat01 pings tomcat02
root@f1cfb81dedfd:/usr/local/tomcat# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@f1cfb81dedfd:/usr/local/tomcat# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.125 ms

# tomcat02 pings tomcat01
root@23254b923487:/usr/local/tomcat# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@23254b923487:/usr/local/tomcat# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.136 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.105 ms
```
docker0 behaves like a router here (strictly speaking it is a Linux bridge, i.e. a virtual switch): every container attaches to docker0, and traffic between containers is forwarded through it.
■ Conclusion: containers tomcat01 and tomcat02 share the same forwarding device, docker0.
- Any container started without an explicit network goes through docker0, and Docker assigns it an available IP by default.
- docker0:
- The veth-pair technique:
- Docker uses Linux bridging; on the host, docker0 is the bridge for Docker containers (you can list what is attached to it, as sketched below).
- All of Docker's network interfaces are virtual (virtual forwarding is efficient); traffic stays on an internal network.
- As soon as a container is deleted, its corresponding veth pair is removed as well.
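A hedged sketch (my addition) of how to look at the bridge itself; `ip link show master docker0` lists the host-side veth endpoints enslaved to it, and `brctl` gives the same view if the bridge-utils package is installed:

```bash
# List every interface attached to the docker0 bridge
ip link show master docker0

# Equivalent view via bridge-utils
brctl show docker0
```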
II. Container Interconnection ● --link
1. (A high-availability problem) Requirement: database url = ip;
Every time a container or the Linux host restarts, the container IP can change, so a network wired together with fixed IPs breaks. How can we connect by service name instead of worrying about IPs?
→ We want to reach containers by name.
2. Test pinging by container name
```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known
```
- Containers cannot reach each other by container name. How do we solve this?
```
# --link solves the connectivity-by-name problem
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker run -d -P --name tomcat03 --link tomcat02 tomcat:9.0
81d38e78eea0756c654af6b51ac626ad7c086a7fe56589303ddb108fd0091f8d
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.182 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.082 ms

# But the reverse direction does NOT work (tomcat02 was created without --link tomcat03)
# tomcat02 tries to ping tomcat03 by name:
root@23254b923487:/usr/local/tomcat# ping tomcat03
ping: tomcat03: Name or service not known
```
3. The docker network command:
- Explore with the inspect subcommand
- inspect tomcat03:
- Enter tomcat03 and look at its hosts file (the commands are sketched below):
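The notes only show the results, so here is a hedged guess at the commands behind these bullets:

```bash
docker network ls                        # list the networks
docker network inspect bridge            # inspect the default bridge (docker0)
docker inspect tomcat03                  # container details, including its Links
docker exec -it tomcat03 cat /etc/hosts  # the hosts file examined next
```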
4. Why tomcat03 can reach tomcat02 by name:
Through --link, tomcat03 got tomcat02's IP written into its own container's hosts file!
```
[root@node1 ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3      tomcat02 23254b923487
172.17.0.4      373a2f03bd8d

# --link simply hard-coded the entry "172.17.0.3 tomcat02 23254b923487" into the hosts file
```
● In essence, --link just edits the hosts mapping. --link is deprecated; custom networks are the recommended way to do this!
- So: don't use --link, and don't rely on the default docker0 bridge.
- Reason: docker0's limitation is that it does not support access by container name!
III. Container Interconnection ● Custom Networks
● The network command:
```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network --help

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
```
1. List all Docker networks: docker network ls
```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
8ddb7e9846c6   bridge    bridge    local
48e785b7efb3   host      host      local
7e07c5b5ae34   none      null      local
```
2. Network modes (command-line sketches follow below):
- bridge: bridged (the default; custom networks normally use bridge mode too)
- host: share the host's network stack
- none: no networking configured
- container: share another container's network namespace (rarely used; very limited)
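A hedged sketch of what each mode looks like on the command line; the container names are illustrative, and alpine is used for the container: mode to avoid a port clash with tomcat in the shared namespace:

```bash
docker run -d -P --name t-bridge tomcat:9.0            # default: --net bridge (docker0)
docker run -d --name t-host --net host tomcat:9.0      # shares the host's interfaces; -p/-P are meaningless here
docker run -d --name t-none --net none tomcat:9.0      # only a loopback interface inside
docker run -d --name t-shared --net container:t-bridge alpine sleep 1d
# t-shared reuses t-bridge's network namespace (same IP, same ports)
```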
3. Test a custom network
```
# Starting a container the usual way actually uses docker0 by default [--net bridge]
docker run -d -P --name tomcat01 tomcat:9.0
# i.e. it is equivalent to
docker run -d -P --name tomcat01 --net bridge tomcat:9.0

# docker0 characteristics: it is the default, but container names cannot be resolved on it;
# --link can work around that
```
- Create a custom network: docker network create
```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network create --help

Usage:  docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
      --config-from string   The network from which to copy the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment
```
- Create a network named mynet:
- The custom network is created.
- Inspect the network you created:
- When creating containers, attach them to the custom network mynet.
With different containers all sitting on the same mynet network, Docker maintains the name-to-container relationships for us (the commands for these steps are sketched below).
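The notes rely on screenshots for these steps; based on the 192.168.0.x addresses in the ping output below, the commands were presumably:

```bash
# Create a bridge network with an explicit subnet and gateway
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet

# Confirm it exists and inspect it
docker network ls
docker network inspect mynet

# Start two containers attached to mynet
docker run -d -P --name tomcat-net-01 --net mynet tomcat:9.0
docker run -d -P --name tomcat-net-02 --net mynet tomcat:9.0
```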
4. Ping between containers by name
```
# Without --link, pinging by name now works
# tomcat-net-01 pings tomcat-net-02 by name
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=4 ttl=64 time=0.070 ms
^C
--- tomcat-net-02 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.062/0.074/0.102/0.016 ms

# tomcat-net-02 pings tomcat-net-01 by name
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat-net-02 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.074 ms
^C
--- tomcat-net-01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.074/0.098/0.123/0.024 ms
```
5. Why custom networks matter:
On a custom network, Docker maintains all of these name-to-address relationships for us. Different clusters can live on different networks, keeping each cluster's network isolated and healthy.
- For example, a Redis cluster on one subnet and a MySQL cluster on another (the original notes say 192.160.0.0/16 and 192.161.0.0/16; in practice you would pick private ranges such as 172.38.0.0/16).
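A hedged sketch of that isolation idea, using network names and private subnets of my own choosing:

```bash
# Each cluster gets its own bridge network and subnet
docker network create --subnet 172.38.0.0/16 redis-net
docker network create --subnet 172.39.0.0/16 mysql-net
# Containers on redis-net cannot reach containers on mysql-net unless explicitly connected
```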
IV. Connecting Across Networks
1. Scenario: tomcat01 (on docker0) pings tomcat-net-01 (on mynet) and fails (see the attempt sketched below)
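The failing attempt is not shown in the notes; it was presumably:

```bash
# tomcat01 sits on docker0, tomcat-net-01 sits on mynet:
# different networks, so no route and no name resolution between them
docker exec -it tomcat01 ping tomcat-net-01
```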
2. Use docker network connect:

```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network connect mynet tomcat01
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network inspect mynet
```
■ In the inspect output you can see that mynet has added the tomcat01 container to its own network:
- Test the now-open tomcat01-to-mynet link
- After connecting, tomcat01 is effectively placed onto the mynet network: one container, two IP addresses!
- Compare an Alibaba Cloud server: one public IP and one private IP.
■ A network cannot be wired to another network directly, but a container can be connected to another network.
■ Containers on different networks (segments) can then ping each other:
- Via docker network connect:

```
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.062 ms
^C
--- tomcat-net-01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.062/0.080/0.115/0.024 ms

[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat-net-01 ping tomcat01
PING tomcat01 (192.168.0.4) 56(84) bytes of data.
64 bytes from tomcat01.mynet (192.168.0.4): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from tomcat01.mynet (192.168.0.4): icmp_seq=2 ttl=64 time=0.064 ms
```
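A quick check (my addition, not in the original notes) that tomcat01 really does hold two IPs now:

```bash
# Expect two non-loopback interfaces: eth0 on 172.17.0.x (docker0) and eth1 on 192.168.0.x (mynet)
docker exec -it tomcat01 ip addr
```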
3. Conclusion:
To reach containers across networks, you must connect them with docker network connect!
V. Hands-On: Deploying a Redis Cluster
1. The cluster needs its own network
2. Sharding + high availability + load balancing
3. Use a shell script to start the six containers
4. The deployment process:
```
# Prep: remove the system's other containers, so running too many doesn't crash the host
docker rm -f $(docker ps -aq)

# Create the redis network
docker network create redis --subnet 172.38.0.0/16

# Check the redis network
docker network ls
docker network inspect redis
```
- Create the six Redis configs with a shell loop:

```
# Create six redis configs via a loop
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
```
- Check the node directories:

```
[root@iZwz9535z41cmgcpkm7i81Z ~]# cd /mydata/redis/
[root@iZwz9535z41cmgcpkm7i81Z redis]# ls
node-1  node-2  node-3  node-4  node-5  node-6
```
- Start the node containers. This is a template: substitute ${port} with 1 through 6, or loop over it as sketched below:

```
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
    -v /mydata/redis/node-${port}/data:/data \
    -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
```
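A loop variant (my addition) that starts all six nodes in one go, reusing the same port variable as the config loop:

```bash
# Start redis-1 .. redis-6 with host ports 6371-6376 and 16371-16376
for port in $(seq 1 6); do
  docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
    -v /mydata/redis/node-${port}/data:/data \
    -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
```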
- Or start them one by one:

```
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
    -v /mydata/redis/node-1/data:/data \
    -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
    -v /mydata/redis/node-2/data:/data \
    -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
    -v /mydata/redis/node-3/data:/data \
    -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
    -v /mydata/redis/node-4/data:/data \
    -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
    -v /mydata/redis/node-5/data:/data \
    -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
    -v /mydata/redis/node-6/data:/data \
    -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
    -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
```
- Create the cluster:

```
docker exec -it redis-1 /bin/sh
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 0bd617e83421999d29fb55c25f798d3600495e76 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 8b91a88e817dcff1a5f82d1ea577acf77799bd95 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 8806e059a5c76468aed86fddc1ec9f006c0de203 172.38.0.14:6379
   replicates d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619
S: 155b2b1ef7443e87b944cd745c22584aa5660628 172.38.0.15:6379
   replicates 0bd617e83421999d29fb55c25f798d3600495e76
S: 33e7146e8084a4cb93b1d057612f6a46652e357f 172.38.0.16:6379
   replicates 8b91a88e817dcff1a5f82d1ea577acf77799bd95
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 0bd617e83421999d29fb55c25f798d3600495e76 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 8806e059a5c76468aed86fddc1ec9f006c0de203 172.38.0.14:6379
   slots: (0 slots) slave
   replicates d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619
S: 33e7146e8084a4cb93b1d057612f6a46652e357f 172.38.0.16:6379
   slots: (0 slots) slave
   replicates 8b91a88e817dcff1a5f82d1ea577acf77799bd95
M: d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 155b2b1ef7443e87b944cd745c22584aa5660628 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 0bd617e83421999d29fb55c25f798d3600495e76
M: 8b91a88e817dcff1a5f82d1ea577acf77799bd95 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
- Query the cluster's state:

```
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:315
cluster_stats_messages_pong_sent:323
cluster_stats_messages_sent:638
cluster_stats_messages_ping_received:318
cluster_stats_messages_pong_received:315
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:638
127.0.0.1:6379> cluster nodes
8806e059a5c76468aed86fddc1ec9f006c0de203 172.38.0.14:6379@16379 slave d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 0 1651111739893 4 connected
33e7146e8084a4cb93b1d057612f6a46652e357f 172.38.0.16:6379@16379 slave 8b91a88e817dcff1a5f82d1ea577acf77799bd95 0 1651111741407 6 connected
d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 172.38.0.13:6379@16379 master - 0 1651111740000 3 connected 10923-16383
155b2b1ef7443e87b944cd745c22584aa5660628 172.38.0.15:6379@16379 slave 0bd617e83421999d29fb55c25f798d3600495e76 0 1651111740000 5 connected
8b91a88e817dcff1a5f82d1ea577acf77799bd95 172.38.0.12:6379@16379 master - 0 1651111740906 2 connected 5461-10922
0bd617e83421999d29fb55c25f798d3600495e76 172.38.0.11:6379@16379 myself,master - 0 165111739000 1 connected 0-5460
```
- Test setting a key-value pair:

```
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
```
- Open another window and test high availability
- Stop redis-3, which is currently running and holds the key we just set. If the HA setup works, a replica will take over for the downed master.
```
# In another window, stop the running container redis-3
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker stop redis-3
redis-3
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED             STATUS             PORTS                                              NAMES
5c15f03d7a55   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
f375fc1baaec   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
7e335e02b33d   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
4e721d20f8fd   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
e438501487a1   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   2 hours ago         Up 2 hours         0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1

# With redis-3 stopped, test whether a replica takes over
172.38.0.13:6379> get a
Could not connect to Redis at 172.38.0.13:6379: Host is unreachable
(32.33s)
not connected>
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"
172.38.0.14:6379>
```
- After redis-3 went down, its replica at 172.38.0.14 (redis-4) answered for the slot instead.
- This proves the Docker-based Redis cluster deployment works!
☺ Reference:
狂神说Java's Bilibili video series《【狂神说Java】Docker最新超详细版教程通俗易懂》(a detailed, beginner-friendly Docker tutorial): https://www.bilibili.com/video/BV1og4y1q7M4