Dynamically Scaling Down a Redis Cluster

Summary: A Redis cluster is generally built from an odd number of master nodes (three or more); otherwise it is prone to split-brain. This walkthrough only demonstrates removing one machine pair; the same method can then remove a second master/replica pair, arriving at a three-master, three-replica architecture.

Once a Redis cluster is in production, it will sooner or later need adjustment due to hardware failures, network re-planning, or business changes: adding nodes, removing nodes, migrating nodes, replacing servers, and so on. Adding or removing nodes involves redistributing the existing hash slots and migrating their data.
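As background for why removing a master always implies data movement: every key deterministically maps to one of 16384 slots via CRC16 (with `{hash tag}` support), so those slots, and the keys in them, must be handed to the surviving masters before a node can leave. A minimal Python sketch of the mapping (illustrative only, not the redis-cli implementation):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem variant), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the cluster slot for a key, honoring the {hash tag} convention."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:   # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, so they always migrate together.
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # -> True
```

Because the mapping is fixed, a cluster cannot simply "drop" a master: each of its slots has to be reassigned and its keys physically moved, which is exactly what the reshard steps below do.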

The scenario simulated here: a company, because of shrinking business or a host failure, needs to reduce an existing five-master, five-replica Redis Cluster. We demonstrate dynamically scaling it down to a four-master, four-replica cluster without interrupting the service or losing data.

Note: a Redis cluster is generally built from an odd number of master nodes (three or more); otherwise it is prone to split-brain. This walkthrough only demonstrates removing one machine pair; the same method can remove a second master/replica pair, arriving at a three-master, three-replica architecture.
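The odd-number recommendation is plain majority arithmetic: failure detection in Redis Cluster needs agreement from a majority of masters, and an even count raises the quorum without raising the number of tolerable failures. A quick illustration:

```python
def failure_tolerance(masters: int) -> int:
    """How many masters can fail while a majority (quorum) still remains."""
    quorum = masters // 2 + 1
    return masters - quorum

for n in (3, 4, 5, 6, 7):
    print(f"{n} masters: quorum={n // 2 + 1}, tolerates {failure_tolerance(n)} failure(s)")
# 4 masters tolerate no more failures than 3, and 6 no more than 5 --
# which is why the 5 -> 4 shrink here is only a stepping stone toward 3.
```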


Cluster maintenance: the dynamic scale-down procedure:

First, migrate the slots held by the Redis node to be removed (IP58) to the other Redis nodes in the cluster.
Then delete the pair (both master and replica). If a node's slots have not been fully migrated away, attempting to delete it fails with a message that the node still holds data.

1. Migrate the slots to other nodes

Note: always plan carefully and back up your data before any migration — a golden rule every SRE engineer must remember! The source master being migrated should, as far as possible, carry no live traffic; otherwise the migration is error-prone and may be forcibly interrupted and fail.
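Part of that planning is deciding how the departing master's slots will be divided among the remaining masters. The source here holds 3274 slots, and the three reshard runs below move 1091, 1092, and 1091 of them; that even split can be sketched as follows (a hypothetical helper, not a redis-cli feature):

```python
def split_slots(total: int, receivers: int) -> list[int]:
    """Divide `total` slots as evenly as possible across `receivers` masters."""
    base, extra = divmod(total, receivers)
    # The first `extra` receivers take one slot more than the rest.
    return [base + 1 if i < extra else base for i in range(receivers)]

print(split_slots(3274, 3))  # -> [1092, 1091, 1091], matching the three reshard runs
```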

#### Check all nodes and their replication relationships, and identify the nodes to be removed
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster check 172.16.0.18:6379
172.16.0.18:6379 (d5462f69...) -> 0 keys | 3278 slots | 1 slaves.
172.16.0.28:6379 (5163e9ab...) -> 0 keys | 3278 slots | 1 slaves.
172.16.0.58:6379 (c4af78bc...) -> 0 keys | 3274 slots | 1 slaves.
172.16.0.48:6379 (aaa956c2...) -> 1 keys | 3275 slots | 1 slaves.
172.16.0.38:6379 (4c429a48...) -> 0 keys | 3279 slots | 1 slaves.
[OK] 1 keys in 5 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[2184-5461] (3278 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[7645-10922] (3278 slots) master
   1 additional replica(s)
M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots:[1093-2183],[6553-7644],[12014-13104] (3274 slots) master
   1 additional replica(s)
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[13105-16383] (3279 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@CentOS84-IP172-18 ]#

#### Remove the following master/replica pair (IP58) from the cluster to demonstrate scale-down
M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots:[1093-2183],[6553-7644],[12014-13104] (3274 slots) master
   1 additional replica(s)
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates c4af78bc4a26490d51edc78b6c547a7abaf4e1aa

#### Because going from five pairs to four raises the split-brain concern, the pair IP48 below could be removed next in the same way; this walkthrough only removes IP58.
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122

###################################################################################
## Step 1: migrate slots [1093-2183] on 172.16.0.58 to 172.16.0.18
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster reshard 172.16.0.18:6379

>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[2184-5461] (3278 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[7645-10922] (3278 slots) master
   1 additional replica(s)
M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots:[1093-2183],[6553-7644],[12014-13104] (3274 slots) master
   1 additional replica(s)
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[13105-16383] (3279 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1091
What is the receiving node ID? d5462f6961c0f45ecbdf12d6606e6993c33e3e29
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
Source node #2: done      

Ready to move 1091 slots.
  Source nodes:
    M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
       slots:[1093-2183],[6553-7644],[12014-13104] (3274 slots) master
       1 additional replica(s)
  Destination node:
    M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
       slots:[2184-5461] (3278 slots) master
       1 additional replica(s)
  Resharding plan:
    Moving slot 1093 from c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
    .....................
    Moving slot 2183 from c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 1093 from 172.16.0.58:6379 to 172.16.0.18:6379: 
.........................

###################################################################################
## Step 2: migrate slots [6553-7644] on 172.16.0.58 to 172.16.0.28
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster check 172.16.0.18:6379  
172.16.0.18:6379 (d5462f69...) -> 0 keys | 4369 slots | 1 slaves.
172.16.0.28:6379 (5163e9ab...) -> 0 keys | 3278 slots | 1 slaves.
172.16.0.58:6379 (c4af78bc...) -> 0 keys | 2183 slots | 1 slaves.
172.16.0.48:6379 (aaa956c2...) -> 1 keys | 3275 slots | 1 slaves.
172.16.0.38:6379 (4c429a48...) -> 0 keys | 3279 slots | 1 slaves.
[OK] 1 keys in 5 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[1093-5461] (4369 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[7645-10922] (3278 slots) master
   1 additional replica(s)
M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots:[6553-7644],[12014-13104] (2183 slots) master
   1 additional replica(s)
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[13105-16383] (3279 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@CentOS84-IP172-18 ]#


[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster reshard 172.16.0.18:6379
>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[1093-5461] (4369 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[7645-10922] (3278 slots) master
   1 additional replica(s)
M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots:[6553-7644],[12014-13104] (2183 slots) master
   1 additional replica(s)
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[13105-16383] (3279 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1092
What is the receiving node ID? 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
Source node #2: done

......................
Do you want to proceed with the proposed reshard plan (yes/no)? yes
......................

###################################################################################
## Step 3: migrate slots [12014-13104] on 172.16.0.58 to 172.16.0.38

[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster check 172.16.0.18:6379  
172.16.0.18:6379 (d5462f69...) -> 0 keys | 4369 slots | 1 slaves.
172.16.0.28:6379 (5163e9ab...) -> 0 keys | 4370 slots | 1 slaves.
172.16.0.58:6379 (c4af78bc...) -> 0 keys | 1091 slots | 1 slaves.
172.16.0.48:6379 (aaa956c2...) -> 1 keys | 3275 slots | 1 slaves.
172.16.0.38:6379 (4c429a48...) -> 0 keys | 3279 slots | 1 slaves.
[OK] 1 keys in 5 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[1093-5461] (4369 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[6553-10922] (4370 slots) master
   1 additional replica(s)
M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots:[12014-13104] (1091 slots) master
   1 additional replica(s)
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[13105-16383] (3279 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster reshard 172.16.0.18:6379
>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[1093-5461] (4369 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[6553-10922] (4370 slots) master
   1 additional replica(s)
M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots:[12014-13104] (1091 slots) master
   1 additional replica(s)
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[13105-16383] (3279 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1091
What is the receiving node ID? 4c429a48054a771cbc154319182a3d16cf4ce7a1
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
Source node #2: done

Ready to move 1091 slots.
  Source nodes:
    M: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
       slots:[12014-13104] (1091 slots) master
       1 additional replica(s)
  Destination node:
    M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
       slots:[13105-16383] (3279 slots) master
       1 additional replica(s)
  Resharding plan:
    Moving slot 12014 from c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
    ..........................
    Moving slot 13104 from c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
Do you want to proceed with the proposed reshard plan (yes/no)? yes
..............................
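A quick sanity check on the three reshards, using the counts reported by `--cluster check` above: the moved batches add back up to the 3274 slots the departing master held, and the final per-master counts still cover all 16384 slots:

```python
moves = [1091, 1092, 1091]   # step 1 -> .18, step 2 -> .28, step 3 -> .38
print(sum(moves))            # -> 3274, everything 172.16.0.58 originally held

final_slots = {"172.16.0.18": 4369, "172.16.0.28": 4370,
               "172.16.0.48": 3275, "172.16.0.38": 4370}
print(sum(final_slots.values()))  # -> 16384, full slot coverage
```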

###################################################################################
# Confirm all slots have been moved off 172.16.0.58; its replica has automatically been reassigned and now replicates another master (IP38)
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster check 172.16.0.18:6379    
172.16.0.18:6379 (d5462f69...) -> 0 keys | 4369 slots | 1 slaves.
172.16.0.28:6379 (5163e9ab...) -> 0 keys | 4370 slots | 1 slaves.
172.16.0.48:6379 (aaa956c2...) -> 1 keys | 3275 slots | 1 slaves.
172.16.0.38:6379 (4c429a48...) -> 0 keys | 4370 slots | 3 slaves.   # picked up the extra replicas automatically
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[1093-5461] (4369 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
S: aeee686e355fa7784b383fec3543232126dcfbad 172.16.0.158:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[6553-10922] (4370 slots) master
   1 additional replica(s)
S: c4af78bc4a26490d51edc78b6c547a7abaf4e1aa 172.16.0.58:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[12014-16383] (4370 slots) master
   3 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@CentOS84-IP172-18 ]#

###################################################################################
## On IP58 we can see it has automatically become a replica of another master; IP158 has likewise become a replica of another master
[root@CentOS84-IP172-58 ]#redis-cli -a 123456 --no-auth-warning info replication
# Replication
role:slave
master_host:172.16.0.38
master_port:6379
master_link_status:up
master_last_io_seconds_ago:9
master_sync_in_progress:0
slave_read_repl_offset:222208
slave_repl_offset:222208
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:348918a6f1e9fc1b9eab3289584760767ed883b4
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:222208
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:221957
repl_backlog_histlen:252
[root@CentOS84-IP172-58 ]#


[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:10  # 10 nodes still in the cluster
cluster_size:4          # but only 4 master/replica pairs
cluster_current_epoch:14
cluster_my_epoch:12
cluster_stats_messages_ping_sent:160157
cluster_stats_messages_pong_sent:171005
cluster_stats_messages_sent:331162
cluster_stats_messages_ping_received:161182
cluster_stats_messages_pong_received:168889
cluster_stats_messages_meet_received:4
cluster_stats_messages_update_received:2
cluster_stats_messages_received:330077
total_cluster_links_buffer_limit_exceeded:0
[root@CentOS84-IP172-18 ]#

#### All slots on IP58 have now been migrated away; the hosts can be removed from the cluster

2. Remove the servers from the cluster
Although the slot migration is complete, the servers' node information is still registered in the cluster, so their entries (and local state) must be removed as well.

Note: before deleting a server, its slots must have been cleared; otherwise the deletion fails.

Deletion command format since Redis 5:

[root@CentOS84-IP172-58 ]#redis-cli -a <cluster password> --no-auth-warning --cluster del-node <any cluster node>:6379 <node ID to delete>
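Before issuing `del-node`, it can be worth checking programmatically that the target no longer owns any slots. A sketch that parses the summary lines of `redis-cli --cluster check` output (format as shown earlier in this article; a node absent from the master summary, as happens after its demotion to replica, also counts as holding none):

```python
import re

def slot_counts(check_output: str) -> dict[str, int]:
    """Map 'ip:port' -> slot count from the summary lines of --cluster check."""
    pattern = re.compile(r"^(\S+:\d+) \(\w+\.\.\.\) -> \d+ keys \| (\d+) slots")
    counts = {}
    for line in check_output.splitlines():
        m = pattern.match(line.strip())
        if m:
            counts[m.group(1)] = int(m.group(2))
    return counts

summary = """\
172.16.0.18:6379 (d5462f69...) -> 0 keys | 4369 slots | 1 slaves.
172.16.0.28:6379 (5163e9ab...) -> 0 keys | 4370 slots | 1 slaves.
"""
# 172.16.0.58 no longer appears as a master, so it owns 0 slots: safe to delete.
print(slot_counts(summary).get("172.16.0.58:6379", 0) == 0)  # -> True
```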

# After deletion, the redis process on the removed node shuts down automatically
# Delete the node's local state file
[root@redis-node1 ~]#rm -rf /??/???/redis/nodes-6379.conf
# Delete node IP58; c4af78bc4a26490d51edc78b6c547a7abaf4e1aa is IP58's node ID
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster del-node 172.16.0.18:6379 c4af78bc4a26490d51edc78b6c547a7abaf4e1aa
>>> Removing node c4af78bc4a26490d51edc78b6c547a7abaf4e1aa from cluster 172.16.0.18:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@CentOS84-IP172-18 ]#

# Delete node IP158; aeee686e355fa7784b383fec3543232126dcfbad is IP158's node ID
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster del-node 172.16.0.18:6379 aeee686e355fa7784b383fec3543232126dcfbad
>>> Removing node aeee686e355fa7784b383fec3543232126dcfbad from cluster 172.16.0.18:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@CentOS84-IP172-18 ]#

###################################################################################
# Log in to the removed nodes and delete their local node state files
[root@CentOS84-IP172-58 ]#find / -name nodes-6379.conf
/apps/redis/data/nodes-6379.conf
[root@CentOS84-IP172-58 ]#rm -rf /apps/redis/data/nodes-6379.conf

[root@CentOS84-IP172-158 ]#find / -name nodes-6379.conf 
/apps/redis/data/nodes-6379.conf
[root@CentOS84-IP172-158 ]#rm -rf /apps/redis/data/nodes-6379.conf
[root@CentOS84-IP172-158 ]#find / -name nodes-6379.conf           
[root@CentOS84-IP172-158 ]#
3. Verify the result
#### Check the nodes after the removal
###################################################################################
## Check the cluster information again
[root@CentOS84-IP172-18 ]#redis-cli -a 123456 --no-auth-warning --cluster check 172.16.0.18:6379
172.16.0.18:6379 (d5462f69...) -> 0 keys | 4369 slots | 1 slaves.
172.16.0.28:6379 (5163e9ab...) -> 0 keys | 4370 slots | 1 slaves.
172.16.0.48:6379 (aaa956c2...) -> 1 keys | 3275 slots | 1 slaves.
172.16.0.38:6379 (4c429a48...) -> 0 keys | 4370 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.0.18:6379)
M: d5462f6961c0f45ecbdf12d6606e6993c33e3e29 172.16.0.18:6379
   slots:[1093-5461] (4369 slots) master
   1 additional replica(s)
S: 93a4ba65d181a756c08bd3b6c2a7a4d24cf5855d 172.16.0.148:6379
   slots: (0 slots) slave
   replicates aaa956c280b5d9fe18fca48a910c1085b5f22122
M: 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e 172.16.0.28:6379
   slots:[6553-10922] (4370 slots) master
   1 additional replica(s)
S: a7583f69703921c6b3d14a97f54b3015966155ab 172.16.0.118:6379
   slots: (0 slots) slave
   replicates d5462f6961c0f45ecbdf12d6606e6993c33e3e29
S: 3d69cddc61df9443ff7de9850c220fc9e9187c03 172.16.0.138:6379
   slots: (0 slots) slave
   replicates 4c429a48054a771cbc154319182a3d16cf4ce7a1
M: aaa956c280b5d9fe18fca48a910c1085b5f22122 172.16.0.48:6379
   slots:[0-1092],[5462-6552],[10923-12013] (3275 slots) master
   1 additional replica(s)
S: 3bbdbc3ab34b67161655974fed9de5667def8ed0 172.16.0.128:6379
   slots: (0 slots) slave
   replicates 5163e9abbf42bd3540d9c04f6fb384ea23a1f58e
M: 4c429a48054a771cbc154319182a3d16cf4ce7a1 172.16.0.38:6379
   slots:[12014-16383] (4370 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@CentOS84-IP172-18 ]#redis-cli -h 172.16.0.18 -a 123456 --no-auth-warning cluster info

cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:8
cluster_size:4
cluster_current_epoch:14
cluster_my_epoch:12
cluster_stats_messages_ping_sent:161555
cluster_stats_messages_pong_sent:172378
cluster_stats_messages_sent:333933
cluster_stats_messages_ping_received:162555
cluster_stats_messages_pong_received:170287
cluster_stats_messages_meet_received:4
cluster_stats_messages_update_received:2
cluster_stats_messages_received:332848
total_cluster_links_buffer_limit_exceeded:0
[root@CentOS84-IP172-18 ]#

## After this procedure the cluster contains 4 master/replica pairs, i.e. 8 host nodes


###################################################################################
#### On the removed nodes, check the service and replication status
[root@CentOS84-IP172-158 ]#ss -tlnp |grep redis
LISTEN 0      511          0.0.0.0:16379      0.0.0.0:*    users:(("redis-server",pid=65931,fd=9))                  
LISTEN 0      511          0.0.0.0:6379       0.0.0.0:*    users:(("redis-server",pid=65931,fd=6))                  
LISTEN 0      511            [::1]:16379         [::]:*    users:(("redis-server",pid=65931,fd=10))                 
LISTEN 0      511            [::1]:6379          [::]:*    users:(("redis-server",pid=65931,fd=7))                  
[root@CentOS84-IP172-158 ]#redis-cli -h 172.16.0.58 -a 123456 --no-auth-warning info replication
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:4d67815feedd81d4c669d39a78f6838417304a22
master_replid2:348918a6f1e9fc1b9eab3289584760767ed883b4
master_repl_offset:223552
second_repl_offset:223553
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:221957
repl_backlog_histlen:1596
[root@CentOS84-IP172-158 ]#

###################################################################################
[root@CentOS84-IP172-18 ]#ss -tn
State        Recv-Q        Send-Q               Local Address:Port                Peer Address:Port        Process        
ESTAB        0             0                      172.16.0.18:47751               172.16.0.118:16379                      
ESTAB        0             0                      172.16.0.18:6379                172.16.0.118:38629                      
ESTAB        0             0                      172.16.0.18:16379                172.16.0.28:36447                      
ESTAB        0             0                      172.16.0.18:23045                172.16.0.48:16379                      
ESTAB        0             0                      172.16.0.18:31225               172.16.0.148:16379                      
ESTAB        0             96                     172.16.0.18:22                  172.16.0.254:1132                       
ESTAB        0             0                      172.16.0.18:16379                172.16.0.38:45723                      
ESTAB        0             0                      172.16.0.18:27207               172.16.0.138:16379                      
ESTAB        0             0                      172.16.0.18:16379               172.16.0.148:17267                      
ESTAB        0             0                      172.16.0.18:16379               172.16.0.128:25009                      
ESTAB        0             0                      172.16.0.18:16379               172.16.0.118:41581                      
ESTAB        0             0                      172.16.0.18:16379               172.16.0.138:52927                      
ESTAB        0             0                      172.16.0.18:16379                172.16.0.48:53471                      
ESTAB        0             0                      172.16.0.18:51605               172.16.0.128:16379                      
ESTAB        0             0                      172.16.0.18:51135                172.16.0.38:16379                      
ESTAB        0             0                      172.16.0.18:63577                172.16.0.28:16379                      
[root@CentOS84-IP172-18 ]#

#### The pair IP58 / IP158 has now been removed from the cluster. If split-brain is a concern, IP48 / IP148 can be removed in the same way.