Redis Cluster (Part 2)

-Configuring centos2

[root@centos2 ~]# ln -s /usr/local/redis/* /usr/local/bin/
[root@centos2 ~]# mkdir -p /data/redis/data/{6379,6380}
[root@centos2 ~]# mkdir /var/log/redis
[root@centos2 ~]# vim /etc/redis/cluster/6379/redis.conf 
......
  68 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  69 bind 192.168.100.203  # the config files were copied over from centos1, so only the bind IP needs to change
  70 
......
# save and exit
[root@centos2 ~]# vim /etc/redis/cluster/6380/redis.conf # same as above: only the IP address needs to change
......
  68 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  69 bind 192.168.100.203
  70 
......
# save and exit
[root@centos2 ~]# redis-server /etc/redis/cluster/6379/redis.conf # start both instances
[root@centos2 ~]# redis-server /etc/redis/cluster/6380/redis.conf
[root@centos2 ~]# netstat -anpt | grep redis
tcp        0      0 192.168.100.203:6379    0.0.0.0:*               LISTEN      19028/redis-server  
tcp        0      0 192.168.100.203:6380    0.0.0.0:*               LISTEN      19033/redis-server  
tcp        0      0 192.168.100.203:16379   0.0.0.0:*               LISTEN      19028/redis-server  
tcp        0      0 192.168.100.203:16380   0.0.0.0:*               LISTEN      19033/redis-server  
[root@centos2 ~]# echo "redis-server /etc/redis/cluster/6379/redis.conf" >> /etc/rc.local 
[root@centos2 ~]# echo "redis-server /etc/redis/cluster/6380/redis.conf" >> /etc/rc.local 
[root@centos2 ~]# ps -ef | grep redis
root      19028      1  0 02:15 ?        00:00:00 redis-server 192.168.100.203:6379 [cluster]
root      19033      1  0 02:15 ?        00:00:00 redis-server 192.168.100.203:6380 [cluster]
root      19047   1074  0 02:19 pts/0    00:00:00 grep --color=auto redis
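One caveat worth noting (an assumption about the environment, not shown in the transcript above): on CentOS 7, /etc/rc.d/rc.local is not executable by default, so the two autostart entries only take effect after the file is made executable, for example:

[root@centos2 ~]# chmod +x /etc/rc.d/rc.local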

-Configuring centos3

[root@centos3 ~]# ln -s /usr/local/redis/* /usr/local/bin/
[root@centos3 ~]# mkdir -p /data/redis/data/{6379,6380}
[root@centos3 ~]# mkdir /var/log/redis
[root@centos3 ~]# sed -i '69s/192.168.100.202/192.168.100.204/g' /etc/redis/cluster/6379/redis.conf 
[root@centos3 ~]# sed -i '69s/192.168.100.202/192.168.100.204/g' /etc/redis/cluster/6380/redis.conf 
[root@centos3 ~]# redis-server /etc/redis/cluster/6379/redis.conf
[root@centos3 ~]# redis-server /etc/redis/cluster/6380/redis.conf
[root@centos3 ~]# netstat -anpt | grep redis
tcp        0      0 192.168.100.204:6379    0.0.0.0:*               LISTEN      19002/redis-server  
tcp        0      0 192.168.100.204:6380    0.0.0.0:*               LISTEN      19007/redis-server  
tcp        0      0 192.168.100.204:16379   0.0.0.0:*               LISTEN      19002/redis-server  
tcp        0      0 192.168.100.204:16380   0.0.0.0:*               LISTEN      19007/redis-server  
[root@centos3 ~]# echo "redis-server /etc/redis/cluster/6379/redis.conf" >> /etc/rc.local 
[root@centos3 ~]# echo "redis-server /etc/redis/cluster/6380/redis.conf" >> /etc/rc.local 
[root@centos3 ~]# ps -ef | grep redis
root      19002      1  0 02:17 ?        00:00:00 redis-server 192.168.100.204:6379 [cluster]
root      19007      1  0 02:17 ?        00:00:00 redis-server 192.168.100.204:6380 [cluster]
root      19021   1073  0 02:20 pts/0    00:00:00 grep --color=auto redis

-Creating the cluster


Note: since Redis 5.0, the cluster is no longer built with the Ruby tool (redis-trib.rb); it is created directly with redis-cli.
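For comparison only (a sketch of the legacy workflow, not executed in this article), the pre-5.0 Ruby tool created the same cluster with:

./redis-trib.rb create --replicas 1 192.168.100.202:6379 192.168.100.203:6379 192.168.100.204:6379 192.168.100.202:6380 192.168.100.203:6380 192.168.100.204:6380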

# the following only needs to be run on one of the nodes
[root@centos1 ~]# redis-cli --cluster create --cluster-replicas 1  192.168.100.202:6379 192.168.100.203:6379 192.168.100.204:6379 192.168.100.202:6380 192.168.100.203:6380 192.168.100.204:6380
# --cluster-replicas 1 means one replica per master (not a master/slave ratio). With six nodes this yields three masters and three replicas; redis-cli picks the masters first (here the three 6379 instances, listed first) and assigns the remaining nodes as their replicas.
......
# type yes at the prompt
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.  # seeing OK means the cluster was created successfully
[root@centos1 ~]# echo $?  # confirm the exit status is 0
0
[root@centos1 ~]# redis-cli -h 192.168.100.202 -p 6379 -c # connect to the cluster; -c enables cluster mode so redirections are followed
192.168.100.202:6379> exit
[root@centos1 ~]# redis-cli --cluster check 192.168.100.202:6379  # check the cluster status
192.168.100.202:6379 (ba6d20ea...) -> 0 keys | 5461 slots | 1 slaves.  # each master has exactly one replica
192.168.100.203:6379 (8ae68dec...) -> 0 keys | 5462 slots | 1 slaves.
192.168.100.204:6379 (264cc569...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.100.202:6379)
M: ba6d20ea830f40a904b5e2a0f434fb88a73a5bfa 192.168.100.202:6379   # M = master, S = slave
   slots:[0-5460] (5461 slots) master  # the hash-slot range owned by this master; only masters own slots
   1 additional replica(s)
S: 20cc55347dd1d59073d7fbf52ec568b04abad945 192.168.100.202:6380
   slots: (0 slots) slave
   replicates 264cc569d6ddfbf70cc1678b99f87e83ac6fe29b  # on a slave, this field is the node ID of the master it replicates
S: 2f5140d502a58183ac3e297028c36dc27940e48f 192.168.100.204:6380
   slots: (0 slots) slave
   replicates 8ae68dec60391f0cd5621f9d6992f2ae02dcd0dc
M: 8ae68dec60391f0cd5621f9d6992f2ae02dcd0dc 192.168.100.203:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 264cc569d6ddfbf70cc1678b99f87e83ac6fe29b 192.168.100.204:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: b4a21e9c4c64c2010da64bdec76d5e51ea8e7155 192.168.100.203:6380
   slots: (0 slots) slave
   replicates ba6d20ea830f40a904b5e2a0f434fb88a73a5bfa
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
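As an additional sanity check (a minimal sketch; the values shown assume the six-node layout above, output abbreviated), CLUSTER INFO reports the overall cluster state from any node:

[root@centos1 ~]# redis-cli -h 192.168.100.202 -p 6379 cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3
......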

-Testing the cluster

******(1) Create a key and observe the redirection
[root@centos1 ~]# redis-cli -h 192.168.100.202 -p 6379 -c
192.168.100.202:6379> set aaa bbb
-> Redirected to slot [10439] located at 192.168.100.203:6379  # the key was redirected and stored on the 203 node
OK
192.168.100.203:6379> keys *  # the client has automatically switched over to 203
1) "aaa"
192.168.100.203:6379> set bbb aaa  # create another key
-> Redirected to slot [5287] located at 192.168.100.202:6379  # this one lands on 202
OK
192.168.100.202:6379> exit  # the client switched back to 202
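To see why a key lands where it does (a minimal illustration; the slot number matches the redirection shown above), CLUSTER KEYSLOT returns the hash slot a key maps to, i.e. CRC16(key) mod 16384:

[root@centos1 ~]# redis-cli -h 192.168.100.202 -p 6379 cluster keyslot aaa
(integer) 10439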

-Scaling out the Redis Cluster


******(1) Create two new instances on centos1
[root@centos1 ~]# cd /etc/redis/cluster/
[root@centos1 cluster]# ll
total 0
drwxr-xr-x 2 root root 24 Jun 10 00:02 6379
drwxr-xr-x 2 root root 24 Jun 10 00:12 6380
[root@centos1 cluster]# mkdir 6381
[root@centos1 cluster]# mkdir 6382
[root@centos1 cluster]# cp 6379/redis.conf 6381/redis.conf
[root@centos1 cluster]# cp 6379/redis.conf 6382/redis.conf
[root@centos1 cluster]# mkdir /data/redis/data/6381
[root@centos1 cluster]# mkdir /data/redis/data/6382
[root@centos1 cluster]# sed -i 's/6379/6381/g' 6381/redis.conf 
[root@centos1 cluster]# sed -i 's/6379/6382/g' 6382/redis.conf 
[root@centos1 cluster]# redis-server 6381/redis.conf 
[root@centos1 cluster]# redis-server 6382/redis.conf 
[root@centos1 cluster]# netstat -napt | grep 6381
tcp        0      0 192.168.100.202:6381    0.0.0.0:*               LISTEN      5233/redis-server 1 
tcp        0      0 192.168.100.202:16381   0.0.0.0:*               LISTEN      5233/redis-server 1 
[root@centos1 cluster]# netstat -napt | grep 6382
tcp        0      0 192.168.100.202:6382    0.0.0.0:*               LISTEN      5238/redis-server 1 
tcp        0      0 192.168.100.202:16382   0.0.0.0:*               LISTEN      5238/redis-server 1 
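Optionally (mirroring the autostart step used for the original instances; this step is not in the original transcript), the two new instances can also be added to /etc/rc.local:

[root@centos1 cluster]# echo "redis-server /etc/redis/cluster/6381/redis.conf" >> /etc/rc.local
[root@centos1 cluster]# echo "redis-server /etc/redis/cluster/6382/redis.conf" >> /etc/rc.local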
******(2) Add the new instances to the cluster
[root@centos1 cluster]# redis-cli --cluster add-node 192.168.100.202:6381 192.168.100.202:6379
......
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.100.202:6381 to make it join the cluster.
[OK] New node added correctly.  # OK means the node joined successfully
[root@centos1 cluster]# redis-cli --cluster check 192.168.100.202:6379  # check the cluster
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 5461 slots | 1 slaves.
192.168.100.202:6381 (cafed3db...) -> 0 keys | 0 slots | 0 slaves.   # a new master has joined, but it owns no slots yet
192.168.100.204:6379 (db170b2a...) -> 0 keys | 5461 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 5462 slots | 1 slaves.
......
[root@centos1 cluster]# redis-cli --cluster reshard 192.168.100.202:6379 # reshard slots to the new node
......
M: cafed3dbf858b73e77ac9760b80c03095b57b24b 192.168.100.202:6381  # the string after M: is the node ID
......
How many slots do you want to move (from 1 to 16384)? 1000 # how many slots to move
What is the receiving node ID? cafed3dbf858b73e77ac9760b80c03095b57b24b  # the ID of the node that receives the slots
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all   # take slots from all existing masters
Do you want to proceed with the proposed reshard plan (yes/no)? yes    # confirm with yes
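The same reshard can also be driven non-interactively (a sketch using the flags that appear later in this article; the node ID is the 6381 master from above):

[root@centos1 cluster]# redis-cli --cluster reshard 192.168.100.202:6379 --cluster-to cafed3dbf858b73e77ac9760b80c03095b57b24b --cluster-from all --cluster-slots 1000 --cluster-yes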
******(3) Check the cluster status again
[root@centos1 cluster]# redis-cli --cluster check 192.168.100.202:6379  # check the cluster status
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.202:6381 (cafed3db...) -> 0 keys | 1000 slots | 0 slaves.  # the new master now owns 1000 slots
192.168.100.204:6379 (db170b2a...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 5128 slots | 1 slaves.
......
[root@centos1 cluster]# redis-cli --cluster add-node 192.168.100.202:6382 192.168.100.202:6379 # add another instance to the cluster; it is meant to become the replica of the node added above
# added successfully
[root@centos1 cluster]# redis-cli --cluster check 192.168.100.202:6379  # check again: the node joined the cluster, but not as a replica of the newly added master
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.202:6382 (c1b47734...) -> 0 keys | 0 slots | 0 slaves.   # it joined, but as a master
192.168.100.202:6381 (cafed3db...) -> 0 keys | 1000 slots | 0 slaves.
192.168.100.204:6379 (db170b2a...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 5128 slots | 1 slaves.
[root@centos1 cluster]# redis-cli -h 192.168.100.202 -p 6382 -c # connect to the 6382 instance to assign its master
192.168.100.202:6382> cluster replicate cafed3dbf858b73e77ac9760b80c03095b57b24b # this is the node ID of the 6381 instance
OK
192.168.100.202:6382> exit
[root@centos1 cluster]# redis-cli --cluster check 192.168.100.202:6379  # check the cluster again
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.202:6381 (cafed3db...) -> 0 keys | 1000 slots | 1 slaves.   # 6381 now has a replica
192.168.100.204:6379 (db170b2a...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 5128 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.100.202:6379)
M: d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e 192.168.100.202:6379
   slots:[333-5460] (5128 slots) master
   1 additional replica(s)
S: c1b47734001493c46b03b3d2e285672c2dd8e353 192.168.100.202:6382
   slots: (0 slots) slave
   replicates cafed3dbf858b73e77ac9760b80c03095b57b24b  # 6382 now replicates the node ID of the 6381 master
S: 0850b23f6e715e8b00d119e1eec4f6ef2215720b 192.168.100.203:6380
   slots: (0 slots) slave
   replicates d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e
M: cafed3dbf858b73e77ac9760b80c03095b57b24b 192.168.100.202:6381
   slots:[0-332],[5461-5794],[10923-11255] (1000 slots) master
   1 additional replica(s)
M: db170b2abba72e701c5da2da4f0756d0cd4307e7 192.168.100.204:6379
   slots:[11256-16383] (5128 slots) master
   1 additional replica(s)
S: 6ffada3a6399afabc778ed78883e18d336eb9409 192.168.100.202:6380
   slots: (0 slots) slave
   replicates db170b2abba72e701c5da2da4f0756d0cd4307e7
S: c0fe064961fe69f53967d76fbb2b1d8871b26159 192.168.100.204:6380
   slots: (0 slots) slave
   replicates 090bfee62b4e61d5042760826e5e23ede58c5f3a
M: 090bfee62b4e61d5042760826e5e23ede58c5f3a 192.168.100.203:6379
   slots:[5795-10922] (5128 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
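As an alternative to running cluster replicate afterwards (a hedged sketch, not the sequence used above), redis-cli can attach the replica at add time with --cluster-slave and --cluster-master-id:

[root@centos1 cluster]# redis-cli --cluster add-node 192.168.100.202:6382 192.168.100.202:6379 --cluster-slave --cluster-master-id cafed3dbf858b73e77ac9760b80c03095b57b24b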

-Removing nodes


Always remove the slave nodes first, and only then the master nodes!
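To find out which nodes are currently slaves and which node IDs to pass to del-node (a small lookup sketch against the same cluster as above), CLUSTER NODES lists every node with its ID, role and master:

[root@centos1 ~]# redis-cli -h 192.168.100.202 -p 6379 cluster nodes | grep slave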


# remove a specific slave node from the cluster
[root@centos1 ~]# redis-cli --cluster  del-node 192.168.100.202:6379 d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e   # del-node takes the address of any cluster node plus the ID of the node to remove
>>> Removing node ba6d20ea830f40a904b5e2a0f434fb88a73a5bfa from cluster 192.168.100.202:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@centos1 cluster]# redis-cli --cluster  del-node 192.168.100.202:6382 c1b47734001493c46b03b3d2e285672c2dd8e353  # remove the 6382 slave
>>> Removing node c1b47734001493c46b03b3d2e285672c2dd8e353 from cluster 192.168.100.202:6382
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@centos1 cluster]# redis-cli --cluster check 192.168.100.202:6379   # check the cluster status
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 4096 slots | 1 slaves.
192.168.100.202:6381 (cafed3db...) -> 0 keys | 4096 slots | 0 slaves.
192.168.100.204:6379 (db170b2a...) -> 0 keys | 4096 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.100.202:6379)
M: d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e 192.168.100.202:6379
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
S: 0850b23f6e715e8b00d119e1eec4f6ef2215720b 192.168.100.203:6380  # the 6382 slave is no longer listed
   slots: (0 slots) slave
   replicates d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e
M: cafed3dbf858b73e77ac9760b80c03095b57b24b 192.168.100.202:6381
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: db170b2abba72e701c5da2da4f0756d0cd4307e7 192.168.100.204:6379
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 6ffada3a6399afabc778ed78883e18d336eb9409 192.168.100.202:6380
   slots: (0 slots) slave
   replicates db170b2abba72e701c5da2da4f0756d0cd4307e7
S: c0fe064961fe69f53967d76fbb2b1d8871b26159 192.168.100.204:6380
   slots: (0 slots) slave
   replicates 090bfee62b4e61d5042760826e5e23ede58c5f3a
M: 090bfee62b4e61d5042760826e5e23ede58c5f3a 192.168.100.203:6379
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


Removing a master node is more involved: a master still owns hash slots, so to avoid losing data those slots must first be migrated to other masters, and only then can the master be removed.

[root@centos1 ~]# redis-cli --cluster reshard 192.168.100.202:6379 --cluster-from d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e --cluster-to 090bfee62b4e61d5042760826e5e23ede58c5f3a --cluster-slots 1000 --cluster-yes
# move 1000 slots from node d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e to node 090bfee62b4e61d5042760826e5e23ede58c5f3a
# --cluster-from: the node ID to take hash slots from
# --cluster-to: the node ID that receives the hash slots
# --cluster-slots: the number of hash slots to move (--cluster-yes skips the confirmation prompt)
[root@centos1 cluster]# redis-cli --cluster check 192.168.100.202:6379  # check the cluster status
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 4128 slots | 1 slaves.  # 1000 slots were moved successfully
192.168.100.202:6381 (cafed3db...) -> 0 keys | 1000 slots | 1 slaves.
192.168.100.204:6379 (db170b2a...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 6128 slots | 1 slaves.
[root@centos1 ~]# redis-cli --cluster  del-node 192.168.100.203:6379 090bfee62b4e61d5042760826e5e23ede58c5f3a  # try to delete the 6379 master on 203; it cannot be removed directly
>>> Removing node 090bfee62b4e61d5042760826e5e23ede58c5f3a from cluster 192.168.100.203:6379
[ERR] Node 192.168.100.203:6379 is not empty! Reshard data away and try again.
[root@centos1 ~]# redis-cli --cluster reshard 192.168.100.203:6379 --cluster-from 090bfee62b4e61d5042760826e5e23ede58c5f3a  --cluster-to d45b3ec069c6e2ab1e7ffbac72c77c1024ca871e  --cluster-slots 6128 --cluster-yes   # migrate all slots of the 6379 master on 203 to the 6379 master on 202
[root@centos1 ~]# redis-cli --cluster check 192.168.100.202:6379
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 10256 slots | 2 slaves.
192.168.100.202:6381 (cafed3db...) -> 0 keys | 1000 slots | 0 slaves.
192.168.100.204:6379 (db170b2a...) -> 0 keys | 5128 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 0 slots | 0 slaves.
[root@centos1 ~]# redis-cli --cluster  del-node 192.168.100.203:6379 090bfee62b4e61d5042760826e5e23ede58c5f3a  # with all slots migrated, the delete now succeeds
>>> Removing node 090bfee62b4e61d5042760826e5e23ede58c5f3a from cluster 192.168.100.203:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

If the slots end up unevenly distributed, the cluster can be rebalanced as follows:


[root@centos1 ~]# redis-cli --cluster rebalance --cluster-use-empty-masters 192.168.100.202:6379
[root@centos1 cluster]# redis-cli --cluster check 192.168.100.202:6379 # check the cluster status: the slots have been redistributed evenly
192.168.100.202:6379 (d45b3ec0...) -> 0 keys | 4096 slots | 1 slaves.
192.168.100.202:6381 (cafed3db...) -> 0 keys | 4096 slots | 1 slaves.
192.168.100.204:6379 (db170b2a...) -> 0 keys | 4096 slots | 1 slaves.
192.168.100.203:6379 (090bfee6...) -> 0 keys | 4096 slots | 1 slaves. 
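If an even split is not what you want, rebalance also accepts per-node weights keyed by node ID (an optional variant, not used in this article); for example, giving the 6381 master twice the default weight:

[root@centos1 ~]# redis-cli --cluster rebalance --cluster-weight cafed3dbf858b73e77ac9760b80c03095b57b24b=2 192.168.100.202:6379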

