RH236 Scaling Volumes: Shrinking a Volume

This section covers how to shrink GlusterFS volumes by removing bricks.

RHCA column address: https://blog.csdn.net/qq_41765918/category_11532281.html

Shrinking a Volume

Approach: remove bricks from the volume to shrink it.

Note:

When deciding how many bricks to remove, consider whether data will survive the removal. Removing bricks from a Replicate set does not lose data, because the remaining replicas still hold a full copy.


1) Removing a Replicate-type brick

You must specify the replica count that remains after removal, and run the command directly with force:

# gluster volume remove-brick vol1 replica 1 \ 
node1:/brick/brick1/brick \ 
force

A possible error:

volume remove-brick start: failed: Remove exactly 1 brick(s) from each subvolume.

This error typically occurs when removing bricks from a compound volume that contains Replicate sets: the bricks named in the command do not amount to exactly one brick per replica subvolume (for example, two of the named bricks belong to the same Replicate set).

Solution: bricks in the same Replicate set carry an identical trusted.glusterfs.dht extended attribute on the brick directory, so inspect trusted.glusterfs.dht to adjust which brick directories you remove.

# getfattr -d -m ".*" /brick/brick1/brick1 
getfattr: Removing leading '/' from absolute path names 
# file: brick/brick1/brick1 
security.selinux="system_u:object_r:unlabeled_t:s0" 
trusted.gfid=0sAAAAAAAAAAAAAAAAAAAAAQ== 
trusted.glusterfs.dht=0sAAAAAQAAAAB//w== 
trusted.glusterfs.dht.commithash="3634901324" 
trusted.glusterfs.volume-id=0sf1W8jgybTceZGk4fsq4eew==
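Building on the dump above, the layout value can be compared across bricks: replicas of the same subvolume carry an identical trusted.glusterfs.dht value, while distribute siblings hold different layout ranges. A minimal sketch, using hypothetical values of the form shown in the dump above:

```shell
#!/bin/sh
# Hypothetical sketch: decide whether two bricks belong to the same
# replica subvolume by comparing their trusted.glusterfs.dht values.
# On a live cluster each value would come from something like:
#   getfattr -n trusted.glusterfs.dht -e base64 --only-values /brick/brick1/brick

same_subvolume() {
    # Replica bricks share an identical DHT layout value.
    [ "$1" = "$2" ]
}

dht_node1="0sAAAAAQAAAAB//w=="   # hypothetical value read on node1
dht_node2="0sAAAAAQAAAAB//w=="   # hypothetical value read on node2

if same_subvolume "$dht_node1" "$dht_node2"; then
    echo "same replica subvolume - remove at most one of these bricks"
else
    echo "different subvolumes"
fi
```

This only automates the manual comparison described above; the attribute names are real, but the values and paths are placeholders for your own cluster.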

2) Removing bricks from a non-Replicate volume

# gluster volume remove-brick vol1 \ 
node1:/brick/brick1/brick \ 
start 
# start mode does not change the volume type, but committing before data migration completes can lose data

Check the removal status:

# gluster volume remove-brick vol1 node1:/brick/brick1/brick status


Once the removal completes, commit it:

# gluster volume remove-brick vol1 node1:/brick/brick1/brick commit
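The status/commit pair can be scripted: poll status until every node reports completed, then commit. A hedged sketch, assuming the usual `remove-brick status` table layout (each node row has 8 fields, with the status word in field 7 and a numeric run time in field 8):

```shell
#!/bin/sh
# Sketch: wait for remove-brick data migration to finish before committing.

all_completed() {
    # Reads `remove-brick status` output on stdin; succeeds only when at
    # least one node row was seen and every row's status is "completed".
    awk 'NF == 8 && $8 ~ /^[0-9.]+$/ {   # node rows only, skips header lines
             seen = 1
             if ($7 != "completed") bad = 1
         }
         END { exit (seen && !bad) ? 0 : 1 }'
}

# On a live cluster (volume/brick names are the examples from this section):
#   while ! gluster volume remove-brick vol1 node1:/brick/brick1/brick status \
#           | all_completed; do
#       sleep 5
#   done
#   gluster volume remove-brick vol1 node1:/brick/brick1/brick commit
```

The gluster commands themselves are kept in comments since they require a running cluster; only the output-parsing helper is directly runnable.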


When to remove bricks:

When a server goes down, remove that server's bricks so that the GlusterFS cluster keeps working normally.


Textbook Exercise

[root@workstation ~]# lab shrinkvolume setup 
Setting up servers for lab exercise work:

 • Testing if all hosts are reachable..........................  SUCCESS
 • Adding glusterfs to runtime firewall on servera.............  SUCCESS
 • Adding glusterfs to permanent firewall on servera...........  SUCCESS
 • Adding glusterfs to runtime firewall on serverb.............  SUCCESS
 • Adding glusterfs to permanent firewall on serverb...........  SUCCESS
 • Adding glusterfs to runtime firewall on serverc.............  SUCCESS
 • Adding glusterfs to permanent firewall on serverc...........  SUCCESS
 • Adding glusterfs to runtime firewall on serverd.............  SUCCESS
 • Adding glusterfs to permanent firewall on serverd...........  SUCCESS
 • Adding servera to trusted storage pool......................  SUCCESS
…………

1. On your servera system, examine the layout of the shrinkme volume and determine which two bricks can be removed.

[root@servera ~]# gluster volume info shrinkme 
Volume Name: shrinkme
Type: Distributed-Replicate
Volume ID: ea91e5b1-b57b-4d72-a5e9-2c8c3d94677d
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: servera:/bricks/brick-a2/brick
Brick2: serverb:/bricks/brick-b2/brick
Brick3: serverc:/bricks/brick-c2/brick
Brick4: serverd:/bricks/brick-d2/brick
Options Reconfigured:
performance.readdir-ahead: on
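In the 2 x 2 Distributed-Replicate layout above, consecutive bricks in the `info` output form one replica set (Brick1/Brick2, then Brick3/Brick4), and a shrink must remove a whole set at once. A small sketch that groups a brick list into replica sets, given the replica count:

```shell
#!/bin/sh
# Sketch: group a brick list into replica sets. In `gluster volume info`
# output, consecutive bricks form each replica subvolume, so grouping every
# N consecutive lines (N = replica count) yields the sets that must be
# removed together.
replica_sets() {
    awk -v n="$1" '{ printf "%s%s", $0, (NR % n == 0 ? "\n" : " ") }'
}

# Example with the bricks of the shrinkme volume (replica 2):
printf '%s\n' \
    "servera:/bricks/brick-a2/brick" \
    "serverb:/bricks/brick-b2/brick" \
    "serverc:/bricks/brick-c2/brick" \
    "serverd:/bricks/brick-d2/brick" | replica_sets 2
```

Each output line is one replica set; the second line (brick-c2 and brick-d2) is exactly the pair removed in the next step.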

2. Shrink the volume by removing brick-c2 and brick-d2.

[root@servera ~]# gluster volume remove-brick shrinkme serverc:/bricks/brick-c2/brick serverd:/bricks/brick-d2/brick start
volume remove-brick start: success
ID: 667f7585-a9c4-43e8-8f3b-0482bfb19d97

[root@servera ~]# gluster volume remove-brick shrinkme serverc:/bricks/brick-c2/brick serverd:/bricks/brick-d2/brick status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                 serverc.lab.example.com                7        0Bytes             7             0             0            completed               1.00
                 serverd.lab.example.com                0        0Bytes             0             0             0            completed               0.00

[root@servera ~]# gluster volume remove-brick shrinkme serverc:/bricks/brick-c2/brick serverd:/bricks/brick-d2/brick commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.

3. Grade the lab with the script.

[root@workstation ~]# lab shrinkvolume grade


Rebalance Summary

Note that after expanding GlusterFS, you must run the gluster rebalance command manually to trigger data rebalancing. Expansion leaves data unevenly distributed between old and new nodes; left alone, the old nodes can become overloaded, causing performance problems and eventually threatening data reliability and availability, so rebalancing after an expansion is essential.

After shrinking GlusterFS, by contrast, no manual command is needed: the shrink automatically triggers the rebalance process. If data were not rebalanced during a shrink, the data on the removed nodes or subvolumes would become unavailable and be lost, which is unacceptable to users. Rebalancing is therefore indispensable when shrinking, and it is only natural that the implementation triggers it automatically.


Chapter Lab

[root@workstation ~]# lab extendvolume-lab setup 
Setting up servers for lab exercise work:

 • Testing if all hosts are reachable..........................  SUCCESS
 • Adding glusterfs to runtime firewall on servera.............  SUCCESS
 • Adding glusterfs to permanent firewall on servera...........  SUCCESS
 • Adding glusterfs to runtime firewall on serverb.............  SUCCESS
 • Adding glusterfs to permanent firewall on serverb...........  SUCCESS
 • Adding glusterfs to runtime firewall on serverc.............  SUCCESS
 • Adding glusterfs to permanent firewall on serverc...........  SUCCESS
 • Adding glusterfs to runtime firewall on serverd.............  SUCCESS
 • Adding glusterfs to permanent firewall on serverd...........  SUCCESS
 • Adding servera to trusted storage pool......................  SUCCESS
…………

1. Expand the volume as required.

[root@servera ~]# gluster volume info important 
Volume Name: important
Type: Distribute
Volume ID: b4500b45-891c-41ea-b198-3b9467b9f197
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: servera:/bricks/brick-a3/brick
Brick2: serverb:/bricks/brick-b3/brick
Options Reconfigured:
performance.readdir-ahead: on
[root@servera ~]# gluster volume add-brick important replica 2 serverc:/bricks/brick-c3/brick serverd:/bricks/brick-d3/brick
volume add-brick: success

[root@servera ~]# gluster volume rebalance important start
volume rebalance: important: success: Rebalance on important has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: b1f99fb3-3155-430a-b30e-f6ed3e73d603

[root@servera ~]# gluster volume rebalance important status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes            49             0             0            completed               1.00
                 serverb.lab.example.com                0        0Bytes            51             0             0            completed               1.00
                 serverc.lab.example.com                0        0Bytes             0             0             0            completed               0.00
                 serverd.lab.example.com                0        0Bytes             0             0             0            completed               0.00
volume rebalance: important: success

2. Grade the lab with the script.

[root@workstation ~]# lab extendvolume-lab grade

3. Reset the environment.

reset workstation,servera,serverb,serverc,serverd


Summary

  • Described how to shrink a volume.

  • Explained the data rebalance process.

That is all from [金鱼哥]. I hope it helps everyone who reads this article.

If this article helped you, please give [金鱼哥] a like 👍. Writing is not easy, and compared with the official wording I prefer to explain each point in plain, accessible language. If you are interested in ops topics, you are welcome to follow [金鱼哥] for more rewarding content.
