Distributed Storage with GlusterFS
1. What is GlusterFS
GlusterFS is an open-source distributed file system with strong horizontal scaling: it can support several petabytes of storage and thousands of clients, aggregating storage servers over the network into a single parallel network file system. It is scalable, high-performance, and highly available.
Common objects:
pool — the storage resource pool
peer — a node in the pool
volume — a volume; it must be in the Started state before it can be used
brick — a storage unit (a disk or directory); bricks can be added and removed
gluster — the command-line management tool
When adding nodes, the local machine is already present by default as localhost; you only need to probe the other machines. Every node acts as a master (there is no single master).
GlusterFS brick processes listen on ports starting at 49152 by default (the glusterd management daemon itself listens on TCP 24007).
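Instead of disabling the firewall entirely (as done later in this guide), those ports can be opened explicitly. A minimal sketch that prints the firewalld commands to run as root on each node; the brick port range 49152-49251 is an assumption sized for up to 100 bricks per node, adjust it to your layout:

```shell
# 24007/tcp: glusterd management daemon; 49152+/tcp: one port per brick process.
# Print the firewalld commands rather than executing them (they require root).
for rule in 24007/tcp 49152-49251/tcp; do
  echo "firewall-cmd --permanent --add-port=$rule"
done
echo "firewall-cmd --reload"
```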
2. Installing GlusterFS
1) Install with yum
[root@192 ~]# yum install centos-release-gluster -y
[root@192 ~]# yum install glusterfs-server -y
2) Start glusterd (run on every node)
[root@192 ~]# systemctl start glusterd.service
[root@192 ~]# systemctl enable glusterd.service
[root@192 ~]# mkdir -p /gfs/test1
[root@192 ~]# mkdir -p /gfs/test2
3) Configure hosts resolution
[root@glusterfs01 ~]# cat /etc/hosts
192.168.81.240 glusterfs01
192.168.81.250 glusterfs02
192.168.81.136 glusterfs03
[root@glusterfs01 ~]# scp /etc/hosts root@192.168.81.250:/etc/
[root@glusterfs01 ~]# scp /etc/hosts root@192.168.81.136:/etc/
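A quick way to confirm the install succeeded on each node is to check the CLI and the daemon state. A sketch (guarded so it degrades gracefully on a machine without GlusterFS):

```shell
# Post-install health check: CLI present and glusterd running.
if command -v gluster >/dev/null 2>&1; then
  gluster --version | head -1     # confirm the CLI is installed
  systemctl is-active glusterd    # should print "active"
else
  echo "gluster CLI not found; run this on a GlusterFS node" >&2
fi
```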
3. Format the disks and mount them
Configure all nodes identically: format the disks and mount them.
1. Format the disks
[root@192 ~]# mkfs.xfs /dev/sdb
[root@192 ~]# mkfs.xfs /dev/sdc
2. Get the disk UUIDs
We write the UUIDs into fstab rather than the device names, so the mounts are not broken if the device letters change after a reboot.
[root@192 ~]# blkid /dev/sdb /dev/sdc
/dev/sdb: UUID="8835164f-78ab-4f6f-a156-9d3afd0132eb" TYPE="xfs"
/dev/sdc: UUID="6f86c2be-56cc-4e98-8add-63eb43852d65" TYPE="xfs"
3. Edit /etc/fstab
[root@192 ~]# vim /etc/fstab
UUID="8835164f-78ab-4f6f-a156-9d3afd0132eb" /gfs/test1 xfs defaults 0 0
UUID="6f86c2be-56cc-4e98-8add-63eb43852d65" /gfs/test2 xfs defaults 0 0
4. Mount
[root@192 ~]# mount -a
[root@192 ~]# df -hT | grep gfs
/dev/sdb xfs 10G 33M 10G 1% /gfs/test1
/dev/sdc xfs 10G 33M 10G 1% /gfs/test2
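The UUID-based fstab entries can be generated instead of typed by hand. A sketch, assuming `blkid -o value -s UUID` to print just the UUID string; `make_fstab_line` is a hypothetical helper name introduced here for illustration:

```shell
# Build an fstab line for a device, keyed on UUID rather than the device name.
make_fstab_line() {
  dev="$1"; mnt="$2"
  uuid=$(blkid -o value -s UUID "$dev")   # just the UUID, no key= prefix
  printf 'UUID="%s" %s xfs defaults 0 0\n' "$uuid" "$mnt"
}
# Usage on a node (as root, after mkfs.xfs):
#   make_fstab_line /dev/sdb /gfs/test1 >> /etc/fstab
#   make_fstab_line /dev/sdc /gfs/test2 >> /etc/fstab
#   mount -a
```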
4. Add nodes to the storage pool
The resource pool is effectively the cluster: you add nodes to it, and it contains localhost by default.
Run these on the master node.
View the current pool:
[root@glusterfs01 ~]# gluster pool list
UUID Hostname State
a2585b8c-7928-4480-9376-25c0d6e88cc0 localhost Connected
Add the glusterfs02 and glusterfs03 nodes:
[root@glusterfs01 ~]# gluster peer probe glusterfs02
peer probe: success.
[root@glusterfs01 ~]# gluster peer probe glusterfs03
peer probe: success.
Check again after adding them:
[root@glusterfs01 ~]# gluster pool list
UUID Hostname State
07502cd5-4c18-4bde-9bcf-7f29f2a68af7 glusterfs02 Connected
5c76e19c-6141-4e95-9446-b3a424cd5f6e glusterfs03 Connected
a2585b8c-7928-4480-9376-25c0d6e88cc0 localhost Connected
Troubleshooting
If peer probe fails here, it is usually because hosts resolution was not configured or the firewall was not stopped.
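A quick health check is to count the Connected entries in the `gluster pool list` output. A sketch; the parsing is demonstrated against a sample copied from the output above, and on a real node you would pipe `gluster pool list` into it instead:

```shell
# Count peers in the Connected state from `gluster pool list`-style output.
count_connected() {
  awk 'NR > 1 && $NF == "Connected" { n++ } END { print n + 0 }'
}
# Sample output copied from above, to demonstrate the parsing:
sample='UUID Hostname State
07502cd5-4c18-4bde-9bcf-7f29f2a68af7 glusterfs02 Connected
5c76e19c-6141-4e95-9446-b3a424cd5f6e glusterfs03 Connected
a2585b8c-7928-4480-9376-25c0d6e88cc0 localhost Connected'
echo "$sample" | count_connected   # prints 3
# On a node: gluster pool list | count_connected
```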
5. GlusterFS volume management
The volume type most widely used in production is the distributed replicated volume.
A distributed replicated volume lets you set the replica count. With replica set to 2, every uploaded file is stored twice: uploading 10 files actually stores 20 copies, which provides a degree of redundancy. The copies are spread across the replica pairs on the different nodes.
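The capacity arithmetic follows directly: usable capacity is total brick capacity divided by the replica count. A small sketch using the numbers from this guide (four 10G bricks, replica 2):

```shell
# Usable capacity of a distributed replicated volume:
#   (number of bricks x brick size) / replica count
bricks=4; brick_gb=10; replica=2
usable=$(( bricks * brick_gb / replica ))
echo "${usable}G usable"   # prints "20G usable"
```

This matches the 20G the client sees when mounting the four-brick volume later in this guide.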
5.1. Create a distributed replicated volume
Stop the firewall on all nodes before creating the volume!
[root@glusterfs01 ~]# gluster volume create web_volume01 replica 2 glusterfs01:/gfs/test1 glusterfs01:/gfs/test2 glusterfs02:/gfs/test1 glusterfs02:/gfs/test2 force
volume create: web_volume01: success: please start the volume to access data
[root@glusterfs01 ~]# gluster volume list
web_volume01
Breaking the command down:
gluster — the command keyword
volume — operate on volumes
create — create a volume
web_volume01 — the volume name
replica 2 — the replica count
glusterfs01:/gfs/test1 — add the /gfs/test1 directory on node glusterfs01 to the volume as a brick
force — force the creation
5.2. Delete a volume
Stop the volume first, then delete it.
[root@glusterfs01 ~]# gluster volume stop web_volume01
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: web_volume01: success
[root@glusterfs01 ~]# gluster volume delete web_volume01
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
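The two y/n prompts above can be suppressed for scripting with `--mode=script`. A sketch, assuming the volume name from this guide and guarded so it is harmless on a machine without GlusterFS:

```shell
# Stop and delete a volume non-interactively (run on a cluster node).
# --mode=script answers the confirmation prompts automatically.
VOL=web_volume01
if command -v gluster >/dev/null 2>&1; then
  gluster --mode=script volume stop "$VOL"
  gluster --mode=script volume delete "$VOL"
else
  echo "gluster CLI not found; run this on a GlusterFS node" >&2
fi
```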
5.3. Worked example: create a web_volume01 volume and make it usable from a client
1) Add the storage nodes
[root@glusterfs01 ~]# gluster peer probe glusterfs02
peer probe: success.
[root@glusterfs01 ~]# gluster peer probe glusterfs03
peer probe: success.
[root@glusterfs01 ~]# gluster pool list
UUID Hostname State
07502cd5-4c18-4bde-9bcf-7f29f2a68af7 glusterfs02 Connected
5c76e19c-6141-4e95-9446-b3a424cd5f6e glusterfs03 Connected
a2585b8c-7928-4480-9376-25c0d6e88cc0 localhost Connected
2) Create a distributed replicated volume
Create it:
[root@glusterfs01 ~]# gluster volume create web_volume01 replica 2 glusterfs01:/gfs/test1 glusterfs01:/gfs/test2 glusterfs02:/gfs/test1 glusterfs02:/gfs/test2 force
volume create: web_volume01: success: please start the volume to access data
List it:
[root@glusterfs01 ~]# gluster volume list
web_volume01
Start it:
[root@glusterfs01 ~]# gluster volume start web_volume01
volume start: web_volume01: success
3) Inspect the volume
[root@glusterfs01 ~]# gluster volume info web_volume01
Volume Name: web_volume01
Type: Distributed-Replicate
Volume ID: 4327e3a1-c48d-4442-9230-f0f53b04b35c
Status: Started    // Started means the volume is available
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gfs/test1
Brick2: glusterfs01:/gfs/test2
Brick3: glusterfs02:/gfs/test1
Brick4: glusterfs02:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
4) Mount from the client
[root@glusterfs03 ~]# mount -t glusterfs 192.168.81.240:/web_volume01 /data_gfs
[root@glusterfs03 ~]# df -hT | grep '/data_gfs'
192.168.81.240:/web_volume01 fuse.glusterfs 20G 270M 20G 2% /data_gfs
It shows 20G rather than 40G because this is a replicated volume: with replica 2, capacity is halved.
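To make the client mount survive reboots, and to avoid depending on a single server for fetching the volume file, an /etc/fstab entry can list backup volfile servers (`backup-volfile-servers` is a standard glusterfs FUSE mount option). A sketch using the hostnames from this guide; the entry itself is built as a string here and appended on the client as root:

```shell
# fstab entry for a persistent client mount. If glusterfs01 is down at mount
# time, the volume file is fetched from glusterfs02 or glusterfs03 instead.
FSTAB_LINE='glusterfs01:/web_volume01 /data_gfs glusterfs defaults,_netdev,backup-volfile-servers=glusterfs02:glusterfs03 0 0'
echo "$FSTAB_LINE"
# On the client (as root): echo "$FSTAB_LINE" >> /etc/fstab && mount -a
```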
5.4. Verify the distribution
Create a few files on the mount:
[root@glusterfs03 ~]# cp /etc/yum.repos.d/* /data_gfs/
On the master node, look at where the files landed; every file has a replica:
[root@glusterfs01 ~]# ls /gfs/*
/gfs/test1:
Centos-7.repo epel.repo
/gfs/test2:
Centos-7.repo epel.repo
[root@glusterfs01 ~]# ssh 192.168.81.250 "ls /gfs/*"
root@192.168.81.250's password:
/gfs/test1:
CentOS-Base.repo CentOS-Gluster-7.repo CentOS-Storage-common.repo
/gfs/test2:
CentOS-Base.repo CentOS-Gluster-7.repo CentOS-Storage-common.repo
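A small sanity check on the listings above: the two bricks on each node hold identical copies (a replica pair), while different nodes hold different files, so the distinct file count is the sum across nodes. A sketch using the listings above as sample data:

```shell
# Sample per-node listings copied from the output above.
node1_files="Centos-7.repo epel.repo"                                            # glusterfs01 bricks
node2_files="CentOS-Base.repo CentOS-Gluster-7.repo CentOS-Storage-common.repo"  # glusterfs02 bricks
set -- $node1_files $node2_files    # word-split into positional parameters
echo "$# distinct files across the volume"   # prints "5 distinct files across the volume"
```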
5.5. Expanding a volume
Syntax: gluster volume add-brick <volume> <node>:<brick-path> force
1) Add the bricks
[root@glusterfs01 ~]# gluster volume add-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 force
volume add-brick: success
2) Inspect the volume
[root@glusterfs01 ~]# gluster volume info web_volume01
Volume Name: web_volume01
Type: Distributed-Replicate
Volume ID: 4327e3a1-c48d-4442-9230-f0f53b04b35c
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6    # 3 replica sets of 2 bricks each: 3 nodes with 2 bricks per node, 6 bricks in total
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gfs/test1
Brick2: glusterfs01:/gfs/test2
Brick3: glusterfs02:/gfs/test1
Brick4: glusterfs02:/gfs/test2
Brick5: glusterfs03:/gfs/test1
Brick6: glusterfs03:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
3) Refresh on the client
Simply re-run df:
[root@glusterfs03 ~]# df -hT | grep '/data_gfs'
192.168.81.240:/web_volume01 fuse.glusterfs 30G 404M 30G 2% /data_gfs
5.6. Rebalance the layout after expanding
Syntax: gluster volume rebalance <volume> start
[root@glusterfs01 ~]# gluster volume rebalance web_volume01 start
volume rebalance: web_volume01: success: Rebalance on web_volume01 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: c8e0f0cf-e1d1-4da5-ae79-90ec6e9db72e
5.7. Shrinking a volume
Before the bricks are detached, all files on them are migrated to the remaining bricks.
Syntax: gluster volume remove-brick <volume> <node>:<brick-path> start
[root@glusterfs01 ~]# gluster volume remove-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 start
It is recommended that remove-brick be run with cluster.force-migration option disabled to prevent possible data corruption. Doing so will ensure that files that receive writes during migration will not be migrated and will need to be manually copied after the remove-brick commit operation. Please check the value of the option and update accordingly. Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: b7ba1075-3bf0-40b3-adaf-9496beee2afc
[root@glusterfs01 ~]# ssh 192.168.81.136 "ls /gfs/*"
root@192.168.81.136's password:
/gfs/test1:
/gfs/test2:
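Note that `remove-brick ... start` only begins the data migration. Once `status` reports it as completed, the shrink must be finalized with `commit`; only then are the bricks actually detached from the volume. A sketch of the follow-up commands, using the volume and bricks from above and guarded for machines without GlusterFS:

```shell
# After `remove-brick ... start`: watch the migration, then commit it.
VOL=web_volume01
BRICKS="glusterfs03:/gfs/test1 glusterfs03:/gfs/test2"
if command -v gluster >/dev/null 2>&1; then
  gluster volume remove-brick "$VOL" $BRICKS status   # wait for "completed"
  gluster --mode=script volume remove-brick "$VOL" $BRICKS commit
else
  echo "gluster CLI not found; run this on a GlusterFS node" >&2
fi
```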