shy丶gril 2016-05-25
OS: CentOS 6.4
Servers:
172.28.26.101
172.28.26.102
172.28.26.188
172.28.26.189
Client:
172.28.26.103
Install GlusterFS on each of the four servers and start the glusterd service:

yum -y install glusterfs glusterfs-server
chkconfig glusterd on
service glusterd start
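If iptables is running on CentOS 6, the Gluster ports also need to be opened on every server. The brick port range depends on the GlusterFS version (24009+ for 3.3 and earlier, 49152+ for 3.4 and later), so the rules below are only a rough sketch, not exact values for every install:

# glusterd management ports
iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
# brick ports, one per brick; adjust the range to your GlusterFS version
iptables -I INPUT -p tcp --dport 24009:24029 -j ACCEPT   # <= 3.3
iptables -I INPUT -p tcp --dport 49152:49172 -j ACCEPT   # >= 3.4
service iptables save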
From the first node (db1, 172.28.26.101), probe the other servers into the trusted pool:

[root@db1 ~]# gluster peer probe 172.28.26.102
Probe successful
[root@db1 ~]# gluster peer probe 172.28.26.188
Probe successful
[root@db1 ~]# gluster peer probe 172.28.26.189
Probe successful
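The probes above only register the other three nodes; the Gluster documentation suggests probing the first node back from any other member of the pool so that it too is recorded by its address rather than as localhost. A one-liner, assuming you are logged in to 172.28.26.102:

# run once on any node other than 172.28.26.101
gluster peer probe 172.28.26.101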
Check the peer status:

[root@db1 ~]# gluster peer status
Number of Peers: 3

Hostname: 172.28.26.102
Uuid: b9437089-b2a1-4848-af2a-395f702adce8
State: Peer in Cluster (Connected)

Hostname: 172.28.26.188
Uuid: ce51e66f-7509-4995-9531-4c1a7dbc2893
State: Peer in Cluster (Connected)

Hostname: 172.28.26.189
Uuid: 66d7fd67-e667-4f9b-a456-4f37bcecab29
State: Peer in Cluster (Connected)
Create the brick directory and a replica-2 volume named img across the four bricks:

mkdir /data/gluster
[root@db1 ~]# gluster volume create img replica 2 172.28.26.101:/data/gluster 172.28.26.102:/data/gluster 172.28.26.188:/data/gluster 172.28.26.189:/data/gluster
Creation of volume img has been successful. Please start the volume to access data.
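The mkdir above only creates the brick directory on db1; the same path must exist on all four servers before the volume is created. A small loop, assuming passwordless root SSH between the nodes (an assumption, not something set up in this article):

# create the brick directory on every server
for h in 172.28.26.101 172.28.26.102 172.28.26.188 172.28.26.189; do
    ssh root@$h 'mkdir -p /data/gluster'
done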
Start the volume:

[root@db1 ~]# gluster volume start img
Starting volume img has been successful
Check the volume information:

[root@db1 ~]# gluster volume info

Volume Name: img
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.28.26.101:/data/gluster
Brick2: 172.28.26.102:/data/gluster
Brick3: 172.28.26.188:/data/gluster
Brick4: 172.28.26.189:/data/gluster
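On GlusterFS 3.3 and later the brick processes can also be checked directly; a quick look, version permitting:

gluster volume status img   # shows the port, PID and online state of every brick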
On the client (172.28.26.103), install the GlusterFS FUSE client:

yum -y install glusterfs glusterfs-fuse
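The native mount used below relies on the fuse kernel module; on CentOS 6 it is normally loaded automatically, but it can be checked by hand:

modprobe fuse
lsmod | grep fuse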
Mount the volume on the client:

mount -t glusterfs 172.28.26.102:/img /mnt/                          # any one of the nodes can be used as the mount target
mount -t nfs -o mountproto=tcp,vers=3 172.28.26.102:/img /log/mnt/   # NFS mount; the remote rpcbind service must be running
echo "172.28.26.102:/img /mnt/ glusterfs defaults,_netdev 0 0" >> /etc/fstab   # mount automatically at boot
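To confirm the mount and exercise the new fstab entry without rebooting (plain Linux checks, nothing GlusterFS-specific):

mount | grep glusterfs    # the fuse mount should be listed
df -h /mnt                # and report the volume's capacity
umount /mnt && mount -a   # remount everything from /etc/fstab, including the new entry
df -h /mnt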
Test replication:

dd if=/dev/urandom of=/data/navy bs=1M count=100    # generate a 100 MB test file on the client
cp /data/navy /mnt/                                 # copy the file onto the storage
md5sum /data/navy /mnt/navy                         # compare the hashes on the client
md5sum /data/gluster/navy                           # two of the storage nodes hold this file; check its hash there
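The same check can be wrapped in a small script so the checksums are compared automatically; a sketch using the file name and size from the commands above:

#!/bin/bash
# write a random test file, copy it to the Gluster mount and compare checksums
set -e
dd if=/dev/urandom of=/data/navy bs=1M count=100
cp /data/navy /mnt/
local_sum=$(md5sum /data/navy | awk '{print $1}')
remote_sum=$(md5sum /mnt/navy | awk '{print $1}')
if [ "$local_sum" = "$remote_sum" ]; then
    echo "OK: checksums match ($local_sum)"
else
    echo "MISMATCH: $local_sum vs $remote_sum" >&2
    exit 1
fi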
Test self-healing after a node failure:

# stop the storage services on one of the nodes
service glusterd stop
service glusterfsd stop
# delete the test file from the client mount
rm -fv /mnt/navy
# on the stopped node, navy has not been deleted; start the service again:
service glusterd start
# a few seconds later navy is removed automatically; newly added files behave the same way
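On GlusterFS 3.3 and later the self-heal backlog can also be inspected directly instead of waiting and re-checking the bricks; a sketch, version permitting:

gluster volume heal img info   # lists entries still waiting to be healed, per brick
gluster volume heal img        # trigger healing of the pending entries
gluster volume heal img full   # or force a full sweep, as at the end of this article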
Other operations:

# delete the volume
gluster volume stop img
gluster volume delete img

# remove a machine from the trusted pool
gluster peer detach 172.28.26.102

# allow only hosts on the 172.28.26.* network to access the volume
gluster volume set img auth.allow 172.28.26.*

# add new machines to the pool and to the volume
# (with replica 2, bricks must be added in pairs: 2, 4, 6, ...; see the rebalance sketch after this block)
gluster peer probe 172.28.26.105
gluster peer probe 172.28.26.106
gluster volume add-brick img 172.28.26.105:/data/gluster 172.28.26.106:/data/gluster

# shrink the volume (gluster migrates the data off the bricks first)
gluster volume remove-brick img 172.28.26.101:/data/gluster/img 172.28.26.102:/data/gluster/img start
# check the migration status
gluster volume remove-brick img 172.28.26.101:/data/gluster/img 172.28.26.102:/data/gluster/img status
# commit once the migration has finished
gluster volume remove-brick img 172.28.26.101:/data/gluster/img 172.28.26.102:/data/gluster/img commit

# migrate a brick: move the data on 172.28.26.101 to 172.28.26.107; first add 172.28.26.107 to the pool
gluster peer probe 172.28.26.107
gluster volume replace-brick img 172.28.26.101:/data/gluster/img 172.28.26.107:/data/gluster/img start
# check the migration status
gluster volume replace-brick img 172.28.26.101:/data/gluster/img 172.28.26.107:/data/gluster/img status
# commit once the data migration has finished
gluster volume replace-brick img 172.28.26.101:/data/gluster/img 172.28.26.107:/data/gluster/img commit
# if 172.28.26.101 has failed and can no longer run, force the commit and then ask gluster to run a full heal right away
gluster volume replace-brick img 172.28.26.101:/data/gluster/img 172.28.26.102:/data/gluster/img commit force
gluster volume heal img full
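After add-brick (the commands above), existing data is not spread onto the new bricks automatically; a rebalance takes care of that:

gluster volume rebalance img start
gluster volume rebalance img status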