I. Introduction to the GlusterFS base environment
1. About the GlusterFS file system and its architecture
http://jingyan.baidu.com/article/046a7b3ef65250f9c27fa9d9.html
2. Goals of this exercise
a. Use several older, lower-performance servers to provide an in-house cloud drive service
b. Deploy and configure the GlusterFS servers and a client
c. Distribute data across multiple GlusterFS nodes (DHT load balancing)
3. Test environment
Operating system: CentOS 6.7 x86_64
Kernel version: 2.6.32-573.el6.x86_64
Software version: GlusterFS 3.7.10
Four servers form a GlusterFS distributed (DHT) volume; a Windows 10 client connects to it through Samba.
II. GlusterFS server configuration (server01)
1. GlusterFS has no central metadata node, so most of the settings below only need to be issued on a single host.
2. Synchronize the clock against an NTP server (this can also be scheduled as a cron job; see the sketch after the transcript below)
[root@server01 ~]# ntpdate -u 10.203.10.20
18 Apr 14:16:15 ntpdate[2700]: adjust time server 10.203.10.20 offset 0.008930 sec
[root@server01 ~]# hwclock -w
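If the hosts do not run ntpd permanently, the same one-shot sync can be repeated from cron. A minimal sketch, assuming the lab NTP server 10.203.10.20 used above; adjust the address and interval to your environment:
[root@server01 ~]# crontab -e
# sync against the lab NTP server every 30 minutes, then save the time to the hardware clock
*/30 * * * * /usr/sbin/ntpdate -u 10.203.10.20 >/dev/null 2>&1 && /sbin/hwclock -w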
3. Check the /etc/hosts entries
[root@server01 ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11    server01
192.168.1.12    server02
192.168.1.13    server03
192.168.1.14    server04
4. Add a dedicated disk to hold the shared brick (LVM could be used instead; an LVM alternative is sketched after the transcript below)
[root@server01 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x0ef88f22.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0ef88f22

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@server01 ~]# partx /dev/sdb
# 1:        63- 41929649 ( 41929587 sectors,  21467 MB)
# 2:         0-       -1 (        0 sectors,      0 MB)
# 3:         0-       -1 (        0 sectors,      0 MB)
# 4:         0-       -1 (        0 sectors,      0 MB)
[root@server01 ~]# fdisk -l | grep /dev/sdb
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               1        2610    20964793+  83  Linux
[root@server01 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
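As mentioned in the step heading, the brick could also sit on an LVM logical volume instead of a raw partition, which makes growing the brick later easier. A minimal sketch, assuming /dev/sdb is dedicated to the brick; the volume group and logical volume names are illustrative only:
[root@server01 ~]# pvcreate /dev/sdb
[root@server01 ~]# vgcreate vg_gluster /dev/sdb                 # volume group name is arbitrary
[root@server01 ~]# lvcreate -n lv_brick01 -l 100%FREE vg_gluster
[root@server01 ~]# mkfs.ext4 /dev/vg_gluster/lv_brick01
[root@server01 ~]# mount /dev/vg_gluster/lv_brick01 /glusterfs-xfs-mount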
5. Create the mount directory and mount the new partition
[root@server01 ~]# mkdir -p /glusterfs-xfs-mount
[root@server01 ~]# mount /dev/sdb1 /glusterfs-xfs-mount/
[root@server01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.2G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot
/dev/sdb1        20G   44M   19G   1% /glusterfs-xfs-mount
6. Make the mount automatic at boot
[root@server01 ~]# echo '/dev/sdb1 /glusterfs-xfs-mount ext4 defaults 0 0' >> /etc/fstab
Note: the partition was formatted as ext4 above, so the filesystem type in fstab must be ext4 despite the xfs-style directory name; an xfs entry here would fail to mount at boot.
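Before relying on this entry at boot time, it is worth checking that fstab parses cleanly. A quick, non-destructive test:
[root@server01 ~]# umount /glusterfs-xfs-mount
[root@server01 ~]# mount -a                    # remounts everything in /etc/fstab; an error here means a bad entry
[root@server01 ~]# df -h | grep sdb1           # confirm /dev/sdb1 is back on /glusterfs-xfs-mount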
7. Add the external yum repository
[root@server01 ~]# cd /etc/yum.repos.d/
[root@server01 yum.repos.d]# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
8. Install the GlusterFS server package and start the service
[root@server01 yum.repos.d]# yum -y install glusterfs-server
[root@server01 yum.repos.d]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
[root@server01 yum.repos.d]# chkconfig glusterd on
[root@server01 yum.repos.d]# chkconfig --list glusterd
glusterd        0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server01 yum.repos.d]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.2G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot
/dev/sdb1        20G   44M   19G   1% /glusterfs-xfs-mount
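The lab hosts apparently have no firewall in the way. On a stock CentOS 6 install with iptables enabled, the Gluster management port (24007) and the brick ports (49152 and up, as seen later in the volume status output) would have to be opened first. A hedged sketch, assuming iptables is active and the range covers the planned number of bricks:
[root@server01 ~]# iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
[root@server01 ~]# iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick ports, one per brick
[root@server01 ~]# service iptables save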
9. Add the cluster peers (server02 and server03)
[root@server01 yum.repos.d]# gluster peer status
Number of Peers: 0
[root@server01 yum.repos.d]# gluster peer probe server02
peer probe: success.
[root@server01 yum.repos.d]# gluster peer status
Number of Peers: 1

Hostname: server02
Uuid: c58d0715-32ff-4962-90d9-4275fa65793a
State: Peer in Cluster (Connected)
[root@server01 yum.repos.d]# gluster peer probe server03
peer probe: success.
[root@server01 yum.repos.d]# gluster peer status
Number of Peers: 2

Hostname: server02
Uuid: c58d0715-32ff-4962-90d9-4275fa65793a
State: Peer in Cluster (Connected)

Hostname: server03
Uuid: 5110d0af-fdd9-4c82-b716-991cf0601b53
State: Peer in Cluster (Connected)
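Should a probe ever hit the wrong host, the peer can be dropped again as long as it holds no bricks. A small sketch; server05 here is a hypothetical mis-probed host, not part of this lab:
[root@server01 ~]# gluster peer detach server05     # hypothetical host, remove it from the trusted pool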
10. Create the Gluster volume
[root@server01 yum.repos.d]# gluster volume create dht-volume01 server01:/glusterfs-xfs-mount server02:/glusterfs-xfs-mount server03:/glusterfs-xfs-mount
volume create: dht-volume01: failed: The brick server01:/glusterfs-xfs-mount is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
[root@server01 yum.repos.d]# echo $?
1
[root@server01 yum.repos.d]# gluster volume create dht-volume01 server01:/glusterfs-xfs-mount server02:/glusterfs-xfs-mount server03:/glusterfs-xfs-mount force
volume create: dht-volume01: success: please start the volume to access data
[root@server01 yum.repos.d]# gluster volume start dht-volume01
volume start: dht-volume01: success
[root@server01 yum.repos.d]# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152     0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152     0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152     0          Y       11966
NFS Server on localhost                     N/A       N/A        N       N/A
NFS Server on server02                      N/A       N/A        N       N/A
NFS Server on server03                      N/A       N/A        N       N/A

Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks
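Without a type keyword, gluster builds a pure distribute (DHT) volume, which is what this lab wants: each file lives on exactly one brick and there is no redundancy. If redundancy mattered more than capacity, the same bricks could instead be declared as a replicated volume. A hedged sketch (the volume name is illustrative, and 'force' is again only needed because the bricks sit directly on mount points):
[root@server01 ~]# gluster volume create rep-volume01 replica 2 server01:/glusterfs-xfs-mount server02:/glusterfs-xfs-mount force
[root@server01 ~]# gluster volume start rep-volume01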
11. Test by writing a 512 MB file
[root@server01 yum.repos.d]# cd /glusterfs-xfs-mount/
[root@server01 glusterfs-xfs-mount]# dd if=/dev/zero of=test.img bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.20376 s, 103 MB/s
[root@server01 glusterfs-xfs-mount]# ls
lost+found  test.img
III. GlusterFS server configuration (server02; server03 is configured the same way)
1. Synchronize the time; the NTP server address here is 10.203.10.20
[root@server02 ~]# ntpdate -u 10.203.10.20
18 Apr 14:27:58 ntpdate[2712]: adjust time server 10.203.10.20 offset -0.085282 sec
[root@server02 ~]# hwclock -w
2. Check the /etc/hosts entries
[root@server02 ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11    server01
192.168.1.12    server02
192.168.1.13    server03
192.168.1.14    server04
3. Prepare a dedicated local disk that will become part of the GlusterFS volume
[root@server02 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x927b5e72.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x927b5e72

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
4. Re-read the partition table, then format the new partition
[root@server02 ~]# partx /dev/sdb
# 1:        63- 41929649 ( 41929587 sectors,  21467 MB)
# 2:         0-       -1 (        0 sectors,      0 MB)
# 3:         0-       -1 (        0 sectors,      0 MB)
# 4:         0-       -1 (        0 sectors,      0 MB)
[root@server02 ~]# fdisk -l | grep /dev/sdb
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               1        2610    20964793+  83  Linux
[root@server02 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@server02 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.3G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot
5. Create the mount directory and configure the mount
[root@server02 ~]# mkdir -p /glusterfs-xfs-mount
[root@server02 ~]# mount /dev/sdb1 /glusterfs-xfs-mount/
[root@server02 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       193G  7.3G  176G   4% /
tmpfs           932M     0  932M   0% /dev/shm
/dev/sda1       190M   41M  139M  23% /boot
/dev/sdb1        20G   44M   19G   1% /glusterfs-xfs-mount
[root@server02 ~]# echo '/dev/sdb1 /glusterfs-xfs-mount ext4 defaults 0 0' >> /etc/fstab
As on server01, the fstab entry must say ext4, since that is how /dev/sdb1 was formatted.
6. Configure the yum repositories and install the GlusterFS server package
[root@server02 ~]# cd /etc/yum.repos.d/
[root@server02 yum.repos.d]# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
--2016-04-18 14:32:22--  http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
Resolving download.gluster.org... 23.253.208.221, 2001:4801:7824:104:be76:4eff:fe10:23d8
Connecting to download.gluster.org|23.253.208.221|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1049 (1.0K)
Saving to: "glusterfs-epel.repo"

100%[==============================================================>] 1,049       --.-K/s   in 0s

2016-04-18 14:32:23 (36.4 MB/s) - "glusterfs-epel.repo" saved [1049/1049]

[root@server02 yum.repos.d]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Retrieving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.gaJCKd: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@server02 yum.repos.d]# yum -y install glusterfs-server
7. Start the glusterd service
[root@server02 yum.repos.d]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
[root@server02 yum.repos.d]# chkconfig glusterd on
[root@server02 yum.repos.d]# chkconfig --list glusterd
glusterd        0:off   1:off   2:on    3:on    4:on    5:on    6:off
8. Check the cluster peers and the volume status
[root@server02 yum.repos.d]# gluster peer status
Number of Peers: 1

Hostname: server01
Uuid: e90a3b54-5a9d-4e57-b502-86f9aad8b576
State: Peer in Cluster (Connected)
[root@server02 yum.repos.d]# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152     0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152     0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152     0          Y       11966
NFS Server on localhost                     2049      0          Y       2932
NFS Server on server01                      2049      0          Y       2968
NFS Server on server03                      2049      0          Y       11986

Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks

[root@server02 yum.repos.d]# ll /glusterfs-xfs-mount/
total 16
drwx------ 2 root root 16384 Apr 18 14:29 lost+found
[root@server02 yum.repos.d]# cd /glusterfs-xfs-mount/
[root@server02 glusterfs-xfs-mount]# dd if=/dev/zero of=server02.img bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.85478 s, 91.7 MB/s
[root@server02 glusterfs-xfs-mount]# ls
lost+found  server02.img
Because this is a distribute (DHT) volume, the bricks on server01 and server02 hold different files, unless identical files are written to both deliberately.
IV. Manually adding and removing brick nodes (these operations can be run on any gluster server node)
1. Add the server04 peer
[root@server01 glusterfs-xfs-mount]# gluster peer probe server04
peer probe: success.
[root@server01 glusterfs-xfs-mount]# gluster peer status
Number of Peers: 3

Hostname: server02
Uuid: c58d0715-32ff-4962-90d9-4275fa65793a
State: Peer in Cluster (Connected)

Hostname: server03
Uuid: 5110d0af-fdd9-4c82-b716-991cf0601b53
State: Peer in Cluster (Connected)

Hostname: server04
Uuid: d653b5c2-dac4-428c-bf6f-eea393adbb16
State: Peer in Cluster (Connected)
2. Add the new node's brick to dht-volume01
[root@server01 glusterfs-xfs-mount]# gluster volume add-brick
Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force]
[root@server01 glusterfs-xfs-mount]# gluster volume add-brick dht-volume01 server04:/glusterfs-xfs-mount
volume add-brick: failed: Pre Validation failed on server04. The brick server04:/glusterfs-xfs-mount is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
[root@server01 glusterfs-xfs-mount]# gluster volume add-brick dht-volume01 server04:/glusterfs-xfs-mount force
volume add-brick: success
[root@server01 glusterfs-xfs-mount]# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152     0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152     0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152     0          Y       11966
Brick server04:/glusterfs-xfs-mount         49152     0          Y       2925
NFS Server on localhost                     2049      0          Y       3258
NFS Server on server02                      2049      0          Y       3107
NFS Server on server03                      2049      0          Y       12284
NFS Server on server04                      2049      0          Y       2945

Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks
3. Remove a node's brick from the volume
[root@server01 ~]# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount/
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
[root@server01 ~]# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount/ commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
[root@server01 ~]# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152     0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152     0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152     0          Y       11966
NFS Server on localhost                     2049      0          Y       3336
NFS Server on server02                      2049      0          Y       3146
NFS Server on server04                      2049      0          Y       2991
NFS Server on server03                      2049      0          Y       12323

Task Status of Volume dht-volume01
------------------------------------------------------------------------------
There are no active volume tasks
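A bare 'commit' as used above drops the brick immediately, which is why gluster warns about possible data loss. When the brick still holds files, the safer sequence is to start the removal, let gluster migrate the data off the brick, and only then commit. A sketch of that flow, following the usage string shown above:
[root@server01 ~]# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount start
[root@server01 ~]# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount status   # wait until migration shows "completed"
[root@server01 ~]# gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount commit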
V. Configure file distribution across the bricks (rebalance)
[root@server01 ~]# gluster volume rebalance dht-volume01
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
[root@server01 ~]# gluster volume rebalance dht-volume01 fix-layout start
volume rebalance: dht-volume01: success: Rebalance on dht-volume01 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 6ce8fd86-dd1e-4ce3-bb44-82532b5055dd
[root@server01 ~]# gluster volume rebalance dht-volume01 fix-layout status
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
[root@server01 ~]# gluster
gluster> volume rebalance dht-volume01 status
        Node  Rebalanced-files        size     scanned    failures     skipped                status  run time in h:m:s
   ---------       -----------  ----------  ----------  ----------  ----------          ------------     --------------
   localhost                 0      0Bytes           0           0           0  fix-layout completed              0:0:0
    server02                 0      0Bytes           0           0           0  fix-layout completed              0:0:0
    server03                 0      0Bytes           0           0           0  fix-layout completed              0:0:0
volume rebalance: dht-volume01: success
[root@server01 glusterfs-xfs-mount]# cd /glusterfs-xfs-mount/
[root@server01 glusterfs-xfs-mount]# gluster volume set dht-volume01 nfs.disable on
volume set: success
[root@server01 glusterfs-xfs-mount]# gluster volume status
Status of volume: dht-volume01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server01:/glusterfs-xfs-mount         49152     0          Y       2948
Brick server02:/glusterfs-xfs-mount         49152     0          Y       2910
Brick server03:/glusterfs-xfs-mount         49152     0          Y       11966

Task Status of Volume dht-volume01
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 6ce8fd86-dd1e-4ce3-bb44-82532b5055dd
Status               : completed

[root@server01 glusterfs-xfs-mount]# touch {1..100}.txt
[root@server01 glusterfs-xfs-mount]# ls
100.txt  12.txt  16.txt  20.txt  25.txt  3.txt   55.txt  65.txt  70.txt  75.txt  92.txt       lost+found
10.txt   14.txt  18.txt  21.txt  2.txt   43.txt  57.txt  67.txt  71.txt  77.txt  client2.iso  test.img
11.txt   15.txt  1.txt   22.txt  30.txt  47.txt  61.txt  6.txt   72.txt  88.txt  client.iso
[root@server02 glusterfs-xfs-mount]# ls
13.txt  23.txt  28.txt  34.txt  39.txt  46.txt  52.txt  66.txt  76.txt  81.txt  8.txt   93.txt
17.txt  26.txt  29.txt  35.txt  41.txt  4.txt   58.txt  68.txt  79.txt  83.txt  90.txt  94.txt
19.txt  27.txt  33.txt  37.txt  42.txt  51.txt  62.txt  73.txt  80.txt  86.txt  91.txt  97.txt
[root@server03 ~]# cd /glusterfs-xfs-mount/
[root@server03 glusterfs-xfs-mount]# ls
24.txt  36.txt  44.txt  49.txt  54.txt  5.txt   64.txt  78.txt  84.txt  89.txt  98.txt  lost+found
31.txt  38.txt  45.txt  50.txt  56.txt  60.txt  69.txt  7.txt   85.txt  95.txt  99.txt  server03.img
32.txt  40.txt  48.txt  53.txt  59.txt  63.txt  74.txt  82.txt  87.txt  96.txt  9.txt   server04.iso
[root@client01 ~]# cd /glusterFS-mount/
[root@client01 glusterFS-mount]# ls
100.txt  18.txt  26.txt  34.txt  42.txt  50.txt  59.txt  67.txt  75.txt  83.txt  91.txt  9.txt
10.txt   19.txt  27.txt  35.txt  43.txt  51.txt  5.txt   68.txt  76.txt  84.txt  92.txt  client2.iso
11.txt   1.txt   28.txt  36.txt  44.txt  52.txt  60.txt  69.txt  77.txt  85.txt  93.txt  client.iso
12.txt   20.txt  29.txt  37.txt  45.txt  53.txt  61.txt  6.txt   78.txt  86.txt  94.txt  lost+found
13.txt   21.txt  2.txt   38.txt  46.txt  54.txt  62.txt  70.txt  79.txt  87.txt  95.txt  server03.img
14.txt   22.txt  30.txt  39.txt  47.txt  55.txt  63.txt  71.txt  7.txt   88.txt  96.txt  server04.iso
15.txt   23.txt  31.txt  3.txt   48.txt  56.txt  64.txt  72.txt  80.txt  89.txt  97.txt  test.img
16.txt   24.txt  32.txt  40.txt  49.txt  57.txt  65.txt  73.txt  81.txt  8.txt   98.txt
17.txt   25.txt  33.txt  41.txt  4.txt   58.txt  66.txt  74.txt  82.txt  90.txt  99.txt
The files are indeed spread evenly across the bricks, so the distribution works as intended.
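Note that 'fix-layout start' only rewrites the directory layout so that new files can land on newly added bricks; files that already exist stay where they are. To also migrate existing data onto a new brick, a full rebalance can be run instead, following the same usage string shown above:
[root@server01 ~]# gluster volume rebalance dht-volume01 start
[root@server01 ~]# gluster volume rebalance dht-volume01 status    # wait until every node reports "completed"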
VI. Configure the Gluster client
[root@client01 ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11    server01
192.168.1.12    server02
192.168.1.13    server03
192.168.1.14    server04
[root@client01 ~]# mkdir -p /glusterFS-mount
[root@client01 ~]# mount -t glusterfs server01:/dht-volume01 /glusterFS-mount/
[root@client01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                193G  7.6G  176G   5% /
tmpfs                    932M   76K  932M   1% /dev/shm
/dev/sda1                190M   41M  139M  23% /boot
server01:/dht-volume01    59G  1.7G   55G   3% /glusterFS-mount
[root@client01 ~]# cd /glusterFS-mount/
[root@client01 glusterFS-mount]# LS
-bash: LS: command not found
[root@client01 glusterFS-mount]# ls
lost+found  server02.img  server03.img  test.img
[root@client01 glusterFS-mount]# dd if=/dev/zero of=client.iso bs=1M count=123
123+0 records in
123+0 records out
128974848 bytes (129 MB) copied, 1.52512 s, 84.6 MB/s
[root@client01 glusterFS-mount]# ls
client.iso  lost+found  server02.img  server03.img  test.img
[root@client01 glusterFS-mount]# dd if=/dev/zero of=client2.iso bs=1M count=456
456+0 records in
456+0 records out
478150656 bytes (478 MB) copied, 8.76784 s, 54.5 MB/s
[root@client01 glusterFS-mount]# ls
client2.iso  client.iso  lost+found  server02.img  server03.img  test.img
[root@client01 glusterFS-mount]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                193G  7.2G  176G   4% /
tmpfs                    932M   76K  932M   1% /dev/shm
/dev/sda1                190M   41M  139M  23% /boot
server01:/dht-volume01    40G  1.7G   36G   5% /glusterFS-mount
[root@client01 glusterFS-mount]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                193G  7.6G  176G   5% /
tmpfs                    932M   80K  932M   1% /dev/shm
/dev/sda1                190M   41M  139M  23% /boot
server01:/dht-volume01    59G  2.2G   54G   4% /glusterFS-mount
[root@client01 glusterFS-mount]# cd ~
[root@client01 ~]# mount -a
Mount failed. Please check the log file for more details.
Mount failed. Please check the log file for more details.
[root@client01 ~]# ls
anaconda-ks.cfg  Documents  install.log         Music     Public     Videos
Desktop          Downloads  install.log.syslog  Pictures  Templates
[root@client01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                193G  7.6G  176G   5% /
tmpfs                    932M   80K  932M   1% /dev/shm
/dev/sda1                190M   41M  139M  23% /boot
server01:/dht-volume01    79G  2.3G   72G   4% /glusterFS-mount
The increase in capacity reported by df shows that the additional brick has been picked up by the client successfully.
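To make the client mount survive a reboot, a GlusterFS entry can be added to the client's /etc/fstab. A minimal sketch, assuming the glusterfs-fuse client is installed and that server02 should act as a fallback volfile server (the backupvolfile-server option is the one accepted by the FUSE mount helper; verify it against your installed version):
[root@client01 ~]# echo 'server01:/dht-volume01 /glusterFS-mount glusterfs defaults,_netdev,backupvolfile-server=server02 0 0' >> /etc/fstab
[root@client01 ~]# umount /glusterFS-mount && mount -a     # verify the entry before relying on it at boot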
VII. Use Samba to share the GlusterFS mount on the client with Windows machines
1. Install the Samba service
[root@client01 ~]# yum -y install samba
[root@client01 ~]# /etc/init.d/smb restart
Shutting down SMB services:                                [  OK  ]
Starting SMB services:                                     [  OK  ]
[root@client01 ~]# /etc/init.d/nmb restart
Shutting down NMB services:                                [  OK  ]
Starting NMB services:                                     [  OK  ]
[root@client01 ~]# chkconfig smb on
[root@client01 ~]# chkconfig nmb on
[root@client01 ~]# vim /etc/samba/smb.conf
The relevant parts of the configuration file:
        workgroup = WORKGROUP                     (workgroup name)
        server string = Samba Server Version %v   (displayed server version)
        hosts allow = 127. 192.168.1. 10.10.10.   (hosts allowed to connect)
        log file = /var/log/samba/log.%m          (log file location)
        max log size = 50                         (maximum log size)
        security = user                           (Samba authentication level)
        passdb backend = tdbsam
[云盘测试平台]
        comment = yunpan
        browseable = yes
        writable = yes
        public = yes
        path = /glusterFS-mount
        valid users = wanlong
Note: valid users must list the Samba account that will actually connect; with the jifang01 account created in the next step, this line should read valid users = jifang01 (or the wanlong account must also be added with smbpasswd).
[root@client01 ~]# /etc/init.d/smb restart
Shutting down SMB services:                                [  OK  ]
Starting SMB services:                                     [  OK  ]
[root@client01 ~]# /etc/init.d/nmb restart
Shutting down NMB services:                                [  OK  ]
Starting NMB services:                                     [  OK  ]
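After editing smb.conf it is worth letting Samba validate the syntax before restarting the daemons; testparm ships with the samba package:
[root@client01 ~]# testparm -s /etc/samba/smb.conf     # prints the parsed configuration and reports syntax errors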
2. Configure the Samba user
[root@client01 ~]# adduser jifang01 -s /sbin/nologin
[root@client01 ~]# id jifang01
uid=501(jifang01) gid=501(jifang01) groups=501(jifang01)
[root@client01 ~]# smbpasswd -a jifang01
New SMB password:
Retype new SMB password:
Added user jifang01.
3. Set permissions on the local directory
[root@client01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda3                193G  7.6G  176G   5% /
tmpfs                    932M   72K  932M   1% /dev/shm
/dev/sda1                190M   41M  139M  23% /boot
server01:/dht-volume01    79G  1.8G   73G   3% /glusterFS-mount
[root@client01 ~]# chmod -R 777 /glusterFS-mount/
4. Map the share as a network drive on the Windows client and verify access (a quick check from Linux is sketched below)
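Before going to the Windows machine, the share can also be checked from any Linux host that has the samba-client package; the share name below is the one defined in smb.conf above, and the Samba host is assumed to be reachable as client01 (or by its IP address):
[root@some-host ~]# smbclient -L //client01 -U jifang01                 # list the shares exported by client01
[root@some-host ~]# smbclient '//client01/云盘测试平台' -U jifang01      # interactive access to the share itself
From Windows, the equivalent is mapping \\<client01-IP>\云盘测试平台 as a network drive with the jifang01 credentials.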