The system used in this article is CentOS 6.5 x64.
LVM's mirroring feature works somewhat like RAID 1: multiple disks are kept in sync with each other, so data is not lost if one of them fails.
1. Add four physical disks, 2 GB each.
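Before creating any LVM objects, it is worth confirming that the four new disks are actually visible to the system. A minimal check (not part of the original article; the device names sdb-sde are the ones used throughout this walkthrough):

ls /dev/sd[b-e]                  # sdb sdc sdd sde should all exist
fdisk -l | grep 'Disk /dev/sd'   # each of the four new disks should report roughly 2147 MB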
2. Create physical volumes on sdb, sdc, sdd, and sde, then add sdb, sdc, and sdd to the volume group vgTest.
[root@node4 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@node4 ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created
[root@node4 ~]# pvcreate /dev/sdd
  Physical volume "/dev/sdd" successfully created
[root@node4 ~]# pvcreate /dev/sde
  Physical volume "/dev/sde" successfully created
[root@node4 ~]# vgcreate vgTest /dev/sdb /dev/sdc /dev/sdd
  Volume group "vgTest" successfully created
[root@node4 ~]#
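Optionally, the result can be double-checked with pvs and vgs; this verification step is not in the original transcript:

pvs          # /dev/sdb, /dev/sdc and /dev/sdd should now belong to vgTest; /dev/sde remains unassigned
vgs vgTest   # the volume group should show roughly 6 GB of total space (3 x 2 GB disks)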
3. Create the logical volume.
[root@node4 ~]# lvcreate -L 1G -m1 -n lvTest vgTest /dev/sdb /dev/sdc /dev/sdd
  Logical volume "lvTest" created
Check the lvs information:
[root@node4 ~]# lvs -a -o +devices
  LV                VG       Attr       LSize   Pool Origin Data%  Move Log         Cpy%Sync Convert Devices
  lv_root           VolGroup -wi-ao----   8.54g                                                       /dev/sda2(0)
  lv_swap           VolGroup -wi-ao---- 992.00m                                                       /dev/sda2(2186)
  lvTest            vgTest   mwi-a-m---   1.00g                         lvTest_mlog 100.00            lvTest_mimage_0(0),lvTest_mimage_1(0)
  [lvTest_mimage_0] vgTest   iwi-aom---   1.00g                                                       /dev/sdb(0)
  [lvTest_mimage_1] vgTest   iwi-aom---   1.00g                                                       /dev/sdc(0)
  [lvTest_mlog]     vgTest   lwi-aom---   4.00m                                                       /dev/sdd(0)
LVM mirroring is requested with the -m1 option. As the output above shows, /dev/sdb and /dev/sdc mirror each other, while /dev/sdd is used to store the mirror log.
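Right after creation, the two mirror legs have to be synchronized before the Cpy%Sync column reaches 100.00. One way to watch that progress (a hedged sketch, not from the original article; copy_percent is the standard lvs field behind the Cpy%Sync column):

watch -n 2 'lvs -a -o lv_name,copy_percent,devices vgTest'   # re-run lvs every 2 seconds until the copy reaches 100.00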
4. Format the logical volume and create a file on it, then destroy /dev/sdc to simulate a disk failure.
[root@node4 ~]# mkfs.ext4 /dev/vgTest/lvTest
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
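Optionally, the new filesystem can be confirmed before the failure is simulated; this quick check is an assumption, not part of the original session:

blkid /dev/vgTest/lvTest                # should report TYPE="ext4"
tune2fs -l /dev/vgTest/lvTest | head    # prints the first lines of the ext4 superblock summary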
[root@node4 ~]# dd if=/dev/zero of=/dev/sdc count=10 bs=20M
10+0 records in
10+0 records out
209715200 bytes (210 MB) copied, 2.08666 s, 101 MB/s
[root@node4 ~]# lvs -a -o +devices
  Couldn't find device with uuid zecO8D-2Suc-rnmK-a2Z7-6613-Zy1X-whVS0X.
  LV                VG       Attr       LSize   Pool Origin Data%  Move Log         Cpy%Sync Convert Devices
  lv_root           VolGroup -wi-ao----   8.54g                                                       /dev/sda2(0)
  lv_swap           VolGroup -wi-ao---- 992.00m                                                       /dev/sda2(2186)
  lvTest            vgTest   mwi-a-m-p-   1.00g                         lvTest_mlog 100.00            lvTest_mimage_0(0),lvTest_mimage_1(0)
  [lvTest_mimage_0] vgTest   iwi-aom---   1.00g                                                       /dev/sdb(0)
  [lvTest_mimage_1] vgTest   iwi-aom-p-   1.00g                                                       unknown device(0)
  [lvTest_mlog]     vgTest   lwi-aom---   4.00m                                                       /dev/sdd(0)
[root@node4 ~]# lvscan
  Couldn't find device with uuid zecO8D-2Suc-rnmK-a2Z7-6613-Zy1X-whVS0X.
  ACTIVE            '/dev/vgTest/lvTest' [1.00 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_root' [8.54 GiB] inherit
  ACTIVE            '/dev/VolGroup/lv_swap' [992.00 MiB] inherit
[root@node4 ~]#
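The trailing "p" (partial) in the Attr column and the "unknown device" entry show that one mirror leg has been lost. A quick, hypothetical way to see which physical volume is missing (not part of the original session):

pvs   # the damaged PV should now appear as "unknown device", still listed under vgTest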
Remount the logical volume and confirm that files can still be read and written normally:
[root@node4 ~]# mkdir /lvmTest
[root@node4 ~]# mount /dev/vgTest/lvTest /lvmTest/
[root@node4 ~]# cd /lvmTest/
[root@node4 lvmTest]# ls
lost+found
[root@node4 lvmTest]# echo "ac" > ac
[root@node4 lvmTest]# cat ac
ac
[root@node4 lvmTest]#
Remove the failed physical volume (/dev/sdc) from the volume group:
[root@node4 lvmTest]# vgdisplay
  Couldn't find device with uuid zecO8D-2Suc-rnmK-a2Z7-6613-Zy1X-whVS0X.
  --- Volume group ---
  VG Name               vgTest
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                2
  VG Size               5.99 GiB
  PE Size               4.00 MiB
  Total PE              1533
  Alloc PE / Size       513 / 2.00 GiB
  Free  PE / Size       1020 / 3.98 GiB
  VG UUID               1qzO3A-Tjvi-by9l-Oq49-byz3-tIkx-rfSqex
[root@node4 lvmTest]# vgreduce --removemissing --force vgTest
  Couldn't find device with uuid zecO8D-2Suc-rnmK-a2Z7-6613-Zy1X-whVS0X.
  Wrote out consistent volume group vgTest
[root@node4 lvmTest]#
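After vgreduce, the volume group should be left with only the two surviving physical volumes. A hypothetical verification (pv_count and the other fields are standard vgs output columns):

vgs -o vg_name,pv_count,lv_count,vg_size,vg_free vgTest   # pv_count should have dropped from 3 to 2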
5. Add the new physical volume (/dev/sde) to the volume group:
[root@node4 lvmTest]# vgextend vgTest /dev/sde
  Volume group "vgTest" successfully extended
[root@node4 lvmTest]#
6. Recover the data (the logical volume does not need to be unmounted during this process):
[root@node4 lvmTest]# lvconvert -m1 /dev/vgTest/lvTest /dev/sdb /dev/sdd /dev/sde
  vgTest/lvTest: Converted: 0.0%
  vgTest/lvTest: Converted: 100.0%
[root@node4 lvmTest]# lvs -a -o +devices
  LV                VG       Attr       LSize   Pool Origin Data%  Move Log         Cpy%Sync Convert Devices
  lv_root           VolGroup -wi-ao----   8.54g                                                       /dev/sda2(0)
  lv_swap           VolGroup -wi-ao---- 992.00m                                                       /dev/sda2(2186)
  lvTest            vgTest   mwi-aom---   1.00g                         lvTest_mlog 100.00            lvTest_mimage_0(0),lvTest_mimage_1(0)
  [lvTest_mimage_0] vgTest   iwi-aom---   1.00g                                                       /dev/sdb(0)
  [lvTest_mimage_1] vgTest   iwi-aom---   1.00g                                                       /dev/sdd(0)
  [lvTest_mlog]     vgTest   lwi-aom---   4.00m                                                       /dev/sde(0)
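For reference, depending on the LVM2 version, the same leg replacement can also be done in a single step with lvconvert --repair once a spare physical volume is in the volume group; this is an alternative to the manual -m1 conversion shown above, not what this article used:

lvconvert --repair vgTest/lvTest   # rebuild the failed mirror leg from free space in the volume group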
7. Verify the original data:
[root@node4 lvmTest]# cat ac
ac
[root@node4 lvmTest]# echo "abcde" >> ac
[root@node4 lvmTest]# cat ac
ac
abcde
[root@node4 lvmTest]#
[root@node4 lvmTest]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgTest/lvTest
  LV Name                lvTest
  VG Name                vgTest
  LV UUID                a8kDmI-R3ls-SfKJ-qx3d-1Tbb-wPAd-TJcQfn
  LV Write Access        read/write
  LV Creation host, time node4.lansgg.com, 2015-09-10 20:50:41 +0800
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             256
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
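If the damaged disk is later physically replaced, the replacement can be initialized and returned to the volume group as standby space. A minimal sketch, assuming the new disk shows up as /dev/sdc again:

pvcreate /dev/sdc          # assumption: /dev/sdc is now the blank replacement disk
vgextend vgTest /dev/sdc   # add it back to vgTest as free space for future use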
This article is reposted from the 51CTO blog of 西索oO. Original link: http://blog.51cto.com/lansgg/1693456