Preface
I have been an operations engineer for three years, moving from development to testing and then to operations, and it has been a continuous process of learning. Development is not necessarily the best role; find the career that suits you and keep deepening your skills, and you will shine in your own field. This series follows the exercises in the book 《Linux就该这么学》, presenting the parts of the textbook that do not flow smoothly in the simplest possible way, free and open to everyone. You can find the resources yourself with a quick search (e.g. on Baidu). I also encourage you to follow teacher 刘遄 (Liu Chuan)'s second edition, which covers CentOS 8, to learn the most up-to-date Linux techniques.
Summary of commonly used commands
RAID
RAID combines multiple hard disks into a single array with larger capacity and better reliability. Data is split into segments stored across the different physical disks, and distributed reads and writes improve the array's overall performance; at the same time, copies of important data can be kept in sync on different physical disks, providing very good redundancy. Put simply: don't put all your eggs in one basket.
RAID 0
RAID 0 stripes two or more physical disks together, by hardware or software, into one large volume group. Disk1 and disk2 each hold part of the data, so writes and reads are split across the disks.
RAID 1
When data is written, it is written to several disks at the same time (think of it as mirroring or backing up the data). When one of the disks fails, normal use of the data is typically restored immediately, often by hot-swapping the failed disk.
RAID 5
The parity segments store checksum information for the data; when a disk fails, the parity information is used to try to rebuild the damaged data.
RAID 10
RAID 10 is a combination of RAID 1 and RAID 0. When cost is not a concern, RAID 10 outperforms RAID 5.
Deploying a disk array
Add four disks to the virtual machine to build a RAID 10 array (see chapter 006 for how to add disks). Once they are in place, there is a new command to learn.
The mdadm command
The mdadm command manages software RAID arrays on Linux. Its format is "mdadm [mode] [options] [member devices]".
Use the mdadm command to create a RAID 10 array named "/dev/md0":
The -C parameter creates a new RAID array.
The -v parameter shows the creation process; it is followed by the device name, /dev/md0.
The -a yes parameter creates the device file automatically.
The -n 4 parameter builds the array from 4 disks.
The -l 10 parameter selects the RAID 10 level.
[root@linux ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000cac32
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      616447      307200   83  Linux
/dev/sda2          616448     4810751     2097152   82  Linux swap / Solaris
/dev/sda3         4810752    41943039    18566144   83  Linux

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@linux ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Format the finished RAID array as ext4 (the same procedure as in chapter 006).
[root@linux ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Create a mount point and mount the device.
[root@linux ~]# mkdir /RAID
[root@linux ~]# mount /dev/md0 /RAID
[root@linux ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3                18G  5.6G   13G  32% /
devtmpfs                472M     0  472M   0% /dev
tmpfs                   488M     0  488M   0% /dev/shm
tmpfs                   488M  8.6M  479M   2% /run
tmpfs                   488M     0  488M   0% /sys/fs/cgroup
/dev/sda1               297M  138M  160M  47% /boot
tmpfs                    98M  4.0K   98M   1% /run/user/42
tmpfs                    98M   32K   98M   1% /run/user/1001
/dev/sr0                4.2G  4.2G     0 100% /run/media/weihongbin/CentOS 7 x86_64
/dev/md0                 40G   49M   38G   1% /RAID
Check the detailed information of the /dev/md0 disk array, then write the mount entry into the configuration file so that it takes effect permanently.
[root@linux ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct 20 14:40:14 2022
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Oct 20 14:43:43 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : d9eb6af0:515bcea2:ee3cb41b:895a3b3f
            Events : 28
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
[root@linux ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
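The fstab entry only makes the mount persistent. As an optional extra step not shown in the book's transcript (so treat the config file path as an assumption for your distribution), the array definition can also be recorded so it is assembled under a stable name at boot:
[root@linux ~]# mdadm -D --scan >> /etc/mdadm.conf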
Damaging and repairing a disk array
Once a physical disk is confirmed to be damaged and can no longer be used normally, use mdadm to mark it faulty and remove it from service, then check the RAID array's status: the state has changed.
[root@linux ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@linux ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct 20 14:40:14 2022
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Oct 20 15:20:31 2022
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : d9eb6af0:515bcea2:ee3cb41b:895a3b3f
            Events : 30
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
       0       8       16        -      faulty   /dev/sdb
A single failed disk within one of the RAID 1 mirror pairs does not stop the RAID 10 array from working. After buying a new disk, use mdadm to replace the failed one: reboot the system, then add the new disk to the RAID array. Note that the /dev/sdb below is the replacement disk, not the damaged one; the new disk simply inherits the same device name.
[root@linux ~]# umount /RAID
[root@linux ~]# mdadm /dev/md0 -a /dev/sdb
[root@linux ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct 20 14:40:14 2022
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Oct 20 14:43:43 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : d9eb6af0:515bcea2:ee3cb41b:895a3b3f
            Events : 28
    Number   Major   Minor   RaidDevice State
       4       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
[root@linux ~]# mount -a
[root@linux ~]# reboot
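While the replacement disk resynchronizes, the rebuild progress can be watched through /proc/mdstat; this is a general mdadm facility rather than part of the book's walkthrough:
[root@linux ~]# cat /proc/mdstat
[root@linux ~]# watch -n 1 cat /proc/mdstat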
Disk array + spare disk
Reset the environment (taking a VM snapshot beforehand is recommended; an alternative teardown sketch follows this paragraph). This time, deploy a RAID 5 array with a spare disk so that a failed disk does not leave the array beyond repair. RAID 5 needs at least 3 disks, and one more disk is needed as the spare, so the virtual machine again needs 4 simulated disks (the same configuration as for RAID 10). Create a RAID 5 array plus a spare disk.
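If you would rather tear down the previous array than revert a snapshot, a minimal sketch (assuming /dev/md0 and the /RAID mount point from the previous section; also remember to delete the /etc/fstab line by hand) would be:
[root@linux ~]# umount /RAID
[root@linux ~]# mdadm -S /dev/md0
[root@linux ~]# mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde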
The -n 3 parameter sets the number of disks used for the RAID 5 array.
The -l 5 parameter sets the RAID level.
The -x 1 parameter adds one spare (backup) disk.
[root@linux ~]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@linux ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct 20 16:13:53 2022
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Oct 20 16:14:20 2022
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
    Rebuild Status : 29% complete
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 2d812a99:5ad53ed0:2155c63a:c2891922
            Events : 5
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      spare rebuilding   /dev/sdd
       3       8       64        -      spare   /dev/sde
Format the deployed RAID 5 array as ext4 and mount it to a directory.
[root@linux ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@linux ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
[root@linux ~]# mkdir /RAID
[root@linux ~]# mount -a
[root@linux ~]# reboot
To simulate disk damage, push /dev/sdb out of the array by marking it faulty, then quickly check the status of /dev/md0: the spare disk has automatically taken its place and data synchronization has begun. Always deploy a spare disk!
[root@linux ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@linux ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct 20 16:13:53 2022
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Oct 20 16:17:08 2022
             State : active, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
    Rebuild Status : 11% complete
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 2d812a99:5ad53ed0:2155c63a:c2891922
            Events : 26
    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       0       8       16        -      faulty   /dev/sdb
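Once the rebuild completes, the faulty disk can be detached from the array; this is a standard mdadm operation, sketched here rather than taken from the book's transcript:
[root@linux ~]# mdadm /dev/md0 -r /dev/sdb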
LVM
LVM allows users to adjust hard-disk resources dynamically by adding a logical layer between disk partitions and the file system. If RAID is about not putting all your eggs in one basket, LVM is about deciding what each egg is used for once it is cracked open: first the eggs (physical volumes, PV) are broken into a bowl (a volume group, VG); then the mixture is divided into portions for scrambling or steaming (logical volumes, LV); and each portion must be weighed out in multiples of a fixed unit so nothing is uneven (physical extents, PE).
Deploying logical volumes
To deploy LVM, configure the physical volumes, volume group, and logical volumes one by one.
Deployment steps
1. Make the two newly added disks available to LVM (create physical volumes)
[root@linux ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000cac32
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      616447      307200   83  Linux
/dev/sda2          616448     4810751     2097152   82  Linux swap / Solaris
/dev/sda3         4810752    41943039    18566144   83  Linux

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@linux ~]# pvcreate /dev/sdb /dev/sdc
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
2. Add both disks to the storage volume group
[root@linux ~]# vgcreate storage /dev/sdb /dev/sdc
  Volume group "storage" successfully created
[root@linux ~]# vgdisplay
  --- Volume group ---
  VG Name               storage
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       0 / 0
  Free  PE / Size       10238 / 39.99 GiB
  VG UUID               upPFx8-zqS0-QpFZ-fdN8-KD1W-af2C-VUrk39
3. Carve out a logical volume device of about 150MB
To size by capacity, use the -L parameter.
To size by a count of physical extents, use the -l parameter.
Each physical extent defaults to 4MB.
For example, -l 37 creates a logical volume of 37 × 4MB = 148MB.
[root@linux ~]# lvcreate -n vo -l 37 storage
  Logical volume "vo" created
[root@linux ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/vo
  LV Name                vo
  VG Name                storage
  LV UUID                JC1hOS-eRNs-zcjp-zIWL-54Wp-zS4B-XIT3Pc
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-10-20 16:24:42 +0800
  LV Status              available
  # open                 0
  LV Size                148.00 MiB
  Current LE             37
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
4. Format the new logical volume, then mount it for use
[root@linux ~]# mkfs.ext4 /dev/storage/vo
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
38000 inodes, 151552 blocks
7577 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
19 block groups
8192 blocks per group, 8192 fragments per group
2000 inodes per group
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
[root@linux ~]# mkdir /linux
[root@linux ~]# mount /dev/storage/vo /linux
5. Check the mount status and write the entry into the configuration file to make it permanent
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3                18G  5.6G   13G  32% /
devtmpfs                472M     0  472M   0% /dev
tmpfs                   488M     0  488M   0% /dev/shm
tmpfs                   488M  8.6M  479M   2% /run
tmpfs                   488M     0  488M   0% /sys/fs/cgroup
/dev/sda1               297M  138M  160M  47% /boot
tmpfs                    98M  4.0K   98M   1% /run/user/42
tmpfs                    98M   32K   98M   1% /run/user/1001
/dev/sr0                4.2G  4.2G     0 100% /run/media/weihongbin/CentOS 7 x86_64
/dev/mapper/storage-vo  140M  1.6M  128M   2% /linux
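The transcript above does not capture the configuration file write that step 5 calls for; assuming the same device and mount point, the entry would be:
[root@localhost ~]# echo "/dev/storage/vo /linux ext4 defaults 0 0" >> /etc/fstab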
Extending a logical volume
As long as the volume group has enough free resources, a logical volume can be extended again and again. Before extending, always remember to unmount the device from its mount point.
[root@localhost ~]# umount /linux
1. Extend the logical volume vo to 290MB
[root@localhost ~]# lvextend -L 290M /dev/storage/vo
  Rounding size to boundary between physical extents: 292.00 MiB.
  Size of logical volume storage/vo changed from 148.00 MiB (37 extents) to 292.00 MiB (73 extents).
  Logical volume storage/vo successfully resized.
2. Check the file system's integrity, then resize it to the new capacity
[root@localhost ~]# e2fsck -f /dev/storage/vo
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/vo: 11/38000 files (0.0% non-contiguous), 10453/151552 blocks
[root@localhost ~]# resize2fs /dev/storage/vo
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/storage/vo to 299008 (1k) blocks.
The filesystem on /dev/storage/vo is now 299008 blocks long.
3. Remount the device and check the mount status
[root@localhost ~]# mount /dev/storage/vo /linux
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3                18G  5.6G   13G  32% /
devtmpfs                472M     0  472M   0% /dev
tmpfs                   488M     0  488M   0% /dev/shm
tmpfs                   488M  8.6M  479M   2% /run
tmpfs                   488M     0  488M   0% /sys/fs/cgroup
/dev/sda1               297M  138M  160M  47% /boot
tmpfs                    98M  4.0K   98M   1% /run/user/42
tmpfs                    98M   32K   98M   1% /run/user/1001
/dev/sr0                4.2G  4.2G     0 100% /run/media/weihongbin/CentOS 7 x86_64
/dev/mapper/storage-vo  279M  2.1M  259M   1% /linux
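As a side note, newer LVM releases can combine the extend and file system resize steps; if your lvextend supports the -r (--resizefs) flag, a one-command alternative to steps 1 and 2 would look like this (check your man page before relying on it):
[root@localhost ~]# lvextend -r -L 290M /dev/storage/vo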
缩小逻辑卷
相较于扩容逻辑卷,在对逻辑卷进行缩容操作时,其丢失数据的风险更大,生产环境中执行相应操作时,一定要提前备份好数据,先检查文件系统的完整性,执行缩容操作前记得先把文件系统卸载掉
[root@localhost ~]# umount /linux
1. Check the file system's integrity
[root@localhost ~]# e2fsck -f /dev/storage/vo
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/vo: 11/74000 files (0.0% non-contiguous), 15507/299008 blocks
2. Shrink the logical volume vo to 120MB (resize2fs shrinks the file system first; see the note after the output for the matching lvreduce step)
[root@localhost ~]# resize2fs /dev/storage/vo 120M
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/storage/vo to 122880 (1k) blocks.
The filesystem on /dev/storage/vo is now 122880 blocks long.
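resize2fs only shrinks the file system; the logical volume itself is still 292MB until it is reduced as well. That step is missing from the capture above, but the 113M size reported by df -h below implies it was run; it would look like this (lvreduce asks for confirmation before proceeding):
[root@localhost ~]# lvreduce -L 120M /dev/storage/vo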
3. Remount the file system and check its status
[root@localhost ~]# mount /dev/storage/vo /linux
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3                18G  5.6G   13G  32% /
devtmpfs                472M     0  472M   0% /dev
tmpfs                   488M     0  488M   0% /dev/shm
tmpfs                   488M  8.6M  479M   2% /run
tmpfs                   488M     0  488M   0% /sys/fs/cgroup
/dev/sda1               297M  138M  160M  47% /boot
tmpfs                    98M  4.0K   98M   1% /run/user/42
tmpfs                    98M   32K   98M   1% /run/user/1001
/dev/sr0                4.2G  4.2G     0 100% /run/media/weihongbin/CentOS 7 x86_64
/dev/mapper/storage-vo  113M  1.6M  103M   2% /linux
Logical volume snapshots
LVM's snapshot feature has two characteristics:
The snapshot volume needs enough capacity to hold the data that changes on the source logical volume (the book recommends making it equal to the logical volume's capacity; a snapshot that fills up becomes unusable)
A snapshot volume is valid only once: as soon as a restore operation is performed, it is deleted automatically
Check the volume group information
[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               storage
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       73 / 292.00 MiB
  Free  PE / Size       10165 / <39.71 GiB
  VG UUID               upPFx8-zqS0-QpFZ-fdN8-KD1W-af2C-VUrk39
Use output redirection to write a file into the directory where the logical volume is mounted.
[root@localhost ~]# echo "kuaizhao" > /linux/readme.txt [root@localhost ~]# ll /linux/ total 13 drwx------ 2 root root 12288 Oct 20 16:25 lost+found -rw-r--r-- 1 root root 9 Oct 20 16:45 readme.txt
1. Create the snapshot
The -s parameter creates a snapshot volume.
The -L parameter specifies the snapshot's size.
[root@localhost ~]# lvcreate -L 120M -s -n SNAP /dev/storage/vo
  Logical volume "SNAP" created.
[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/vo
  LV Name                vo
  VG Name                storage
  LV UUID                JC1hOS-eRNs-zcjp-zIWL-54Wp-zS4B-XIT3Pc
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-10-20 16:24:42 +0800
  LV snapshot status     source of SNAP [active]
  LV Status              available
  # open                 1
  LV Size                292.00 MiB
  Current LE             73
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/storage/SNAP
  LV Name                SNAP
  VG Name                storage
  LV UUID                dFeWto-cMYA-eKLz-YyNc-401y-V3vC-gj8CgK
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-10-20 16:49:42 +0800
  LV snapshot status     active destination for vo
  LV Status              available
  # open                 0
  LV Size                292.00 MiB
  Current LE             73
  COW-table size         120.00 MiB
  COW-table LE           30
  Allocated to snapshot  0.01%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
2. Create a 100MB junk file in the directory where the logical volume is mounted, then check the snapshot volume's status
[root@localhost ~]# dd if=/dev/zero of=/linux/files count=1 bs=100M
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 0.480484 s, 218 MB/s
[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/vo
  LV Name                vo
  VG Name                storage
  LV UUID                JC1hOS-eRNs-zcjp-zIWL-54Wp-zS4B-XIT3Pc
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-10-20 16:24:42 +0800
  LV snapshot status     source of SNAP [active]
  LV Status              available
  # open                 1
  LV Size                292.00 MiB
  Current LE             73
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/storage/SNAP
  LV Name                SNAP
  VG Name                storage
  LV UUID                dFeWto-cMYA-eKLz-YyNc-401y-V3vC-gj8CgK
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-10-20 16:49:42 +0800
  LV snapshot status     active destination for vo
  LV Status              available
  # open                 0
  LV Size                292.00 MiB
  Current LE             73
  COW-table size         120.00 MiB
  COW-table LE           30
  Allocated to snapshot  83.71%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
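A more compact way to watch how full the snapshot is getting is the lvs command, whose Data% column reports the snapshot's allocation; this is a general LVM command, not part of the book's transcript:
[root@localhost ~]# lvs storage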
3. To verify the effect of the SNAP snapshot volume, unmount the logical volume from its directory and merge the snapshot back
[root@localhost ~]# umount /linux
[root@localhost ~]# lvconvert --merge /dev/storage/SNAP
  Merging of volume storage/SNAP started.
  storage/vo: Merged: 100.00%
4. The snapshot volume is deleted automatically, and the 100MB junk file created after the snapshot was taken has also been removed
[root@localhost ~]# mount /dev/storage/vo /linux
[root@localhost ~]# ll /linux/
total 13
drwx------ 2 root root 12288 Oct 20 16:25 lost+found
-rw-r--r-- 1 root root     9 Oct 20 16:45 readme.txt
Deleting logical volumes
When performing LVM deletion, back up important data in advance, then delete the logical volumes, the volume group, and the physical volumes in turn; the order must not be reversed.
1. Unmount the logical volume from its directory and delete the permanent entry from the configuration file
[root@localhost ~]# umount /linux
[root@localhost ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Aug 8 19:10:01 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=fcea37da-6287-42e5-9e6f-a08000a539ae /     xfs  defaults 0 0
UUID=8abbad7c-2fce-46bd-b886-1c350b525f42 /boot xfs  defaults 0 0
UUID=c1b4b7cd-7aa5-48a3-bd8e-afd715762383 swap  swap defaults 0 0
2. Delete the logical volume device
[root@localhost ~]# lvremove /dev/storage/vo
Do you really want to remove active logical volume storage/vo? [y/n]: y
  Logical volume "vo" successfully removed
3. Delete the volume group
[root@localhost ~]# vgremove storage
  Volume group "storage" successfully removed
4. Delete the physical volume devices
[root@localhost ~]# pvremove /dev/sdb /dev/sdc
  Labels on physical volume "/dev/sdb" successfully wiped.
  Labels on physical volume "/dev/sdc" successfully wiped.
5. Checking the LVM information afterwards shows that nothing is left
[root@localhost ~]# pvdisplay
[root@localhost ~]# vgdisplay
[root@localhost ~]# lvdisplay
Conclusion
Quick questions and answers
What problems is RAID mainly designed to solve?
A: RAID addresses both the read/write speed of storage devices and redundant backup of data.
At the very bottom of LVM, is it the physical volume or the volume group?
A: The physical volume is at the bottom; physical volumes are then combined into a volume group.
How many times can an LVM snapshot volume be used?
A: Only once, and it is deleted automatically after use.
What is the correct order for deleting LVM components?
A: Remove the logical volumes, then the volume group, then the physical volumes.