Lab environment: CentOS 7.7
Lab topology: three virtual machines, two compute nodes (node1, node2) and one iSCSI storage server.
Goal: live migration of KVM guests backed by iSCSI shared storage.
Networks: service network 192.168.100.0, heartbeat network 172.16.100.0, storage network 10.1.2.0
1. Set up hostname mapping
ALL: cat /etc/hosts
10.1.2.156      iscsiStorage
10.1.2.157      node1
10.1.2.158      node2
192.168.100.157 node1-yw
192.168.100.158 node2-yw
172.16.100.157  node1-xt
172.16.100.158  node2-xt
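To confirm the mappings resolve and each network is reachable, a quick loop can be run on every node (a minimal check, assuming all three networks are already configured):

for h in iscsiStorage node1 node2 node1-yw node2-yw node1-xt node2-xt; do
    ping -c1 -W1 "$h" >/dev/null && echo "$h OK" || echo "$h unreachable"
done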
2. Configure the yum repository
yum repo configuration:
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
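After writing the repo file, verify that yum can actually see the local repository:

yum clean all
yum repolist    # the [centos] repo should appear with a non-zero package count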
3. Install the virtualization packages
yum groups install -y "Virtualization Platform"
yum groups install -y "Virtualization Hypervisor"
yum groups install -y "Virtualization Tools"
yum groups install -y "Virtualization Client"
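Before going further it is worth confirming that libvirt runs and the KVM modules are loaded (a quick sanity check, not part of the original steps):

systemctl enable libvirtd
systemctl start libvirtd
lsmod | grep kvm    # kvm plus kvm_intel (or kvm_amd) should be listed
virsh nodeinfo      # should print the hypervisor's CPU and memory details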
Set up SSH mutual trust
ALL: ssh-keygen -t rsa -P ''
ALL: ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
ALL: ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
ALL: mkdir /kvm-hosts/
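To verify that passwordless login works in both directions (BatchMode makes ssh fail instead of prompting, so a leftover password prompt shows up as an error):

for n in node1 node2; do
    ssh -o BatchMode=yes root@"$n" hostname
done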
Install packages on the VMware virtual machines
ALL: yum group install virtualization-client -y
ALL: yum group install gnome-desktop -y
ALL: yum install -y tigervnc-server tigervnc
ALL: systemctl stop firewalld
ALL: systemctl disable firewalld
Install the cluster packages
ALL: yum install bash-completion ntpdate tigervnc-server iscsi-initiator-utils pacemaker corosync pcs psmisc policycoreutils-python fence-agents-all dlm lvm2-cluster gfs2-utils -y
ALL: systemctl start pcsd
ALL: systemctl enable pcsd
ALL: echo "a" | passwd --stdin hacluster
ONE node only: pcs cluster auth node1 node2 -u hacluster -p a
ONE node only: pcs cluster setup --name kvm-ha-cluster node1 node2
ONE node only: pcs cluster start --all
ONE node only: pcs cluster enable --all
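Before moving on, confirm that both nodes joined the cluster and that corosync sees two members:

pcs status            # node1 and node2 should both be listed as Online
pcs status corosync   # corosync membership should show both nodes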
Check the iSCSI initiator IQN
ALL: yum install iscsi-initiator-utils
ALL: vi /etc/iscsi/initiatorname.iscsi   # record the IQN, iqn.1994-05.com.redhat:node(x)
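Each node needs a unique IQN, and it must match the ACL entries created on the target in the next section. A sketch of node1's file (use :node2 on the second node), followed by a daemon restart so the new name takes effect:

# /etc/iscsi/initiatorname.iscsi on node1
InitiatorName=iqn.1994-05.com.redhat:node1

systemctl restart iscsid   # pick up the new IQN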
4. iSCSI server configuration
yum repo configuration:
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0
Network configuration (/etc/hosts):
10.1.2.156 iscsiStorage
10.1.2.157 node1
10.1.2.158 node2

systemctl stop firewalld
systemctl disable firewalld
***Create two partitions on /dev/sdb*** (the large one will hold VM images; the small one becomes the fence_scsi device):
[root@localhost ~]# fdisk -l
Device Boot      Start        End    Blocks  Id System
/dev/sdb1         2048   83888127  41943040  83 Linux
/dev/sdb2     83888128   85985279   1048576  83 Linux
Configure the iSCSI target
yum install -y targetcli
Run targetcli and create the objects from the matching directory each time:
# 1. block backstores
/backstores/block> create wang1 dev=/dev/sdb1
/backstores/block> create wang2 dev=/dev/sdb2
# 2. iSCSI target
/iscsi> create iqn.2019-05.wangyu.name:tomstor1
# 3. LUNs
/iscsi/iqn...:tomstor1/tpg1/luns> create /backstores/block/wang1
/iscsi/iqn...:tomstor1/tpg1/luns> create /backstores/block/wang2
# 4. ACLs (one per initiator IQN recorded earlier)
/iscsi/iqn...:tomstor1/tpg1/acls> create iqn.1994-05.com.redhat:node1
/iscsi/iqn...:tomstor1/tpg1/acls> create iqn.1994-05.com.redhat:node2
cd /
/> ls
o- / ..................................................................... [...]
  o- backstores .......................................................... [...]
  | o- block .............................................. [Storage Objects: 2]
  | | o- wang1 ....................... [/dev/sdb1 (40.0GiB) write-thru activated]
  | | | o- alua ............................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ................... [ALUA state: Active/optimized]
  | | o- wang2 ........................ [/dev/sdb2 (1.0GiB) write-thru activated]
  | |   o- alua ............................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ................... [ALUA state: Active/optimized]
  | o- fileio ............................................. [Storage Objects: 0]
  | o- pscsi .............................................. [Storage Objects: 0]
  | o- ramdisk ............................................ [Storage Objects: 0]
  o- iscsi ........................................................ [Targets: 1]
  | o- iqn.2019-05.wangyu.name:tomstor1 .............................. [TPGs: 1]
  |   o- tpg1 ........................................... [no-gen-acls, no-auth]
  |     o- acls ...................................................... [ACLs: 2]
  |     | o- iqn.1994-05.com.redhat:node1 ..................... [Mapped LUNs: 2]
  |     | | o- mapped_lun0 ............................. [lun0 block/wang1 (rw)]
  |     | | o- mapped_lun1 ............................. [lun1 block/wang2 (rw)]
  |     | o- iqn.1994-05.com.redhat:node2 ..................... [Mapped LUNs: 2]
  |     |   o- mapped_lun0 ............................. [lun0 block/wang1 (rw)]
  |     |   o- mapped_lun1 ............................. [lun1 block/wang2 (rw)]
  |     o- luns ...................................................... [LUNs: 2]
  |     | o- lun0 ................. [block/wang1 (/dev/sdb1) (default_tg_pt_gp)]
  |     | o- lun1 ................. [block/wang2 (/dev/sdb2) (default_tg_pt_gp)]
  |     o- portals ................................................ [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................. [OK]
  o- loopback ..................................................... [Targets: 0]
/> saveconfig
/> exit
systemctl start target
systemctl enable target
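The same target can also be built non-interactively; a sketch of the equivalent one-shot targetcli invocations (paths assume the default tpg1 that is created along with the target):

targetcli /backstores/block create wang1 dev=/dev/sdb1
targetcli /backstores/block create wang2 dev=/dev/sdb2
targetcli /iscsi create iqn.2019-05.wangyu.name:tomstor1
targetcli /iscsi/iqn.2019-05.wangyu.name:tomstor1/tpg1/luns create /backstores/block/wang1
targetcli /iscsi/iqn.2019-05.wangyu.name:tomstor1/tpg1/luns create /backstores/block/wang2
targetcli /iscsi/iqn.2019-05.wangyu.name:tomstor1/tpg1/acls create iqn.1994-05.com.redhat:node1
targetcli /iscsi/iqn.2019-05.wangyu.name:tomstor1/tpg1/acls create iqn.1994-05.com.redhat:node2
targetcli saveconfig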
5. Compute node configuration
ALL: iscsiadm --mode discovery --type sendtargets --portal 10.1.2.156
ALL: iscsiadm -m node -L all
ALL: fdisk -l
ONE node only: ll /dev/disk/by-id/ | grep sd   # find the WWN of the smallest LUN (sdb); it becomes the STONITH fence device, which guards against split-brain
ONE node only: pcs stonith create scsi-shooter fence_scsi pcmk_host_list="node1 node2" devices="/dev/disk/by-id/wwn-0x6001405a0dbafe526bc4a8484a66475b" meta provides=unfencing
ONE node only: pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
ALL: pcs status          # check on both nodes that the resources came up
ALL: lvmconf --enable-cluster
ALL: reboot              # after the reboot, pcs status should show everything running
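To confirm that fence_scsi actually registered its keys on the shared disk, the SCSI-3 persistent reservations can be read back (a sketch; requires the sg3_utils package and assumes the small LUN shows up as /dev/sdb):

sg_persist -n -i -k -d /dev/sdb   # list registered reservation keys; one key per node is expected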
# Add clvmd to the cluster; it provides cluster-aware logical volume management
ONE node only: pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
ONE node only: pcs constraint order start dlm-clone then clvmd-clone
ONE node only: pcs constraint colocation add clvmd-clone with dlm-clone
fdisk /dev/sdc           # create one partition and set its type to 8e (Linux LVM)
ALL: partprobe ; multipath -r
ONE node only: pvcreate /dev/sdc1
ONE node only: vgcreate vmvg0 /dev/sdc1
ONE node only: lvcreate -n lvvm0 -L 20G vmvg0    # or take all free space: lvcreate -n lvvm0 -l 100%FREE vmvg0
ONE node only: mkfs.gfs2 -p lock_dlm -j 2 -t kvm-ha-cluster:kvm /dev/vmvg0/lvvm0
# GFS2 provides cluster-wide file locking; it sits on top of the dynamically resizable storage supplied by the CLVM resource added above
ONE node only: pcs resource create VMFS Filesystem device="/dev/vmvg0/lvvm0" directory="/kvm-hosts" fstype="gfs2" clone
ONE node only: pcs constraint order clvmd-clone then VMFS-clone
ONE node only: pcs constraint colocation add VMFS-clone with clvmd-clone
ALL: semanage fcontext -a -t virt_image_t "/kvm-hosts(/.*)?"
ALL: restorecon -R -v /kvm-hosts
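Once the VMFS clone starts, the GFS2 file system should be mounted at /kvm-hosts on both nodes; a quick way to confirm:

ALL: mount -t gfs2      # lists active gfs2 mounts; /dev/vmvg0/lvvm0 on /kvm-hosts is expected
ALL: df -h /kvm-hosts   # should show roughly 20G of shared space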
6. Create a KVM virtual machine for testing:
qemu-img create -f qcow2 /kvm-hosts/web01.qcow2 10G
virt-install --name web01 --virt-type kvm --ram 1024 --cdrom=/kvm-hosts/CentOS-7-x86_64-Minimal-1810.iso --disk path=/kvm-hosts/web01.qcow2 --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=rhel7

Test a manual live migration first, then dump the guest's configuration file to the shared storage and undefine it so the cluster can take over:

virsh migrate web01 qemu+ssh://root@node2/system --live --unsafe --persistent --undefinesource
virsh dumpxml web01 > /kvm-hosts/web01.xml
virsh undefine web01
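Before handing the guest to Pacemaker, check where web01 actually ended up (uses the SSH trust set up earlier):

ssh root@node1 virsh list --all
ssh root@node2 virsh list --all   # after the migration above, web01 should be running here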
virsh define /kvm-hosts/web01.xml
pcs resource create web01_res VirtualDomain hypervisor="qemu:///system" config="/kvm-hosts/web01.xml" migration_transport="ssh" meta allow-migrate="true"
pcs constraint order start VMFS-clone then web01_res
pcs constraint colocation add web01_res with VMFS-clone
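To confirm the new resource and its constraints were created as intended:

pcs resource show web01_res   # should show the VirtualDomain settings with allow-migrate=true
pcs constraint                # lists the order and colocation rules involving web01_res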
Migration test
pcs cluster standby node2
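Putting node2 into standby evacuates its resources; with allow-migrate=true the guest should be live-migrated rather than restarted. Verify the move, then bring node2 back:

pcs status                    # web01_res should now be running on node1
pcs cluster unstandby node2   # return node2 to service once the test passes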
The migration succeeded!