A Method for Live Migration of VMs on LVM

Introduction:
Xen's built-in live migration requires the following conditions to be met:

1. Enable relocation in the Xen daemon's configuration file, /etc/xen/xend-config.sxp. The following options are commented out by default; uncomment and set them, then restart xend. Note that an empty xend-relocation-hosts-allow accepts connections from any host, so restrict it on untrusted networks:

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '')
2. Both dom0 hosts must be able to access the domU's backing files at the same paths, because the migrated state refers to its block devices by absolute path.
 
Live migration with the xm command transfers only the VM's current running state; its block devices are not touched at all. A shared storage backend such as iSCSI, DRBD, or NFS is therefore normally required.
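For reference, once shared storage is in place a live migration is a single command (the domain name and destination address below are placeholders):

# live-migrate the running domU "vm1" to the dom0 at 192.168.1.2
$ xm migrate --live vm1 192.168.1.2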
 
Note: after migration, the destination host has no corresponding domU configuration file.
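A simple workaround is to copy the domU configuration file over by hand so the guest can still be managed on the destination (the path below is the conventional Xen config location; adjust to your setup):

$ scp /etc/xen/vm1 root@192.168.1.2:/etc/xen/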
 
I found a method online that performs live migration by synchronizing the VM's LVM volumes; it is excerpted below:

The problem

The Xen documentation on live migration states:

Currently, there is no support for providing automatic remote access to filesystems stored on local disk when a domain is migrated. Administrators should choose an appropriate storage solution (i.e. SAN, NAS, etc.) to ensure that domain filesystems are also available on their destination node. GNBD is a good method for exporting a volume from one machine to another. iSCSI can do a similar job, but is more complex to set up.

This does not mean that it is impossible, though. Live migration is simply a more efficient form of migration, and any migration can be seen as a save on one node followed by a restore on another. Normally, if you save a VM on one machine and try to restore it on another machine, it will fail when it is unable to read its filesystems. But what would happen if you copied the filesystem to the other node between the save and the restore? If done right, it works pretty well.
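To make the save/restore framing concrete, these are the underlying primitives (domain name and state file are placeholders):

# on the source host: suspend the domU and dump its memory image to a file
$ xm save vm1 vm1.dump
# on the destination host, after its disks have been made available there:
$ xm restore vm1.dump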

The solution?

The solution is simple:

  • Save running image
  • Sync disks
  • Copy image to other node, restore

This can be somewhat sped up by syncing the disks twice:

  1. Sync disks
  2. Save running image
  3. Sync disks - only the changes from the last few seconds need to be transferred
  4. Copy image to other node, restore
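Put together, the whole procedure is only a few commands. The sketch below assumes blocksync.py is available on both hosts and that matching block devices already exist on the destination; the domain name, device path, and address are placeholders:

# 1. first pass: bulk-copy the disk while the guest keeps running
$ blocksync.py /dev/xen/vm1-root 192.168.1.2
# 2. suspend the guest and save its memory image
$ xm save vm1 vm1.dump
# 3. second pass: only blocks dirtied since step 1 are transferred
$ blocksync.py /dev/xen/vm1-root 192.168.1.2
# 4. ship the memory image and resume on the destination
$ scp vm1.dump 192.168.1.2:
$ ssh 192.168.1.2 'xm restore vm1.dump && rm vm1.dump'

The guest is down only for steps 2-4; the expensive first sync happens while it is still running.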

Synchronizing block devices

File backed

If you are using plain files as vbds, you can sync the disks using rsync.
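For example (image path is a placeholder; --sparse avoids expanding holes of a sparse disk image on the destination):

$ rsync -av --sparse /var/xen/vm1-root.img 192.168.1.2:/var/xen/vm1-root.img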

Raw devices

If you are using raw devices, rsync can not be used. I wrote a small utility called [[blocksync|/programs/blocksync.py]] which can synchronize two block devices over the network. In my testing it was easily able to max out the network on an initial sync, and max out the disk read speed on a resync.

$ blocksync.py /dev/xen/vm-root 1.2.3.4

This will sync /dev/xen/vm-root onto 1.2.3.4. The device must already exist on the destination and be the same size.
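blocksync does not create the target device for you. On an LVM setup, one way to prepare it is to read the exact source size and create a matching LV on the destination (the VG/LV names and the 5 GiB size shown are examples):

# on the source: exact size of the volume in bytes
$ lvs --noheadings --units b -o lv_size /dev/xen/vm-root
  5368709120B
# on the destination: create a volume of identical size in the same VG
$ lvcreate -L 5368709120B -n vm-root xen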

Solaris ZFS

If you are using ZFS, it should be possible to use zfs send to sync the block devices before migration. This would give an almost instantaneous sync time.
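A sketch of what that might look like with zfs send/receive over ssh (pool, volume, and snapshot names are placeholders):

# initial full copy while the guest is running
$ zfs snapshot tank/vm1-root@m1
$ zfs send tank/vm1-root@m1 | ssh 192.168.1.2 zfs receive tank/vm1-root
# after xm save: send only the blocks changed since @m1
$ zfs snapshot tank/vm1-root@m2
$ zfs send -i @m1 tank/vm1-root@m2 | ssh 192.168.1.2 zfs receive -F tank/vm1-root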

Automation

A simple script [[/programs/xen_migrate.sh]] and its helper [[/programs/xen_vbds.py]] will migrate a domain to another host. File and raw vbds are supported; ZFS send support is not yet implemented.
 
Notes:
     Migration requires that both hosts have a VG with the same name, and that the VG has enough free space.
     On top of the original scripts, I also added creating the LVs on the destination host to the script.
     Tested between hosts 52 and 53, migrating a VM with a 5 GB system disk, 1 GB swap, and a 4 GB data disk took 24 minutes.

See the code in the attachment for the details.
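A minimal sketch of that LV pre-creation step, assuming same-named VGs on both hosts (the device path, destination host, and VG name are placeholders; the actual logic lives in the attached script):

# create an identically sized LV on the destination for one source LV
DEV=/dev/xen/vm1-root
SIZE=$(lvs --noheadings --units b -o lv_size "$DEV" | tr -d ' ')
ssh 192.168.1.2 "lvcreate -L $SIZE -n $(basename $DEV) xen"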
     

Example:

#migrating a 1G / + 128M swap over the network
#physical machines are 350mhz with 64M of ram,
#total downtime is about 3 minutes

xen1:~# time ./migrate.sh test 192.168.1.2
+ '[' 2 -ne 2 ']'
+ DOMID=test
+ DSTHOST=192.168.1.2
++ xen_vbds.py test
+ FILES=/dev/xen/test-root
/dev/xen/test-swap
+ main
+ check_running
+ xm list test
Name Id Mem(MB) CPU State Time(s) Console
test 87 15 0 -b--- 0.0 9687
+ sync_disk
+ blocksync.py /dev/xen/test-root 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-root -b 1048576
same: 942, diff: 82, 1024/1024
+ blocksync.py /dev/xen/test-swap 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-swap -b 1048576
same: 128, diff: 0, 128/128
+ save_image
+ xm save test test.dump
+ sync_disk
+ blocksync.py /dev/xen/test-root 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-root -b 1048576
same: 1019, diff: 5, 1024/1024
+ blocksync.py /dev/xen/test-swap 192.168.1.2
ssh -c blowfish 192.168.1.2 blocksync.py server /dev/xen/test-swap -b 1048576
same: 128, diff: 0, 128/128
+ copy_image
+ scp test.dump 192.168.1.2:
test.dump 100% 16MB 3.2MB/s 00:05
+ restore_image
+ ssh 192.168.1.2 'xm restore test.dump && rm test.dump'
(domain
(id 89)
[domain info stuff cut out]
)
+ rm test.dump

real 6m6.272s
user 1m29.610s
sys 0m30.930s
Reference: http://sysadmin.wikia.com/wiki/Live_migration_xen

This article is reposted from feisky's cnblogs blog. Original link: http://www.cnblogs.com/feisky/archive/2011/11/12/2246665.html. Please contact the original author if you wish to republish it.
