Preface:
One of the most important parts of virtualization is network configuration. After all, virtual machines are built to be used and managed, and a VM whose network is broken can be neither used nor managed.
Two concepts need to be introduced up front, because they run through all of KVM virtualization.
First, a virtual machine has to be installed inside a host environment. The physical machine is called the host, and the virtual machine is called the guest.
The host (simply "the host" from here on) uses the libvirtd service to carve a number of guests, i.e. virtual machines, out of its own hardware resources, such as CPU cores, disk space, and memory, and then manages and configures those guests through the management interfaces that libvirt provides. KVM management activities generally mean starting and stopping guests, scaling them up or down, configuring resources and networking, cloning, and building template machines, as the command list below illustrates.
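Each of those activities maps onto one or two libvirt commands. As a quick reference, a few representative ones are listed here; the domain name vm1 is a placeholder:

virsh list --all                   # list all guests and their states
virsh start vm1                    # start a guest
virsh shutdown vm1                 # gracefully stop a guest
virsh setvcpus vm1 4 --config      # scale vCPUs (takes effect on next boot)
virsh setmem vm1 2097152 --config  # scale memory to 2 GiB (value in KiB)
virt-clone -o vm1 -n vm1-clone --auto-clone   # clone a guest (source must be shut off)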
And because host and guest are so tightly coupled, network configuration is also done on the host. After all, a cook can only make dishes out of the ingredients you supply; you cannot ask a cook to conjure a meal from nothing.
KVM network models:
1. NAT network mode
First, look at the network configuration of a host on which the KVM environment has just been installed:
[root@slave1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.17/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee9:9e89/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:a5:21:b4:7d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff
Clearly, this server has two services installed: a Docker environment and a KVM environment. The docker0 interface is the virtual NIC dedicated to the Docker service, so it is ignored here. virbr0 and virbr0-nic are the two virtual NICs created by the libvirt service of the KVM environment. The host itself has only one physical NIC, ens33, plus the loopback interface lo.
Note that virbr0 is a purely virtual NIC: there is no configuration file for it in the directory where Linux normally keeps NIC configuration files. The interface is managed entirely by the libvirtd service, and its job is to provide guests with NAT networking. Note: NAT, not bridged.
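You can verify this on the host: the network-scripts directory contains no ifcfg file for virbr0 (a quick check, assuming the stock CentOS 7 layout; your directory may hold additional files):

[root@slave1 ~]# ls /etc/sysconfig/network-scripts/ifcfg-*
/etc/sysconfig/network-scripts/ifcfg-ens33  /etc/sysconfig/network-scripts/ifcfg-lo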
If no network mode is specified when a guest is installed, that is, the default network is used, the guest will be attached to virbr0 and all of its traffic will flow through that virtual NIC. This causes a rather serious problem: with this kind of network, only the host and its guests can talk to each other; other servers on the host's subnet cannot reach the guests at all (say the host is server A and servers B, C, ... sit on the same subnet: B, C, ... cannot access the virtual machines inside A), simply because the guests sit behind a NAT interface.
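The default NAT network itself can be inspected with virsh. The output below is a sketch of what it typically looks like on a stock libvirt install (uuid and mac lines trimmed; the DHCP range may differ on your machine). The forward mode='nat' element is what makes this a NAT network:

[root@master ~]# virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

[root@master ~]# virsh net-dumpxml default
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>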
[root@master kvm-1.5.3]# virt-install --help |grep net
  --pxe                 Boot from the network using the PXE protocol
  -w NETWORK, --network NETWORK
                        Configure a guest network interface. Ex:
                        --network bridge=mybr0
                        --network network=my_libvirt_virtual_net
                        --network network=mynet,model=virtio,mac=00:11...
                        --network none
                        --network help
For example, install a guest like this:
virt-install --virt-type kvm --name centos --ram 1024 \
  --disk /opt/CentOS-7-x86_64-GenericCloud-1905.qcow2,format=qcow2 \
  --network network=default \
  --graphics vnc,listen=0.0.0.0 --vncport=5922 --noautoconsole \
  --os-type=linux --os-variant=centos7.0 --boot hd
Here network=default means the NAT network mode is used: default is the name of the NAT network that libvirt creates out of the box.
This example instead uses an XML configuration file to start a guest in NAT network mode. The file contents are as follows:
[root@master opt]# cat ~/linux_mini.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit linux_mini
or other application using the libvirt API.
-->
<domain type='kvm'>
  <name>newer</name>
  <uuid>187ca777-a965-4777-8e95-c1f0cfe2a363</uuid>
  <memory unit='KiB'>548576</memory>
  <currentMemory unit='KiB'>548576</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/opt/newer.linux.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:89:52:23'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5992' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </rng>
  </devices>
</domain>
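With the file in place, the guest is brought up the usual way; virsh prints a one-line confirmation for each step:

[root@master opt]# virsh define ~/linux_mini.xml
Domain newer defined from /root/linux_mini.xml

[root@master opt]# virsh start newer
Domain newer started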
After connecting to the guest through a VNC client, querying its IP address shows that it was assigned 192.168.122.19.
However, 192.168.122.19 cannot be reached with an SSH tool such as Xshell from another machine. The root cause is that in NAT network mode the 192.168.122.0/24 subnet exists only behind the host: other machines have no route to it, so traffic to the guest cannot be delivered.
This guest was installed on the host 192.168.217.16, so from the other host, 192.168.217.17, 192.168.122.19 cannot be pinged at all; the only way to reach the guest is through the VNC endpoint 192.168.217.16:5992.
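If a NAT guest really must accept connections from outside, a common workaround is a DNAT rule on the host that forwards a host port to the guest. A minimal sketch, assuming the guest above (192.168.122.19, sshd on port 22) and an arbitrarily chosen host port 2222:

# on the host 192.168.217.16
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.19:22
# insert at the top with -I, because libvirt adds its own REJECT rules to the FORWARD chain
iptables -I FORWARD -d 192.168.122.19 -p tcp --dport 22 -j ACCEPT

After that, ssh -p 2222 root@192.168.217.16 from any machine on the 192.168.217.0/24 subnet lands on the guest.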
2. Bridge network mode
Bridge mode requires a bridge device on the host, with one of the host's real physical NICs enslaved to it. When installing a guest, you simply point it at that bridge.
For example, if the host's IP address is 192.168.217.17 and the physical NIC is named ens33, the configuration should look like this:
[root@slave1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
BRIDGE="br0"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="ens33"
UUID="d4876b9f-42d8-446c-b0ae-546e812bc954"
DEVICE="ens33"
ONBOOT="yes"

[root@slave1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br0
TYPE="Bridge"
NAME="br0"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
UUID="a276650e-af08-4270-8bac-08aa6197f2bc"
DEVICE="br0"
ONBOOT="yes"
PREFIX="24"
IPADDR=192.168.217.17
NETMASK=255.255.255.0
GATEWAY=192.168.217.2
DNS1=61.128.114.166
DNS2=8.8.8.8
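On CentOS 7 these ifcfg changes take effect after restarting the network service; expect a brief interruption while the IP address moves from ens33 to br0:

[root@slave1 ~]# systemctl restart network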
After restarting the network, the NICs look like this:
[root@slave1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 1000
    link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fee9:9e89/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:a5:21:b4:7d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:f3:93:e0 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.17/24 brd 192.168.217.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee9:9e89/64 scope link
       valid_lft forever preferred_lft forever
10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN qlen 1000
    link/ether fe:54:00:80:06:c6 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe80:6c6/64 scope link
       valid_lft forever preferred_lft forever
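Note that 192.168.217.17 now lives on br0, ens33 carries no IP and is marked master br0, and vnet0 is the tap device of a running guest, also enslaved to br0. The bridge membership can be confirmed with brctl, and a new guest can then be installed directly against the bridge. The commands below are a sketch: the brctl output is typical rather than captured, and the virt-install line mirrors the earlier NAT example with only the --network argument changed (the name centos-br is a placeholder):

[root@slave1 ~]# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c29e99e89       no              ens33
                                                        vnet0

[root@slave1 ~]# virt-install --virt-type kvm --name centos-br --ram 1024 \
  --disk /opt/CentOS-7-x86_64-GenericCloud-1905.qcow2,format=qcow2 \
  --network bridge=br0 \
  --graphics vnc,listen=0.0.0.0 --noautoconsole \
  --os-type=linux --os-variant=centos7.0 --boot hd

A guest attached to br0 obtains an address on the 192.168.217.0/24 subnet (via DHCP or static configuration inside the guest) and is reachable from every machine on that subnet, which is exactly what NAT mode could not offer.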