1. Environment
The virtual machine was created on ESXi 5.5 and runs CentOS 6.4. It originally had a single NIC, eth0, configured by ifcfg-eth0. To provide network redundancy, a second NIC was added to the VM through the VMware vSphere Client; it is configured by ifcfg-eth1.
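Before changing anything, it is worth confirming that the new NIC is visible inside the guest (a minimal sketch; the device names eth0 and eth1 are assumed to match this setup):
ip link show          # list all network interfaces
ls /sys/class/net/    # should show eth0, eth1 and lo
If the second NIC does not appear, it may be necessary to check /etc/udev/rules.d/70-persistent-net.rules and reboot the VM.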
2. Dual-NIC bonding steps
2.1 Edit /etc/sysconfig/network-scripts/ifcfg-eth0 so that it contains the following:
DEVICE=eth0
HWADDR=78:2B:CB:30:66:29 # MAC address of the NIC; may be omitted
TYPE=Ethernet # may be omitted
ONBOOT=yes # bring the device up automatically at boot
SLAVE=yes
MASTER=bond0
BOOTPROTO=none # do not use any boot protocol on the slave
2.2 Edit /etc/sysconfig/network-scripts/ifcfg-eth1 so that it contains the following:
DEVICE=eth1
HWADDR=78:2B:CB:30:66:2B # MAC address of the NIC; may be omitted
TYPE=Ethernet # may be omitted
ONBOOT=yes # bring the device up automatically at boot
SLAVE=yes
MASTER=bond0
BOOTPROTO=none # do not use any boot protocol on the slave
2.3 Create a configuration file for the bond interface, /etc/sysconfig/network-scripts/ifcfg-bond0, with the following content:
DEVICE=bond0
TYPE=Ethernet
ONBOOT=yes
BONDING_OPTS="miimon=100 mode=0"
# mode=0 is the "round-robin" policy: load balancing, both NICs carry traffic
# mode=1 is the "active-backup" policy: redundancy, only one NIC is active; if it fails, the other takes over
# Alternatively, append the following two lines to the end of /etc/modprobe.d/dist.conf:
# alias bond0 bonding
# options bond0 miimon=100 mode=1
BOOTPROTO=static
IPADDR=10.240.210.233
NETMASK=255.255.255.0
GATEWAY=10.240.210.4
DNS1=8.8.8.8
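After saving the three files, the bonding driver can be loaded and checked by hand (a minimal sketch; on CentOS 6 the network scripts normally load the module themselves, so this is only a sanity check):
modprobe bonding                 # load the bonding kernel module
lsmod | grep bonding             # confirm it is loaded
modinfo bonding | grep -i mode   # list the supported mode options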
2.4 Edit /etc/rc.local so that the two physical NICs are enslaved to the bond interface at boot:
ifenslave bond0 eth0 eth1
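On CentOS 6 the network initscripts normally enslave eth0 and eth1 to bond0 based on the MASTER/SLAVE directives above, so this line acts mainly as a safety net. One way to add it is (a minimal sketch):
echo "ifenslave bond0 eth0 eth1" >> /etc/rc.local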
3. Restart the network service to apply the changes
service network restart
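Once the service comes back up, the bond can be verified (a minimal sketch; the gateway address is the one configured above):
cat /proc/net/bonding/bond0   # bonding mode, slave list and link status
ip addr show bond0            # bond0 should hold the IP address, the slaves should not
ping -c 3 10.240.210.4        # confirm connectivity to the gateway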
4. Test result
Shutting down either NIC does not interrupt the server's network connectivity.
5. Example
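A simple way to reproduce this test (a minimal sketch; run the ping from another host or a second terminal):
ping 10.240.210.233            # continuous ping against the bond's IP from another machine
ifdown eth0                    # on the server, drop one slave; the ping should keep answering
cat /proc/net/bonding/bond0    # check the status of the remaining slave
ifup eth0                      # restore the slave when finished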
cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Ethernet
ONBOOT=yes
BONDING_OPTS="miimon=100 mode=1"
BOOTPROTO=static
IPADDR=10.240.210.60
PREFIX=24
GATEWAY=10.240.210.4
DNS1=10.240.210.61
DNS2=10.240.210.62
[Testing]
Mode 1 (mode=1): redundancy (active-backup). Only one NIC is active at a time; if it fails, the other takes over. One slave is selected as the active interface by default.
Check which slave is currently active:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:ec
Slave queue ID: 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:e2
Slave queue ID: 0
Shut down eth1 with ifdown eth1, then check again; the active slave is now eth0:
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:e2
Slave queue ID: 0
[Summary]
The bond initially used eth1 as the active slave; after eth1 was shut down, traffic moved to eth0.
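To return to the original state after this test (a minimal sketch):
ifup eth1                     # bring the slave back into the bond
cat /proc/net/bonding/bond0   # eth1 is listed again; the active slave may remain eth0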
Mode 0 (mode=0): load balancing (round-robin); both NICs carry traffic.
In the bond configuration file, change mode=1 to mode=0 in BONDING_OPTS="miimon=100 mode=1", as sketched below.
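One way to make that change and apply it (a minimal sketch; the file can also be edited by hand):
sed -i 's/mode=1/mode=0/' /etc/sysconfig/network-scripts/ifcfg-bond0
grep BONDING_OPTS /etc/sysconfig/network-scripts/ifcfg-bond0   # verify the new options
service network restart                                        # reload the bond with the new mode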
Then check the bond status:
root@oldboy network-scripts$cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:ec
Slave queue ID: 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:e2
Slave queue ID: 0
Both NICs are now working at the same time.
Shutting down either NIC does not affect normal communication with the server, as the following output shows.
root@oldboy network-scripts$ifdown eth0
root@oldboy network-scripts$cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:ec
Slave queue ID: 0
root@oldboy network-scripts$ifup eth0
root@oldboy network-scripts$cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:ec
Slave queue ID: 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:e2
Slave queue ID: 0
root@oldboy network-scripts$ifdown eth1
root@oldboy network-scripts$cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:6b:63:e2
Slave queue ID: 0
This article was reposted from the 51CTO blog of xoyabc. Original link: http://blog.51cto.com/xoyabc/1697525. Please contact the original author for permission to republish.