I tested this on CentOS 6.5 x64; I have not tested it on CentOS 7.
Open vSwitch is a solid open-source virtual switching package, but Docker currently cannot use an OVS-managed bridge directly, presumably because the OVS interface has not been implemented yet.
On CentOS, Docker currently supports bridges managed by the bridge-utils package, i.e. the familiar brctl.
If you want to use Open vSwitch anyway, you can do it like this:
1. Start the container with no networking (--net=none), so Docker skips the bridge-related calls that would otherwise fail.
2. Create an interface with OVS.
3. Assign the interface to the container's process (an iproute2 feature) and configure the IP and so on with ip netns exec ..... (a rough sketch follows below).
For details, see:
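As a rough sketch of steps 1-3 (the OVS bridge ovs_br is assumed to already exist; the container name test, the port name port_test and the 10.1.1.0/8 address are placeholders for illustration):
# 1. start the container with no networking
docker run --net=none -d --name test centos:centos6 /bin/sleep 86400
# 2. create an internal port on the existing OVS bridge ovs_br
ovs-vsctl add-port ovs_br port_test -- set interface port_test type=internal
# 3. move the port into the container's network namespace and configure it there
pid=$(docker inspect --format '{{ .State.Pid }}' test)
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/$pid
ip link set port_test netns $pid
ip netns exec $pid ip link set port_test name eth0
ip netns exec $pid ip addr add 10.1.1.2/8 dev eth0
ip netns exec $pid ip link set eth0 up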
First, let's look at how to start the docker service with a custom bridge.
By default, after the docker-io package is installed, a bridge named docker0 is created automatically with the address 172.17.42.1/16.
The docker0 address looks like this:
[root@db-172-16-3-221 ~]# ip addr show docker0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
inet6 fe80::c814:5eff:fee0:1759/64 scope link
valid_lft forever preferred_lft forever
To change the IP of the default docker0 bridge, you can do the following:
# Stop the docker service
[root@db-172-16-3-221 ~]# service docker stop
# Delete the old IP
[root@db-172-16-3-221 ~]# ip addr del 172.17.42.1/16 dev docker0
# Add the new IP
[root@db-172-16-3-221 ~]# ip addr add 10.0.0.1/8 dev docker0
# Check
[root@db-172-16-3-221 ~]# ip addr show docker0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/8 scope global docker0
inet6 fe80::200:ff:fe00:0/64 scope link
valid_lft forever preferred_lft forever
# Start the docker service again
[root@db-172-16-3-221 ~]# service docker start
Starting docker: [ OK ]
# Start a container and check its newly assigned IP; it is now allocated from the new range.
[root@db-172-16-3-221 ~]# docker run --rm -i -t --name digoal centos:centos6 /bin/bash
bash-4.1# ifconfig
eth0 Link encap:Ethernet HWaddr 02:F9:8E:61:46:75
inet addr:10.0.0.2 Bcast:0.0.0.0 Mask:255.0.0.0
inet6 addr: fe80::f9:8eff:fe61:4675/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:328 (328.0 b) TX bytes:418 (418.0 b)
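Note that an address added this way does not survive a reboot. As a rough alternative sketch (assuming a docker version that supports --bip and an init script that sources /etc/sysconfig/docker and passes $other_args to the daemon), the bridge CIDR can be given to the daemon directly:
# /etc/sysconfig/docker (example; adjust to your init script)
other_args="--bip=10.0.0.1/8"
# restart to apply
service docker restart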
To make docker use a bridge we define ourselves, start the daemon with -b to specify the bridge.
# Stop the service
[root@db-172-16-3-221 ~]# service docker stop
Stopping docker: [ OK ]
# Create the new bridge
[root@db-172-16-3-221 ~]# brctl addbr br_digoal
# Add an address to the new bridge
[root@db-172-16-3-221 ~]# ip addr add 172.18.0.1/16 dev br_digoal
# Bring the new bridge link up
[root@db-172-16-3-221 ~]# ip link set dev br_digoal up
# Edit the docker init script and add the -b="br_digoal" option
[root@db-172-16-3-221 ~]# vi /etc/init.d/docker
start() {
.......................
# modify the following line to specify the bridge
$exec -d -b="br_digoal" -g /data01/docker &>> $logfile &
# Start the docker service.
[root@db-172-16-3-221 ~]# service docker start
Starting docker: [ OK ]
# Start a container; it now gets an address allocated from the new bridge
[root@db-172-16-3-221 ~]# docker run --rm -i -t --name digoal centos:centos6 /bin/bash
bash-4.1# ifconfig
eth0 Link encap:Ethernet HWaddr 9E:48:5B:1D:C2:28
inet addr:172.18.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::9c48:5bff:fe1d:c228/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:238 (238.0 b) TX bytes:418 (418.0 b)
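Side note: the bridge created above with brctl/ip also disappears on reboot, and edits to /etc/init.d/docker can be lost on a package upgrade. A rough sketch of a more persistent setup on CentOS 6 (assuming your init script sources /etc/sysconfig/docker and passes $other_args to the daemon):
# /etc/sysconfig/network-scripts/ifcfg-br_digoal (example layout)
DEVICE=br_digoal
TYPE=Bridge
BOOTPROTO=static
IPADDR=172.18.0.1
NETMASK=255.255.0.0
ONBOOT=yes
# /etc/sysconfig/docker
other_args="-b=br_digoal -g /data01/docker"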
But what if we want Open vSwitch to manage docker's virtual network? Does this approach still work?
Let's give it a try:
# Create a new bridge with OVS
[root@150 ~]# ovs-vsctl add-br ovs_digoal
[root@150 ~]# ovs-vsctl list-br
ovs_digoal
[root@150 ~]# ovs-vsctl show
fb8f1987-c908-4a8f-bc2c-1f8c19f034fc
Bridge ovs_digoal
Port ovs_digoal
Interface ovs_digoal
type: internal
ovs_version: "1.9.3"
[root@150 ~]# ip link show ovs_digoal
81: ovs_digoal: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 4e:60:c1:91:55:44 brd ff:ff:ff:ff:ff:ff
# Add an IP address
[root@150 ~]# ip addr add 10.1.1.1/8 dev ovs_digoal
# Bring the link up
[root@150 ~]# ip link set dev ovs_digoal up
[root@150 ~]# ip link show ovs_digoal
81: ovs_digoal: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 4e:60:c1:91:55:44 brd ff:ff:ff:ff:ff:ff
[root@150 ~]# ip addr show ovs_digoal
81: ovs_digoal: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 4e:60:c1:91:55:44 brd ff:ff:ff:ff:ff:ff
inet 10.1.1.1/8 scope global ovs_digoal
inet6 fe80::4c60:c1ff:fe91:5544/64 scope link
valid_lft forever preferred_lft forever
# Start the docker daemon in a terminal, with -b specifying the bridge and -D enabling debug output
# The daemon starts up fine
[root@150 ~]# docker -d -b="ovs_digoal" -D
2014/11/07 15:35:55 WARNING: You are running linux kernel version 2.6.32-431.23.3.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/11/07 15:35:55 docker daemon: 1.1.2 d84a070/1.1.2; execdriver: native; graphdriver:
[9c7f3256] +job serveapi(unix:///var/run/docker.sock)
[9c7f3256] +job initserver()
[9c7f3256.initserver()] Creating server
2014/11/07 15:35:55 Listening for HTTP on unix (/var/run/docker.sock)
.............
# But when we start a container, it fails,
# because starting a container requires Docker to operate on the bridge through the kernel (brctl-style) bridge interface, e.g. to add the container's veth interface to it, and an OVS bridge does not support those calls
[root@150 ~]# docker run --rm -t -i digoal/centos6:6.6 /bin/bash
2014/11/07 15:36:46 Error response from daemon: Cannot start container ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: operation not supported
# The error messages from the docker daemon are as follows.
[debug] server.go:1022 Calling POST /containers/create
2014/11/07 15:36:46 POST /v1.13/containers/create
[9c7f3256] +job create()
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:448 libdevmapper(3): ioctl/libdm-iface.c:1742 (-1) device-mapper: message ioctl on docker-8:33-3407874-pool failed: File exists
[debug] deviceset.go:252 registerDevice(8, ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init)
[debug] deviceset.go:278 activateDeviceIfNeeded(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init)
[debug] deviceset.go:252 registerDevice(9, ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] deviceset.go:992 [devmapper] UnmountDevice(hash=ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init)
[debug] deviceset.go:1015 [devmapper] Unmount(/data03/docker/devicemapper/mnt/ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init)
[debug] deviceset.go:1019 [devmapper] Unmount done
[debug] deviceset.go:773 [devmapper] deactivateDevice(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init)
[debug] deviceset.go:867 Waiting for unmount of ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init: opencount=0
[debug] devmapper.go:545 [devmapper] removeDevice START
[debug] devmapper.go:559 [devmapper] removeDevice END
[debug] deviceset.go:829 [deviceset docker-8:33-3407874] waitRemove(docker-8:33-3407874-ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init)
[debug] deviceset.go:840 Waiting for removal of docker-8:33-3407874-ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init: exists=0
[debug] deviceset.go:854 [deviceset docker-8:33-3407874] waitRemove(docker-8:33-3407874-ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea-init) END
[debug] deviceset.go:793 [devmapper] deactivateDevice END
[debug] deviceset.go:1028 [devmapper] UnmountDevice END
[9c7f3256] -job create() = OK (0)
[debug] server.go:1022 Calling POST /containers/{name:.*}/attach
2014/11/07 15:36:46 POST /v1.13/containers/ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea/attach?stderr=1&stdin=1&stdout=1&stream=1
[9c7f3256] +job container_inspect(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[9c7f3256] -job container_inspect(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea) = OK (0)
[9c7f3256] +job attach(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] attach.go:22 attach: stdin: begin
[debug] attach.go:59 attach: stdout: begin
[debug] server.go:1022 Calling POST /containers/{name:.*}/start
2014/11/07 15:36:46 POST /v1.13/containers/ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea/start
[9c7f3256] +job start(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] deviceset.go:278 activateDeviceIfNeeded(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] attach.go:97 attach: stderr: begin
[debug] attach.go:143 attach: waiting for job 1/3
[9c7f3256] +job allocate_interface(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[9c7f3256] -job allocate_interface(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea) = OK (0)
[error] container.go:475 Error running container: operation not supported
[9c7f3256] +job release_interface(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[9c7f3256] -job release_interface(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea) = OK (0)
[error] container.go:522 ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: Error closing terminal: invalid argument
[debug] deviceset.go:992 [devmapper] UnmountDevice(hash=ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] deviceset.go:1015 [devmapper] Unmount(/data03/docker/devicemapper/mnt/ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] attach.go:76 attach: stdout: end
[debug] attach.go:114 attach: stderr: end
[debug] attach.go:148 attach: job 1 completed successfully
[debug] attach.go:143 attach: waiting for job 2/3
[debug] attach.go:148 attach: job 2 completed successfully
[debug] attach.go:143 attach: waiting for job 3/3
[debug] server.go:2344 Closing buffered stdin pipe
[debug] attach.go:49 attach: stdin: end
[debug] attach.go:148 attach: job 3 completed successfully
[debug] attach.go:150 attach: all jobs completed successfully
[9c7f3256] -job attach(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea) = OK (0)
[debug] deviceset.go:1019 [devmapper] Unmount done
[debug] deviceset.go:773 [devmapper] deactivateDevice(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] deviceset.go:867 Waiting for unmount of ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: opencount=0
[debug] devmapper.go:545 [devmapper] removeDevice START
[debug] devmapper.go:559 [devmapper] removeDevice END
[debug] deviceset.go:829 [deviceset docker-8:33-3407874] waitRemove(docker-8:33-3407874-ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] deviceset.go:840 Waiting for removal of docker-8:33-3407874-ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: exists=0
[debug] deviceset.go:854 [deviceset docker-8:33-3407874] waitRemove(docker-8:33-3407874-ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea) END
[debug] deviceset.go:793 [devmapper] deactivateDevice END
[debug] deviceset.go:1028 [devmapper] UnmountDevice END
[9c7f3256] +job release_interface(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[9c7f3256] -job release_interface(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea) = OK (0)
[error] container.go:522 ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: Error closing terminal: invalid argument
[debug] deviceset.go:992 [devmapper] UnmountDevice(hash=ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea)
[debug] deviceset.go:1028 [devmapper] UnmountDevice END
[error] driver.go:140 Warning: error unmounting device ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: UnmountDevice: device not-mounted id ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea
Cannot start container ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: operation not supported
[9c7f3256] -job start(ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea) = ERR (1)
[error] server.go:1048 Error making handler: Cannot start container ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: operation not supported
[error] server.go:90 HTTP Error: statusCode=500 Cannot start container ea41cc0b958fb35bb130db076719935085426cdbe9469edb1eafdf8315b583ea: operation not supported
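The "operation not supported" error comes from the daemon trying to add the container's veth interface to the bridge through the kernel bridge interface, which an OVS bridge does not implement. A rough way to reproduce the same failure by hand, assuming iproute2 can create a veth pair (the interface names are placeholders):
# create a throwaway veth pair
ip link add name veth_a type veth peer name veth_b
# enslaving it via the kernel bridge interface fails on an OVS bridge,
# typically with something like: can't add veth_a to bridge ovs_digoal: Operation not supported
brctl addif ovs_digoal veth_a
# the same interface can be attached through OVS itself
ovs-vsctl add-port ovs_digoal veth_a
# clean up
ovs-vsctl del-port ovs_digoal veth_a
ip link del veth_a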
If you want OVS to manage the virtual switching, use it one layer below Docker rather than as Docker's bridge directly.
Docker keeps using the bridge-utils-managed docker0, and docker0 is then connected to the OVS-managed bridge br0 (br0 is added as a port of docker0). That is all it takes.
An example script follows:
# From http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/
# Edit this variable: the 'other' host.
REMOTE_IP=188.226.138.185
# Edit this variable: the bridge address on 'this' host.
BRIDGE_ADDRESS=172.16.42.1/24
# Name of the bridge (should match /etc/default/docker).
BRIDGE_NAME=docker0
# bridges
# Deactivate the docker0 bridge
ip link set $BRIDGE_NAME down
# Remove the docker0 bridge
brctl delbr $BRIDGE_NAME
# Delete the Open vSwitch bridge
ovs-vsctl del-br br0
# Add the docker0 bridge
brctl addbr $BRIDGE_NAME
# Set up the IP for the docker0 bridge
ip a add $BRIDGE_ADDRESS dev $BRIDGE_NAME
# Activate the bridge
ip link set $BRIDGE_NAME up
# Add the br0 Open vSwitch bridge
ovs-vsctl add-br br0
# Create the tunnel to the other host and attach it to the
# br0 bridge
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=$REMOTE_IP
# Add the br0 bridge to docker0 bridge
brctl addif $BRIDGE_NAME br0
# iptables rules
# Enable NAT
iptables -t nat -A POSTROUTING -s 172.16.42.0/24 ! -d 172.16.42.0/24 -j MASQUERADE
# Accept incoming packets for existing connections
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Accept all non-intercontainer outgoing packets
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
# Allow container-to-container traffic by default
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
# Restart Docker daemon to use the new BRIDGE_NAME
service docker restart
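To finish the setup, run the same script on the other host with REMOTE_IP pointing back at this host and a different BRIDGE_ADDRESS in the same subnet (e.g. 172.16.42.2/24), then check the wiring, roughly like this:
# br0 should list a gre0 port of type gre
ovs-vsctl show
# br0 should appear as a port of the docker0 bridge
brctl show docker0
# ping the bridge address configured on the other host (example value)
ping -c 3 172.16.42.2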