pipework is a script written by one of Docker's engineers, mainly used to manage container networking.
Usage is covered below.
An article summarizing five ways to customize Docker networking counts pipework among them:
http://www.infoworld.com/article/2835222/application-virtualization/5-ways-docker-is-fixing-its-networking-woes.html
Docker's recent advances have made it the darling of startups and innovators throughout the IT world, but one pain point causes admins and developers alike to bite their nails: networking. Managing the interaction between Docker containers and networks has always been fraught with complications.
To some intrepid developers, that's a challenge and not an obstacle. Here are five of the most significant projects currently in the works to solve Docker networking issues, with the possibility that one or more of them might become part of Docker itself.
Weave
Created by Zett.io, Weave "makes the network fit the application, not the other way round," as the company CEO puts it. With Weave, Docker containers are all part of a virtual network switch no matter where they're running. Services can be selectively exposed across the network to the outside world through firewalls and using encryption for wide-area connections.
The gist: Possibly the best place to start, since it solves most of the really immediate problems in a straightforward way. But other solutions might offer more depending on your ambitions.
Kubernetes
Google's big first leap into the Docker world was an orchestration project, a way to tackle node balancing and many other valuable orchestration-related functions with Docker containers. It also provides, rather deliberately, some solutions to Docker's networking issues, though it goes only so far in that realm.
The gist: A good start as-is, but heavy lifting is still required to get the most out of it.
CoreOS, Flannel
Given that CoreOS is a project meant to architect an entire Linux distribution around Docker, it makes sense for networking to be part of the package. A project named Flannel (previously Rudder) is at the heart of how CoreOS manages container networking, connecting all the containers in a cluster via their own private mesh network. Presto, no more clumsy port mapping!
The gist: Best with CoreOS; requires Kubernetes at least.
Pipework
A solution devised by one of Docker's engineers, Pipework allows containers to be connected "in arbitrarily complex scenarios." It was devised as an interim solution and will probably end up as one.
The gist: It's best as a curiosity or a technology concept in the long run; as its creator admits, "Docker will [eventually] allow complex scenarios, and Pipework should become obsolete."
SocketPlane
Right now SocketPlane's work consists of little more than a press announcement "to bring software defined networking to Docker." The idea is to use the same devops tools used to deploy Docker to manage virtualized networks for containers, and build what amounts to an OpenDaylight/Open vSwitch fabric for Docker. It all sounds promising, but we won't be able to see any product until at least the first quarter of 2015.
The gist: Don't hold your breath until it's out.
Download the pipework script:
vi pipework.sh
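(Alternatively, fetch it directly from the repository listed in the references; the raw URL below is derived from that GitHub blob URL, so adjust it if the repository layout has changed:)
curl -o pipework.sh https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework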
[root@db-172-16-3-221 ~]# chmod 500 pipework.sh
[root@db-172-16-3-221 ~]# ./pipework.sh
Syntax:
pipework <hostinterface> [-i containerinterface] <guest> <ipaddr>/<subnet>[@default_gateway] [macaddr][@vlan]
pipework <hostinterface> [-i containerinterface] <guest> dhcp [macaddr][@vlan]
pipework --wait [-i containerinterface]
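For example (the container names and addresses below are illustrative):
# attach container db1 to Linux bridge br1 with a static address and a default gateway
./pipework.sh br1 db1 192.168.1.10/24@192.168.1.1
# attach container web1 to host interface eth0 and configure it via DHCP
./pipework.sh eth0 web1 dhcp
# run inside the guest: block until pipework has finished configuring eth1
pipework --wait -i eth1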
pipework now includes Open vSwitch support, so if you want to use openvswitch together with docker directly, pipework makes configuring the network very convenient.
Previously we had to do this configuration by hand with netns, which involved many tedious steps.
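For reference, here is a rough sketch of those manual steps (the bridge name br0 and the veth names vethA/vethB are illustrative; this assumes a Linux bridge br0 already exists and the container is named testpipework):
CONTAINER=testpipework
NSPID=$(docker inspect --format '{{ .State.Pid }}' $CONTAINER)
mkdir -p /var/run/netns
ln -s /proc/$NSPID/ns/net /var/run/netns/$NSPID    # expose the container netns to `ip netns`
ip link add vethA type veth peer name vethB        # create a veth pair
brctl addif br0 vethA                              # attach the host end to the bridge
ip link set vethA up
ip link set vethB netns $NSPID                     # move the guest end into the container
ip netns exec $NSPID ip link set vethB name eth1
ip netns exec $NSPID ip addr add 172.0.42.2/24 dev eth1
ip netns exec $NSPID ip link set eth1 up
rm -f /var/run/netns/$NSPID                        # clean up the symlink again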
Note that pipework decides whether a link is a Linux bridge or an OVS switch by its name: names starting with br are treated as Linux bridges, names starting with ovs as Open vSwitch bridges.
pipework automatically creates the veth peer, configures the netns, checks for duplicate IP addresses, sets up routing, and so on; when configuration is done it also removes the netns symlink, which is why nothing shows up under ip netns.
The link-name detection in the script looks like this:
else
  # Infer the interface type from the link name: br* -> Linux bridge, ovs* -> Open vSwitch
  case "$IFNAME" in
  br*)
    IFTYPE=bridge
    BRTYPE=linux
    ;;
  ovs*)
    if ! which ovs-vsctl >/dev/null
    then
      echo "Need OVS installed on the system to create an ovs bridge"
      exit 1
    fi
    IFTYPE=bridge
    BRTYPE=openvswitch
    ;;
  *)
    echo "I do not know how to setup interface $IFNAME."
    exit 1
    ;;
  esac
At the end of the script, pipework nudges the ARP neighbors and then removes the netns symlink, which is why ip netns shows nothing afterwards:
# Give our ARP neighbors a nudge about the new interface
if which arping > /dev/null 2>&1
then
  IPADDR=$(echo $IPADDR | cut -d/ -f1)
  ip netns exec $NSPID arping -c 1 -A -I $CONTAINER_IFNAME $IPADDR > /dev/null 2>&1 || true
else
  echo "Warning: arping not found; interface may not be immediately reachable"
fi
# Remove the NSPID symlink so that `ip netns` does not pick it up.
[ -f /var/run/netns/$NSPID ] && rm -f /var/run/netns/$NSPID
exit 0
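This is easy to confirm after pipework has run (assuming no other namespaces are registered on the host):
[root@localhost ~]# ip netns list
(no output: the symlink pipework created under /var/run/netns is already gone)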
Test as follows.
Create a virtual switch:
[root@localhost ~]# ovs-vsctl add-br ovs0
[root@localhost ~]# ip link set ovs0 up
[root@localhost ~]# ip addr add 172.0.42.1/24 dev ovs0
[root@localhost ~]# ovs-vsctl show
f345b7e3-fcb0-4ef3-8295-36d3ef69ceef
    Bridge "ovs0"
        Port "ovs0"
            Interface "ovs0"
                type: internal
    ovs_version: "2.3.0"
Create a container without specifying a network (--net=none):
[root@localhost ~]# docker run --rm -t -i --net=none --name=testpipework centos:centos7 /bin/bash
[root@7eb07fe4bbf8 /]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Use pipework to configure the container's IP address; this automates the netns setup that previously had to be done by hand:
[root@localhost ~]# ./pipework.sh ovs0 testpipework 172.0.42.2/24
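On the host side, pipework has now attached a new port to ovs0. The port name is generated from the container interface name and the container's PID, so the one shown below is illustrative:
[root@localhost ~]# ovs-vsctl list-ports ovs0
veth1pl28917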
The newly added interface can be seen inside the container:
[root@7eb07fe4bbf8 /]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
56: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 3e:15:74:c0:64:21 brd ff:ff:ff:ff:ff:ff
The network is now working:
[root@7eb07fe4bbf8 /]# ping 172.0.42.1
PING 172.0.42.1 (172.0.42.1) 56(84) bytes of data.
64 bytes from 172.0.42.1: icmp_seq=1 ttl=64 time=0.369 ms
64 bytes from 172.0.42.1: icmp_seq=2 ttl=64 time=0.055 ms
^C
--- 172.0.42.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.055/0.212/0.369/0.157 ms
The route has been added as well:
[root@7eb07fe4bbf8 /]# ip route
172.0.42.0/24 dev eth1 proto kernel scope link src 172.0.42.2
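Only the connected route appears here because no gateway was given. Appending one with the @default_gateway suffix from the syntax shown earlier would also install a default route, e.g. (illustrative):
./pipework.sh ovs0 testpipework 172.0.42.2/24@172.0.42.1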
[References]
7. pipework shell script
https://github.com/jpetazzo/pipework/blob/master/pipework