Play with Docker 1.12

Introduction

Docker v1.12 brings integrated orchestration into the Docker Engine.

Starting with Docker 1.12, we have added features to the core Docker Engine to make multi-host and multi-container orchestration easy. We’ve added new API objects, like Service and Node, that will let you use the Docker API to deploy and manage apps on a group of Docker Engines called a swarm. With Docker 1.12, the best way to orchestrate Docker is Docker!


Playing on GCE

Create swarm-manager:

gcloud init
docker-machine create swarm-manager \
    --engine-install-url experimental.docker.com \
    -d google \
    --google-machine-type n1-standard-1 \
    --google-zone us-central1-f \
    --google-disk-size "500" \
    --google-tags swarm-cluster \
    --google-project k8s-dev-prj

Check what version has been installed:

$ eval $(docker-machine env swarm-manager)
$ docker version
Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 21:07:35 2016
 OS/Arch:      linux/amd64
 Experimental: true

Create a worker node:

docker-machine create swarm-worker-1 \
    --engine-install-url experimental.docker.com \
    -d google \
    --google-machine-type n1-standard-1 \
    --google-zone us-central1-f \
    --google-disk-size "500" \
    --google-tags swarm-cluster \
    --google-project k8s-dev-prj

Initialize the swarm

# init manager
eval $(docker-machine env swarm-manager)
docker swarm init

Under the hood this creates a Raft consensus group of one node. This first node has the role of manager, meaning it accepts commands and schedules tasks. As you join more nodes to the swarm, they will by default be workers, which simply execute containers dispatched by the manager. You can optionally add additional manager nodes. The manager nodes will be part of the Raft consensus group. We use an optimized Raft store in which reads are serviced directly from memory, which makes scheduling performance fast.
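
For instance, once a worker has joined (see the next step), it can be promoted to a manager from an existing manager node. A small illustration, not required for this walkthrough:

docker node promote swarm-worker-1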

# join worker
eval $(docker-machine env swarm-worker-1)
manager_ip=$(gcloud compute instances list | awk '/swarm-manager/{print $4}')
docker swarm join ${manager_ip}:2377
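
Note that in the final 1.12 release the join flow is slightly different: docker swarm init prints a join token, and a worker joins with it, roughly like this (the token value is a placeholder):

docker swarm join --token <worker-join-token> ${manager_ip}:2377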

List all nodes:

$ eval $(docker-machine env swarm-manager)
$ docker node ls
ID                           NAME            MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
0m2qy40ch1nqfpmhnsvj8jzch *  swarm-manager   Accepted    Ready   Active        Leader
4v1oo055unqiz9fy14u8wg3fn    swarm-worker-1  Accepted    Ready   Active

Playing with services

eval $(docker-machine env swarm-manager)
docker service create --replicas 2 -p 80:80/tcp --name nginx nginx

This command declares a desired state on your swarm of 2 nginx containers, reachable as a single, internally load balanced service on port 80 of any node in your swarm. Internally, we make this work using Linux IPVS, an in-kernel Layer 4 multi-protocol load balancer that’s been in the Linux kernel for more than 15 years. With IPVS routing packets inside the kernel, swarm’s routing mesh delivers high performance container-aware load-balancing.
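
To check the service and its published port from the manager, the usual commands look like this (nothing assumed beyond the nginx service created above):

docker service ls
docker service inspect --pretty nginx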

When you create services, you can optionally make them replicated or global. A replicated service runs the number of containers you specify, spread across the available hosts. A global service, by contrast, schedules one instance of the same container on every host in the swarm.
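
For example, a global service is requested with the --mode global flag; the image and service name below are purely illustrative:

docker service create --mode global --name ping-global alpine ping docker.com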

Let’s turn to how Docker provides resiliency. Swarm mode enabled engines are self-healing, meaning that they are aware of the application you defined and will continuously check and reconcile the environment when things go awry. For example, if you unplug one of the machines running an nginx instance, a new container will come up on another node. Unplug the network switch for half the machines in your swarm, and the other half will take over, redistributing the containers amongst themselves. For updates, you now have flexibility in how you re-deploy services once you make a change. You can set a rolling or parallel update of the containers on your swarm.
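
A rolling update could then look roughly like this (the image tag and timing values are illustrative):

docker service update \
    --image nginx:1.11 \
    --update-parallelism 1 \
    --update-delay 10s \
    nginx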

Scale the service up to three replicas and check where the tasks land:

docker service scale nginx=3

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b51a902db8bc        nginx:latest        "nginx -g 'daemon off"   2 minutes ago       Up 2 minutes        80/tcp, 443/tcp     nginx.1.8yvwxbquvz1ptuqsc8hewwbau
# switch to worker
$ eval $(docker-machine env swarm-worker-1)
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
da6a8250bef4        nginx:latest        "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx.2.bqko7fyj1nowwj1flxva3ur0g
54d9ffd07894        nginx:latest        "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx.3.02k4d34gjooa9f8m6yhfi5hyu

As seen above, one container runs on swarm-manager, and the other two run on swarm-worker-1.
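
The same placement can be listed in one shot from the manager (in early 1.12 release candidates the subcommand was docker service tasks; it was later renamed to docker service ps):

docker service ps nginx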

Expose services

Visit by node IP

gcloud compute firewall-rules create nginx-swarm \
  --allow tcp:80 \
  --description "nginx swarm service" \
  --target-tags swarm-cluster

Then use the external IP (found with gcloud compute instances list) to visit the nginx service.

GCP Load Balancer (TCP)

gcloud compute addresses create network-lb-ip-1 --region us-central1
gcloud compute http-health-checks create basic-check
gcloud compute target-pools create www-pool --region us-central1 --health-check basic-check
gcloud compute target-pools add-instances www-pool --instances swarm-manager,swarm-worker-1 --zone us-central1-f

# Get lb addresses
STATIC_EXTERNAL_IP=$(gcloud compute addresses list | awk '/network-lb-ip-1/{print $3}')
# create forwarding rules
gcloud compute forwarding-rules create www-rule --region us-central1 --port-range 80 --address ${STATIC_EXTERNAL_IP} --target-pool www-pool

Now you can visit http://${STATIC_EXTERNAL_IP} to reach the nginx service.
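
A quick end-to-end check (it may take a minute for the forwarding rule to become active):

curl -I http://${STATIC_EXTERNAL_IP}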

By the way, Docker for AWS and Azure will make this even easier, since they integrate with the underlying platform to:

  • Use an SSH key already associated with your IaaS account for access control
  • Provision infrastructure load balancers and update them dynamically as apps are created and updated
  • Configure security groups and virtual networks to create secure Docker setups that are easy for operations to understand and manage

By default, apps deployed with bundles do not have ports publicly exposed. Update the port mappings for a service, and Docker will automatically wire up the underlying platform load balancers:

docker service update -p 80:80 <example-service>

Networking

Local networking


Create a local-scope network and place containers in existing VLANs:

docker network create -d macvlan \
    --subnet=192.168.0.0/16 \
    --ip-range=192.168.41.0/24 \
    --aux-address="favorite_ip_ever=192.168.41.2" \
    --gateway=192.168.41.1 \
    -o parent=eth0.41 \
    macnet41
docker run --net=macnet41 -it --rm alpine /bin/sh
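
To confirm that the subnet, IP range, and gateway were applied as intended, inspect the network (an optional sanity check):

docker network inspect macnet41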

Multi-host networking

A typical two-tier (web + db) application running on a swarm-scope overlay network would be created like this:

docker network create -d overlay mynet
docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
docker service create --name redis --network mynet redis:latest
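
Within mynet, tasks reach the redis service by name through the swarm's built-in DNS. A quick check could look like this (illustrative; it assumes the frontend image ships with ping):

docker exec -it <frontend-container-id> ping redis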


Conclusion

Docker v1.12 indeed introduces an easy-to-use interface for orchestrating containers, but I'm concerned about whether this approach can scale to large clusters. Perhaps we will find out in Docker's future iterations.

Further reading

This article was originally published on feisky's cnblogs blog: http://www.cnblogs.com/feisky/p/20160624Playwithdockerv112.html. Please contact the original author if you wish to repost it.