【Docker Swarm】Building a Docker Swarm High-Availability Cluster (Detailed Edition) (Part 2)

Summary: 【Docker Swarm】Building a Docker Swarm High-Availability Cluster (Detailed Edition)

6️⃣ Changing Roles: Promoting a Worker to a Manager


Taking docker-n2 as an example, change the docker-n2 node from the worker role to the manager role.


[root@docker-m1 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
34cug51p9dw83u2np594z6ej4 *   docker-m1   Ready     Active         Leader           20.10.14
hwmwdk78u3rx0wwxged87xnun     docker-m2   Ready     Active         Reachable        20.10.14
4q34guc6hp2a5ok0g1zkjojyh     docker-m3   Ready     Active                          20.10.14
4om9sg56sg09t9whelbrkh8qn     docker-n1   Ready     Active                          20.10.14
xooolkg0g9epddfqqiicywshe     docker-n2   Ready     Active                          20.10.14
[root@docker-m1 ~]# docker node update --role manager docker-n2
docker-n2
[root@docker-m1 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
34cug51p9dw83u2np594z6ej4 *   docker-m1   Ready     Active         Leader           20.10.14
hwmwdk78u3rx0wwxged87xnun     docker-m2   Ready     Active         Reachable        20.10.14
4q34guc6hp2a5ok0g1zkjojyh     docker-m3   Ready     Active                          20.10.14
4om9sg56sg09t9whelbrkh8qn     docker-n1   Ready     Active                          20.10.14
xooolkg0g9epddfqqiicywshe     docker-n2   Ready     Active         Reachable        20.10.14
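The Docker CLI also has dedicated shorthand commands for role changes. As a small sketch (these were not used in the run above), the same promotion, and its reverse, could be done with:

# Equivalent shorthand for the role change above
docker node promote docker-n2
# And the reverse operation, demoting a manager back to a worker
docker node demote docker-n2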



7️⃣ Removing and Re-adding a Manager Node


Remove one of the manager nodes from the cluster, obtain the manager join token again, and then add the node back to the cluster.


# View the help for the leave command
[root@docker-m1 ~]# docker swarm leave --help
Usage:  docker swarm leave [OPTIONS]
Leave the swarm
Options:
  -f, --force   Force this node to leave the swarm, ignoring warnings
[root@docker-m1 ~]#


Run the following on docker-m3 to remove this manager node from the cluster.


[root@docker-m3 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
34cug51p9dw83u2np594z6ej4     docker-m1   Ready     Active         Leader           20.10.14
hwmwdk78u3rx0wwxged87xnun     docker-m2   Ready     Active         Reachable        20.10.14
4q34guc6hp2a5ok0g1zkjojyh *   docker-m3   Ready     Active         Reachable        20.10.14
4om9sg56sg09t9whelbrkh8qn     docker-n1   Ready     Active                          20.10.14
xooolkg0g9epddfqqiicywshe     docker-n2   Ready     Active                          20.10.14
[root@docker-m3 ~]# docker swarm leave -f
Node left the swarm.



Check from the docker-m1 manager node. The first listing may still show docker-m3 as Ready; after a short delay the entry changes to Down/Unreachable.


[root@docker-m1 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
34cug51p9dw83u2np594z6ej4 *   docker-m1   Ready     Active         Leader           20.10.14
hwmwdk78u3rx0wwxged87xnun     docker-m2   Ready     Active         Reachable        20.10.14
4q34guc6hp2a5ok0g1zkjojyh     docker-m3   Ready     Active         Reachable        20.10.14
4om9sg56sg09t9whelbrkh8qn     docker-n1   Ready     Active                          20.10.14
xooolkg0g9epddfqqiicywshe     docker-n2   Ready     Active                          20.10.14
[root@docker-m1 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
34cug51p9dw83u2np594z6ej4 *   docker-m1   Ready     Active         Leader           20.10.14
hwmwdk78u3rx0wwxged87xnun     docker-m2   Ready     Active         Reachable        20.10.14
4q34guc6hp2a5ok0g1zkjojyh     docker-m3   Down      Active         Unreachable      20.10.14
4om9sg56sg09t9whelbrkh8qn     docker-n1   Ready     Active                          20.10.14
xooolkg0g9epddfqqiicywshe     docker-n2   Ready     Active                          20.10.14
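Because docker-m3 left while it still held the manager role, the cluster keeps a stale Unreachable manager entry for it. A gentler alternative, sketched below, is to demote the node first and only then have it leave as a worker; the commands are standard Docker CLI, run on the nodes named in the comments:

# On a remaining manager (e.g. docker-m1): demote docker-m3 to a worker first
docker node demote docker-m3
# Then on docker-m3 itself: leave the swarm cleanly, no --force needed
docker swarm leave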



Obtain the join command (including the token) for adding a manager node again.


Run docker swarm join-token manager to print the join command.


[root@docker-m1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-528o8bfk061miheduvuvnnohhpystvxnwiqfqqf04gou6n1wmz-1z6k8msio37as0vaa467glefx 192.168.200.81:2377
[root@docker-m1 ~]#
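The join token is effectively a credential. If it might have leaked, it can be rotated so that the old token stops working; a brief sketch (not part of the original walkthrough, output omitted):

# Invalidate the current manager join token and print a new one
docker swarm join-token --rotate manager
# The worker token can be rotated the same way
docker swarm join-token --rotate worker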


Add the docker-m3 manager node back to the cluster.


[root@docker-m3 ~]# docker swarm join --token SWMTKN-1-528o8bfk061miheduvuvnnohhpystvxnwiqfqqf04gou6n1wmz-1z6k8msio37as0vaa467glefx 192.168.200.81:2377
This node joined a swarm as a manager.
[root@docker-m3 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
34cug51p9dw83u2np594z6ej4     docker-m1   Ready     Active         Leader           20.10.14
hwmwdk78u3rx0wwxged87xnun     docker-m2   Ready     Active         Reachable        20.10.14
4q34guc6hp2a5ok0g1zkjojyh     docker-m3   Down      Active         Reachable        20.10.14
jvtiwv8eu45ev4qbm0ausivv2 *   docker-m3   Ready     Active         Reachable        20.10.14
4om9sg56sg09t9whelbrkh8qn     docker-n1   Ready     Active                          20.10.14
xooolkg0g9epddfqqiicywshe     docker-n2   Ready     Active                          20.10.14
[root@docker-m3 ~]#
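Because docker-m3 rejoined with a new node ID, its old entry (4q34guc6hp2a5ok0g1zkjojyh) lingers in the Down state; such leftover entries are removed later in this walkthrough with docker node rm. A stale entry that still carries the manager role first has to be demoted, or removed with --force; a sketch, using the old ID from the listing above:

# Demote the stale manager entry, then remove it
docker node demote 4q34guc6hp2a5ok0g1zkjojyh
docker node rm 4q34guc6hp2a5ok0g1zkjojyh
# Or in a single (more forceful) step:
# docker node rm --force 4q34guc6hp2a5ok0g1zkjojyh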


8️⃣ Removing and Re-adding a Worker Node


Remove one of the worker nodes from the cluster, obtain the worker join token again, and then add the node back to the cluster.


Run the following on docker-n1 to remove this worker node from the cluster.


[root@docker-n1 ~]# docker swarm leave
Node left the swarm.


Check from the docker-m1 manager node; the docker-n1 worker now shows as Down.


[root@docker-m1 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
34cug51p9dw83u2np594z6ej4 *   docker-m1   Ready     Active         Leader           20.10.14
hwmwdk78u3rx0wwxged87xnun     docker-m2   Ready     Active         Reachable        20.10.14
4q34guc6hp2a5ok0g1zkjojyh     docker-m3   Down      Active         Reachable        20.10.14
jvtiwv8eu45ev4qbm0ausivv2     docker-m3   Ready     Active         Reachable        20.10.14
4om9sg56sg09t9whelbrkh8qn     docker-n1   Down      Active                          20.10.14
xooolkg0g9epddfqqiicywshe     docker-n2   Ready     Active                          20.10.14


Obtain the join command for adding a worker node again.


Run docker swarm join-token worker to print the join command.


[root@docker-m1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-528o8bfk061miheduvuvnnohhpystvxnwiqfqqf04gou6n1wmz-3ixu6we70ghk69wghfrmo0y6a 192.168.200.81:2377
[root@docker-m1 ~]#


Add the docker-n1 worker node back to the cluster.


[root@docker-n1 ~]# docker swarm join --token SWMTKN-1-528o8bfk061miheduvuvnnohhpystvxnwiqfqqf04gou6n1wmz-3ixu6we70ghk69wghfrmo0y6a 192.168.200.81:2377
This node joined a swarm as a worker.



Remove the leftover node entries that were left behind in the Down state by the re-joins.


[root@docker-m1 ~]# docker node rm 34emdxnfc139d6kc4ht2xsp4b 4om9sg56sg09t9whelbrkh8qn
34emdxnfc139d6kc4ht2xsp4b
4om9sg56sg09t9whelbrkh8qn
[root@docker-m1 ~]#
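To confirm the cleanup, the node list can be checked once more, optionally filtered by role; a quick sketch (output omitted):

# List all nodes again to confirm only live entries remain
docker node ls
# Optionally restrict the listing to worker nodes
docker node ls --filter "role=worker"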



9️⃣ Deploying an NGINX Application on the Cluster as a Test


🏉 View the docker service help


# View the help for docker service
[root@docker-m1 ~]# docker service
Usage:  docker service COMMAND
Manage services
Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service
Run 'docker service COMMAND --help' for more information on a command.
[root@docker-m1 ~]#


⚾️ Create the NGINX Service


# 1. Search for the image
[root@docker-m1 ~]# docker search nginx
NAME                                              DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
nginx                                             Official build of Nginx.                        16720     [OK]
bitnami/nginx                                     Bitnami nginx Docker Image                      124                  [OK]
ubuntu/nginx                                      Nginx, a high-performance reverse proxy & we…   46
bitnami/nginx-ingress-controller                  Bitnami Docker Image for NGINX Ingress Contr…   17                   [OK]
rancher/nginx-ingress-controller                                                                  10
ibmcom/nginx-ingress-controller                   Docker Image for IBM Cloud Private-CE (Commu…   4
bitnami/nginx-ldap-auth-daemon                                                                    3
bitnami/nginx-exporter                                                                            2
rancher/nginx-ingress-controller-defaultbackend                                                   2
circleci/nginx                                    This image is for internal use                  2
vmware/nginx                                                                                      2
vmware/nginx-photon                                                                               1
bitnami/nginx-intel                                                                               1
rancher/nginx                                                                                     1
wallarm/nginx-ingress-controller                  Kubernetes Ingress Controller with Wallarm e…   1
rancher/nginx-conf                                                                                0
rancher/nginx-ssl                                                                                 0
ibmcom/nginx-ppc64le                              Docker image for nginx-ppc64le                  0
rancher/nginx-ingress-controller-amd64                                                            0
continuumio/nginx-ingress-ws                                                                      0
ibmcom/nginx-ingress-controller-ppc64le           Docker Image for IBM Cloud Private-CE (Commu…   0
kasmweb/nginx                                     An Nginx image based off nginx:alpine and in…   0
rancher/nginx-proxy                                                                               0
wallarm/nginx-ingress-controller-amd64            Kubernetes Ingress Controller with Wallarm e…   0
ibmcom/nginx-ingress-controller-amd64                                                             0
[root@docker-m1 ~]#
# 2. Pull the image
[root@docker-m1 ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
Digest: sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
[root@docker-m1 ~]#
# 3. List local images
[root@docker-m1 ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
nginx        latest    605c77e624dd   4 months ago   141MB
# 4. Start NGINX with the service command
# docker run starts a standalone container, which cannot be scaled out or in.
# docker service starts a swarm service, which supports scaling and rolling updates.
[root@docker-m1 ~]# docker service create -p 8888:80 --name xybdiy-nginx nginx
ngoi21hcjan5qoro9amd7n1jh
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
[root@docker-m1 ~]#
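By default the scheduler may place service tasks on any node, managers included. If, hypothetically, you wanted to keep the tasks off the manager nodes, a placement constraint could be added at creation time; a sketch using the same service name:

# Restrict the service's tasks to worker nodes only
docker service create -p 8888:80 --name xybdiy-nginx \
  --constraint 'node.role==worker' nginx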


🏀 View the NGINX Service


The nginx task ended up on the docker-n2 worker node; the swarm scheduler chooses the placement, so it may land on any available node.


# View the NGINX service
[root@docker-m1 ~]# docker service ls
ID             NAME           MODE         REPLICAS   IMAGE          PORTS
ngoi21hcjan5   xybdiy-nginx   replicated   1/1        nginx:latest   *:8888->80/tcp
[root@docker-m1 ~]# docker service ps xybdiy-nginx
ID             NAME             IMAGE          NODE        DESIRED STATE   CURRENT STATE            ERROR     PORTS
w5azhbc3xrta   xybdiy-nginx.1   nginx:latest   docker-n2   Running         Running 20 minutes ago
[root@docker-m1 ~]#
[root@docker-n2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
d65e6e8bf5fd   nginx:latest   "/docker-entrypoint.…"   28 minutes ago   Up 28 minutes   80/tcp    xybdiy-nginx.1.w5azhbc3xrtafxvftkgh7x9vk
# View detailed information about the NGINX service
[root@docker-m1 ~]# docker service inspect xybdiy-nginx
[
    {
        "ID": "ngoi21hcjan5qoro9amd7n1jh",
        "Version": {
            "Index": 34
        },
        "CreatedAt": "2022-05-03T12:38:22.234486876Z",
        "UpdatedAt": "2022-05-03T12:38:22.238903441Z",
        "Spec": {
            "Name": "xybdiy-nginx",
            "Labels": {},
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "nginx:latest@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31",
                    "Init": false,
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {},
                    "Isolation": "default"
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {
                    "Platforms": [
                        {
                            "Architecture": "amd64",
                            "OS": "linux"
                        },
                        {
                            "OS": "linux"
                        },
                        {
                            "OS": "linux"
                        },
                        {
                            "Architecture": "arm64",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "386",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "mips64le",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "ppc64le",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "s390x",
                            "OS": "linux"
                        }
                    ]
                },
                "ForceUpdate": 0,
                "Runtime": "container"
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 8888,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 8888,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 80,
                    "PublishedPort": 8888,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "uhjulzndxnofx63e2bb3r8iq9",
                    "Addr": "10.0.0.7/24"
                }
            ]
        }
    }
]
[root@docker-m1 ~]#
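The full JSON output is verbose; for a condensed, human-readable summary of the same service, the --pretty flag can be used (output omitted here):

# Human-readable summary of the service definition
docker service inspect --pretty xybdiy-nginx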


🏈 Create Multiple NGINX Service Replicas


Scale out dynamically to relieve the load on any single host.


View the help for docker service update


[root@docker-m1 ~]# docker service update --help
Usage:  docker service update [OPTIONS] SERVICE
Update a service
Options:
......
  -q, --quiet                              Suppress progress output
      --read-only                          Mount the container's root filesystem as read only
      --replicas uint                      Number of tasks
      --replicas-max-per-node uint         Maximum number of tasks per node (default 0 = unlimited)
......


Create multiple NGINX service replicas.


[root@docker-m1 ~]# docker service update --replicas 2 xybdiy-nginx
xybdiy-nginx
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
[root@docker-m1 ~]#
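docker service scale is an equivalent, slightly shorter way to change the replica count; a sketch (output omitted):

# Equivalent to the update command above
docker service scale xybdiy-nginx=2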


View the newly created NGINX service replicas.


[root@docker-m1 ~]# docker service ls
ID             NAME           MODE         REPLICAS   IMAGE          PORTS
ngoi21hcjan5   xybdiy-nginx   replicated   2/2        nginx:latest   *:8888->80/tcp
[root@docker-m1 ~]# docker service ps xybdiy-nginx
ID             NAME             IMAGE          NODE        DESIRED STATE   CURRENT STATE            ERROR     PORTS
w5azhbc3xrta   xybdiy-nginx.1   nginx:latest   docker-n2   Running         Running 36 minutes ago
rgtjq163z9ch   xybdiy-nginx.2   nginx:latest   docker-m1   Running         Running 33 seconds ago


Test access to the NGINX service.


http://192.168.200.81:8888/
http://192.168.200.91:8888/
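Thanks to the swarm routing mesh, port 8888 answers on every node, not only on the nodes that actually run a replica. A quick command-line check (output omitted, assuming curl is installed):

# Either node IP should return the NGINX welcome page headers
curl -I http://192.168.200.81:8888/
curl -I http://192.168.200.91:8888/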



🏐 Simulating a Failure


When the docker-m1 manager host goes down, check whether the NGINX service keeps running and remains accessible.


# Shut down the docker-m1 node
[root@docker-m1 ~]# shutdown -h now
Connection to 192.168.200.81 closed by remote host.
Connection to 192.168.200.81 closed.


Check the node status.


[root@docker-m2 ~]# docker node ls
ID                            HOSTNAME    STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
75dxq2qmzr2bv4tkg20gh0syr     docker-m1   Down      Active         Unreachable      20.10.14
l2is4spmgd4b5xmmxwo3jvuf4 *   docker-m2   Ready     Active         Reachable        20.10.14
u89a2ie2buxuc5bsew4a2wrpo     docker-m3   Ready     Active         Leader           20.10.14
aon2nakgk87rds5pque74itw4     docker-n1   Ready     Active                          20.10.14
ljdb9d3xkzjruuxsxrpmuei7s     docker-n2   Ready     Active                          20.10.14
[root@docker-m2 ~]#


Check the service status.


[root@docker-m2 ~]# docker service ls
ID             NAME           MODE         REPLICAS   IMAGE          PORTS
ngoi21hcjan5   xybdiy-nginx   replicated   3/2        nginx:latest   *:8888->80/tcp
[root@docker-m2 ~]# docker service ps xybdiy-nginx
ID             NAME                 IMAGE          NODE        DESIRED STATE   CURRENT STATE            ERROR     PORTS
w5azhbc3xrta   xybdiy-nginx.1       nginx:latest   docker-n2   Running         Running 2 minutes ago
tteb16dnir6u   xybdiy-nginx.2       nginx:latest   docker-n1   Running         Running 2 minutes ago
rgtjq163z9ch    \_ xybdiy-nginx.2   nginx:latest   docker-m1   Shutdown        Running 17 minutes ago
[root@docker-m2 ~]#
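With docker-m1 down, the two remaining managers still hold quorum (2 of 3), and the replica that ran on docker-m1 was rescheduled onto docker-n1, so the service stays up. The published port can still be checked from a surviving node, for example (output omitted, assuming curl is available):

# 192.168.200.81 (docker-m1) is down, so test against a surviving node
curl -I http://192.168.200.91:8888/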



🔟 Reference Links


🔴 Swarm mode overview | Docker Docs


🟠 Getting started with Swarm mode | Docker Docs


🟡 Swarm mode key concepts | Docker Docs


🟢 How nodes work | Docker Docs


🔵 Create a swarm | Docker Docs


🟣 docker node ls | Docker Docs
