01 Introduction
In earlier posts I wrote about containerization technology; interested readers can refer to those.
This post focuses on Swarm.
02 Overview
Docker Swarm is Docker's cluster management tool. It turns a pool of Docker hosts into a single virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with the Docker daemon can use Swarm to scale transparently to multiple hosts.
Supported tools include, but are not limited to:
- Dokku
- Docker Compose
- Docker Machine
- Jenkins
03 How it works
As shown in the figure below, a swarm cluster consists of manager nodes and worker nodes.
- swarm manager: responsible for all cluster-level work, including cluster configuration, service management, and everything else related to the cluster.
- worker node: the "available node" in the figure; runs the services that execute tasks.
04 Deploying a swarm cluster
The following is a hands-on walkthrough, for reference only. For a simpler example, see: https://www.runoob.com/docker/docker-swarm.html
4.1 Enable the firewall and open the required ports
Note: run this on every machine!
Replace 172.16.3.0 with your machines' actual network segment.
ufw allow 22/tcp                                           # SSH
ufw allow from 172.16.3.0/24 to any port 4789 proto udp    # overlay network data path (VXLAN)
ufw allow from 172.16.3.0/24 to any port 2377 proto tcp    # cluster management traffic
ufw allow from 172.16.3.0/24 to any port 15389 proto tcp   # Docker daemon TCP socket (see 4.2.2.3)
ufw allow from 172.16.3.0/24 to any port 7946              # node discovery (TCP and UDP)
ufw enable
4.2 Install Docker
4.2.1 Add the package repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
4.2.2 Install Docker
Note: run this on every machine!
4.2.2.1 Install the Docker engine and CLI
apt update && apt install docker-ce docker-ce-cli -y
4.2.2.2 Copy the TLS certificates as needed
Copy the certificate files to the corresponding paths:
/etc/docker/daemon.json
/etc/docker/ca.pem
/etc/docker/server-cert.pem
/etc/docker/server-key.pem
4.2.2.3 Edit the systemd unit file
Edit the systemd unit file docker.service and save it:
vim /etc/systemd/system/multi-user.target.wants/docker.service
Append -H tcp://0.0.0.0:15389 after the line ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
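Editing the unit file in place works, but a systemd drop-in override achieves the same thing and survives package upgrades. This is a sketch under the assumption that the drop-in lives at a hypothetical path /etc/systemd/system/docker.service.d/override.conf; the port 15389 matches the firewall rule opened in 4.1:

```ini
# /etc/systemd/system/docker.service.d/override.conf (hypothetical path)
[Service]
# ExecStart must be cleared first, then redefined with the extra -H flag appended
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:15389
```

Apply the change with `systemctl daemon-reload && systemctl restart docker`.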
4.2.2.4 Resource limits
Run the following to check the machine's resources:
cat /proc/cpuinfo | grep processor | wc -l   # CPU thread count
cat /proc/meminfo | grep MemTotal | awk '{printf "MEM: %.2fG\n",$2/1024/1024 }'   # total memory in GiB
Edit the file /etc/systemd/system/docker-limit.slice to add container resource limits:
- CPUQuota: CPU thread count × 100 − 25, followed by a half-width percent sign.
- MemoryMax: total memory in GiB rounded down, minus 0.5, ending with an uppercase half-width G.
- MemoryHigh: total memory in GiB rounded down, minus 1, ending with an uppercase half-width G.
vim /etc/systemd/system/docker-limit.slice
Add the following content:
[Unit]
Before=slices.target

[Slice]
CPUAccounting=true
MemoryAccounting=true
CPUQuota=375%
MemoryHigh=14G
MemoryMax=14.5G

[Install]
WantedBy=multi-user.target
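The sizing rule above can be sketched as a small shell helper (hypothetical; pass the thread count and the whole GiB of RAM measured in 4.2.2.4):

```shell
# Compute docker-limit.slice values from CPU threads and total RAM (whole GiB).
slice_limits() {
  threads=$1
  mem_gib=$2
  echo "CPUQuota=$((threads * 100 - 25))%"
  echo "MemoryHigh=$((mem_gib - 1))G"
  # MemoryMax needs the .5 fraction, which shell integer arithmetic can't do:
  awk -v g="$mem_gib" 'BEGIN{printf "MemoryMax=%.1fG\n", g - 0.5}'
}

# 4 threads and 15 GiB of RAM reproduce the values in the slice file above:
slice_limits 4 15
```

Output for `slice_limits 4 15` is `CPUQuota=375%`, `MemoryHigh=14G`, `MemoryMax=14.5G`, matching the example slice file.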
4.2.2.5 Copy the unit file
Copy the file and restart the services:
cp /etc/systemd/system/multi-user.target.wants/docker.service /etc/systemd/system/docker.service && \
systemctl daemon-reload && \
systemctl enable docker && \
systemctl restart docker && \
systemctl enable docker-limit.slice && \
systemctl start docker-limit.slice
4.2.2.6 Check that the resource limits took effect
systemctl status docker-limit.slice
Expected output:
docker-limit.slice
  Loaded: loaded (/etc/systemd/system/docker-limit.slice; enabled; vendor preset: enabled)
  Active: active since Wed 2021-11-03 09:37:07 CST; 2 weeks 1 days ago
   Tasks: 650
  Memory: 4.9G (high: 14.0G max: 14.5G)
     CPU: 1d 9h 5min 47.297s
  CGroup: /docker.slice/docker-limit.slice
          ├─docker-3551bddf43c8d857e777a75623a03adbab7a52a52b0d0440effed70ec4cbe1ac.scope
          │ ├─456726 nginx: master process nginx -g daemon off;
          │ └─456727 nginx: worker process
          ├─... (one scope per running container; trimmed for brevity)
          └─docker-fae96ca1eebbdb010e5b2562e27ae980e849e00a3c175ddc1635c5da704de9a8.scope
            ├─3178156 nginx: master process nginx -g daemon off;
            └─3178158 nginx: worker process

Nov 03 09:37:07 demo-swarm-master-1 systemd[1]: Created slice docker-limit.slice.

The "Memory: 4.9G (high: 14.0G max: 14.5G)" line confirms the limits are applied.
4.2.3 Verify the Docker registry login
docker login ccr.ccs.tencentyun.com --username <default account> --password <default password>
4.3 Initialize the swarm manager node
4.3.1 Check the IP information
ip a
Expected output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:16:3e:ce:95:32 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.12/24 brd 172.16.2.255 scope global ens16
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fece:9532/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:75:60:67:74 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
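The address to advertise in the next step is the host's private IP from the output above. A small helper can pull it out of the `ip` output (a sketch; ens16 is assumed to be the private interface, as in the sample output, so adjust the interface name for your host):

```shell
# Extract the first IPv4 "inet" address from `ip -4 addr show <iface>` output;
# handy for picking the --advertise-addr value.
first_inet() {
  awk '/inet /{split($2, a, "/"); print a[1]; exit}'
}

# On the host above (private interface ens16):
#   ip -4 addr show ens16 | first_inet   -> 172.16.2.12
```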
4.3.2 Initialize the first manager node
Initialize the first manager node with the machine's private IP:
docker swarm init --advertise-addr 172.16.3.8
Expected output:
Swarm initialized: current node (73hz6uqcl58v71sn0rsoalsss) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-sdfdfd-sdfd 172.16.3.8:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
4.3.3 Get the manager join token
docker swarm join-token manager
Expected output:
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1qr56gcrfmq3oh9xv38lpdifk1rbvl37gs1xpisvge7jrh3kpc-ssdfd 172.16.3.8:2377
4.3.4 Get the worker join token
docker swarm join-token worker
Expected output:
docker swarm join --token SWMTKN-1-1qr56gcrfmq3oh9xv38lpdifk1rbvl37gs1xpisvge7jrh3kpc-ssss 172.16.3.8:2377
4.4 Initialize the other manager nodes
Run the following command (using the manager token):
docker swarm join --token SWMTKN-1-1qr56gcrfmq3oh9xv38lpdifk1rbvl37gs1xpisvge7jrh3kpc-xxx 172.16.3.8:2377
Expected output:
This node joined a swarm as a manager.
4.5 Initialize the worker nodes
Run the following command (using the worker token):
docker swarm join --token SWMTKN-1-1qr56gcrfmq3oh9xv38lpdifk1rbvl37gs1xpisvge7jrh3kpc-xxx 172.16.3.8:2377
Expected output:
This node joined a swarm as a worker.
4.6 Modify the ingress network as needed
Run on master1:
Step 1: remove the existing ingress network. (Docker will refuse to remove it while any service still publishes ports through ingress, so remove or update those services first.)
docker network rm ingress
Step 2: create a new /16 ingress network:
docker network create --driver overlay --ingress --subnet=10.0.0.0/16 --gateway=10.0.0.1 ingress