Deploying Highly Available Kubernetes 1.25.2 on Docker (Part 1)

Summary: Deploying highly available Kubernetes 1.25.2 on Docker

Preface

  • Deployed with kubeadm
  • Docker as the container runtime
  • nginx reverse-proxies the apiserver for control-plane high availability
  • Ubuntu 20.04 as the OS
  • External etcd cluster

1. Environment Preparation

Eight Ubuntu VMs were prepared for this deployment: three master nodes, three worker nodes, one load-balancer node, and one Harbor node. *(Normally you would add a second load-balancer node and use keepalived for HA, but the cloud instances used here cannot run keepalived.)*

The detailed host preparation is covered in an earlier post. To make passwordless SSH setup easier, here is a helper script:

root@master1:~# cat ssh-ssh.sh 
#!/bin/bash
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q
for host in  `awk '{print $1}' /etc/hosts`
do
    expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@${host}
        expect {
                *yes/no* {send -- \"yes\r\"; exp_continue}
                *assword* {send -- \"zettakit\r\"; exp_continue}
               }"
done
root@master1:~# sh ssh-ssh.sh

2. Installing the etcd Cluster

Here etcd is co-located on the three master nodes.

root@master1:~# cat  /etc/hosts 
10.10.21.170  master1
10.10.21.172  master2
10.10.21.175  master3
10.10.21.171  node1
10.10.21.173  node2
10.10.21.176  node3
10.10.21.178  kubeapi
10.10.21.174  harbor

Download the etcd release tarball

root@master1:~# wget https://github.com/etcd-io/etcd/releases/download/v3.3.5/etcd-v3.3.5-linux-amd64.tar.gz
root@master1:~#  mkdir -p /etc/etcd/pki 
root@master1:~# tar -xf etcd-v3.3.5-linux-amd64.tar.gz  &&  mv etcd-v3.3.5-linux-amd64 /etc/etcd/

Use the cfssl tooling to create the private certificates in /etc/etcd/pki

See the certificate section of the Harbor post for the full procedure.

root@master1:~ # cd /etc/etcd/pki
root@master1:/etc/etcd/pki # cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.21.170",
    "10.10.21.172",
    "10.10.21.175"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HuBei",
      "L": "WuHan",
      "O": "etcd",
      "OU": "org"
    }
  ]
}
EOF
root@master1:/etc/etcd/pki # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-csr.json | cfssljson -bare peer    # generate the certificate files
  • hosts must list the addresses of all etcd nodes
  • the CSR file is the etcd-csr.json created above; make sure the command references it
  • the unit file below also expects server.pem/server-key.pem, which can be generated from the same CSR with cfssljson -bare server
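The gencert command above assumes that ca.pem, ca-key.pem, and a ca-config.json with a `peer` profile already exist in the same directory (they come from the Harbor post's certificate section). For reference, a minimal ca-config.json of the shape that command expects might look like the sketch below; the expiry and usages here are assumptions, not values taken from that post:

```shell
# Sketch of a minimal ca-config.json with a "peer" profile whose usages
# cover both server and peer certificates. Written to the current directory
# here; the walkthrough keeps it in /etc/etcd/pki.
cat > ca-config.json <<'EOF'
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "peer": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
```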

Edit etcd's systemd service file. To help avoid mistakes, I have annotated the error-prone lines below; the comments must be deleted for the actual deployment (systemd does not tolerate comments after line-continuation backslashes).

root@master1:~# cat /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/etc/etcd/etcd-v3.3.5-linux-amd64/etcd \   # path must match the etcd binary directory
  --name=master1 \              # must match this node's name, or etcd fails to start
  --cert-file=/etc/etcd/pki/server.pem \  # certificate paths must match the files on disk, or etcd won't start
  --key-file=/etc/etcd/pki/server-key.pem \
  --peer-cert-file=/etc/etcd/pki/peer.pem \
  --peer-key-file=/etc/etcd/pki/peer-key.pem \
  --trusted-ca-file=/etc/etcd/pki/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/pki/ca.pem \
  --initial-advertise-peer-urls=https://10.10.21.170:2380 \  # this node's IP address
  --listen-peer-urls=https://10.10.21.170:2380 \      # this node's IP address
  --listen-client-urls=https://10.10.21.170:2379 \      # this node's IP address
  --advertise-client-urls=https://10.10.21.170:2379 \   # this node's IP address
  --initial-cluster-token=etcd-cluster-0 \          # cluster token, any name you like
  --initial-cluster=master1=https://10.10.21.170:2380,master2=https://10.10.21.172:2380,master3=https://10.10.21.175:2380 \         # again, node names must map to the right IPs
  --initial-cluster-state=new \
  --data-dir=/data/etcd \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target

Copy the files to the other nodes

root@master1:~# for i in master2 master3;do scp -r /etc/etcd $i:/etc/ ;scp /lib/systemd/system/etcd.service $i:/lib/systemd/system/etcd.service ;done
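After copying, each node still carries master1's name and addresses in its unit file. The per-node fields can be rewritten with sed; the sketch below demonstrates the substitution on a local sample file (a hypothetical name) so the pattern is visible — on the real nodes you would run the same sed over /lib/systemd/system/etcd.service via ssh. Only the --name line and the four *-urls lines change; the --initial-cluster line lists every member and must be left alone.

```shell
# Demonstrate the per-node rewrite on a local sample of the error-prone lines.
cat > etcd.service.sample <<'EOF'
  --name=master1 \
  --initial-advertise-peer-urls=https://10.10.21.170:2380 \
  --listen-peer-urls=https://10.10.21.170:2380 \
  --listen-client-urls=https://10.10.21.170:2379 \
  --advertise-client-urls=https://10.10.21.170:2379 \
  --initial-cluster=master1=https://10.10.21.170:2380,master2=https://10.10.21.172:2380,master3=https://10.10.21.175:2380 \
EOF
NODE=master2
NODE_IP=10.10.21.172
# "/urls=/" matches only the four *-urls lines, so --initial-cluster is untouched.
sed -i -e "/--name=/s/master1/${NODE}/" \
       -e "/urls=/s/10.10.21.170/${NODE_IP}/" etcd.service.sample
```

On the real cluster, running the same two expressions over /lib/systemd/system/etcd.service on master2 and master3 (with their own IPs) saves editing the file by hand.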

After adjusting the configuration on each node as described, start etcd (don't forget master1 itself):

root@master1:~# systemctl daemon-reload && systemctl enable --now etcd
root@master1:~# for i in master2 master3;do ssh $i systemctl daemon-reload ;ssh $i systemctl enable --now etcd ;done

Check the etcd cluster status

root@master1:~#  export NODE_IPS="10.10.21.170 10.10.21.172 10.10.21.175"
root@master1:~# for ip in ${NODE_IPS};do ETCDCTL_API=3 /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl --write-out=table endpoint status --endpoints=https://${ip}:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem;done
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.10.21.170:2379 | 3f5dcb4f9728903b |   3.3.5 |  3.0 MB |     false |        32 |    1239646 |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.10.21.172:2379 | 13dde2c0d8695730 |   3.3.5 |  3.0 MB |      true |        32 |    1239646 |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.10.21.175:2379 | 6acd32f3e7cb1ab7 |   3.3.5 |  3.0 MB |     false |        32 |    1239646 |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
root@master1:/opt# ETCDCTL_API=3  /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem --endpoints="https://10.10.21.170:2379,https://10.10.21.172:2379,https://10.10.21.175:2379" endpoint health --write-out=table
https://10.10.21.172:2379 is healthy: successfully committed proposal: took = 577.956µs
https://10.10.21.175:2379 is healthy: successfully committed proposal: took = 1.122021ms
https://10.10.21.170:2379 is healthy: successfully committed proposal: took = 1.013689ms
# double-check the etcdctl command path, certificate paths, and etcd node addresses here
root@master1:~# ETCDCTL_API=3 /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl --endpoints="10.10.21.170:2379,10.10.21.172:2379,10.10.21.175:2379" --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+-----------+------------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+-------------------+------------------+---------+---------+-----------+-----------+------------+
| 10.10.21.170:2379 | 3f5dcb4f9728903b |   3.3.5 |  7.5 MB |     false |       112 |    5704640 |
| 10.10.21.172:2379 | 13dde2c0d8695730 |   3.3.5 |  7.5 MB |     false |       112 |    5704640 |
| 10.10.21.175:2379 | 6acd32f3e7cb1ab7 |   3.3.5 |  7.5 MB |      true |       112 |    5704640 |
+-------------------+------------------+---------+---------+-----------+-----------+------------+

At this point the etcd deployment is complete.
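With the cluster healthy, it is worth keeping a snapshot routine at hand. The sketch below writes a small backup script using the same etcdctl path and certificate layout as above; the backup directory is an assumption, so adjust it to taste. Note that `snapshot save` talks to a single endpoint, not the whole endpoint list.

```shell
# Create a minimal etcd snapshot script (paths follow the layout used above).
cat > etcd-backup.sh <<'EOF'
#!/bin/bash
set -e
ENDPOINT=https://10.10.21.170:2379          # any one healthy member
PKI=/etc/etcd/pki
OUT=/data/backup/etcd-$(date +%F-%H%M).db   # assumed backup location
mkdir -p "$(dirname "$OUT")"
ETCDCTL_API=3 /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl \
  --endpoints="$ENDPOINT" \
  --cacert="$PKI/ca.pem" --cert="$PKI/server.pem" --key="$PKI/server-key.pem" \
  snapshot save "$OUT"
EOF
chmod +x etcd-backup.sh
```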

3. Configuring the nginx Layer-4 Proxy

Install nginx and edit its configuration file

root@lb:~# apt-get install nginx -y
root@lb:~# egrep -v "^#|^$" /etc/nginx/nginx.conf 
user www-data;
worker_processes 2;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
  worker_connections 768;
  # multi_accept on;
}
stream {
    upstream backend {
        hash $remote_addr consistent;
        server 10.10.21.170:6443        max_fails=3 fail_timeout=30s;  # these three servers proxy the apiserver; after 3 failed connections a backend is taken out for 30s
        server 10.10.21.172:6443        max_fails=3 fail_timeout=30s;
        server 10.10.21.175:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
    upstream dashboard {
        server 10.10.21.170:40000       max_fails=3 fail_timeout=30s;  # these six servers proxy the dashboard; remove this upstream if you don't need it
        server 10.10.21.172:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.175:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.171:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.173:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.176:40000       max_fails=3 fail_timeout=30s;
    }
    server {
        listen 40000;
        proxy_connect_timeout 1s;
        proxy_pass dashboard;
    }
}
root@lb:~# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
root@lb:~# systemctl restart nginx    # once the syntax check passes, restart nginx to apply the config
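Once nginx is back up, a quick smoke test confirms the layer-4 proxy actually forwards to the backends. The sketch below only creates a helper script (a hypothetical name); at this point in the walkthrough the apiservers are not running yet, so expect a useful answer only after kubeadm init. The name kubeapi is the load-balancer entry (10.10.21.178) from the /etc/hosts above.

```shell
# Write a small proxy smoke-test helper; run it from any node later.
cat > check-lb.sh <<'EOF'
#!/bin/bash
LB=${1:-kubeapi}
# -k: we only check that the TCP proxy forwards and TLS answers;
# verifying the apiserver certificate is not the point here.
curl -sk --connect-timeout 3 "https://${LB}:6443/version" && echo "proxy OK"
EOF
chmod +x check-lb.sh
```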

An extra note for the Red Hat family

nginx installed from the default CentOS yum repositories does not ship the stream module. To use the same approach there, build nginx from source (as with the unit file earlier, remove the inline comments before running — a comment after a line-continuation backslash breaks the command):

yum -y install pcre-devel zlib-devel gcc gcc-c++ make
useradd -r nginx -M -s /sbin/nologin
wget http://nginx.org/download/nginx-1.16.1.tar.gz
tar xf nginx-1.16.1.tar.gz  && cd nginx-1.16.1/
./configure  --prefix=/opt \  # nginx install path
--user=nginx \          # run-as user
--group=nginx \         # run-as group
--with-stream \         # build the stream module
--without-http \
--without-http_uwsgi_module \
--with-http_stub_status_module  # enable http_stub_status_module for status statistics
make && make install

Or install the stream module from packages:

yum -y install nginx
yum -y install nginx-all-modules

4. Installing and Configuring Docker on All Kubernetes Nodes

On Ubuntu 20.04, docker can be installed from the built-in repositories:

root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt update;done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt -y install docker.io;done

If that fails, you can also follow the Aliyun mirror instructions to install it.

After installation, configure registry mirror acceleration and set the cgroup driver to systemd. The annotations below are again only explanatory; JSON allows no comments, so strip them from the real daemon.json:

root@master1:~# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",     #这几个是国内的docker仓库地址,填一个即可
"https://hub-mirror.c.163.com",
"https://reg-mirror.qiniu.com",
"https://registry.docker-cn.com"
],
"insecure-registries":["10.10.21.174:443"],    #这个是我的harbor仓库地址
"exec-opts": ["native.cgroupdriver=systemd"]   #配置cgroup driver
}
EOF
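Since dockerd rejects a daemon.json that contains comments, it helps to validate the file before restarting docker. A quick check, demonstrated here on a local comment-free sample file:

```shell
# Validate daemon.json syntax; python3 -m json.tool fails on invalid JSON
# (including any leftover # comments).
cat > daemon.json.sample <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "insecure-registries": ["10.10.21.174:443"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool daemon.json.sample > /dev/null && echo "daemon.json OK"
```

On the nodes themselves, run the same json.tool check against /etc/docker/daemon.json before `systemctl restart docker`.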
root@master1:~# for i in master2 master3 node1 node2 node3;do scp /etc/docker/daemon.json $i:/etc/docker/daemon.json ;done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i systemctl daemon-reload ;ssh $i systemctl restart docker; done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i docker info |grep Cgroup ;echo $i;done
WARNING: No swap limit support      # warns that swap limits are unsupported; swap is already disabled, so this can be ignored
 Cgroup Driver: systemd
 Cgroup Version: 1
master2
WARNING: No swap limit support
 Cgroup Driver: systemd
 Cgroup Version: 1
master3
WARNING: No swap limit support
 Cgroup Driver: systemd
 Cgroup Version: 1
node1
 Cgroup Driver: systemd
 Cgroup Version: 1
WARNING: No swap limit support
node2
WARNING: No swap limit support
 Cgroup Driver: systemd
 Cgroup Version: 1
node3