Building a Highly Available K8s 1.20 / 1.23.6 Cluster from Binaries


Files used in this article:
Link: https://caiyun.139.com/m/i?165CkwR8LoiwP
Access code: DVsm

K8s network plan:
Pod CIDR: 10.0.0.0/16
Service CIDR: 10.255.0.0/16

Lab environment:
OS: CentOS 7.7
Specs: 4 GB RAM / 6 vCPU / 100 GB disk
Network mode: bridged

| Role | IP | Hostname | Installed components |
| --- | --- | --- | --- |
| Control node | 10.10.1.11 | master1 | apiserver, controller-manager, scheduler, etcd, kubectl, keepalived, nginx |
| Control node | 10.10.1.12 | master2 | apiserver, controller-manager, scheduler, etcd, kubectl, keepalived, nginx |
| Control node | 10.10.1.13 | master3 | apiserver, controller-manager, scheduler, etcd, kubectl |
| Worker node | 10.10.1.21 | node1 | kubelet, kube-proxy, docker, calico, coredns |
| VIP | 10.10.1.99 | | |

Architecture (diagram not reproduced here): the three masters sit behind an nginx + keepalived VIP (10.10.1.99), and the worker node reaches the apiservers through that VIP.

I. Initialization
1. Configure static IPs
Configure the IP on master1:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=10.10.1.11
NETMASK=255.255.255.0
GATEWAY=10.10.1.1
DNS1=223.5.5.5
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
Configure the IP on master2:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=10.10.1.12
NETMASK=255.255.255.0
GATEWAY=10.10.1.1
DNS1=223.5.5.5
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
Configure the IP on master3:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=10.10.1.13
NETMASK=255.255.255.0
GATEWAY=10.10.1.1
DNS1=223.5.5.5
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
Configure the IP on node1:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=10.10.1.21
NETMASK=255.255.255.0
GATEWAY=10.10.1.1
DNS1=223.5.5.5
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes

2. Configure hostnames and hosts files
Set each machine's hostname first if not already done (e.g. hostnamectl set-hostname master1), then add the same entries to /etc/hosts on master1, master2, master3, and node1:
vi /etc/hosts
10.10.1.11 master1
10.10.1.12 master2
10.10.1.13 master3
10.10.1.21 node1

3. Configure the Aliyun yum repo. Run on master1, master2, master3, and node1:

Back up the stock repo files:

mkdir /root/repo.bak
cd /etc/yum.repos.d/
mv * /root/repo.bak/

Download the Aliyun repo file:

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
Clean the yum cache and rebuild it:
yum clean all
yum makecache

Install lrzsz (the rz/sz file-transfer commands):

yum install lrzsz -y

Install openssh-clients (provides scp):

yum install openssh-clients -y

Configure the Aliyun Docker CE repo. Run on node1:

yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
4. Install docker-ce. Run on node1:
yum install docker-ce docker-ce-cli containerd.io -y
systemctl start docker && systemctl enable docker.service && systemctl status docker
5. Configure time synchronization. Run on master1, master2, master3, and node1:

Install ntpdate:

yum install ntpdate -y

Sync against a public NTP pool:

ntpdate cn.pool.ntp.org

Turn the sync into a cron job:

crontab -e

*/1 * * * * /usr/sbin/ntpdate cn.pool.ntp.org

Restart the crond service:

service crond restart
6. Install base packages. Run on master1, master2, master3, and node1:
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet rsync
7. Disable the firewall and SELinux
Run on master1, master2, master3, and node1 (every server):
systemctl stop firewalld ; systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Check that SELinux is off:
getenforce

Disabled means SELinux is off (the sed change above fully takes effect after the reboot in step 9).

8. Configure passwordless SSH between hosts
On master1, generate an SSH key pair:
ssh-keygen -t rsa   # press Enter at every prompt; leave the passphrase empty
Install the local public key on each host:
ssh-copy-id -i .ssh/id_rsa.pub master1
ssh-copy-id -i .ssh/id_rsa.pub master2
ssh-copy-id -i .ssh/id_rsa.pub master3
ssh-copy-id -i .ssh/id_rsa.pub node1
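
The four ssh-copy-id calls can also be run as a loop; a small convenience sketch:

for h in master1 master2 master3 node1; do ssh-copy-id -i ~/.ssh/id_rsa.pub $h; done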

9. Disable swap. Run on master1, master2, master3, and node1

Turn it off for the running system:

swapoff -a

Disable it permanently by commenting out the swap line in /etc/fstab:

vim /etc/fstab

#/dev/mapper/centos-swap swap swap defaults 0 0

If the VM was cloned, also delete the UUID line from the NIC config file.

Reboot:

reboot
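
If you prefer not to edit /etc/fstab by hand, the swap entry can be commented out with sed; a sketch equivalent to the manual edit above:

sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab   # prefix '#' to every active swap line
free -m                                            # after swapoff/reboot, the Swap row should read 0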
10. Tune kernel parameters. Run on master1, master2, master3, and node1

Load the br_netfilter module (the sysctls below require it; otherwise sysctl reports an error):

modprobe br_netfilter

Verify the module loaded:

lsmod |grep br_netfilter

Set the parameters:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the new settings:

sysctl -p /etc/sysctl.d/k8s.conf

11. Enable IPVS. Run on master1, master2, master3, and node1 (without IPVS, kube-proxy falls back to iptables, which forwards less efficiently):
vi /etc/sysconfig/modules/ipvs.modules

#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done

Make the script executable (mode 755) and run it. On master1, master2, master3, and node1:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

12. Configure Docker registry mirrors. Run on node1:
tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com", "https://registry.docker-cn.com", "https://docker.mirrors.ustc.edu.cn", "https://dockerhub.azk8s.cn", "http://hub-mirror.c.163.com", "http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload   # reload systemd configuration
systemctl restart docker
systemctl status docker

Initialization is complete.

II. Set up the etcd cluster (the cluster's datastore)
1. Create the etcd working directories. Run on master1, master2, master3:
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl

2. Install the cfssl certificate-signing tools. On master1:
mkdir /data/work -p
cd /data/work/

Upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64, and cfssl_linux-amd64 to /data/work/.

Make the binaries executable and move them into the PATH. On master1:

chmod +x *
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3. Configure the CA certificate. On master1

Create the CA certificate signing request file. On master1:

vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
Notes:
CN (Common Name): kube-apiserver extracts this field from the certificate as the requesting user name.
O (Organization): kube-apiserver extracts this field as the group the requesting user belongs to.
L: city
ST: state or province
C: country, as a two-letter code

Create the CA config file. On master1:

vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Generate the CA certificate. On master1:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
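
cfssljson -bare ca writes the results next to the input files; a quick check of what should now exist:

ls ca*
# expected: ca-config.json  ca-csr.json  ca.csr  ca-key.pem  ca.pem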

4. Generate the etcd certificate. On master1
vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.1.11",
    "10.10.1.12",
    "10.10.1.13",
    "10.10.1.99"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate. On master1:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

5. Deploy the etcd cluster. On master1:
Upload etcd-v3.4.13-linux-amd64.tar.gz to the /data/work directory:
cd /data/work
tar -xf etcd-v3.4.13-linux-amd64.tar.gz
cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
scp -r etcd-v3.4.13-linux-amd64/etcd* master2:/usr/local/bin/
scp -r etcd-v3.4.13-linux-amd64/etcd* master3:/usr/local/bin/

Create the config file. On master1:

vim etcd.conf

#[Member]

ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.1.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.1.11:2379,http://127.0.0.1:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.1.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.1.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.1.11:2380,etcd2=https://10.10.1.12:2380,etcd3=https://10.10.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Notes:

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: the addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: "new" for a brand-new cluster, "existing" when joining one that already exists

Create the systemd service file. On master1

vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the certificates and config into place, then sync them to the other masters. On master1
cp ca*.pem /etc/etcd/ssl/
cp etcd*.pem /etc/etcd/ssl/
cp etcd.conf /etc/etcd/
cp etcd.service /usr/lib/systemd/system/
for i in master2 master3;do rsync -vaz etcd.conf $i:/etc/etcd/;done
for i in master2 master3;do rsync -vaz etcd.pem ca.pem $i:/etc/etcd/ssl/;done
for i in master2 master3;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

Edit etcd.conf on master2 (its own name and IPs):
vim /etc/etcd/etcd.conf

#[Member]

ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.1.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.1.12:2379,http://127.0.0.1:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.1.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.1.12:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.1.11:2380,etcd2=https://10.10.1.12:2380,etcd3=https://10.10.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Edit etcd.conf on master3 (its own name and IPs):
vim /etc/etcd/etcd.conf

#[Member]

ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.1.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.1.13:2379,http://127.0.0.1:2379"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.1.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.1.13:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.1.11:2380,etcd2=https://10.10.1.12:2380,etcd3=https://10.10.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the etcd cluster

Create the etcd data directory and start the service. Run on master1, master2, master3:
mkdir -p /var/lib/etcd/default.etcd
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
When starting the cluster, start etcd on master1 first; it will appear to hang (it is waiting for quorum) until etcd on master2 comes up, at which point master1's etcd starts normally.
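
One way to avoid watching the first node hang is to start all three members at roughly the same time over the passwordless SSH set up earlier; a sketch run from master1:

for h in master1 master2 master3; do
  ssh $h "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd" &
done
wait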

6. Check the etcd cluster. On master1 (ETCDCTL_API=3 is set on the same line so it applies to the command):
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.1.11:2379,https://10.10.1.12:2379,https://10.10.1.13:2379 endpoint health
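
Beyond health, etcdctl can also show which member is currently the leader; the same certificates apply (etcd v3.4 syntax):

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table \
  --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints=https://10.10.1.11:2379,https://10.10.1.12:2379,https://10.10.1.13:2379 \
  endpoint status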

III. Install the Kubernetes components
1. Download the server package from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/

Upload kubernetes-server-linux-amd64.tar.gz to /data/work on master1. On master1:

tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master2:/usr/local/bin/
rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master3:/usr/local/bin/
scp kubelet kube-proxy node1:/usr/local/bin/
cd /data/work/
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl
mkdir /var/log/kubernetes

2. Deploy the kube-apiserver component

Create the token.csv file. On master1:

cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
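
The generated file is a single CSV line in the form token,user,uid,"group". A sketch of what it should look like (the token value here is hypothetical; yours comes from /dev/urandom):

cat token.csv
# 84e741981db9aba1e02bd11e5e33a9d7,kubelet-bootstrap,10001,"system:kubelet-bootstrap"   <- hypothetical token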

Create the CSR request file. On master1 (10.10.1.99 is the VIP; 10.255.0.1 is the first IP of the Service CIDR; JSON does not allow inline comments, so the annotations stay out of the file):

vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.1.11",
    "10.10.1.12",
    "10.10.1.13",
    "10.10.1.21",
    "10.10.1.99",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Generate the certificate. On master1:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

Create the kube-apiserver config file. On master1:

vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=10.10.1.11 \
--secure-port=6443 \
--advertise-address=10.10.1.11 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://10.10.1.11:2379,https://10.10.1.12:2379,https://10.10.1.13:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"

Notes:

--logtostderr: log to stderr
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: the address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: the bootstrap token file
--service-node-port-range: port range allocated to NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: the apiserver's HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

Create the systemd service file. On master1:

vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Copy the files into place and sync them to the other masters. On master1:
cp ca*.pem /etc/kubernetes/ssl
cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
cp kube-apiserver.conf /etc/kubernetes/
cp kube-apiserver.service /usr/lib/systemd/system/
rsync -vaz token.csv master2:/etc/kubernetes/
rsync -vaz token.csv master3:/etc/kubernetes/
rsync -vaz kube-apiserver*.pem master2:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver*.pem master3:/etc/kubernetes/ssl/
rsync -vaz ca*.pem master2:/etc/kubernetes/ssl/
rsync -vaz ca*.pem master3:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver.conf master2:/etc/kubernetes/
rsync -vaz kube-apiserver.conf master3:/etc/kubernetes/
rsync -vaz kube-apiserver.service master2:/usr/lib/systemd/system/
rsync -vaz kube-apiserver.service master3:/usr/lib/systemd/system/

Edit kube-apiserver.conf on master2 (bind and advertise its own IP):

vi /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=10.10.1.12 \
--secure-port=6443 \
--advertise-address=10.10.1.12 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://10.10.1.11:2379,https://10.10.1.12:2379,https://10.10.1.13:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"

Edit kube-apiserver.conf on master3 (bind and advertise its own IP):

vi /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=10.10.1.13 \
--secure-port=6443 \
--advertise-address=10.10.1.13 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://10.10.1.11:2379,https://10.10.1.12:2379,https://10.10.1.13:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"

Start kube-apiserver. Run on master1, master2, master3:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Test from master1:

curl --insecure https://10.10.1.11:6443/
A 401 response here is normal: the request carried no credentials.

3. Deploy the kubectl component

Create the CSR request file. On master1

cd /data/work
vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}

Generate the certificate. On master1

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cp admin*.pem /etc/kubernetes/ssl/

Set the cluster entry in the kubeconfig. On master1

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.1.11:6443 --kubeconfig=kube.config

Set the client credentials. On master1

kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

Set the context. On master1

kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

Switch to the context. On master1

kubectl config use-context kubernetes --kubeconfig=kube.config

Install the kubeconfig for root. On master1

mkdir ~/.kube -p
cp kube.config ~/.kube/config

Authorize the kubernetes certificate user (the apiserver's client-cert CN) to access the kubelet API. On master1

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Check the cluster component status. On master1

kubectl cluster-info
kubectl get componentstatuses
The controller-manager, scheduler, and etcd members should all report Healthy.

Create the kubeconfig directory on master2 and on master3:

mkdir /root/.kube/

Sync the kubectl config to the other masters. On master1

cd /data/work/
rsync -vaz /root/.kube/config master2:/root/.kube/
rsync -vaz /root/.kube/config master3:/root/.kube/

Configure kubectl command completion. Run on master1, master2, master3

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
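
To make completion survive new shells, append the two source lines to the login profile; a sketch assuming root's bash:

echo 'source /usr/share/bash-completion/bash_completion' >> /root/.bash_profile
echo 'source /root/.kube/completion.bash.inc' >> /root/.bash_profile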

4. Deploy the kube-controller-manager component

Create the CSR request file. On master1

cd /data/work/
vim kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.10.1.11",
    "10.10.1.12",
    "10.10.1.13",
    "10.10.1.99"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}

Generate the certificate. On master1

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Create the kube-controller-manager kubeconfig

Set the cluster entry. On master1

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.1.11:6443 --kubeconfig=kube-controller-manager.kubeconfig

Set the client credentials. On master1

kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

Set the context. On master1

kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Switch to the context. On master1

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Create kube-controller-manager.conf. On master1 (the flags annotated below were removed by 1.23.6; delete them, together with the annotations, when deploying that version)

vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.255.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.0.0.0/16 \
--experimental-cluster-signing-duration=87600h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \   # delete this flag for 1.23.6
--horizontal-pod-autoscaler-sync-period=10s \   # delete this flag for 1.23.6
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \   # delete this flag for 1.23.6
--log-dir=/var/log/kubernetes \
--v=2"

Create the systemd unit. On master1

vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Copy the files into place and sync them to the other masters. On master1

cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/
rsync -vaz kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
rsync -vaz kube-controller-manager*.pem master3:/etc/kubernetes/ssl/
rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master2:/etc/kubernetes/
rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master3:/etc/kubernetes/
rsync -vaz kube-controller-manager.service master2:/usr/lib/systemd/system/
rsync -vaz kube-controller-manager.service master3:/usr/lib/systemd/system/

Start the service. Run on master1, master2, master3

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

Check which address the port is bound to. On master1

ss -antulp | grep :10252

5. Deploy the kube-scheduler component

Create the CSR request. On master1

vim kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.10.1.11",
    "10.10.1.12",
    "10.10.1.13",
    "10.10.1.99"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}

Generate the certificate. On master1

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kube-scheduler kubeconfig

Set the cluster entry. On master1

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.1.11:6443 --kubeconfig=kube-scheduler.kubeconfig

Set the client credentials. On master1

kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

Set the context. On master1

kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Switch to the context. On master1

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Create kube-scheduler.conf. On master1 (the flag annotated below should be deleted for 1.23.6)

vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \   # delete this flag for 1.23.6
--log-dir=/var/log/kubernetes \
--v=2"

Create the systemd unit. On master1

vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Copy the files into place and sync them to the other masters. On master1

cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/
rsync -vaz kube-scheduler*.pem master2:/etc/kubernetes/ssl/
rsync -vaz kube-scheduler*.pem master3:/etc/kubernetes/ssl/
rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master2:/etc/kubernetes/
rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master3:/etc/kubernetes/
rsync -vaz kube-scheduler.service master2:/usr/lib/systemd/system/
rsync -vaz kube-scheduler.service master3:/usr/lib/systemd/system/

Start the service. Run on master1, master2, master3

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
Check the port binding:
ss -antulp | grep :10251

6. Load the offline image archive. On node1

Upload pause-cordns.tar.gz to node1 and load it:

docker load -i pause-cordns.tar.gz

7. Deploy the kubelet component

Create kubelet-bootstrap.kubeconfig. On master1

cd /data/work/
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
rm -f kubelet-bootstrap.kubeconfig   # remove any previous copy
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.1.11:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the kubelet.json configuration file. On master1

"cgroupDriver": "systemd" must match Docker's cgroup driver, and address must be node1's own IP.
Check Docker's driver:
docker info | grep "Cgroup Driver"
and set cgroupDriver to whatever it reports.
vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "10.10.1.21",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

Create the systemd unit. On master1

vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.json \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
--alsologtostderr=true \
--logtostderr=false \   # delete this flag for 1.23.6
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Notes:

--hostname-override: the node's display name, unique within the cluster
--network-plugin: enables CNI
--kubeconfig: an empty path; the file is generated automatically at first bootstrap and used afterwards to talk to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: the configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the pod sandbox (pause) container

Note: in kubelet.json, set address to each worker node's own IP, then start the service on each worker node.

Create the certificate directory. On node1

mkdir /etc/kubernetes/ssl -p

Copy the files to node1. On master1

scp kubelet-bootstrap.kubeconfig kubelet.json node1:/etc/kubernetes/
scp ca.pem node1:/etc/kubernetes/ssl/
scp kubelet.service node1:/usr/lib/systemd/system/

Start the kubelet service. On node1

mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
Check for pending CSRs. On master1
kubectl get csr
Take the NAME from the output and approve the node certificate (the name below is from this particular run; substitute your own):
kubectl certificate approve node-csr-SY6gROGEmH0qVZhMVhJKKWN3UaWkKKQzV8dopoIO9Uc
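
With many workers this gets tedious; all pending CSRs can be approved in one pass (a convenience sketch, not part of the original steps):

kubectl get csr -o name | xargs kubectl certificate approve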
List the nodes:
kubectl get nodes

Note: a STATUS of NotReady only means that no network plugin has been installed yet.

8. Deploy the kube-proxy component

Create the CSR request. On master1

vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Generate the certificate. On master1

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Create the kubeconfig file. On master1

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.1.11:6443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Create the kube-proxy config file. On master1 (bindAddress is node1's own IP; clusterCIDR should be the Pod network, 10.0.0.0/16 in this plan)

vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.10.1.21
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16
healthzBindAddress: 10.10.1.21:10256
kind: KubeProxyConfiguration
metricsBindAddress: 10.10.1.21:10249
mode: "ipvs"

Create the systemd unit. On master1

vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Copy the files to node1. On master1

scp kube-proxy.kubeconfig kube-proxy.yaml node1:/etc/kubernetes/
scp kube-proxy.service node1:/usr/lib/systemd/system/

Start the service. On node1

mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
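
Since mode: "ipvs" was requested, it is worth confirming that kube-proxy actually programmed IPVS virtual servers (ipvsadm was installed with the base packages); a quick check on node1:

ipvsadm -Ln   # should list a virtual server for the kubernetes Service IP 10.255.0.1:443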

9. Deploy the Calico component

Upload calico.tar.gz to node1 and load the images. On node1:

docker load -i calico.tar.gz

Upload calico.yaml to /data/work on master1, then apply it. On master1:

kubectl apply -f calico.yaml
kubectl get pods -n kube-system

kubectl get nodes

10. Deploy the CoreDNS component

Upload coredns.yaml to /root on master1 and apply it. On master1:

cd ~
kubectl apply -f coredns.yaml
kubectl get pods -n kube-system

kubectl get svc -n kube-system

Check the cluster state. On master1

kubectl get pods -n kube-system -o wide
kubectl get nodes

11. Test the cluster by deploying a Tomcat service

Upload tomcat.tar.gz and busybox-1-28.tar.gz to node1 and load them. On node1:

docker load -i tomcat.tar.gz
docker load -i busybox-1-28.tar.gz

Upload tomcat.yaml and tomcat-service.yaml to master1 and apply them. On master1 (a sketch of both manifests follows the commands below):

kubectl apply -f tomcat.yaml
kubectl get pods

kubectl apply -f tomcat-service.yaml
kubectl get svc
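
tomcat.yaml and tomcat-service.yaml are not reproduced in this article. A minimal sketch consistent with what the article uses (the image tag is an assumption about the offline bundle; the NodePort matches the 30080 used below):

# tomcat.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: myapp
spec:
  containers:
  - name: tomcat
    image: tomcat:8.5-jre8-alpine   # assumed tag from the offline image bundle
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
---
# tomcat-service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080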

Browse to node1's IP on port 30080 to reach the Tomcat page.

Verify that CoreDNS works. On master1

kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
/ # ping www.baidu.com

A successful ping shows the pod has outbound network access.

/ # nslookup kubernetes.default.svc.cluster.local
/ # nslookup tomcat.default.svc.cluster.local

Delete the pod (optional):

kubectl delete pods busybox

IV. Install keepalived + nginx for kube-apiserver high availability
Upload epel.repo to /etc/yum.repos.d on master1 so that keepalived and nginx can be installed.

Copy epel.repo to master2, master3, and node1. On master1 (skip this step if the Aliyun repos were configured earlier):

scp /etc/yum.repos.d/epel.repo master2:/etc/yum.repos.d/
scp /etc/yum.repos.d/epel.repo master3:/etc/yum.repos.d/
scp /etc/yum.repos.d/epel.repo node1:/etc/yum.repos.d/

Install the nginx primary/backup pair. Run on master1 and master2:

yum install nginx keepalived -y

Edit the nginx config (identical on both). On master1 and master2:

vi /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {

worker_connections 1024;

}

# Layer-4 (TCP) load balancing across the master kube-apiserver instances
stream {

log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

access_log  /var/log/nginx/k8s-access.log  main;

upstream k8s-apiserver {
   server 10.10.1.11:6443;   # master1 APISERVER IP:PORT
   server 10.10.1.12:6443;   # master2 APISERVER IP:PORT
   server 10.10.1.13:6443;   # master3 APISERVER IP:PORT

}

server {
   listen 16443; # nginx runs on the masters alongside kube-apiserver, so it cannot also listen on 6443
   proxy_pass k8s-apiserver;
}

}

http {

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

access_log  /var/log/nginx/access.log  main;

sendfile            on;
tcp_nopush          on;
tcp_nodelay         on;
keepalive_timeout   65;
types_hash_max_size 2048;

include             /etc/nginx/mime.types;
default_type        application/octet-stream;

server {
    listen       80 default_server;
    server_name  _;

    location / {
    }
}

}

Configure keepalived, MASTER side. On master1

vi /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33              # change to the actual NIC name
    virtual_router_id 51         # VRRP router ID; unique per instance
    priority 100                 # priority; the backup is set to 90
    advert_int 1                 # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        10.10.1.99/24
    }
    track_script {
        check_nginx
    }
}

vrrp_script: names the script that checks nginx's state (keepalived decides failover from its result).

virtual_ipaddress: the virtual IP (VIP). Create the check script. On master1

vi /etc/keepalived/check_nginx.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -C nginx --no-header | wc -l)
if [ $counter -eq 0 ]; then
    # 2. If not, try to start it
    systemctl start nginx
    sleep 2
    # 3. Re-check after 2 seconds
    counter=$(ps -C nginx --no-header | wc -l)
    # 4. If nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        systemctl stop keepalived
    fi
fi
Make it executable. On master1
chmod +x /etc/keepalived/check_nginx.sh

Configure keepalived, BACKUP side. On master2

vi /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33              # change to the actual NIC name
    virtual_router_id 51         # VRRP router ID; unique per instance
    priority 90                  # priority; 90 on the backup
    advert_int 1                 # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        10.10.1.99/24
    }
    track_script {
        check_nginx
    }
}

vrrp_script: names the script that checks nginx's state (keepalived decides failover from its result).

virtual_ipaddress: the virtual IP (VIP). Create the check script. On master2

vi /etc/keepalived/check_nginx.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -C nginx --no-header | wc -l)
if [ $counter -eq 0 ]; then
    # 2. If not, try to start it
    systemctl start nginx
    sleep 2
    # 3. Re-check after 2 seconds
    counter=$(ps -C nginx --no-header | wc -l)
    # 4. If nginx is still down, stop keepalived so the VIP fails over
    if [ $counter -eq 0 ]; then
        systemctl stop keepalived
    fi
fi
Make it executable. On master2
chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived triggers failover based on the check script's exit code (0 = healthy, non-zero = failed).

Start the services. Run on master1 and master2:

systemctl daemon-reload
yum install nginx-mod-stream -y
systemctl start nginx
systemctl start keepalived
systemctl enable nginx keepalived

Check that the VIP is bound. On master1:

ip addr

Test keepalived failover:

Stop nginx on master1; the VIP should float to master2.
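
The full path through the load balancer can be checked the same way the apiserver was tested earlier; a 401 response again just means TLS and proxying work while no credentials were sent:

curl --insecure https://10.10.1.99:16443/version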

V. Point the worker node at the VIP instead of a single master
Replace the original 10.10.1.11:6443 with 10.10.1.99:16443 (the VIP, fronted by nginx). On node1:
sed -i 's#10.10.1.11:6443#10.10.1.99:16443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig
sed -i 's#10.10.1.11:6443#10.10.1.99:16443#' /etc/kubernetes/kubelet.json
sed -i 's#10.10.1.11:6443#10.10.1.99:16443#' /etc/kubernetes/kubelet.kubeconfig
sed -i 's#10.10.1.11:6443#10.10.1.99:16443#' /etc/kubernetes/kube-proxy.yaml
sed -i 's#10.10.1.11:6443#10.10.1.99:16443#' /etc/kubernetes/kube-proxy.kubeconfig
systemctl restart kubelet kube-proxy
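
A quick sanity check that the switch took effect and the node is still healthy:

grep 'server:' /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig   # on node1: both should show 10.10.1.99:16443
kubectl get nodes   # on master1: node1 should be Ready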
This completes the highly available cluster.
