Hands-On: Deploying a K8S 1.28.3+ Cluster on CentOS 7.9 from Binaries

Summary: This article walks through the full process of deploying a Kubernetes 1.28.3+ cluster on CentOS 7.9 from binaries, covering environment preparation, component installation, certificate generation, high-availability configuration, and network plugin deployment.

Prerequisite knowledge: ways to deploy a Kubernetes cluster

There are currently two main ways to deploy a Kubernetes cluster in production:
    - kubeadm:
        kubeadm is a K8S deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
    - Binary deployment:
        Download the release binaries from GitHub and deploy each component by hand to assemble the Kubernetes cluster.


Besides the two approaches introduced above, there are other deployment routes:
    - yum:
        Deprecated; the newest supported version is 1.5.2, released in 2017.
    - minikube:
        Suited to development environments; quickly builds a K8S cluster on Windows or Linux.
        Reference:
            https://minikube.sigs.k8s.io/docs/
    - rancher:
        Rancher's work on a lighter Kubernetes distribution gave rise to K3s.
        Reference:
            https://www.rancher.com/
    - KubeSphere:
        QingCloud's open-source KubeSphere deploys K8S clusters quickly.
        Reference:
            https://kubesphere.com.cn
    - kuboard:
        Another product that builds on k8s, adding many features of its own.
        Reference:
            https://kuboard.cn/
    - kubeasz:
        Deploys, scales out, and scales in Kubernetes clusters with ansible; the official docs already cover installation in great detail.
        Reference:
            https://github.com/easzlab/kubeasz/

    - Third-party cloud vendors:
        Cloud vendors such as AWS, Alibaba Cloud, Tencent Cloud, and JD Cloud all offer managed K8S products.

    - More third-party deployment tools:
        Reference:
           https://landscape.cncf.io/

I. Preparing the Environment for the K8S Binary Deployment

1. Cluster role assignment

Hostname       IP address   Role
k8s-master01   10.0.0.241   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master02   10.0.0.242   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master03   10.0.0.243   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-worker04   10.0.0.244   kubelet, kube-proxy
k8s-worker05   10.0.0.245   kubelet, kube-proxy
apiserver-lb   10.0.0.240   VIP of the apiserver load balancer

2. Install common packages on all nodes

    1. Configure the yum repos on all CentOS 7 nodes:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl  -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo


    2. Install the common packages on all nodes
yum -y install bind-utils expect rsync wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate bash-completion


Command to bundle the downloaded packages (you can skip this one; I use it when packaging software for offline/intranet installs!):
    mkdir 01-linux-env && find /var/cache/yum -name "*.rpm" | xargs mv -t 01-linux-env/

3. Passwordless login from k8s-master01 to the cluster and data synchronization

    1. Set the hostname; adapt the command below on each node
hostnamectl set-hostname k8s-master01

    2. Add hosts-file resolution for the hostnames
cat >> /etc/hosts <<'EOF'
10.0.0.240 apiserver-lb
10.0.0.241 k8s-master01
10.0.0.242 k8s-master02
10.0.0.243 k8s-master03
10.0.0.244 k8s-worker04
10.0.0.245 k8s-worker05
EOF


    3. Configure passwordless login to the other nodes
cat > password_free_login.sh <<'EOF'
#!/bin/bash
# author: Jason Yin

# Generate the key pair
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q

# Declare the server password; ideally all nodes share the same password, otherwise this script needs further tweaking
export mypasswd=yinzhengjie

# Define the host list
k8s_host_list=(k8s-master02 k8s-master03 k8s-worker04 k8s-worker05)

# Push the public key with expect to avoid interactive prompts
for i in ${k8s_host_list[@]};do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
  expect {
    \"*yes/no*\" {send \"yes\r\"; exp_continue}
    \"*password*\" {send \"$mypasswd\r\"; exp_continue}
  }"
done
EOF
sh password_free_login.sh



    4. Write the sync script
cat > /usr/local/sbin/data_rsync.sh <<'EOF'
#!/bin/bash
# Author: Jason Yin

if  [ $# -ne 1 ];then
   echo "Usage: $0 /path/to/file(绝对路径)"
   exit
fi 

if [ ! -e $1 ];then
    echo "[ $1 ] dir or file not find!"
    exit
fi

fullpath=`dirname $1`

basename=`basename $1`

cd $fullpath

k8s_host_list=(k8s-master02 k8s-master03 k8s-worker04 k8s-worker05)

for host in ${k8s_host_list[@]};do
  tput setaf 2
    echo ===== rsyncing ${host}: $basename =====
    tput setaf 7
    rsync -az $basename  `whoami`@${host}:$fullpath
    if [ $? -eq 0 ];then
      echo "命令执行成功!"
    fi
done
EOF
chmod +x /usr/local/sbin/data_rsync.sh


    5. Sync the "/etc/hosts" file to the cluster
data_rsync.sh /etc/hosts

4. Base Linux environment tuning on all nodes

    1. Disable firewalld, selinux, NetworkManager, and postfix on all nodes
systemctl disable --now NetworkManager firewalld postfix
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config


    2. Disable swap on all nodes and comment it out in fstab
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
free -h


    3. Sync the time on all nodes
        - One-off timezone and time sync
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate ntp.aliyun.com

        - Periodic sync via cron (you could also edit with "crontab -e", but I prefer the non-interactive approach below)
echo "*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com" > /var/spool/cron/root
crontab -l

    4. Configure limits on all nodes
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF


    5. Tune the sshd service on all nodes
sed -i 's@#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -i 's@^GSSAPIAuthentication yes@GSSAPIAuthentication no@g' /etc/ssh/sshd_config

        - The UseDNS option:
    When enabled, the SSH server first does a reverse DNS (PTR) lookup on the client's IP address to find its hostname, then a forward A-record lookup on that hostname to verify it matches the original IP. This is an anti-spoofing measure, but clients on dynamic IPs usually have no PTR record, so leaving the option on just wastes time during login; better to turn it off.

        - GSSAPIAuthentication:
    With this parameter enabled (GSSAPIAuthentication yes), SSH logins to the server can be very slow because GSSAPI is active on the server side. During login the client reverse-resolves the server's IP address, and if the server's IP has no PTR record, the login tends to hang at that step.
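
Optional check:
    The sed edits above only take effect after sshd reloads its configuration; "sshd -T" dumps the effective settings (key names print in lowercase), so a quick sanity check looks like this:
systemctl restart sshd
sshd -T | grep -Ei 'usedns|gssapiauthentication'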



    6. Kernel tuning
cat > /etc/sysctl.d/k8s.conf <<'EOF'
# The following 3 parameters are kernel prerequisites for containerd
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
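
A quick way to confirm the settings took effect is to query a few keys directly (note the net.bridge.* keys only resolve once the br_netfilter module is loaded, which the containerd section below takes care of):
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables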



    7. Change the terminal prompt colors
cat <<EOF >>  ~/.bashrc 
PS1='[\[\e[34;1m\]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\] \W\[\e[0m\]]# '
EOF
source ~/.bashrc

5. Upgrade the Linux kernel and update the system on all nodes

    1. Download the kernel packages on the k8s-master01 node
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

    2. Sync the downloaded packages from k8s-master01 to the other nodes
data_rsync.sh kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
data_rsync.sh kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm

    3. Install the kernel upgrade on all nodes
yum -y localinstall kernel-ml*

    4. Change the default boot kernel
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel

    5. Update packages on all nodes, excluding the kernel, since it is already at the desired version
yum -y update --exclude=kernel*

6. Install ipvsadm on all nodes so kube-proxy can load-balance via IPVS

    1. Install ipvsadm and related tools
yum -y install ipvsadm ipset sysstat conntrack libseccomp 

    2. Create the module autoload configuration on all nodes
cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

    3. Rename the ens33 NIC to eth0 (optional, but recommended)
        3.1 Edit the grub config file
vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"

        3.2 Regenerate the config with grub2-mkconfig.
grub2-mkconfig -o /boot/grub2/grub.cfg

        3.3 Update the NIC config files
mv /etc/sysconfig/network-scripts/ifcfg-{ens33,eth0}
sed -i 's#ens33#eth0#g' /etc/sysconfig/network-scripts/ifcfg-eth0
cat /etc/sysconfig/network-scripts/ifcfg-eth0 


    4. Reboot the OS
reboot 

Tip:
    If the system does not come up normally, check the renamed ifcfg-eth0 config, and don't forget to set the "DEVICE" field.

    5. Verify the loaded modules
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
uname -r
ifconfig

Tip:
    Linux kernel 4.19+ renamed the old "nf_conntrack_ipv4" module to "nf_conntrack".

II. Installing the K8S Base Components

1. Install containerd on all nodes

1.1 Install the containerd package on all nodes

wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install containerd.io


Tip:
    Strictly speaking only containerd is required, but installing docker as well does no harm and makes day-to-day management easier.
    If you want docker, I recommend the "docker-ce-20.10.24 docker-ce-cli-20.10.24" versions.

1.2 Configure the kernel modules containerd needs

    1. Load the modules manually for the current boot
modprobe -- overlay
modprobe -- br_netfilter

    2. Autoload the required modules at boot
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
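
    3. Verify that both modules are actually loaded (works right after the modprobe commands, and again after a reboot):
lsmod | grep -e overlay -e br_netfilter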

1.3 Modify the containerd config file

    1. Regenerate the default containerd config
containerd config default | tee /etc/containerd/config.toml 

    2. Switch the cgroup driver to systemd
sed -ri 's#(SystemdCgroup = )false#\1true#' /etc/containerd/config.toml 
grep SystemdCgroup /etc/containerd/config.toml

    3. Change the pause sandbox image
sed -i 's#registry.k8s.io/pause:3.6#registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml

1.4 Start containerd on all nodes

    1. Start the containerd service
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd

    2. Point the crictl client at the containerd runtime socket
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
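
With containerd already running, crictl should now be able to reach the socket; "crictl version" is a minimal connectivity check through the endpoint we just configured:
crictl version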

    3. Check the containerd version
[root@k8s-master01 ~]# ctr version
Client:
  Version:  1.6.27
  Revision: a1496014c916f9e62104b33d1bb5bd03b0858e59
  Go version: go1.20.13

Server:
  Version:  1.6.27
  Revision: a1496014c916f9e62104b33d1bb5bd03b0858e59
  UUID: 4a5766bc-691f-49be-9182-b467ed31e330
[root@k8s-master01 ~]#

2. Install etcd

2.1 Download the etcd package

wget https://github.com/etcd-io/etcd/releases/download/v3.5.10/etcd-v3.5.10-linux-amd64.tar.gz

2.2 Extract the etcd binaries into a directory on PATH

    1. Extract the package
tar -xf etcd-v3.5.10-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.10-linux-amd64/etcd{,ctl}

    2. Check the etcd version
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.10
API version: 3.5
[root@k8s-master01 ~]#

2.3 Distribute the binaries to the other master nodes

[root@k8s-master01 ~]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

3. Install the K8S components

3.1 Download the K8S binaries

3.1.1 Choose the K8S version

On the Kubernetes release page, pick the version number you want to download.

3.1.2 Open the CHANGELOG directory

Open the CHANGELOG directory to browse the changelog for each release.

3.1.3 Read the CHANGELOG for the target K8S version

At the time of writing, the latest official release is K8S 1.28, which is the version I will deploy.

3.1.4 Find the server binary package link

K8S 1.28.3 is currently the latest patch release; download the corresponding server binary package.

3.1.5 Pick the K8S package matching your CPU architecture

wget https://dl.k8s.io/v1.28.3/kubernetes-server-linux-amd64.tar.gz

3.2 Extract the K8S binaries into a directory on PATH

    1. Extract the package
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}


    2. Check the kubelet version
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.28.3
[root@k8s-master01 ~]#

3.3 Distribute the binaries to all nodes

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-worker04 k8s-worker05'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do echo $NODE; scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done

III. Generating the K8S and etcd Certificates

1. Install the cfssl certificate tooling

GitHub download:
    https://github.com/cloudflare/cfssl

Tip:
    Generating the K8S and etcd certificates is a critical step; I suggest snapshotting every node in the cluster before this lab so you can roll back if something fails.
    You can download cfssl from GitHub yourself, or use the bundle I provide in class.

Steps:
    1. Unzip the archive
[root@k8s-master01 ~]# unzip yinzhengjie-cfssl.zip 

    2. Strip the version suffix from the filenames
[root@k8s-master01 ~]# rename _1.6.4_linux_amd64 "" *

    3. Move the cfssl binaries onto PATH and make them executable
[root@k8s-master01 ~]# mv cfssl* /usr/local/bin/
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl*
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# ll /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 12054528 Aug 30 15:46 /usr/local/bin/cfssl
-rwxr-xr-x 1 root root  9560064 Aug 30 15:45 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x 1 root root  7643136 Aug 30 15:48 /usr/local/bin/cfssljson
[root@k8s-master01 ~]#

2. Generate the etcd certificates

2.1 Create the etcd certificate directories on the k8s-master01 node

[root@k8s-master01 ~]# mkdir -pv /yinzhengjie/certs/{etcd,pki}/ && cd /yinzhengjie/certs/pki/

2.2 Generate a self-signed etcd CA on the k8s-master01 node

    1. Create the CSR file: the certificate signing request, carrying the domain names, organization, and unit
[root@k8s-master01 pki]# cat etcd-ca-csr.json 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
[root@k8s-master01 pki]# 


    2. Generate the etcd CA certificate and its key
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /yinzhengjie/certs/etcd/etcd-ca
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /yinzhengjie/certs/etcd/
total 12
-rw-r--r-- 1 root root 1050 Jan  7 15:36 etcd-ca.csr
-rw------- 1 root root 1679 Jan  7 15:36 etcd-ca-key.pem
-rw-r--r-- 1 root root 1318 Jan  7 15:36 etcd-ca.pem
[root@k8s-master01 pki]#

2.3 Issue the etcd server certificate from the self-signed CA on the k8s-master01 node

    1. Set the etcd certificate validity to 100 years
[root@k8s-master01 pki]# cat ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
[root@k8s-master01 pki]# 


    2. Create the CSR file: the certificate signing request, carrying the domain names, organization, and unit
[root@k8s-master01 pki]# cat etcd-csr.json 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
[root@k8s-master01 pki]# 


    3. Generate the etcd certificate from the self-signed etcd CA
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/yinzhengjie/certs/etcd/etcd-ca.pem \
  -ca-key=/yinzhengjie/certs/etcd/etcd-ca-key.pem \
  -config=ca-config.json \
  --hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.0.0.241,10.0.0.242,10.0.0.243 \
  --profile=kubernetes \
  etcd-csr.json  | cfssljson -bare /yinzhengjie/certs/etcd/etcd-server

[root@k8s-master01 pki]# ll /yinzhengjie/certs/etcd/
total 24
-rw-r--r-- 1 root root 1050 Jan  7 15:36 etcd-ca.csr
-rw------- 1 root root 1679 Jan  7 15:36 etcd-ca-key.pem
-rw-r--r-- 1 root root 1318 Jan  7 15:36 etcd-ca.pem
-rw-r--r-- 1 root root 1131 Jan  7 15:40 etcd-server.csr
-rw------- 1 root root 1679 Jan  7 15:40 etcd-server-key.pem
-rw-r--r-- 1 root root 1464 Jan  7 15:40 etcd-server.pem
[root@k8s-master01 pki]#
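
It's worth confirming that the issued certificate really carries the hostnames and IPs passed via "--hostname"; openssl can print the SAN list (an optional check):
openssl x509 -in /yinzhengjie/certs/etcd/etcd-server.pem -noout -text | grep -A1 'Subject Alternative Name'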

2.4 Copy the etcd certificates from k8s-master01 to the other two master nodes

MasterNodes='k8s-master02 k8s-master03'

for NODE in $MasterNodes; do
     echo $NODE; ssh $NODE "mkdir -pv /yinzhengjie/certs/etcd/"
     for FILE in etcd-ca-key.pem etcd-ca.pem etcd-server-key.pem etcd-server.pem; do
       scp /yinzhengjie/certs/etcd/${FILE} $NODE:/yinzhengjie/certs/etcd/${FILE}
     done
 done

3. Generate the K8S component certificates

3.1 Create the K8S certificate directory on all nodes

[root@k8s-master01 pki]# mkdir -pv /yinzhengjie/certs/kubernetes/

3.2 Generate a self-signed Kubernetes CA on the k8s-master01 node

    1. Create the CSR file: the certificate signing request, carrying the domain names, organization, and unit
[root@k8s-master01 pki]# cat k8s-ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
[root@k8s-master01 pki]# 



    2. Generate the Kubernetes CA certificate
[root@k8s-master01 pki]# cfssl gencert -initca k8s-ca-csr.json | cfssljson -bare /yinzhengjie/certs/kubernetes/k8s-ca
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/
total 12
-rw-r--r-- 1 root root 1070 Jan  7 15:47 k8s-ca.csr
-rw------- 1 root root 1675 Jan  7 15:47 k8s-ca-key.pem
-rw-r--r-- 1 root root 1363 Jan  7 15:47 k8s-ca.pem
[root@k8s-master01 pki]#

3.3 Issue the apiserver certificates from the self-signed CA on the k8s-master01 node

    1. Set the K8S certificate validity to 100 years
[root@k8s-master01 pki]# cat k8s-ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
[root@k8s-master01 pki]# 

    2. Create the apiserver CSR file: the certificate signing request, carrying the domain names, organization, and unit
[root@k8s-master01 pki]# cat apiserver-csr.json 
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
[root@k8s-master01 pki]# 

    3. Generate the apiserver certificate from the self-signed CA
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  -ca-key=/yinzhengjie/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  --hostname=10.200.0.1,10.0.0.240,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.yinzhengjie,kubernetes.default.svc.yinzhengjie.com,10.0.0.241,10.0.0.242,10.0.0.243,10.0.0.244,10.0.0.245 \
  --profile=kubernetes \
   apiserver-csr.json  | cfssljson -bare /yinzhengjie/certs/kubernetes/apiserver

[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/apiserver*
-rw-r--r-- 1 root root 1314 Jan  7 17:03 /yinzhengjie/certs/kubernetes/apiserver.csr
-rw------- 1 root root 1679 Jan  7 17:03 /yinzhengjie/certs/kubernetes/apiserver-key.pem
-rw-r--r-- 1 root root 1712 Jan  7 17:03 /yinzhengjie/certs/kubernetes/apiserver.pem
[root@k8s-master01 pki]# 




Tip:
    "10.200.0.1" is the first address of our service (svc) network; adjust it for your environment.
    "10.0.0.240" is the VIP of the load balancer.
    "kubernetes,...,kubernetes.default.svc.yinzhengjie.com" are the A records the apiserver resolves to.
    "10.0.0.241,...,10.0.0.245" are the addresses of the K8S cluster nodes.

3.4 Generate the aggregation (front-proxy) certificates for communication with the apiserver

The aggregation certificates allow third-party components (such as metrics-server) to communicate with the apiserver through the aggregation layer.

    1. Create the CSR for the front-proxy self-signed CA
[root@k8s-master01 pki]# cat front-proxy-ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
[root@k8s-master01 pki]# 


    2. Generate the self-signed front-proxy CA certificate
[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /yinzhengjie/certs/kubernetes/front-proxy-ca
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/front-proxy-ca*
-rw-r--r-- 1 root root  891 Jan  7 17:05 /yinzhengjie/certs/kubernetes/front-proxy-ca.csr
-rw------- 1 root root 1675 Jan  7 17:05 /yinzhengjie/certs/kubernetes/front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1094 Jan  7 17:05 /yinzhengjie/certs/kubernetes/front-proxy-ca.pem
[root@k8s-master01 pki]# 


    3. Create the front-proxy client CSR
[root@k8s-master01 pki]# cat front-proxy-client-csr.json 
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
[root@k8s-master01 pki]# 


    4. Issue the front-proxy client certificate from the front-proxy CA
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/yinzhengjie/certs/kubernetes/front-proxy-ca.pem \
  -ca-key=/yinzhengjie/certs/kubernetes/front-proxy-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare /yinzhengjie/certs/kubernetes/front-proxy-client
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/front-proxy-client*
-rw-r--r-- 1 root root  903 Jan  7 17:06 /yinzhengjie/certs/kubernetes/front-proxy-client.csr
-rw------- 1 root root 1679 Jan  7 17:06 /yinzhengjie/certs/kubernetes/front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 Jan  7 17:06 /yinzhengjie/certs/kubernetes/front-proxy-client.pem
[root@k8s-master01 pki]#
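
An optional way to inspect the client identity baked into this certificate: the CN is what aggregated API servers compare against the list the apiserver publishes via "--requestheader-allowed-names". The unit files below set that flag to "aggregator" while the CN here is "front-proxy-client", so if your extension apiservers enforce the allowed-names list, consider adding "front-proxy-client" to the flag or changing the CN:
cfssl-certinfo -cert /yinzhengjie/certs/kubernetes/front-proxy-client.pem | grep common_name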

3.5 Generate the controller-manager certificate and kubeconfig

    1. Create the controller-manager CSR
[root@k8s-master01 pki]# cat controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
[root@k8s-master01 pki]# 


    2. Generate the controller-manager certificate
[root@k8s-master01 pki]#  cfssl gencert \
  -ca=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  -ca-key=/yinzhengjie/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  controller-manager-csr.json | cfssljson -bare /yinzhengjie/certs/kubernetes/controller-manager

[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/controller-manager*
-rw-r--r-- 1 root root 1082 Nov  5 11:31 /yinzhengjie/certs/kubernetes/controller-manager.csr
-rw------- 1 root root 1675 Nov  5 11:31 /yinzhengjie/certs/kubernetes/controller-manager-key.pem
-rw-r--r-- 1 root root 1501 Nov  5 11:31 /yinzhengjie/certs/kubernetes/controller-manager.pem
[root@k8s-master01 pki]#

    3. Create a kubeconfig directory
[root@k8s-master01 pki]# mkdir -pv /yinzhengjie/certs/kubeconfig

    4. Define the cluster
[root@k8s-master01 pki]# kubectl config set-cluster yinzhengjie-k8s \
  --certificate-authority=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-controller-manager.kubeconfig

    5. Define the user
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/yinzhengjie/certs/kubernetes/controller-manager.pem \
  --client-key=/yinzhengjie/certs/kubernetes/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-controller-manager.kubeconfig

    6. Define the context
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=yinzhengjie-k8s \
  --user=system:kube-controller-manager \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-controller-manager.kubeconfig

    7. Switch to the context
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-controller-manager.kubeconfig
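
    8. Optional check: print the generated kubeconfig; kubectl redacts the embedded certificate data by default. The same check applies to the scheduler and admin kubeconfigs created next.
kubectl config view --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-controller-manager.kubeconfig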

3.6 Generate the scheduler certificate and kubeconfig

    1. Create the scheduler CSR
[root@k8s-master01 pki]# cat scheduler-csr.json 
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
[root@k8s-master01 pki]# 

    2. Generate the scheduler certificate
[root@k8s-master01 pki]#  cfssl gencert \
  -ca=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  -ca-key=/yinzhengjie/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /yinzhengjie/certs/kubernetes/scheduler

[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/scheduler*
-rw-r--r-- 1 root root 1058 Jan  7 18:56 /yinzhengjie/certs/kubernetes/scheduler.csr
-rw------- 1 root root 1679 Jan  7 18:56 /yinzhengjie/certs/kubernetes/scheduler-key.pem
-rw-r--r-- 1 root root 1476 Jan  7 18:56 /yinzhengjie/certs/kubernetes/scheduler.pem
[root@k8s-master01 pki]# 


    3. Define the cluster
[root@k8s-master01 pki]# kubectl config set-cluster yinzhengjie-k8s \
  --certificate-authority=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-scheduler.kubeconfig


    4. Define the user
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/yinzhengjie/certs/kubernetes/scheduler.pem \
  --client-key=/yinzhengjie/certs/kubernetes/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-scheduler.kubeconfig

    5. Define the context
[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=yinzhengjie-k8s \
  --user=system:kube-scheduler \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-scheduler.kubeconfig

    6. Switch to the context
[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-scheduler.kubeconfig

3.7 Generate the cluster admin certificate and kubeconfig

    1. Create the admin CSR
[root@k8s-master01 pki]# cat admin-csr.json
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}

[root@k8s-master01 pki]# 


    2. Generate the cluster admin certificate
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  -ca-key=/yinzhengjie/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /yinzhengjie/certs/kubernetes/admin

[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/admin*
-rw-r--r-- 1 root root 1025 Jan  7 19:00 /yinzhengjie/certs/kubernetes/admin.csr
-rw------- 1 root root 1675 Jan  7 19:00 /yinzhengjie/certs/kubernetes/admin-key.pem
-rw-r--r-- 1 root root 1444 Jan  7 19:00 /yinzhengjie/certs/kubernetes/admin.pem
[root@k8s-master01 pki]# 



    3. Define the cluster
[root@k8s-master01 pki]#  kubectl config set-cluster yinzhengjie-k8s \
  --certificate-authority=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-admin.kubeconfig


    4. Define the user
[root@k8s-master01 pki]#  kubectl config set-credentials kube-admin \
  --client-certificate=/yinzhengjie/certs/kubernetes/admin.pem \
  --client-key=/yinzhengjie/certs/kubernetes/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-admin.kubeconfig

    5. Define the context
[root@k8s-master01 pki]#  kubectl config set-context kube-admin@kubernetes \
  --cluster=yinzhengjie-k8s \
  --user=kube-admin \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-admin.kubeconfig

    6. Switch to the context
[root@k8s-master01 pki]#  kubectl config use-context kube-admin@kubernetes \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-admin.kubeconfig

3.8 Create the ServiceAccount key pair

    1. ServiceAccounts are one of the k8s authentication mechanisms; creating a ServiceAccount creates a bound secret, and that secret carries a token signed with this key pair
[root@k8s-master01 pki]# openssl genrsa -out /yinzhengjie/certs/kubernetes/sa.key 2048


    2. Derive sa.pub from sa.key
[root@k8s-master01 pki]# openssl rsa -in /yinzhengjie/certs/kubernetes/sa.key -pubout -out /yinzhengjie/certs/kubernetes/sa.pub
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/sa*
-rw-r--r-- 1 root root 1679 Jan  7 19:02 /yinzhengjie/certs/kubernetes/sa.key
-rw-r--r-- 1 root root  451 Jan  7 19:03 /yinzhengjie/certs/kubernetes/sa.pub
[root@k8s-master01 pki]#
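
A quick sketch to confirm sa.pub really is the public half of sa.key; diff prints nothing when the two match:
openssl rsa -in /yinzhengjie/certs/kubernetes/sa.key -pubout 2>/dev/null | diff - /yinzhengjie/certs/kubernetes/sa.pub && echo 'key pair matches'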

3.9 Copy the K8S component certificates from k8s-master01 to the other two master nodes

    1. Copy the K8S certificates and kubeconfigs from k8s-master01 to the other two master nodes
[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do 
       echo $NODE; ssh $NODE "mkdir -pv /yinzhengjie/certs/{kubernetes,kubeconfig}"
     for FILE in $(ls /yinzhengjie/certs/kubernetes); do 
        scp /yinzhengjie/certs/kubernetes/${FILE} $NODE:/yinzhengjie/certs/kubernetes/${FILE};
     done; 
     for FILE in kube-admin.kubeconfig  kube-controller-manager.kubeconfig  kube-scheduler.kubeconfig; do 
        scp /yinzhengjie/certs/kubeconfig/${FILE} $NODE:/yinzhengjie/certs/kubeconfig/${FILE};
     done;
done


    2. Verify the file count on the other two nodes
[root@k8s-master02 ~]# ls /yinzhengjie/certs/kubernetes  | wc -l
23
[root@k8s-master02 ~]# 

[root@k8s-master03 ~]# ls /yinzhengjie/certs/kubernetes  | wc -l
23
[root@k8s-master03 ~]#

IV. Deploying the Highly Available K8S Cluster

1. Install the HA components haproxy + keepalived

1.1 Install the HA components on all master nodes

Tip:
    - The HA components could also live on two dedicated VMs, but to save two machines I co-locate them on the master nodes.
    - When installing K8S in the cloud there is no need for these HA components, since most public clouds do not support keepalived; use the cloud's own products instead, such as Alibaba Cloud "SLB" or Tencent Cloud "ELB".
    - ELB is recommended: SLB has a loopback problem, i.e. the servers behind an SLB cannot reach the SLB address themselves, whereas Tencent Cloud has fixed this issue.


Hands-on:
    yum -y install keepalived haproxy

1.2 Configure haproxy on all master nodes

Tip:
    - I configured the haproxy load balancer to listen on 8443; you can change it to another port. haproxy reverse-proxies the addresses of the master components.
    - If you really do change it, be sure to update the kubeconfig files generated in the certificate section above as well, or connecting to the cluster will fail.


Hands-on:
    1. Back up the config file
cp /etc/haproxy/haproxy.cfg{,`date +%F`}


    2. The config file content is identical on every node
cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-haproxy
  bind *:33305
  mode http
  option httplog
  monitor-uri /ayouok

frontend yinzhengjie-k8s
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend yinzhengjie-k8s

backend yinzhengjie-k8s
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01   10.0.0.241:6443  check
  server k8s-master02   10.0.0.242:6443  check
  server k8s-master03   10.0.0.243:6443  check
EOF
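
Before starting the service you can let haproxy validate the file itself; "-c" checks the configuration without opening any listener:
haproxy -c -f /etc/haproxy/haproxy.cfg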

1.3 Configure keepalived on all master nodes

Tip:
    - The "interface" field must be the name of your physical NIC; if your NIC is ens33, change "eth0" to "ens33".
    - The "mcast_src_ip" differs on each master node; adjust it to your actual environment.
    - "virtual_ipaddress" is the load balancer VIP, and it must match the apiserver address in the kubeconfig files.
    - The "script" field points at the script that checks whether the backend apiserver is healthy.
    - "router_id" is the node IP; each master configures its own.

Hands-on:
    1. Back up the config file
cp /etc/keepalived/keepalived.conf{,`date +%F`}

    2."k8s-master01"节点创建配置文件
[root@k8s-master01 ~]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.241  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0c:29:32:73:ac  txqueuelen 1000  (Ethernet)
        RX packets 324292  bytes 234183010 (223.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 242256  bytes 31242156 (29.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

[root@k8s-master01 ~]# 
[root@k8s-master01 ~]#  cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.241
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.241
    nopreempt
    authentication {
        auth_type PASS
        auth_pass yinzhengjie_k8s
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF


    3."k8s-master02"节点创建配置文件
[root@k8s-master02 ~]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.242  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0c:29:cf:ad:0a  txqueuelen 1000  (Ethernet)
        RX packets 256743  bytes 42628265 (40.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 252589  bytes 34277384 (32.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

[root@k8s-master02 ~]# 
[root@k8s-master02 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.242
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.242
    nopreempt
    authentication {
        auth_type PASS
        auth_pass yinzhengjie_k8s
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF

    4."k8s-master03"节点创建配置文件
[root@k8s-master03 ~]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.243  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0c:29:5f:f7:4f  txqueuelen 1000  (Ethernet)
        RX packets 178577  bytes 34808750 (33.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 171025  bytes 26471309 (25.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

[root@k8s-master03 ~]# 
[root@k8s-master03 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.243
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.243
    nopreempt
    authentication {
        auth_type PASS
        auth_pass yinzhengjie_k8s
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF

    5. Create the health check script on every keepalived node
cat > /etc/keepalived/check_port.sh <<'EOF'
#!/bin/bash
# Exit non-zero when the given port is not listening, so keepalived drops this node's priority.
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=`ss -lt | grep $CHK_PORT | wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT Is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi
EOF
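
Make the script executable and give it a dry run; keepalived invokes it with the port as the argument, and once haproxy is listening on 8443 it exits silently, otherwise it exits 1 and the "weight -20" penalty kicks in:
chmod +x /etc/keepalived/check_port.sh
sh /etc/keepalived/check_port.sh 8443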

1.4 Start and verify the keepalived service

    1. Start the keepalived service
systemctl daemon-reload
systemctl enable --now keepalived
systemctl status keepalived


    2. Verify the service works
[root@k8s-master03 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:5f:f7:4f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.243/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.240/32 scope global eth0
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
[root@k8s-master03 ~]# 
[root@k8s-master03 ~]# ping 10.0.0.240
PING 10.0.0.240 (10.0.0.240) 56(84) bytes of data.
64 bytes from 10.0.0.240: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 10.0.0.240: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 10.0.0.240: icmp_seq=3 ttl=64 time=0.023 ms
...


    3. In a separate terminal, stop the keepalived service
[root@k8s-master03 ~]# systemctl stop keepalived


    4. Watch the ping output again
[root@k8s-master03 ~]# ping 10.0.0.240
PING 10.0.0.240 (10.0.0.240) 56(84) bytes of data.
64 bytes from 10.0.0.240: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 10.0.0.240: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 10.0.0.240: icmp_seq=3 ttl=64 time=0.023 ms
...
64 bytes from 10.0.0.240: icmp_seq=36 ttl=64 time=0.037 ms
64 bytes from 10.0.0.240: icmp_seq=37 ttl=64 time=0.023 ms
From 10.0.0.242: icmp_seq=38 Redirect Host(New nexthop: 10.0.0.240)
From 10.0.0.242: icmp_seq=39 Redirect Host(New nexthop: 10.0.0.240)
64 bytes from 10.0.0.240: icmp_seq=40 ttl=64 time=1.81 ms
64 bytes from 10.0.0.240: icmp_seq=41 ttl=64 time=0.680 ms
64 bytes from 10.0.0.240: icmp_seq=42 ttl=64 time=0.751 ms


    5. Verify the VIP failed over; sure enough, it really did move to another master node!
[root@k8s-master02 ~]# ip a
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:cf:ad:0a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.242/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.240/32 scope global eth0
       valid_lft forever preferred_lft forever
...
[root@k8s-master02 ~]#

1.5 Start and verify the haproxy service

    1. Start the haproxy service
systemctl enable --now haproxy 
systemctl status haproxy 

    2. Verify haproxy with telnet
[root@k8s-master02 ~]# telnet 10.0.0.240 8443
Trying 10.0.0.240...
Connected to 10.0.0.240.
Escape character is '^]'.
Connection closed by foreign host.
[root@k8s-master02 ~]# 

    3. Verify via the monitor URI
[root@k8s-master02 ~]# curl http://10.0.0.240:33305/ayouok
<html><body><h1>200 OK</h1>
Service ready.
</body></html>
[root@k8s-master02 ~]#

2. Start the etcd cluster

2.1 Create each etcd node's config file

    1. Config file for the k8s-master01 node
[root@k8s-master01 ~]# mkdir -pv /yinzhengjie/softwares/etcd
[root@k8s-master01 ~]# cat > /yinzhengjie/softwares/etcd/etcd.config.yml <<'EOF'
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.241:2380'
listen-client-urls: 'https://10.0.0.241:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.241:2380'
advertise-client-urls: 'https://10.0.0.241:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.241:2380,k8s-master02=https://10.0.0.242:2380,k8s-master03=https://10.0.0.243:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/yinzhengjie/certs/etcd/etcd-server.pem'
  key-file: '/yinzhengjie/certs/etcd/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/yinzhengjie/certs/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/yinzhengjie/certs/etcd/etcd-server.pem'
  key-file: '/yinzhengjie/certs/etcd/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/yinzhengjie/certs/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF


    2. Config file for the k8s-master02 node
[root@k8s-master02 ~]# mkdir -pv /yinzhengjie/softwares/etcd
[root@k8s-master02 ~]# cat > /yinzhengjie/softwares/etcd/etcd.config.yml << 'EOF'
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.242:2380'
listen-client-urls: 'https://10.0.0.242:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.242:2380'
advertise-client-urls: 'https://10.0.0.242:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.241:2380,k8s-master02=https://10.0.0.242:2380,k8s-master03=https://10.0.0.243:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/yinzhengjie/certs/etcd/etcd-server.pem'
  key-file: '/yinzhengjie/certs/etcd/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/yinzhengjie/certs/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/yinzhengjie/certs/etcd/etcd-server.pem'
  key-file: '/yinzhengjie/certs/etcd/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/yinzhengjie/certs/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF


    3. Config file for the k8s-master03 node
[root@k8s-master03 ~]# mkdir -pv /yinzhengjie/softwares/etcd
[root@k8s-master03 ~]# cat > /yinzhengjie/softwares/etcd/etcd.config.yml << 'EOF'
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.243:2380'
listen-client-urls: 'https://10.0.0.243:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.243:2380'
advertise-client-urls: 'https://10.0.0.243:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.241:2380,k8s-master02=https://10.0.0.242:2380,k8s-master03=https://10.0.0.243:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/yinzhengjie/certs/etcd/etcd-server.pem'
  key-file: '/yinzhengjie/certs/etcd/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/yinzhengjie/certs/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/yinzhengjie/certs/etcd/etcd-server.pem'
  key-file: '/yinzhengjie/certs/etcd/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/yinzhengjie/certs/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

2.2 Write the etcd systemd unit

cat > /usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=Jason Yin's Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/yinzhengjie/softwares/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

2.3 Start the etcd cluster

    1. Start the service
systemctl daemon-reload && systemctl enable --now etcd
systemctl status etcd

    2. Check the etcd cluster status
etcdctl --endpoints="10.0.0.241:2379,10.0.0.242:2379,10.0.0.243:2379" --cacert=/yinzhengjie/certs/etcd/etcd-ca.pem --cert=/yinzhengjie/certs/etcd/etcd-server.pem --key=/yinzhengjie/certs/etcd/etcd-server-key.pem  endpoint status --write-out=table
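
    3. "endpoint health" complements "endpoint status" by performing an actual read against each member, so it also catches members that respond but cannot serve reads:
etcdctl --endpoints="10.0.0.241:2379,10.0.0.242:2379,10.0.0.243:2379" --cacert=/yinzhengjie/certs/etcd/etcd-ca.pem --cert=/yinzhengjie/certs/etcd/etcd-server.pem --key=/yinzhengjie/certs/etcd/etcd-server-key.pem  endpoint health --write-out=table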

2.4 Verify etcd cluster high availability

[root@k8s-master01 ~]# systemctl stop etcd  # deliberately stop one etcd node and see whether the cluster status is still queryable
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl --endpoints="10.0.0.241:2379,10.0.0.242:2379,10.0.0.243:2379" --cacert=/yinzhengjie/certs/etcd/etcd-ca.pem --cert=/yinzhengjie/certs/etcd/etcd-server.pem --key=/yinzhengjie/certs/etcd/etcd-server-key.pem  endpoint status --write-out=table
...
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.0.0.242:2379 | b83b69ba7d246b29 |  3.5.10 |   29 kB |      true |      false |         3 |         10 |                 10 |        |
| 10.0.0.243:2379 |  47b70f9ecb1f200 |  3.5.10 |   29 kB |     false |      false |         3 |         10 |                 10 |        |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# systemctl start etcd
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl --endpoints="10.0.0.241:2379,10.0.0.242:2379,10.0.0.243:2379" --cacert=/yinzhengjie/certs/etcd/etcd-ca.pem --cert=/yinzhengjie/certs/etcd/etcd-server.pem --key=/yinzhengjie/certs/etcd/etcd-server-key.pem  endpoint status --write-out=table
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.0.0.241:2379 | 566d563f3c9274ed |  3.5.10 |   25 kB |     false |      false |         6 |         20 |                 20 |        |
| 10.0.0.242:2379 | b83b69ba7d246b29 |  3.5.10 |   29 kB |      true |      false |         6 |         20 |                 20 |        |
| 10.0.0.243:2379 |  47b70f9ecb1f200 |  3.5.10 |   29 kB |     false |      false |         6 |         20 |                 20 |        |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 ~]#

3. Deploy the kube-apiserver component

3.1 Start the apiserver on the k8s-master01 node

Tip:
    - "--advertise-address" is the IP address of the corresponding master node;
    - "--service-cluster-ip-range" is the service (svc) network;
    - "--service-node-port-range" is the NodePort range for services;
    - "--etcd-servers" lists the etcd cluster endpoints.

Reference:
    https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/


Hands-on:
    1. Create the unit file on the k8s-master01 node
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Jason Yin's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --allow-privileged=true \
      --advertise-address=10.0.0.241 \
      --service-cluster-ip-range=10.200.0.0/16  \
      --service-node-port-range=3000-50000  \
      --etcd-servers=https://10.0.0.241:2379,https://10.0.0.242:2379,https://10.0.0.243:2379 \
      --etcd-cafile=/yinzhengjie/certs/etcd/etcd-ca.pem  \
      --etcd-certfile=/yinzhengjie/certs/etcd/etcd-server.pem  \
      --etcd-keyfile=/yinzhengjie/certs/etcd/etcd-server-key.pem  \
      --client-ca-file=/yinzhengjie/certs/kubernetes/k8s-ca.pem  \
      --tls-cert-file=/yinzhengjie/certs/kubernetes/apiserver.pem  \
      --tls-private-key-file=/yinzhengjie/certs/kubernetes/apiserver-key.pem  \
      --kubelet-client-certificate=/yinzhengjie/certs/kubernetes/apiserver.pem  \
      --kubelet-client-key=/yinzhengjie/certs/kubernetes/apiserver-key.pem  \
      --service-account-key-file=/yinzhengjie/certs/kubernetes/sa.pub  \
      --service-account-signing-key-file=/yinzhengjie/certs/kubernetes/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.yinzhengjie.com \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/yinzhengjie/certs/kubernetes/front-proxy-ca.pem  \
      --proxy-client-cert-file=/yinzhengjie/certs/kubernetes/front-proxy-client.pem  \
      --proxy-client-key-file=/yinzhengjie/certs/kubernetes/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF


    2. Start the service
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver
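
    3. Optional check: /healthz is readable without credentials thanks to the default system:public-info-viewer binding, so a probe from any node should print "ok" (-k skips verification of our self-signed chain):
curl -k https://10.0.0.241:6443/healthz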

3.2 Start the apiserver on the k8s-master02 node

Tip:
    - "--advertise-address" is the IP address of the corresponding master node;
    - "--service-cluster-ip-range" is the service (svc) network;
    - "--service-node-port-range" is the NodePort range for services;
    - "--etcd-servers" lists the etcd cluster endpoints.


Hands-on:
    1. Create the unit file on the k8s-master02 node
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Jason Yin's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --allow-privileged=true \
      --advertise-address=10.0.0.242 \
      --service-cluster-ip-range=10.200.0.0/16  \
      --service-node-port-range=3000-50000  \
      --etcd-servers=https://10.0.0.241:2379,https://10.0.0.242:2379,https://10.0.0.243:2379 \
      --etcd-cafile=/yinzhengjie/certs/etcd/etcd-ca.pem  \
      --etcd-certfile=/yinzhengjie/certs/etcd/etcd-server.pem  \
      --etcd-keyfile=/yinzhengjie/certs/etcd/etcd-server-key.pem  \
      --client-ca-file=/yinzhengjie/certs/kubernetes/k8s-ca.pem  \
      --tls-cert-file=/yinzhengjie/certs/kubernetes/apiserver.pem  \
      --tls-private-key-file=/yinzhengjie/certs/kubernetes/apiserver-key.pem  \
      --kubelet-client-certificate=/yinzhengjie/certs/kubernetes/apiserver.pem  \
      --kubelet-client-key=/yinzhengjie/certs/kubernetes/apiserver-key.pem  \
      --service-account-key-file=/yinzhengjie/certs/kubernetes/sa.pub  \
      --service-account-signing-key-file=/yinzhengjie/certs/kubernetes/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.yinzhengjie.com \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/yinzhengjie/certs/kubernetes/front-proxy-ca.pem  \
      --proxy-client-cert-file=/yinzhengjie/certs/kubernetes/front-proxy-client.pem  \
      --proxy-client-key-file=/yinzhengjie/certs/kubernetes/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF


    2. Start the service
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver

3.3 Start the apiserver on the k8s-master03 node

Tip:
    - "--advertise-address" is the IP address of the corresponding master node;
    - "--service-cluster-ip-range" is the service (svc) network;
    - "--service-node-port-range" is the NodePort range for services;
    - "--etcd-servers" lists the etcd cluster endpoints.


Hands-on:
    1. Create the unit file on the k8s-master03 node
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Jason Yin's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --allow-privileged=true \
      --advertise-address=10.0.0.243 \
      --service-cluster-ip-range=10.200.0.0/16  \
      --service-node-port-range=3000-50000  \
      --etcd-servers=https://10.0.0.241:2379,https://10.0.0.242:2379,https://10.0.0.243:2379 \
      --etcd-cafile=/yinzhengjie/certs/etcd/etcd-ca.pem  \
      --etcd-certfile=/yinzhengjie/certs/etcd/etcd-server.pem  \
      --etcd-keyfile=/yinzhengjie/certs/etcd/etcd-server-key.pem  \
      --client-ca-file=/yinzhengjie/certs/kubernetes/k8s-ca.pem  \
      --tls-cert-file=/yinzhengjie/certs/kubernetes/apiserver.pem  \
      --tls-private-key-file=/yinzhengjie/certs/kubernetes/apiserver-key.pem  \
      --kubelet-client-certificate=/yinzhengjie/certs/kubernetes/apiserver.pem  \
      --kubelet-client-key=/yinzhengjie/certs/kubernetes/apiserver-key.pem  \
      --service-account-key-file=/yinzhengjie/certs/kubernetes/sa.pub  \
      --service-account-signing-key-file=/yinzhengjie/certs/kubernetes/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.yinzhengjie.com \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/yinzhengjie/certs/kubernetes/front-proxy-ca.pem  \
      --proxy-client-cert-file=/yinzhengjie/certs/kubernetes/front-proxy-client.pem  \
      --proxy-client-key-file=/yinzhengjie/certs/kubernetes/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF


    2. Start the service
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver

4. Deploy the kube-controller-manager component

4.1 Create the unit file on all master nodes

Tip:
    - "--cluster-cidr" is the Pod network; change it as needed.

Reference:
    https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/


The controller-manager unit file is identical on all master nodes (provided the certificate paths are also identical!):
cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Jason Yin's Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --root-ca-file=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
      --cluster-signing-cert-file=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
      --cluster-signing-key-file=/yinzhengjie/certs/kubernetes/k8s-ca-key.pem \
      --service-account-private-key-file=/yinzhengjie/certs/kubernetes/sa.key \
      --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=10.100.0.0/16 \
      --requestheader-client-ca-file=/yinzhengjie/certs/kubernetes/front-proxy-ca.pem \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

4.2 Start the controller-manager service

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl  status kube-controller-manager

5. Deploy the kube-scheduler component

5.1 Create the unit file on all master nodes

Reference:
    https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/

The kube-scheduler unit file is identical on all master nodes (provided the certificate paths are also identical!):
cat > /usr/lib/systemd/system/kube-scheduler.service <<'EOF'
[Unit]
Description=Jason Yin's Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --leader-elect=true \
      --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

5.2 Start the scheduler service

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl  status kube-scheduler

6. Configure TLS bootstrapping for automatic kubelet certificate issuance

6.1 Create the bootstrap-kubelet.kubeconfig file on the k8s-master01 node

Tip:
    - "--server" points at the load balancer IP; the load balancer reverse-proxies the master nodes.
    - "--token" can also be customized, but the matching "token-id" and "token-secret" values in the bootstrap Secret must be changed at the same time;

    1. Define the cluster
kubectl config set-cluster yinzhengjie-k8s \
  --certificate-authority=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/bootstrap-kubelet.kubeconfig

    2. Define the user
kubectl config set-credentials tls-bootstrap-token-user  \
  --token=yindao.jasonyinzhengjie \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/bootstrap-kubelet.kubeconfig


    3. Bind the cluster and the user in a context
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=yinzhengjie-k8s \
  --user=tls-bootstrap-token-user \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/bootstrap-kubelet.kubeconfig


    4. Switch to the context
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/bootstrap-kubelet.kubeconfig

6.2 Copy the admin kubeconfig on all master nodes

Tip:
    I demonstrate on k8s-master01 below, but you can repeat these steps on every master node.

    1. Every master copies the admin kubeconfig
[root@k8s-master01 ~]#  mkdir -p /root/.kube
[root@k8s-master01 ~]#  cp /yinzhengjie/certs/kubeconfig/kube-admin.kubeconfig /root/.kube/config

    2. Check the control-plane components; ComponentStatus was officially deprecated in 1.19+, but it still hasn't been removed in 1.28
[root@k8s-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok        
controller-manager   Healthy   ok        
etcd-0               Healthy   ok        
[root@k8s-master01 ~]# 

    3. Check the cluster status; even if ComponentStatus is removed some day, the "cluster-info" subcommand still shows the cluster state
[root@k8s-master01 ~]# kubectl cluster-info 
Kubernetes control plane is running at https://10.0.0.240:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master01 ~]#
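
    4. For a more detailed probe than ComponentStatus, the apiserver also exposes /readyz with per-check output:
kubectl get --raw='/readyz?verbose'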

6.3 Create the bootstrap secret and RBAC grants

    1. Create the bootstrap-secret manifest used for authorization
[root@k8s-master01 ~]# cat > bootstrap-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-yindao
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: yindao
  token-secret: jasonyinzhengjie
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF


    2. Apply the bootstrap-secret configuration file
[root@k8s-master01 ~]# kubectl apply -f bootstrap-secret.yaml 
secret/bootstrap-token-yindao created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
[root@k8s-master01 ~]#
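
Optionally verify that the Secret and RBAC objects landed as expected:
kubectl -n kube-system get secret bootstrap-token-yindao
kubectl get clusterrolebinding kubelet-bootstrap node-autoapprove-bootstrap node-autoapprove-certificate-rotation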

9. Deploy the worker nodes

9.1 Copy certificates

    1. Distribute certificates from the k8s-master01 node to the other nodes
cd /yinzhengjie/certs/
for NODE in k8s-master02 k8s-master03 k8s-worker04 k8s-worker05; do
     echo $NODE
     ssh $NODE "mkdir -p /yinzhengjie/certs/kube{config,rnetes}"
     for FILE in k8s-ca.pem k8s-ca-key.pem front-proxy-ca.pem; do
       scp kubernetes/$FILE $NODE:/yinzhengjie/certs/kubernetes/${FILE}
     done
     scp kubeconfig/bootstrap-kubelet.kubeconfig $NODE:/yinzhengjie/certs/kubeconfig/
done


    2. Verify on the worker nodes
[root@k8s-worker05 ~]# ll /yinzhengjie/ -R
/yinzhengjie/:
total 0
drwxr-xr-x 4 root root 42 Nov  5 16:27 certs

/yinzhengjie/certs:
total 0
drwxr-xr-x 2 root root 42 Nov  5 16:27 kubeconfig
drwxr-xr-x 2 root root 72 Nov  5 16:27 kubernetes

/yinzhengjie/certs/kubeconfig:
total 4
-rw------- 1 root root 2243 Nov  5 16:27 bootstrap-kubelet.kubeconfig

/yinzhengjie/certs/kubernetes:
total 12
-rw-r--r-- 1 root root 1094 Nov  5 16:27 front-proxy-ca.pem
-rw------- 1 root root 1675 Nov  5 16:27 k8s-ca-key.pem
-rw-r--r-- 1 root root 1363 Nov  5 16:27 k8s-ca.pem
[root@k8s-worker05 ~]#

9.2 Start the kubelet service

Tips:
    - The "kubelet.kubeconfig" file referenced by "--kubeconfig" in the "10-kubelet.conf" file does not exist yet; it is generated automatically later on.
    - "clusterDNS" is the cluster DNS address and can be customized, e.g. "10.200.0.254".
    - "clusterDomain" is the cluster domain name and must match the cluster design, e.g. "yinzhengjie.com".
    - "ExecStart=" must be written twice in the "10-kubelet.conf" file, otherwise kubelet may fail to start.


Hands-on steps:
    1. Create the working directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/


    2. Create the kubelet configuration file on all nodes
cat > /etc/kubernetes/kubelet-conf.yml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /yinzhengjie/certs/kubernetes/k8s-ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.200.0.254
clusterDomain: yinzhengjie.com
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF




    3. Configure the kubelet systemd service on all nodes
cat >  /usr/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=JasonYin's Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF


    4. Configure the kubelet service drop-in file on all nodes
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/yinzhengjie/certs/kubeconfig/bootstrap-kubelet.kubeconfig --kubeconfig=/yinzhengjie/certs/kubeconfig/kubelet.kubeconfig"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_SYSTEM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
EOF


    5. Start kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet
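
A quick health probe, based on the "healthzBindAddress" and "healthzPort" values set in kubelet-conf.yml above:
curl http://127.0.0.1:10248/healthz
# Expected output: ok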


    6. View node information on the master nodes.
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   <none>   30s   v1.29.3
k8s-master02   NotReady   <none>   31s   v1.29.3
k8s-master03   NotReady   <none>   31s   v1.29.3
k8s-worker04   NotReady   <none>   31s   v1.29.3
k8s-worker05   NotReady   <none>   31s   v1.29.3
[root@k8s-master01 ~]# 


    7. The corresponding client certificate signing requests (CSRs) from the bootstrap user are now visible
[root@k8s-master01 ~]# kubectl get csr
NAME        AGE    SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-5j4xx   110s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-9cmsh   110s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-ght4f   110s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-v6sbq   111s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-xcq44   110s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
[root@k8s-master01 ~]#
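
With the auto-approval ClusterRoleBindings from section 8.3 in place, these CSRs are approved automatically. If any ever stay in "Pending", a minimal sketch for approving them by hand:
# Approve every pending CSR; xargs -r skips the call when nothing is pending.
kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve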

9.3 Start the kube-proxy service

    1. Generate the kube-proxy CSR file
[root@k8s-master01 pki]# cat kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
[root@k8s-master01 pki]# 

    2. Create the certificate files required by kube-proxy
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  -ca-key=/yinzhengjie/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare /yinzhengjie/certs/kubernetes/kube-proxy


[root@k8s-master01 pki]# ll /yinzhengjie/certs/kubernetes/kube-proxy*
-rw-r--r-- 1 root root 1045 Jan  9 09:43 /yinzhengjie/certs/kubernetes/kube-proxy.csr
-rw------- 1 root root 1679 Jan  9 09:43 /yinzhengjie/certs/kubernetes/kube-proxy-key.pem
-rw-r--r-- 1 root root 1464 Jan  9 09:43 /yinzhengjie/certs/kubernetes/kube-proxy.pem
[root@k8s-master01 pki]# 


    3. Set the cluster
[root@k8s-master01 pki]# kubectl config set-cluster yinzhengjie-k8s \
  --certificate-authority=/yinzhengjie/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-proxy.kubeconfig

    4. Set a user entry
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-proxy \
  --client-certificate=/yinzhengjie/certs/kubernetes/kube-proxy.pem \
  --client-key=/yinzhengjie/certs/kubernetes/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-proxy.kubeconfig

    5. Set a context
[root@k8s-master01 pki]# kubectl config set-context kube-proxy@kubernetes \
  --cluster=yinzhengjie-k8s \
  --user=system:kube-proxy \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-proxy.kubeconfig

    6. Use the default context
[root@k8s-master01 pki]# kubectl config use-context kube-proxy@kubernetes \
  --kubeconfig=/yinzhengjie/certs/kubeconfig/kube-proxy.kubeconfig

    7. Send the kube-proxy kubeconfig file to the other nodes
for NODE in k8s-master02 k8s-master03 k8s-worker04 k8s-worker05; do
     echo $NODE
     scp /yinzhengjie/certs/kubeconfig/kube-proxy.kubeconfig $NODE:/yinzhengjie/certs/kubeconfig/
done


    8. Create the kube-proxy.yml configuration file on all nodes
cat > /etc/kubernetes/kube-proxy.yml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
metricsBindAddress: 127.0.0.1:10249
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /yinzhengjie/certs/kubeconfig/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF


    9. Manage kube-proxy with systemd on all nodes
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Jason Yin's Kubernetes Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yml \
  --v=2 
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


    10. Start kube-proxy on all nodes
systemctl daemon-reload && systemctl enable --now kube-proxy
systemctl status kube-proxy
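
Verify that kube-proxy actually came up in ipvs mode. The metrics endpoint below is the "metricsBindAddress" configured above; listing virtual servers assumes the ipvsadm tool is installed:
curl -s 127.0.0.1:10249/proxyMode
# Expected output: ipvs
ipvsadm -Ln | head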

五.Deploy the CNI network plugin

Reference links:
    https://github.com/flannel-io/flannel
    https://gitee.com/jasonyin2020/cloud-computing-stack/blob/linux89/linux89/manifests/22-cni/flannel/kube-flannel.yml#

1. Download the CNI plugin binaries required by flannel

[root@k8s-master01 ~]#  wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz

2. Extract the CNI plugin package required by flannel

[root@k8s-master01 ~]# mkdir -p /opt/cni/bin
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# tar -C /opt/cni/bin -xzf cni-plugins-linux-amd64-v1.2.0.tgz
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# ll /opt/cni/bin/
total 68936
-rwxr-xr-x 1 root root  3859475 Jan 17  2023 bandwidth
-rwxr-xr-x 1 root root  4299004 Jan 17  2023 bridge
-rwxr-xr-x 1 root root 10167415 Jan 17  2023 dhcp
-rwxr-xr-x 1 root root  3986082 Jan 17  2023 dummy
-rwxr-xr-x 1 root root  4385098 Jan 17  2023 firewall
-rwxr-xr-x 1 root root  3870731 Jan 17  2023 host-device
-rwxr-xr-x 1 root root  3287319 Jan 17  2023 host-local
-rwxr-xr-x 1 root root  3999593 Jan 17  2023 ipvlan
-rwxr-xr-x 1 root root  3353028 Jan 17  2023 loopback
-rwxr-xr-x 1 root root  4029261 Jan 17  2023 macvlan
-rwxr-xr-x 1 root root  3746163 Jan 17  2023 portmap
-rwxr-xr-x 1 root root  4161070 Jan 17  2023 ptp
-rwxr-xr-x 1 root root  3550152 Jan 17  2023 sbr
-rwxr-xr-x 1 root root  2845685 Jan 17  2023 static
-rwxr-xr-x 1 root root  3437180 Jan 17  2023 tuning
-rwxr-xr-x 1 root root  3993252 Jan 17  2023 vlan
-rwxr-xr-x 1 root root  3586502 Jan 17  2023 vrf
[root@k8s-master01 ~]#

3. Sync the binaries to the other cluster nodes

[root@k8s-master01 ~]# data_rsync.sh /opt/cni/bin/
===== rsyncing k8s-master02: bin =====
命令执行成功!
===== rsyncing k8s-master03: bin =====
命令执行成功!
===== rsyncing k8s-worker04: bin =====
命令执行成功!
===== rsyncing k8s-worker05: bin =====
命令执行成功!
[root@k8s-master01 ~]#
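
A simple way to confirm the binaries really match on every node is to compare checksums, for example:
for NODE in k8s-master02 k8s-master03 k8s-worker04 k8s-worker05; do
    echo "===== $NODE ====="
    # Spot-check two representative plugins on each node.
    ssh $NODE "md5sum /opt/cni/bin/bridge /opt/cni/bin/portmap"
done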

4. Modify the official flannel manifest (the "Network" field is changed to 10.100.0.0/16 to match the controller-manager's --cluster-cidr)

apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.24.0
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.24.0
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

5. Apply the manifest to deploy flannel

[root@k8s-master01 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master01 ~]#

6. Check whether the flannel components are running properly

[root@k8s-master01 ~]# kubectl get pods -A -o wide
NAMESPACE      NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-2b6dg   1/1     Running   0          2m59s   10.0.0.244   k8s-worker04   <none>           <none>
kube-flannel   kube-flannel-ds-4zjdd   1/1     Running   0          2m59s   10.0.0.245   k8s-worker05   <none>           <none>
kube-flannel   kube-flannel-ds-b2d96   1/1     Running   0          2m59s   10.0.0.242   k8s-master02   <none>           <none>
kube-flannel   kube-flannel-ds-s48rw   1/1     Running   0          2m59s   10.0.0.241   k8s-master01   <none>           <none>
kube-flannel   kube-flannel-ds-tz49n   1/1     Running   0          2m59s   10.0.0.243   k8s-master03   <none>           <none>
[root@k8s-master01 ~]#
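
Once flannel is running, every node should hold a subnet lease and (with the vxlan backend configured above) a flannel.1 device; a quick spot check:
cat /run/flannel/subnet.env
ip -d link show flannel.1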

7. Deploy a service to test network availability

[root@k8s-master01 ~]# cat deploy-apple.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: apple
  template:
    metadata:
      labels:
        apps: apple
    spec:
      containers:
      - name: apple
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:apple
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apple
spec:
  type: NodePort
  selector:
    apps: apple
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 8080
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl apply -f deploy-apple.yaml
deployment.apps/deployment-apple created
service/svc-apple created
[root@k8s-master01 ~]#
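
Note that "nodePort: 8080" lies outside the default NodePort range of 30000-32767; it will be rejected unless the apiserver's "--service-node-port-range" was widened accordingly, as is presumably done earlier in this deployment. Before testing from a browser, you can confirm the Service selected the pods and answers inside the cluster:
kubectl get endpoints svc-apple
kubectl get pods -o wide -l apps=apple
curl -s http://10.0.0.243:8080/ | head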

8. Access test

http://10.0.0.243:8080/

Errors you may encounter

1. Warning ClusterIPOutOfRange 27m (x10 over 54m) ipallocator-repair-controller Cluster IP [IPv4]:10.100.0.1 is not within the service CIDR 10.200.0.0/16; please recreate service

Cause:
    An existing Service has a ClusterIP outside the configured Service CIDR: here 10.100.0.1 falls outside 10.200.0.0/16, which typically means the Service was created while a different CIDR was in effect.

Solution:
    Check the kube-proxy configuration file and the apiserver's Service CIDR settings, make sure they consistently use the "10.200.0.0/16" network, and then recreate the offending Service.
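
To pin down where the two CIDRs diverge, compare the apiserver's Service range with kube-proxy's settings. A minimal sketch, assuming the unit-file path used in this deployment:
# The Service CIDR is set by the apiserver flag; kube-proxy's clusterCIDR is the pod network.
grep -- '--service-cluster-ip-range' /usr/lib/systemd/system/kube-apiserver.service
grep clusterCIDR /etc/kubernetes/kube-proxy.yml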

2.[ERROR] Get "https://10.100.0.1:443/api?timeout=32s": tls: failed to verify certificate: x509: certificate is valid for 10.200.0.1, 10.0.0.240, 10.0.0.241, 10.0.0.242, 10.0.0.243, 10.0.0.244, 10.0.0.245, not 10.100.0.1

Cause:
    "10.100.0.1" is being used as the Service address, which does not match the Service IPs predefined in the apiserver certificate.

Solution:
    If correcting the kube-proxy configuration file still has no effect, try deleting the existing Service first; it will be recreated with the correct ClusterIP, which should resolve the problem.


For example:
[root@k8s-master01 ~]# kubectl get svc -A
NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   5h45m
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl delete svc kubernetes 
service "kubernetes" deleted
[root@k8s-master01 ~]# kubectl get svc -A
No resources found
[root@k8s-master01 ~]# kubectl get svc -A
NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   ClusterIP   10.200.0.1   <none>        443/TCP   0s
[root@k8s-master01 ~]#