Kubernetes Installation (Part 2): Cluster Deployment with kubeadm


The previous article, "Kubernetes Installation (Part 1): Deploying a Local Environment with Minikube", covered installing `Kubernetes` locally to set up a development environment. A real production environment is considerably more complex: at a minimum it starts as a cluster. This article therefore takes a production perspective and walks through deploying a production-style `Kubernetes` cluster, so that you truly understand how to deploy a real `Kubernetes` cluster environment.


1. Environment Preparation


The Kubernetes cluster is installed on VMware virtual machines. The prepared environment is as follows:


  • 2 virtual machines running CentOS 7 (the more CPU and memory, the better!)
  • Docker version: 19.03.13
  • kubeadm version: v1.20.x (v1.20.2 at the time of writing; the cluster itself runs Kubernetes v1.20.0)


2. System Initialization


Before installing, a number of system parameters and settings must be configured consistently to make sure the rest of the installation goes smoothly.


All of the system initialization steps must be executed on both the master and node. 


2.1 Set the system hostname


hostnamectl set-hostname <hostname>


Execution:


  • On the master node:


[root@localhost xcbeyond]# hostnamectl set-hostname k8s-master


  • On the node:


[root@localhost xcbeyond]# hostnamectl set-hostname k8s-node01


2.2 Modify the hosts file


To let the cluster nodes reach one another directly by hostname, it is recommended to modify the hosts file.


On both the master and node, edit the hosts file `/etc/hosts` and add the following entries:


192.168.11.100 k8s-master
192.168.11.101 k8s-node01


The IPs above are the actual IPs of the corresponding nodes.
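
To confirm that hostname resolution works, you can ping each node by name (a quick optional check):

ping -c 3 k8s-node01   # run on the master
ping -c 3 k8s-master   # run on the node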


2.3 Install dependency packages


Several of these tools will be needed while working with Kubernetes later on; installing them up front avoids interruptions later.

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp net-tools


2.5 Switch the firewall to iptables and set empty rules

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save



2.5 Disable swap and SELinux

# Turn off swap and comment out the swap entry in /etc/fstab (kubelet will not start with swap enabled by default)
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config


2.6 Tune kernel parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1  # let iptables on the node see bridged traffic
net.bridge.bridge-nf-call-ip6tables=1 # same as above, for IPv6
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0 # note: this key was removed in kernel 4.12+; drop this line after the upgrade to 5.4 in section 2.8
vm.swappiness=0   # avoid using swap space; allow it only when the system is out of memory
vm.overcommit_memory=1  # do not check whether enough physical memory is available
vm.panic_on_oom=0   # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
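
A quick optional check that the settings took effect (the net.bridge.* keys only exist once the br_netfilter module is loaded, which section 2.9 takes care of):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward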


2.7 Adjust the time zone


(Skip this step if the time zone is already correct.)

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart the services that depend on the system time
systemctl restart rsyslog
systemctl restart crond


2.8 Upgrade the system kernel to 5.4


The 3.10.x kernel that ships with CentOS 7.x has known bugs that make Docker and Kubernetes unstable, so upgrade to the long-term-support kernel from the ELRepo repository:


rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After the installation, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if not, install it again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default 'CentOS Linux (5.4.93-1.el7.elrepo.x86_64) 7 (Core)'
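
The exact menu entry title can vary between installs; a handy way to list the available titles before calling grub2-set-default:

awk -F\' '/^menuentry /{print $2}' /boot/grub2/grub.cfg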


Execution:

[root@k8s-master xcbeyond]# uname -r
3.10.0-1127.19.1.el7.x86_64
[root@k8s-master xcbeyond]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
warning: /var/tmp/rpm-tmp.xF145X: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-3.el7.elrepo  ################################# [100%]
[root@k8s-master xcbeyond]# yum --enablerepo=elrepo-kernel install -y kernel-lt
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 ……
Warning: RPMDB altered outside of yum.
  Installing : kernel-lt-5.4.93-1.el7.elrepo.x86_64     1/1 
  Verifying  : kernel-lt-5.4.93-1.el7.elrepo.x86_64     1/1 
Installed:
  kernel-lt.x86_64 0:5.4.93-1.el7.elrepo
Complete!
[root@k8s-master xcbeyond]# grub2-set-default 'CentOS Linux (5.4.93-1.el7.elrepo.x86_64) 7 (Core)'
[root@k8s-master xcbeyond]# reboot


After the reboot, confirm that the kernel was upgraded successfully:

[xcbeyond@k8s-master ~]$ uname -r
5.4.93-1.el7.elrepo.x86_64


Don't forget to run this on the node as well! 


2.9 Prerequisites for enabling ipvs in kube-proxy

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


Execution:

[root@k8s-master xcbeyond]# modprobe br_netfilter
[root@k8s-master xcbeyond]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
[root@k8s-master xcbeyond]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
modprobe: FATAL: Module nf_conntrack_ipv4 not found.
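
The FATAL message above is expected on the 5.4 kernel installed in section 2.8: since Linux 4.19, nf_conntrack_ipv4 has been merged into nf_conntrack. A small fix-up against the module file created above:

# On kernels >= 4.19 the module is named nf_conntrack, not nf_conntrack_ipv4
sed -i 's/nf_conntrack_ipv4/nf_conntrack/' /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack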


Don't forget to run this on the node as well! 


3. Install Docker


The Docker installation process is not repeated here; see my earlier article for the details.
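
For completeness, a minimal sketch of installing the Docker version from the environment list (19.03.13) through Docker's CentOS repository; the repository URL and version pin are assumptions of this sketch, not steps taken from the earlier article:

# Add the Docker CE repository and install the pinned version
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io
systemctl enable docker && systemctl start docker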


4. Install kubeadm


4.1 Install kubeadm, kubectl, and kubelet


The following packages need to be installed on every machine (master and node):


  • `kubeadm`: the command used to bootstrap the cluster.
  • `kubectl`: the command-line tool for talking to the cluster.
  • `kubelet`: the component that runs on every node in the cluster and starts Pods and containers.


(1) Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


Execution:

[root@k8s-master xcbeyond]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF


(2) Install `kubeadm`, `kubectl`, and `kubelet`

yum -y install kubeadm kubectl kubelet


Execution:

[root@k8s-master xcbeyond]# yum -y  install  kubeadm kubectl kubelet
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.neusoft.edu.cn
 * elrepo: mirrors.neusoft.edu.cn
 * extras: mirrors.neusoft.edu.cn
 * updates: mirrors.neusoft.edu.cn
kubernetes                                                       | 1.4 kB  00:00:00     
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.20.2-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.20.2-0.x86_64
--> Processing Dependency: cri-tools >= 1.13.0 for package: kubeadm-1.20.2-0.x86_64
---> Package kubectl.x86_64 0:1.20.2-0 will be installed
---> Package kubelet.x86_64 0:1.20.2-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.20.2-0.x86_64
--> Running transaction check
---> Package cri-tools.x86_64 0:1.13.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================================
 Package           Arch            Version         Repository          Size
=========================================================================================
Installing:
 kubeadm           x86_64         1.20.2-0        kubernetes          8.3 M
 kubectl           x86_64         1.20.2-0        kubernetes          8.5 M
 kubelet           x86_64         1.20.2-0        kubernetes          20 M
Installing for dependencies:
 cri-tools         x86_64         1.13.0-0        kubernetes          5.1 M
 kubernetes-cni    x86_64         0.8.7-0         kubernetes          19 M
 socat             x86_64         1.7.3.2-2.el7   base                290 k
Transaction Summary
=========================================================================================
Install  3 Packages (+3 Dependent packages)
Total size: 61 M
Total download size: 52 M
Installed size: 262 M
Downloading packages:
(1/5): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm                                                                   | 5.1 MB  00:00:03     
(2/5): b46459afb07aaf12937f7f310b876fab9f5f904eaa8f4a88a21547477eafba78-kubeadm-1.20.2-0.x86_64.rpm                                                                     | 8.3 MB  00:00:06     
(3/5): socat-1.7.3.2-2.el7.x86_64.rpm                                                                                                                                   | 290 kB  00:00:02     
(4/5): a79d632b1f8c40d2a00e2f98cba68b55c3928d70b97c32aad61c10e17965c2f1-kubelet-1.20.2-0.x86_64.rpm                                                                     |  20 MB  00:00:14     
(5/5): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm                                                               |  19 MB  00:00:11     
-----------------------------------------------------------------------------------------
总计                                                                                                                                                           2.8 MB/s |  52 MB  00:00:18     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : socat-1.7.3.2-2.el7.x86_64                     1/6 
  Installing : kubelet-1.20.2-0.x86_64                        2/6 
  Installing : kubernetes-cni-0.8.7-0.x86_64                  3/6 
  Installing : kubectl-1.20.2-0.x86_64                        4/6 
  Installing : cri-tools-1.13.0-0.x86_64                      5/6 
  Installing : kubeadm-1.20.2-0.x86_64                        6/6 
  Verifying  : kubernetes-cni-0.8.7-0.x86_64                  1/6 
  Verifying  : kubelet-1.20.2-0.x86_64                        2/6 
  Verifying  : kubeadm-1.20.2-0.x86_64                        3/6 
  Verifying  : cri-tools-1.13.0-0.x86_64                      4/6 
  Verifying  : kubectl-1.20.2-0.x86_64                        5/6 
  Verifying  : socat-1.7.3.2-2.el7.x86_64                     6/6 
Installed:
  kubeadm.x86_64 0:1.20.2-0    kubectl.x86_64 0:1.20.2-0    kubelet.x86_64 0:1.20.2-0
Dependency Installed:
  cri-tools.x86_64 0:1.13.0-0  kubernetes-cni.x86_64 0:0.8.7-0  socat.x86_64 0:1.7.3.2-2.el7
Complete!


(3) Enable `kubelet` at boot

systemctl enable kubelet.service


Execution:

[root@k8s-master xcbeyond]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.


Don't forget to run this on the node as well! 


4.2 Create the cluster


4.2.1 Pull the required images


By default, kubeadm pulls its Docker images from the k8s.gcr.io registry, which cannot be reached directly from mainland China, so the images have to be relayed through an accessible registry.


(The required images have already been rebuilt and published on Docker Hub so they can be pulled directly from within China.)


Run this on both the master and node. 


The image-pull script [k8s-images-pull.sh](https://github.com/xcbeyond/deploy-scripts/blob/master/kubernetes/k8s-images-pull.sh) is as follows:

#!/bin/bash
kubernetes_version="v1.20.0"
# Pull the required images (from Docker Hub)
kubeadm config images list --kubernetes-version=${kubernetes_version} |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#xcbeyond#g' |sh -x
# Re-tag the images back to k8s.gcr.io/*
docker images |grep xcbeyond |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#xcbeyond#k8s.gcr.io#2' |sh -x
# Remove the xcbeyond-tagged images
docker images |grep xcbeyond |awk '{print "docker rmi ", $1":"$2}' |sh -x


To check which images are needed, run: `kubeadm config images list`
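
For the v1.20.0 release used here, that command should resolve to the following images (matching what the script pulls):

[root@k8s-master xcbeyond]# kubeadm config images list --kubernetes-version=v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0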


The execution looks like this:

[root@k8s-master xcbeyond]# ./k8s-images-pull.sh 
+ docker pull xcbeyond/kube-apiserver:v1.20.0
v1.20.0: Pulling from xcbeyond/kube-apiserver
f398b465657e: Pull complete 
cbcdf8ef32b4: Pull complete 
a9b56b1d4e55: Pull complete 
Digest: sha256:c54e33e290aa1463eae80f6bd4440af3def87f01f86a37a12ec213eb205e538a
Status: Downloaded newer image for xcbeyond/kube-apiserver:v1.20.0
docker.io/xcbeyond/kube-apiserver:v1.20.0
+ docker pull xcbeyond/kube-controller-manager:v1.20.0
v1.20.0: Pulling from xcbeyond/kube-controller-manager
f398b465657e: Already exists 
cbcdf8ef32b4: Already exists 
2ffb969cde54: Pull complete 
Digest: sha256:5f6321aaa0d9880bd3a96a0d589fc96e912e30f7f5f6d6f53c406eb2b4b20b68
Status: Downloaded newer image for xcbeyond/kube-controller-manager:v1.20.0
docker.io/xcbeyond/kube-controller-manager:v1.20.0
+ docker pull xcbeyond/kube-scheduler:v1.20.0
v1.20.0: Pulling from xcbeyond/kube-scheduler
f398b465657e: Already exists 
cbcdf8ef32b4: Already exists 
2f71710e6dc2: Pull complete 
Digest: sha256:10f3ae3ed09f92b3be037e1dd465214046135eabd9879db43b3fe7159a1bae1c
Status: Downloaded newer image for xcbeyond/kube-scheduler:v1.20.0
docker.io/xcbeyond/kube-scheduler:v1.20.0
+ docker pull xcbeyond/kube-proxy:v1.20.0
v1.20.0: Pulling from xcbeyond/kube-proxy
e5a8c1ed6cf1: Pull complete 
f275df365c13: Pull complete 
6a2802bb94f4: Pull complete 
cb3853c52da4: Pull complete 
db342cbe4b1c: Pull complete 
9a72dd095a53: Pull complete 
6943e8f5bc84: Pull complete 
Digest: sha256:d583d644b186519597dfdfe420710ab0888927e286ea43b2a6f54ba4329e93e4
Status: Downloaded newer image for xcbeyond/kube-proxy:v1.20.0
docker.io/xcbeyond/kube-proxy:v1.20.0
+ docker pull xcbeyond/pause:3.2
3.2: Pulling from xcbeyond/pause
c74f8866df09: Pull complete 
Digest: sha256:4dcd2075946239537e21adcf4bb300f07eb5c2c8058d699480f2ae62a5cc5085
Status: Downloaded newer image for xcbeyond/pause:3.2
docker.io/xcbeyond/pause:3.2
+ docker pull xcbeyond/etcd:3.4.13-0
3.4.13-0: Pulling from xcbeyond/etcd
4000adbbc3eb: Already exists 
d72167780652: Already exists 
d60490a768b5: Already exists 
4a4b5535d134: Pull complete 
0dac37e8b31a: Pull complete 
Digest: sha256:79d32edd429163b1ae404eeb078c75fc2f63fc3d606e0cd57285c832e8181ea3
Status: Downloaded newer image for xcbeyond/etcd:3.4.13-0
docker.io/xcbeyond/etcd:3.4.13-0
+ docker pull xcbeyond/coredns:1.7.0
1.7.0: Pulling from xcbeyond/coredns
c6568d217a00: Pull complete 
6937ebe10f02: Pull complete 
Digest: sha256:4310e3ed7a0a9b82cfb2d31c6a7c102b8d05fef2b0208072b87dc4ceca3c47bb
Status: Downloaded newer image for xcbeyond/coredns:1.7.0
docker.io/xcbeyond/coredns:1.7.0
+ docker tag xcbeyond/pause:3.2 k8s.gcr.io/pause:3.2
+ docker tag xcbeyond/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
+ docker tag xcbeyond/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
+ docker tag xcbeyond/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
+ docker tag xcbeyond/kube-proxy:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
+ docker tag xcbeyond/kube-scheduler:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
+ docker tag xcbeyond/kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
+ docker rmi xcbeyond/pause:3.2
Untagged: xcbeyond/pause:3.2
Untagged: xcbeyond/pause@sha256:4dcd2075946239537e21adcf4bb300f07eb5c2c8058d699480f2ae62a5cc5085
+ docker rmi xcbeyond/kube-controller-manager:v1.20.0
Untagged: xcbeyond/kube-controller-manager:v1.20.0
Untagged: xcbeyond/kube-controller-manager@sha256:5f6321aaa0d9880bd3a96a0d589fc96e912e30f7f5f6d6f53c406eb2b4b20b68
+ docker rmi xcbeyond/coredns:1.7.0
Untagged: xcbeyond/coredns:1.7.0
Untagged: xcbeyond/coredns@sha256:4310e3ed7a0a9b82cfb2d31c6a7c102b8d05fef2b0208072b87dc4ceca3c47bb
+ docker rmi xcbeyond/etcd:3.4.13-0
Untagged: xcbeyond/etcd:3.4.13-0
Untagged: xcbeyond/etcd@sha256:79d32edd429163b1ae404eeb078c75fc2f63fc3d606e0cd57285c832e8181ea3
+ docker rmi xcbeyond/kube-proxy:v1.20.0
Untagged: xcbeyond/kube-proxy:v1.20.0
Untagged: xcbeyond/kube-proxy@sha256:d583d644b186519597dfdfe420710ab0888927e286ea43b2a6f54ba4329e93e4
+ docker rmi xcbeyond/kube-scheduler:v1.20.0
Untagged: xcbeyond/kube-scheduler:v1.20.0
Untagged: xcbeyond/kube-scheduler@sha256:10f3ae3ed09f92b3be037e1dd465214046135eabd9879db43b3fe7159a1bae1c
+ docker rmi xcbeyond/kube-apiserver:v1.20.0
Untagged: xcbeyond/kube-apiserver:v1.20.0
Untagged: xcbeyond/kube-apiserver@sha256:c54e33e290aa1463eae80f6bd4440af3def87f01f86a37a12ec213eb205e538a
[root@k8s-master xcbeyond]# docker image ls
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/pause                     3.2                 b76329639608        16 hours ago        683kB
k8s.gcr.io/kube-controller-manager   v1.20.0             630f45a9961f        16 hours ago        116MB
k8s.gcr.io/coredns                   1.7.0               4e42ad8cda50        21 hours ago        45.2MB
k8s.gcr.io/etcd                      3.4.13-0            999b6137af27        21 hours ago        253MB
k8s.gcr.io/kube-proxy                v1.20.0             51912faaf3a3        21 hours ago        118MB
k8s.gcr.io/kube-scheduler            v1.20.0             62181d1bf9a1        21 hours ago        46.4MB
k8s.gcr.io/kube-apiserver            v1.20.0             0f7e1178e374        22 hours ago        122MB


Don't forget to run this on the node as well! 


4.2.2 Initialize the master node


The master node is the control-plane node of a Kubernetes cluster; it runs components such as etcd (the cluster database) and the API Server (the entry point for controlling the cluster).


To initialize the master node, run kubeadm init <args>.


(1) Modify the kubeadm init configuration file.


Run the `kubeadm config print init-defaults` command to generate the default kubeadm configuration template and save it to kubeadm-config.yml:

kubeadm config print init-defaults > kubeadm-config.yml



Then modify the following parameters:

localAPIEndpoint:
  advertiseAddress: 192.168.11.100  # the master node's actual IP
kubernetesVersion: v1.20.0
networking:
  podSubnet: "10.244.0.0/16"    # Pod network CIDR; must match the Network setting in flannel's kube-flannel.yml
  serviceSubnet: 10.96.0.0/12
# Append the following:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs


(2) Initialize.

kubeadm init --config=kubeadm-config.yml  | tee kubeadm-init.log


The output is piped through tee into the kubeadm-init.log file so the initialization log can be reviewed later.


If a kubeadm init run fails, execute kubeadm reset before running kubeadm init again. This command resets the node; think of it as cleaning up the environment left behind by the failed initialization.
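
A minimal cleanup sketch; the extra rm and iptables steps are common practice after a failed init rather than part of kubeadm reset itself:

kubeadm reset -f                       # tear down whatever the failed init set up
rm -rf $HOME/.kube /etc/cni/net.d      # remove leftover kubeconfig and CNI config
iptables -F && iptables -t nat -F      # flush rules created during the failed init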


Execution:

[root@k8s-master xcbeyond]# kubeadm init --config=kubeadm-config.yml  | tee kubeadm-init.log
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.009413 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90


4.2.3 Set up the master node


After kubeadm init succeeds on the master node, pay attention to the instructions at the end of its log and run the indicated commands on the master and node as required.


The tail of the kubeadm init log looks like this:

……
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90


To let a non-root user run kubectl, execute the following commands (taken from the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
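
With the kubeconfig in place, a quick optional sanity check:

kubectl cluster-info   # should print the control-plane endpoint, https://192.168.11.100:6443
kubectl get nodes      # the master shows up here (NotReady until a Pod network is installed)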


4.2.4 Join the worker nodes


Worker nodes are where your workloads (containers, Pods, and so on) run. To add a new node to the cluster, perform the following on each worker node.


As root, run the join command printed by kubeadm init:

kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90

Execution:

[root@k8s-node01 xcbeyond]# kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
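
If you join another node after the initial bootstrap token has expired (tokens are valid for 24 hours by default), print a fresh join command on the master first:

kubeadm token create --print-join-command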


4.2.5 Install a Pod network add-on


At this point, run kubectl get nodes on the master node:

[root@k8s-master xcbeyond]# kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   1m8s   v1.20.2
k8s-node01   NotReady   <none>                 18s   v1.20.2


Both nodes are in the NotReady state because Kubernetes requires a Pod network, and no Pod network add-on has been installed yet. The next step is to install one.


You can create it directly from the official kube-flannel.yml manifest.


(1) Download the official kube-flannel.yml file.


File address: https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml 


(2) Create the network.

[root@k8s-master xcbeyond]# kubectl create -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

(3) Check the Pods.


If some Pods are not yet in the Running state, wait a moment for them to come up.

[root@k8s-master xcbeyond]# kubectl get pod -n kube-system
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-74ff55c5b-fr4jj              0/1     ContainerCreating   0          6m3s
coredns-74ff55c5b-wcj2h              0/1     ContainerCreating   0          6m3s
etcd-k8s-master                      1/1     Running             0          6m5s
kube-apiserver-k8s-master            1/1     Running             0          6m5s
kube-controller-manager-k8s-master   1/1     Running             0          6m5s
kube-flannel-ds-2nkcv                1/1     Running             0          13s
kube-flannel-ds-m8tf2                1/1     Running             0          13s
kube-proxy-mft9t                     0/1     CrashLoopBackOff    6          6m3s
kube-proxy-n67px                     0/1     CrashLoopBackOff    3          68s
kube-scheduler-k8s-master            1/1     Running             0          6m5s
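
Note that the two kube-proxy Pods above are in CrashLoopBackOff. On v1.20 a likely cause is the SupportIPVSProxyMode entry added to the kubeadm config in section 4.2.2: that feature gate was removed in Kubernetes 1.20, so kube-proxy exits when it sees the unrecognized gate. A hedged troubleshooting sketch (check the logs first, then remove the featureGates block from the kube-proxy ConfigMap while keeping mode: ipvs, and recreate the Pods):

# Inspect why kube-proxy keeps crashing
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20
# Delete the featureGates entry in the config section (keep mode: ipvs)
kubectl -n kube-system edit configmap kube-proxy
# Recreate the kube-proxy Pods so they pick up the edited config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy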


(4) Check the node status.


Both nodes are now in the Ready state.

[root@k8s-master xcbeyond]# kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   6m30s   v1.20.2
k8s-node01   Ready    <none>                 85s     v1.20.2


4.3 Verify the cluster environment


With that, the kubeadm-based cluster setup is complete. Let's start exploring Kubernetes in a real cluster environment!
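
Before moving on, a quick smoke test makes the verification concrete. A minimal sketch that deploys nginx and exposes it via a NodePort; the deployment name and image are arbitrary choices for illustration, not part of the original setup:

# Deploy a test workload and expose it on a NodePort
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -o wide        # the Pod should land on k8s-node01
kubectl get svc nginx-test      # note the mapped NodePort, e.g. 80:3xxxx/TCP
# Then fetch the nginx welcome page via the node's IP and the NodePort shown above
curl http://192.168.11.101:<NodePort>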


5. Summary


You may run into all kinds of problems and obstacles during the installation. Don't worry; that is entirely normal for a first install.


A few thoughts and suggestions for dealing with problems:


  1. Running into problems means you actually got hands-on, and that is part of the fun. (This is exactly how the pitfalls get mapped out.)
  2. Don't panic; read the error logs and messages carefully.
  3. Search on the key parts of the error message, especially on the official website and GitHub.
  4. Once you solve a problem, write it down.




References:


1. https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

2. https://www.cnblogs.com/nb-blog/p/10636733.html

3. https://www.cnblogs.com/shoufu/p/13047723.html
