Installing a Highly Available K8S Cluster with kubeadm

Summary: installing a K8S cluster with kubeadm

I. Important Updates in K8S 1.20.x

1. kubectl debug: create an ephemeral container in a running pod for troubleshooting (see the sketch below)
2. Sidecar containers
3. Volume: directory permission changes via fsGroup
4. ConfigMap and Secret enhancements
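
For example, kubectl debug can attach an ephemeral container to a running pod. A minimal sketch (the pod name mypod and target container name app are hypothetical):

# Attach a busybox ephemeral container that shares the process namespace
# of container "app" inside pod "mypod":
kubectl debug -it mypod --image=busybox:1.28 --target=app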

K8S official docs: https://kubernetes.io/docs/setup/
Latest high-availability install guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

II. Installing the K8S Cluster (this walkthrough installs 1.22.2)

2.1 Cluster Planning

Hostname      IP Address       Role
k8s-worker6   172.26.119.238   master node
k8s-worker7   172.26.119.239   worker01 node
k8s-worker8   172.26.119.240   worker02 node
# Check the CentOS version
cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)

[Figure: K8S high-availability architecture diagram]

Set the hostnames

# master node
hostnamectl set-hostname k8s-worker6
# node1 (worker01)
hostnamectl set-hostname k8s-worker7
# node2 (worker02)
hostnamectl set-hostname k8s-worker8

Configure hosts on all nodes; edit /etc/hosts as follows:

cat /etc/hosts 

::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
127.0.0.1 localhost  localhost

172.26.119.240 k8s-worker8  k8s-worker8
172.26.119.238 k8s-worker6  k8s-worker6
172.26.119.239 k8s-worker7  k8s-worker7

2.2 Update Configuration (run on all nodes)

# Run on all nodes
# Configure the CentOS 7 yum repos as follows:
# Switch to the domestic Aliyun yum mirror
[root@k8s-worker6 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-worker6 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-worker6 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-worker6 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-worker6 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install required tools

[root@k8s-worker6 ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Disable the firewall, SELinux, dnsmasq, and swap on all nodes. Server configuration is as follows:

[root@k8s-worker6 ~]# systemctl disable --now firewalld 
[root@k8s-worker6 ~]# systemctl disable --now dnsmasq
[root@k8s-worker6 ~]# systemctl disable --now NetworkManager

[root@k8s-worker6 ~]# setenforce 0
[root@k8s-worker6 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
[root@k8s-worker6 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition (all nodes)

[root@k8s-worker6 ~]# swapoff -a && sysctl -w vm.swappiness=0
[root@k8s-worker6 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
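
A quick check that swap is fully off (the Swap line should show all zeros):

[root@k8s-worker6 ~]# free -h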

Time synchronization

# Install ntpdate for time synchronization
[root@k8s-worker6 ~]# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
[root@k8s-worker6 ~]# yum install ntpdate -y

# Sync time on all nodes. Time sync configuration:
[root@k8s-worker6 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@k8s-worker6 ~]# echo 'Asia/Shanghai' >/etc/timezone
[root@k8s-worker6 ~]# ntpdate time2.aliyun.com

# Add to crontab to sync every 5 minutes
[root@k8s-worker6 ~]# crontab -e
*/5 * * * * ntpdate time2.aliyun.com

Configure resource limits

[root@k8s-worker6 ~]# ulimit -SHn 65535
[root@k8s-worker6 ~]# vim /etc/security/limits.conf
# Append the following at the end of the file
# (the hard nofile limit must be at least as large as the soft limit)
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Configure passwordless SSH login

# From the master node, set up key-based login to the other nodes:
[root@k8s-worker6 ~]# ssh-keygen -t rsa
[root@k8s-worker6 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.26.119.239
[root@k8s-worker6 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.26.119.240
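
With key-based login in place, files such as /etc/hosts can be distributed from the master in one loop. A sketch, with the worker IPs taken from the cluster planning table above:

# Copy the master's /etc/hosts to both workers
for ip in 172.26.119.239 172.26.119.240; do
  scp /etc/hosts root@$ip:/etc/hosts
done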

Update and reboot all nodes

[root@k8s-worker6 ~]# yum update -y  && reboot 

Download the installer source files

cd /root/
git clone https://github.com/dotbalo/k8s-ha-install.git     

2.3 Linux Kernel Upgrade (all nodes)

CentOS 7 needs its kernel upgraded to 4.18+; see https://www.kernel.org/ and https://elrepo.org/linux/kernel/el7/x86_64/
Note: dnf may be unavailable or unable to install the kernel on CentOS 7; where it works, the equivalent is:
[root@k8s-worker6 ~]# dnf --disablerepo=\* --enablerepo=elrepo -y install kernel-ml kernel-ml-devel
[root@k8s-worker6 ~]# grubby --default-kernel

# Check the current kernel version
[root@k8s-worker6 ~]# uname -a
# Install the latest kernel as follows
# Import the ELRepo repository's public key
[root@k8s-worker6 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo yum repository
[root@k8s-worker6 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# List the latest available kernels
[root@k8s-worker6 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

# Install the latest kernel
[root@k8s-worker6 ~]# yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y
# Set the new kernel as the default boot entry, then reboot
[root@k8s-worker6 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
# After the reboot, verify the kernel version
[root@k8s-worker6 ~]# uname -a

Install ipvsadm

# Install ipvsadm on all nodes
[root@k8s-worker6 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y

# Configure the ipvs modules on all nodes; on kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack.
[root@k8s-worker6 ~]# vim /etc/modules-load.d/ipvs.conf
[root@k8s-worker6 ~]# systemctl enable --now systemd-modules-load.service

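The original figure showed the contents of /etc/modules-load.d/ipvs.conf. A typical module list, assumed from common kubeadm guides (on kernels older than 4.19, use nf_conntrack_ipv4 instead of nf_conntrack):

[root@k8s-worker6 ~]# cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Verify that the modules are loaded:
[root@k8s-worker6 ~]# lsmod | grep -e ip_vs -e nf_conntrack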

Enable the kernel parameters that a K8S cluster requires; configure on all nodes:

[root@k8s-worker6 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384

net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Load and apply the kernel parameters
[root@k8s-worker6 ~]# sysctl --system
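
If sysctl --system complains that net.bridge.bridge-nf-call-iptables does not exist, the br_netfilter module is not loaded yet. A sketch of the fix:

# Load br_netfilter now and on every boot, then re-apply the parameters
[root@k8s-worker6 ~]# modprobe br_netfilter
[root@k8s-worker6 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@k8s-worker6 ~]# sysctl --system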

III. Installing Basic K8S Components (all nodes)

3.1 Install docker-ce

# Download containerd.io (a docker-ce dependency)
[root@k8s-worker8 ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm

# Install docker-ce 19.03
[root@k8s-worker8 ~]# yum install -y docker-ce-cli-19.03.8-3.el7.x86_64 docker-ce-19.03.8-3.el7.x86_64

# List the installed docker packages
[root@k8s-worker8 ~]# rpm -qa | grep docker

Tip: newer kubelet versions recommend the systemd cgroup driver, so change Docker's CgroupDriver to systemd. (Important)
[root@k8s-worker8 ~]# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart docker
[root@k8s-worker8 ~]# systemctl restart docker
# Check the docker configuration; confirm Cgroup Driver: systemd
[root@k8s-worker7 ~]# docker info

# Start docker and enable it at boot
[root@k8s-worker8 ~]# service docker start
[root@k8s-worker8 ~]# chkconfig docker on

3.2 Install K8S Components

# Install the latest kubeadm on all nodes (optional; skip this if pinning the version below)
[root@k8s-worker8 ~]# yum install kubeadm -y

# Install the pinned K8S components on all nodes
[root@k8s-worker8 ~]# yum install -y kubeadm-1.22.2-0.x86_64 kubectl-1.22.2-0.x86_64 kubelet-1.22.2-0.x86_64
# Enable docker to start at boot on all nodes
[root@k8s-worker8 ~]# systemctl daemon-reload && systemctl enable --now docker
# Check docker's status
[root@k8s-worker8 ~]# systemctl status docker

Modify iptables-related parameters

[root@k8s-worker8 ~]# vi /etc/sysctl.conf
# Append the following lines at the end of the file:
#   net.bridge.bridge-nf-call-ip6tables = 1
#   net.bridge.bridge-nf-call-iptables = 1
[root@k8s-worker8 ~]# sysctl -p
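
A non-interactive equivalent of the edit above, useful when scripting the setup:

[root@k8s-worker8 ~]# cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-worker8 ~]# sysctl -p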

Enable kubelet to start at boot

[root@k8s-worker8 ~]# systemctl daemon-reload
[root@k8s-worker8 ~]# systemctl enable --now kubelet

IV. Cluster Initialization

4.1 Generate kubeadm-config.yaml on the Master Node

[root@k8s-worker6 ~]# kubeadm config print init-defaults > kubeadm-config.yaml

Move the generated kubeadm-config.yaml to /root/yaml/. Below is the unmodified yaml; edit the places marked with comments to match your environment:

[root@k8s-worker6 ~]# mkdir -p /root/yaml
[root@k8s-worker6 ~]# mv kubeadm-config.yaml /root/yaml/
[root@k8s-worker6 yaml]# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4 # change to the master node IP, e.g. 172.26.119.238
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node  # change to this node's hostname, e.g. k8s-worker6
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}  # with apiVersion kubeadm.k8s.io/v1beta3 this can be left as-is;
         # the dns.type field (type: CoreDNS) existed in v1beta2 and was removed in v1beta3

etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io  # unreachable from mainland China; see 4.2 below for a mirror
kind: ClusterConfiguration
kubernetesVersion: 1.22.0  # change to the installed K8S version, e.g. 1.22.2 (check with kubelet --version)
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
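
For reference, a sketch of the edited values for the environment in this article (the IP, hostname, and version come from the cluster planning table; the imageRepository and podSubnet lines are the optional changes discussed in 4.2 and 4.4.1):

localAPIEndpoint:
  advertiseAddress: 172.26.119.238
nodeRegistration:
  name: k8s-worker6
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: 1.22.2
networking:
  podSubnet: 10.244.0.0/16   # matches the flannel Network in section 4.4
  serviceSubnet: 10.96.0.0/12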

4.2 Pull the images required by kubeadm init (all nodes)

[root@k8s-worker6 yaml]# kubeadm config images pull --config /root/yaml/kubeadm-config.yaml

The command above may time out because the Google registry is unreachable from mainland China. Change imageRepository in kubeadm-config.yaml to registry.cn-hangzhou.aliyuncs.com/google_containers and run the command again.

The required images can also be pulled as follows:

4.2.1 First, list the images that need to be downloaded

[root@k8s-worker6 yaml]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.2
k8s.gcr.io/kube-controller-manager:v1.22.2
k8s.gcr.io/kube-scheduler:v1.22.2
k8s.gcr.io/kube-proxy:v1.22.2
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

4.2.2 kubeadm pulls from the k8s.gcr.io registry by default. To work around the connectivity problem, use the Kubernetes image mirrors provided by domestic cloud vendors.

[root@k8s-worker6 yaml]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4

4.2.3 Tag the downloaded images to match what kubeadm init expects. (A loop automating 4.2.2 and 4.2.3 is sketched after these commands.)

[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.2 k8s.gcr.io/kube-apiserver:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.2  k8s.gcr.io/kube-controller-manager:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.2   k8s.gcr.io/kube-scheduler:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.2   k8s.gcr.io/kube-proxy:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5   k8s.gcr.io/pause:3.5
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0   k8s.gcr.io/etcd:3.5.0-0
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4   k8s.gcr.io/coredns/coredns:v1.8.4
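
A sketch that automates 4.2.2 and 4.2.3 in one loop (image names and versions are taken from the kubeadm config images list output above; coredns is re-tagged into its k8s.gcr.io/coredns/ sub-path):

ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.22.2 kube-controller-manager:v1.22.2 \
           kube-scheduler:v1.22.2 kube-proxy:v1.22.2 pause:3.5 etcd:3.5.0-0; do
  docker pull $ALIYUN/$img                   # pull from the Aliyun mirror
  docker tag  $ALIYUN/$img k8s.gcr.io/$img   # re-tag for kubeadm init
done
docker pull $ALIYUN/coredns:v1.8.4
docker tag  $ALIYUN/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4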

4.2.4 Verify the images

[root@k8s-worker6 ~]# docker images

4.3 kubeadm init on the Master Node

4.3.1 Initialize the master node

[root@k8s-worker6 yaml]# kubeadm init --config /root/yaml/kubeadm-config.yaml --upload-certs

Partial output:

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.119.238:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0a5d83cbe09bed069aa62a16e52c4f71beb1fec8b2fd63dd6365ab125e0315ff 

4.3.2 Configure environment variables on the master node for accessing the Kubernetes cluster

[root@k8s-worker6 ~]# cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
[root@k8s-worker6 ~]# source /root/.bashrc

4.3.3 Run on the master node

[root@k8s-worker6 ~]# mkdir -p $HOME/.kube
[root@k8s-worker6 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-worker6 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.3.4 Join worker nodes to the cluster

# Run on each worker node: kubeadm join registers the node with the master
# The kubeadm join command line was generated by kubeadm init above
# Restart kubelet
[root@k8s-worker7 ~]# systemctl restart kubelet
[root@k8s-worker7 ~]# systemctl status kubelet

Check kubelet's status; if it is not running, inspect the logs to find the cause:

[root@k8s-worker8 ~]# journalctl -xeu kubelet > 1.txt
[root@k8s-worker8 ~]# sz 1.txt    # download the log with lrzsz for inspection

Find and fix the cause of the failure. A common one is: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

The fix is as follows:

[root@k8s-worker8 ~]# cat > /etc/docker/daemon.json <<EOF
> {"exec-opts": ["native.cgroupdriver=systemd"]}
> EOF
[root@k8s-worker8 ~]# systemctl restart docker
[root@k8s-worker8 ~]# docker info
[root@k8s-worker8 ~]# systemctl start kubelet
[root@k8s-worker8 ~]# systemctl status kubelet
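
The driver can also be checked directly instead of scanning the full docker info output:

[root@k8s-worker8 ~]# docker info -f '{{.CgroupDriver}}'   # should print: systemd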

Once kubelet is running, execute the kubeadm join command generated during initialization on each worker node.

[root@k8s-worker7 ~]# kubeadm join 172.26.119.238:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0a5d83cbe09bed069aa62a16e52c4f71beb1fec8b2fd63dd6365ab125e0315ff 

Note: this token expires after 24 hours. If more nodes need to join later, handle it as follows:

# Create a new token
[root@k8s-master ~]# kubeadm token create
0w3a92.ijgba9ia0e3scicg

[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
0w3a92.ijgba9ia0e3scicg   23h       2019-09-08T22:02:40+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
t0ehj8.k4ef3gq0icr3etl0   22h       2019-09-08T20:58:34+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a

Then join the cluster:
kubeadm join 172.26.119.238:6443 --token yhns57.4s3y2yll21ew8mta \
    --discovery-token-ca-cert-hash sha256:ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a
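
Newer kubeadm releases can also print a ready-made join command, combining the token and hash steps above into one:

[root@k8s-master ~]# kubeadm token create --print-join-command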

4.3.5 Check cluster status

[root@k8s-worker6 yaml]# kubectl get nodes

4.3.6 Check component status (cs)

[root@k8s-worker6 ~]# kubectl get cs

If any status shows unhealthy, do the following:

[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
## Comment out the - --port=0 line in both files (a sed sketch follows)
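
A sed one-liner that comments out the flag in both manifests (kubelet picks up changes to static pod manifests automatically):

[root@k8s-worker6 ~]# sed -i 's/^\([[:space:]]*- --port=0\)/#\1/' \
    /etc/kubernetes/manifests/kube-scheduler.yaml \
    /etc/kubernetes/manifests/kube-controller-manager.yaml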

4.4 Install the flannel Plugin (master node)

[root@k8s-worker6 ~]# curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download times out, copy the manifest below; this is the full content of kube-flannel.yml:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Install the flannel plugin with kubectl

# Install the flannel plugin
[root@k8s-worker6 yaml]# kubectl apply -f kube-flannel.yml

# Check node status again; all nodes should now be Ready
[root@k8s-worker6 yaml]# kubectl get nodes

4.4.1 If the flannel pods stay in CrashLoopBackOff after deployment and the logs report that no pod CIDR was assigned

# Check pod status
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces
# Inspect the logs of the failing pod (the pod name will differ per cluster)
[root@k8s-worker6 yaml]# kubectl logs kube-flannel-ds-2qhdt -n kube-system

Fix: on the master node, edit /etc/kubernetes/manifests/kube-controller-manager.yaml. (Alternatively, setting networking.podSubnet: 10.244.0.0/16 in kubeadm-config.yaml before kubeadm init avoids this issue.)

[root@k8s-worker6 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# Add these two flags (placement shown in the snippet below):
#   --allocate-node-cidrs=true
#   --cluster-cidr=10.244.0.0/16
# Restart kubelet so the static pod is recreated
[root@k8s-worker6 ~]# systemctl restart kubelet
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces
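
In the manifest, the two flags go under the container's command list, roughly like this (surrounding flags abbreviated):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
    # ...existing flags unchanged...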

4.5 Add kubectl Auto-completion on the Master Node

[root@k8s-worker6 ~]# yum install -y bash-completion
[root@k8s-worker6 ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-worker6 ~]# source <(kubectl completion bash)
[root@k8s-worker6 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc