Installing a Highly Available K8S Cluster with kubeadm

Summary: installing a cluster with kubeadm

1. Important Updates in K8S 1.20.x

1. kubectl debug: launch an ephemeral container for troubleshooting (see the example below)
2. Sidecar containers
3. Volume: changing directory permissions via fsGroup
4. ConfigMap and Secret improvements
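
For instance, kubectl debug can attach an ephemeral debug container to a running Pod; a minimal sketch, where the pod and container names are placeholders:

# Attach an ephemeral busybox container that shares the target container's process namespace
kubectl debug -it mypod --image=busybox:1.28 --target=mypod-container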

K8S docs: https://kubernetes.io/docs/setup/
Latest HA installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

2. Installing the K8S Cluster

2.1 Cluster layout

Hostname      IP address       Role
k8s-worker6   172.26.119.238   master node
k8s-worker7   172.26.119.239   worker01 node
k8s-worker8   172.26.119.240   worker02 node

# Check the CentOS version
cat /etc/redhat-release 
CentOS Linux release 7.9.2009 (Core)

K8s high-availability architecture diagram (image omitted)

Set the hostnames

# master node
hostnamectl set-hostname k8s-worker6
# node1 (worker01)
hostnamectl set-hostname k8s-worker7
# node2 (worker02)
hostnamectl set-hostname k8s-worker8

Configure hosts on all nodes by editing /etc/hosts as follows:

cat /etc/hosts 

::1    localhost    localhost.localdomain    localhost6    localhost6.localdomain6
127.0.0.1 localhost  localhost

172.26.119.240 k8s-worker8  k8s-worker8
172.26.119.238 k8s-worker6  k8s-worker6
172.26.119.239 k8s-worker7  k8s-worker7

2.2 Update the configuration (run on all nodes)

# Run on all nodes
# Configure the CentOS 7 yum repos as follows:
# switch to the Aliyun mirror inside China
[root@k8s-worker6 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-worker6 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-worker6 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-worker6 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-worker6 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install the required tools

[root@k8s-worker6 ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Disable the firewall, selinux, dnsmasq, and swap on all nodes. Configure the servers as follows:

[root@k8s-worker6 ~]# systemctl disable --now firewalld 
[root@k8s-worker6 ~]# systemctl disable --now dnsmasq
[root@k8s-worker6 ~]# systemctl disable --now NetworkManager

[root@k8s-worker6 ~]# setenforce 0
[root@k8s-worker6 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
[root@k8s-worker6 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition (all nodes)

[root@k8s-worker6 ~]# swapoff -a && sysctl -w vm.swappiness=0
[root@k8s-worker6 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
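
A quick check that swap is really off; the Swap line should show all zeros:

# Verify that swap is disabled
[root@k8s-worker6 ~]# free -m | grep -i swap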

Clock synchronization

# Install ntpdate for clock synchronization
[root@k8s-worker6 ~]# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
[root@k8s-worker6 ~]# yum install ntpdate -y

# Sync time on all nodes. Time synchronization is configured as follows:
[root@k8s-worker6 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@k8s-worker6 ~]# echo 'Asia/Shanghai' >/etc/timezone
[root@k8s-worker6 ~]# ntpdate time2.aliyun.com

# Add a crontab entry to sync every 5 minutes
[root@k8s-worker6 ~]# crontab -e
*/5 * * * * ntpdate time2.aliyun.com

Configure resource limits

[root@k8s-worker6 ~]# ulimit -SHn 65535
[root@k8s-worker6 ~]# vim /etc/security/limits.conf
# append the following at the end of the file (soft limits must not exceed hard limits)
* soft nofile 65536
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Configure passwordless SSH login

# Set up passwordless SSH from the master node (Master01) to the other nodes:
[root@k8s-worker6 ~]# ssh-keygen -t rsa
[root@k8s-worker6 ~]# ssh-copy-id -i root@172.26.119.239
[root@k8s-worker6 ~]# ssh-copy-id -i root@172.26.119.240
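
With more nodes, a small loop saves typing; a sketch using the worker IPs from the table above:

# Copy the master's public key to every worker node
[root@k8s-worker6 ~]# for ip in 172.26.119.239 172.26.119.240; do ssh-copy-id -i /root/.ssh/id_rsa.pub root@$ip; done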

Upgrade and reboot all nodes

[root@k8s-worker6 ~]# yum update -y  && reboot 

Download the installation source files

cd /root/
git clone https://github.com/dotbalo/k8s-ha-install.git     

2.3 Upgrade the Linux kernel (all nodes)

CentOS 7 needs the kernel upgraded to 4.18+; see https://www.kernel.org/ and https://elrepo.org/linux/kernel/el7/x86_64/
Note: the dnf route below may fail to install the kernel on CentOS 7; if so, use the yum steps that follow.
[root@k8s-worker6 ~]# dnf --disablerepo=\* --enablerepo=elrepo -y install kernel-ml kernel-ml-devel
[root@k8s-worker6 ~]# grubby --default-kernel

# Check the current kernel version
[root@k8s-worker6 ~]# uname -a
# Install the latest kernel with the following commands
# import the public key of the ELRepo repository
[root@k8s-worker6 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# install the ELRepo yum repository
[root@k8s-worker6 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# list the latest kernels available
[root@k8s-worker6 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

# Install the latest kernel
[root@k8s-worker6 ~]# yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y
[root@k8s-worker6 ~]# reboot
# Change the kernel boot order
[root@k8s-worker6 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
# Check the kernel after reboot
[root@k8s-worker6 ~]# uname -a

Install ipvsadm

# Install ipvsadm on all nodes
[root@k8s-worker6 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y

# Configure the ipvs modules on all nodes; on kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack.
[root@k8s-worker6 ~]# vim /etc/modules-load.d/ipvs.conf
[root@k8s-worker6 ~]# systemctl enable --now systemd-modules-load.service

(Screenshot of the ipvs.conf module list omitted.)
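
For reference, a typical ipvs.conf for kernel 4.19+ looks like the sketch below (module list assumed from common kube-proxy IPVS setups; older kernels use nf_conntrack_ipv4 instead of nf_conntrack):

# /etc/modules-load.d/ipvs.conf -- modules needed by kube-proxy in IPVS mode
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack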

Enable the kernel parameters that a k8s cluster requires; configure them on all nodes:

[root@k8s-worker6 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384

net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# load and apply the k8s kernel parameters
[root@k8s-worker6 ~]# sysctl --system
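
A quick sanity check that the modules and parameters took effect (the bridge sysctl only exists once the bridge/br_netfilter module is loaded):

# Confirm the ipvs modules are loaded
[root@k8s-worker6 ~]# lsmod | grep -e ip_vs -e nf_conntrack
# Spot-check the applied sysctls
[root@k8s-worker6 ~]# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables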

3. Installing the Basic K8S Components (All Nodes)

3.1 Install docker-ce

# Download containerd.io (a docker-ce dependency) and install the downloaded package
[root@k8s-worker8 ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
[root@k8s-worker8 ~]# yum install -y containerd.io-1.2.13-3.2.el7.x86_64.rpm

# Install docker-ce 19.03
[root@k8s-worker8 ~]# yum install -y docker-ce-cli-19.03.8-3.el7.x86_64 docker-ce-19.03.8-3.el7.x86_64

# Check the installed docker packages
[root@k8s-worker8 ~]# rpm -qa | grep docker

Tip: newer versions of kubelet recommend systemd, so change Docker's CgroupDriver to systemd. (Important)

[root@k8s-worker8 ~]# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart docker
[root@k8s-worker8 ~]# systemctl restart docker
# Check the docker configuration; the key line is Cgroup Driver: systemd
[root@k8s-worker7 ~]# docker info

# Start docker and enable it at boot
[root@k8s-worker8 ~]# service docker start
[root@k8s-worker8 ~]# chkconfig docker on

3.2 Install the k8s components

# Install the latest kubeadm on all nodes (optional step; skip it when pinning versions below)
[root@k8s-worker8 ~]# yum install kubeadm -y

# Install the pinned K8S component versions on all nodes
[root@k8s-worker8 ~]# yum install -y kubeadm-1.22.2-0.x86_64 kubectl-1.22.2-0.x86_64 kubelet-1.22.2-0.x86_64
# Enable docker at boot on all nodes
[root@k8s-worker8 ~]# systemctl daemon-reload && systemctl enable --now docker
# Check Docker's status
[root@k8s-worker8 ~]# systemctl status docker

Adjust the iptables-related kernel parameters

[root@k8s-worker8 ~]# vi /etc/sysctl.conf
# append the following lines at the end of the file:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@k8s-worker8 ~]# sysctl -p

Enable kubelet at boot

[root@k8s-worker8 ~]# systemctl daemon-reload
[root@k8s-worker8 ~]# systemctl enable --now kubelet

4. Cluster Initialization

4.1 Generate kubeadmin-config.yaml on the master node

[root@k8s-worker6 ~]# kubeadm config print init-defaults > kubeadmin-config.yaml

Move the generated kubeadmin-config.yaml to /root/yaml/. Below is the file before modification; change the commented places to match your environment:

[root@k8s-worker6 ~]# mkdir -p yaml
[root@k8s-worker6 ~]# mv kubeadmin-config.yaml /root/yaml/
[root@k8s-worker6 yaml]# cat kubeadmin-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4 # change to the master node's IP, e.g. 172.26.119.238
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node  # change to the node's hostname, e.g. k8s-worker6
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}  # with apiVersion kubeadm.k8s.io/v1beta3 this can be left as-is;
         # CoreDNS is the only supported DNS and is used by default

etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io 
kind: ClusterConfiguration
kubernetesVersion: 1.22.0  # change to the installed K8S version, 1.22.2; check with kubelet --version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Pull the images needed by kubeadm init on all nodes

[root@k8s-worker6 yaml]# kubeadm config images pull --config /root/yaml/kubeadmin-config.yaml

The command above may time out because Google's image registry is unreachable from inside China. Change imageRepository in kubeadmin-config.yaml to registry.cn-hangzhou.aliyuncs.com/google_containers and run the command again.
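
The edit can be done with a one-liner; a sketch that assumes the file still contains the default repository:

# Point imageRepository at the Aliyun mirror
[root@k8s-worker6 yaml]# sed -i 's#imageRepository: k8s.gcr.io#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers#' /root/yaml/kubeadmin-config.yaml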

The required images can also be pulled as follows:

4.2.1 First, list the images that need to be downloaded

[root@k8s-worker6 yaml]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.2
k8s.gcr.io/kube-controller-manager:v1.22.2
k8s.gcr.io/kube-scheduler:v1.22.2
k8s.gcr.io/kube-proxy:v1.22.2
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

4.2.2 kubeadm init defaults to the k8s.gcr.io registry. To work around the connectivity problem, pull from the Kubernetes image mirrors that domestic cloud providers offer.

[root@k8s-worker6 yaml]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.2
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
[root@k8s-worker6 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4

4.2.3 Re-tag the downloaded images to match what kubeadm init expects.

[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.2 k8s.gcr.io/kube-apiserver:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.2  k8s.gcr.io/kube-controller-manager:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.2   k8s.gcr.io/kube-scheduler:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.2   k8s.gcr.io/kube-proxy:v1.22.2
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5   k8s.gcr.io/pause:3.5
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0   k8s.gcr.io/etcd:3.5.0-0
[root@k8s-worker8 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4   k8s.gcr.io/coredns/coredns:v1.8.4
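
The fourteen pull-and-tag commands above can be collapsed into one small loop; a sketch doing exactly the same work (coredns is handled separately because it lives under the k8s.gcr.io/coredns/ subpath):

#!/bin/bash
# Pull each image from the Aliyun mirror and re-tag it as k8s.gcr.io
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.22.2 kube-controller-manager:v1.22.2 kube-scheduler:v1.22.2 kube-proxy:v1.22.2 pause:3.5 etcd:3.5.0-0; do
  docker pull $MIRROR/$img
  docker tag $MIRROR/$img k8s.gcr.io/$img
done
docker pull $MIRROR/coredns:v1.8.4
docker tag $MIRROR/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4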

4.2.4 Check the images

[root@k8s-worker6 ~]# docker images

4.3 kubeadm init on the master node

4.3.1 Initialize the master node

[root@k8s-worker6 yaml]# kubeadm init --config /root/yaml/kubeadmin-config.yaml --upload-certs

Partial output:

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.119.238:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0a5d83cbe09bed069aa62a16e52c4f71beb1fec8b2fd63dd6365ab125e0315ff 

4.3.2 Configure the environment variable on the master node for accessing the Kubernetes cluster

[root@k8s-worker6 ~]# cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

EOF
[root@k8s-worker6 ~]# source /root/.bashrc

4.3.3 Run on the master node

[root@k8s-worker6 ~]# mkdir -p $HOME/.kube
[root@k8s-worker6 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-worker6 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.3.4 Join the worker nodes to the cluster

# Run on each worker node; kubeadm join registers the node with the master
# The kubeadm join command line was generated by kubeadm init above
# Restart kubelet
[root@k8s-worker7 ~]# systemctl restart kubelet
[root@k8s-worker7 ~]# systemctl status kubelet

Check kubelet's status; if it is not in the running state, inspect the logs for the cause:

[root@k8s-worker8 ~]# journalctl -xeu kubelet > 1.txt
[root@k8s-worker8 ~]# sz 1.txt

Find and fix the cause of the failure. A common one is: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs".

The fix is as follows:

[root@k8s-worker8 ~]# cat > /etc/docker/daemon.json <<EOF
> {"exec-opts": ["native.cgroupdriver=systemd"]}
> EOF
[root@k8s-worker8 ~]# systemctl restart docker
[root@k8s-worker8 ~]# docker info
[root@k8s-worker8 ~]# systemctl start kubelet
[root@k8s-worker8 ~]# systemctl status kubelet

Once kubelet is in the running state, execute the kubeadm join command generated during initialization on each worker node.

[root@k8s-worker7 ~]# kubeadm join 172.26.119.238:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0a5d83cbe09bed069aa62a16e52c4f71beb1fec8b2fd63dd6365ab125e0315ff 

Note: this token expires after 24 hours. If more nodes need to join later, handle it as follows:

# kubeadm token create
[root@k8s-master ~]# kubeadm token create
0w3a92.ijgba9ia0e3scicg

[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
0w3a92.ijgba9ia0e3scicg   23h       2019-09-08T22:02:40+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
t0ehj8.k4ef3gq0icr3etl0   22h       2019-09-08T20:58:34+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a

Then join the cluster:
kubeadm join 172.26.119.238:6443 --token yhns57.4s3y2yll21ew8mta \
    --discovery-token-ca-cert-hash sha256:ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a
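
A simpler alternative: kubeadm can mint a new token and print the complete join command in one step:

# Create a fresh token and print the full join command
[root@k8s-master ~]# kubeadm token create --print-join-command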

4.3.5 Check the cluster status

[root@k8s-worker6 yaml]# kubectl get nodes

4.3.6 Check the component status (cs)

[root@k8s-worker6 ~]# kubectl get cs

If any status shows unhealthy, do the following:

[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-worker6 ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
## comment out the line - --port=0 in both files
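
The same edit as two one-liners (a sketch; the kubelet restarts the static pods automatically once the manifests change):

# Comment out the --port=0 flag in both manifests
[root@k8s-worker6 ~]# sed -i 's/^\(\s*- --port=0\)/#\1/' /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-worker6 ~]# sed -i 's/^\(\s*- --port=0\)/#\1/' /etc/kubernetes/manifests/kube-controller-manager.yaml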

4.4 Install the flannel plugin (master node)

[root@k8s-worker6 ~]# curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If the download times out, use the file below directly; this is the full content of kube-flannel.yml:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Install the flannel plugin with kubectl

# Install the flannel plugin
[root@k8s-worker6 yaml]# kubectl apply -f kube-flannel.yml

# Check the node status again; all nodes should now be Ready
[root@k8s-worker6 yaml]# kubectl get nodes

4.4.1 The flannel pods stay in CrashLoopBackOff after deploying the network plugin, and the logs report that no CIDR was allocated

# Check the pod status
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces
# For a failing pod, check its logs to find the cause
[root@k8s-worker6 yaml]# kubectl logs kube-flannel-ds-2qhdt -n kube-system

The fix: on the master node, edit /etc/kubernetes/manifests/kube-controller-manager.yaml:

[root@k8s-worker6 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# add the following flags:
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
# restart kubelet
[root@k8s-worker6 ~]# systemctl restart kubelet
[root@k8s-worker6 yaml]# kubectl get pods --all-namespaces
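
To confirm that every node now has a pod CIDR assigned:

# Print the pod CIDR allocated to each node
[root@k8s-worker6 ~]# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'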

4.5 Add kubectl auto-completion on the master node

[root@k8s-worker6 ~]# yum install -y bash-completion
[root@k8s-worker6 ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-worker6 ~]# source <(kubectl completion bash)
[root@k8s-worker6 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
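
Optionally, wire completion up to a short alias as well (the standard recipe from the kubectl documentation):

[root@k8s-worker6 ~]# echo 'alias k=kubectl' >> ~/.bashrc
[root@k8s-worker6 ~]# echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc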