Deploying a Minimal Kubernetes v1.22.1 Cluster Quickly with Kubeadm

Overview: quickly deploy a minimal Kubernetes v1.22.1 cluster with the kubeadm tool.

Environment:


master 192.168.1.18

node1 192.168.1.19

node2 192.168.1.20

CentOS 7.5

Docker 19.03.13

2+ CPU cores, 2 GB+ RAM


Kubernetes architecture diagram

[Figure: Kubernetes architecture diagram]

Preparation:


1. Edit /etc/hostname to rename the three hosts (optional)

[root@localhost ~]# cat > /etc/hostname << EOF
> k8s-master
> EOF
[root@localhost ~]# cat /etc/hostname
k8s-master
[root@localhost ~]# hostname k8s-master
[root@localhost ~]# cat > /etc/hostname << EOF
> k8s-node1
> EOF
[root@localhost ~]# hostname k8s-ndoe1
[root@localhost ~]# logout    //press Ctrl+D to close the current session; the new hostname appears on reconnect
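On CentOS 7, hostnamectl can do both steps (updating /etc/hostname and the live hostname) in one command. A minimal sketch using this guide's hostnames; run the matching command on each host:

hostnamectl set-hostname k8s-master    //on the master
hostnamectl set-hostname k8s-node1     //on node1
hostnamectl set-hostname k8s-node2     //on node2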

2. Add hostname-to-IP mappings to /etc/hosts


Run this on all three hosts: master, node1, and node2.

[root@k8s-master ~]# cat >> /etc/hosts << EOF
> 192.168.1.18 k8s-master
> 192.168.1.19 k8s-node1
> 192.168.1.20 k8s-node2
> EOF
[root@k8s-master ~]# cat /etc/hosts
...
192.168.1.18 k8s-master
192.168.1.19 k8s-node1
192.168.1.20 k8s-node2
[root@k8s-master ~]# for i in {19,20}        //push /etc/hosts to both node hosts
> do
> scp /etc/hosts root@192.168.1.$i:/etc/
> done
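The scp loop above assumes the master can already authenticate to both nodes over SSH. A minimal sketch for setting up key-based logins first (the IPs are the ones used in this guide):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa                //generate a key pair with no passphrase
for i in 19 20; do ssh-copy-id root@192.168.1.$i; done  //push the public key to both nodes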

Verification:

[root@k8s-ndoe1 ~]# cat /etc/hosts
...
192.168.1.18 k8s-master
192.168.1.19 k8s-node1
192.168.1.20 k8s-node2
[root@k8s-node2 ~]# cat /etc/hosts
...
192.168.1.18 k8s-master
192.168.1.19 k8s-node1
192.168.1.20 k8s-node2
[root@k8s-master ~]# ping -c 2 k8s-node1
PING k8s-node1 (192.168.1.19) 56(84) bytes of data.
64 bytes from k8s-node1 (192.168.1.19): icmp_seq=1 ttl=64 time=0.180 ms
...
[root@k8s-master ~]# ping -c 2 k8s-node2
PING k8s-node2 (192.168.1.20) 56(84) bytes of data.
64 bytes from k8s-node2 (192.168.1.20): icmp_seq=1 ttl=64 time=0.166 ms
...

3. Flush iptables rules and permanently disable the firewall and SELinux


Run this on all three hosts: master, node1, and node2.

[root@k8s-master ~]# iptables -F
[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-master ~]# getenforce
Disabled

4. Synchronize the system time


If the system clocks are out of sync, nodes may fail to join the cluster. (Run on all three hosts.)

[root@k8s-master ~]# yum -y install ntp
[root@k8s-master ~]# ntpdate cn.pool.ntp.org
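A one-shot ntpdate run will drift again over time. A hedged sketch that re-syncs every hour via cron (the pool host follows the guide; adjust as needed):

(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate cn.pool.ntp.org") | crontab -   //append an hourly sync job to root's crontab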

5. Disable the swap partition


Edit /etc/fstab and comment out the swap line, because Kubernetes does not support running with swap enabled.

[root@k8s-master ~]# swapoff -a   //temporary, until reboot
[root@k8s-master ~]# vim /etc/fstab    //permanent: comment out the swap entry (the last line below)
...
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@k8s-master ~]# free -h | grep Swap   //verify: all zeros means swap is off
Swap:            0B          0B          0B
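Instead of editing /etc/fstab by hand, the swap entry can be commented out with a single sed command. A sketch; note that it comments out every line containing the word swap:

sed -ri 's/.*swap.*/#&/' /etc/fstab    //prefix all swap lines with '#' (permanent)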

6. Pass bridged IPv4 traffic to iptables chains

[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> vm.swappiness = 0
> EOF
[root@k8s-master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.ip_forward = 1
vm.swappiness = 0
* Applying /etc/sysctl.conf ...
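In the output above, the two net.bridge keys from k8s.conf are not listed under "Applying /etc/sysctl.d/k8s.conf", which usually means the br_netfilter kernel module is not loaded yet; those keys only exist while the module is loaded. A sketch to load it now and at every boot:

modprobe br_netfilter                                //load the module immediately
echo br_netfilter > /etc/modules-load.d/k8s.conf     //have systemd load it at boot
sysctl --system                                      //re-apply; the bridge keys should now appear
lsmod | grep br_netfilter                            //verify the module is loaded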

7. Ensure every node has a unique hostname, MAC address, and product_uuid


- Use ip link or ifconfig -a to get the MAC addresses of the network interfaces


- Use sudo cat /sys/class/dmi/id/product_uuid to check the product_uuid

-master-
[root@k8s-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:39:01:df brd ff:ff:ff:ff:ff:ff
[root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid
76574D56-B953-9DBB-B691-256C8B3901DF
-node1-
[root@k8s-ndoe1 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:11:c3:a5 brd ff:ff:ff:ff:ff:ff
[root@k8s-ndoe1 ~]# cat /sys/class/dmi/id/product_uuid
84FD4D56-EC2A-420E-194E-D8827711C3A5
-node2-
[root@k8s-node2 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:78:d8:be brd ff:ff:ff:ff:ff:ff
[root@k8s-node2 ~]# cat /sys/class/dmi/id/product_uuid
AC234D56-C031-0FF5-14B5-56608678D8BE

Install Docker


Run this on all three hosts: master, node1, and node2.


The relationship between Docker and Kubernetes:

[Figure: the relationship between Docker and Kubernetes]

Detailed Docker installation guide: https://blog.csdn.net/qq_44895681/article/details/105540702


[root@k8s-master/node1/2 ~]# cd /etc/yum.repos.d/
[root@k8s-master/node1/2 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   //add the Alibaba Cloud Docker yum repo
[root@k8s-master/node1/2 ~]# systemctl start docker
[root@k8s-master/node1/2 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:03:45 2020
 OS/Arch:           linux/amd64
 Experimental:      false
 ...
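Between adding the repo and starting the service you still need to install the packages; on CentOS 7 that typically looks like the sketch below, pinning the Docker 19.03.13 release this guide uses:

yum -y install docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io   //install the pinned Docker release
systemctl start docker && systemctl enable docker                        //start Docker and enable it at boot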

Install the Kubeadm Tools


 The idea behind kubeadm is simple: containerize most of the components and run them as static Pods, which greatly simplifies cluster configuration and certificate management, with the aim of deploying a production-usable Kubernetes cluster as simply as possible. A kubeadm deployment actually installs three components: kubeadm, kubelet, and kubectl:


  • kubeadm: the command that bootstraps the cluster


  • kubelet: the agent that runs workloads on every node in the cluster


  • kubectl: the command-line management tool


Run this on all three hosts: master, node1, and node2.


1. Add the Alibaba Cloud Kubernetes yum repo

[root@k8s-master ~]# cat >> /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes Repo
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> enabled=1
> EOF
[root@k8s-master ~]# for i in {19,20}         //copy the Kubernetes yum repo to the other nodes
> do
> scp /etc/yum.repos.d/kubernetes.repo root@192.168.1.$i:/etc/yum.repos.d/
> done

2. Install the kubeadm, kubelet, and kubectl packages


This deployment uses Kubernetes v1.22.1.

[root@k8s-master ~]# yum  list  |grep kubeadm   //search for kubeadm packages
kubeadm.x86_64                              1.22.1-0                   kubernetes
[root@k8s-master ~]# yum -y install kubelet-1.22.1-0 kubeadm-1.22.1-0 kubectl-1.22.1-0  //pin the exact version
...
Installed:
  kubeadm.x86_64 0:1.22.1-0                    kubectl.x86_64 0:1.22.1-0                    kubelet.x86_64 0:1.22.1-0
Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7         cri-tools.x86_64 0:1.13.0-0                  kubernetes-cni.x86_64 0:0.8.7-0
  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
  socat.x86_64 0:1.7.3.2-2.el7
Complete!

3. Enable the services at boot


  There is no need to start the kubelet manually yet; kubeadm init will start it automatically.

[root@k8s-master ~]# systemctl start docker && systemctl enable docker
[root@k8s-master ~]# systemctl enable kubelet

Initialize the Kubernetes Master


Run this only on the master host.


 When running the initialization command below, it is best to pass --kubernetes-version=v<version> explicitly to avoid version-related errors later.

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.22.1  --apiserver-advertise-address=192.168.1.18  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --image-repository registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
...
...
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!  //initialization succeeded
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
        --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb

 Once the "successfully" message above appears, initialization is complete; continue with the commands suggested in the output.


 Save the complete kubeadm join command above, including the token; the nodes will need it to join the cluster later.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
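With the kubeconfig in place, a quick sanity check (a sketch) confirms that kubectl can reach the new control plane:

kubectl cluster-info    //should print the control plane and CoreDNS endpoints
kubectl get nodes       //the master will show NotReady until a network add-on is installed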

Note:


 During initialization you may run into the following error: error: Get "http://localhost:10248/healthz":

...
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Fix:

[root@k8s-master ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://hx983jf6.mirror.aliyuncs.com"],
"graph": "/mnt/data",
"exec-opts": ["native.cgroupdriver=systemd"]   //添加这行配置
}
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# kubeadm reset -f
//after clearing the kubeadm state, rerun the initialization command


 If the same error appears and the method above does not resolve it, try the alternative below; it solved the problem for me on one occasion.


  Kubernetes init error 1: https://blog.csdn.net/qq_44895681/article/details/10741


  Kubernetes v1.22.1 deployment issue 2: https://blog.csdn.net/qq_44895681/article/details/119947343?spm=1001.2014.3001.5501


Join the Two Worker Nodes to the Cluster


Use the complete kubeadm join command from the last line of the master initialization output to join the worker nodes. If more than 24 hours have passed since then, a new token must be generated first.


1. Generate a new token on the master


A token is valid for 24 hours by default; once it expires it can no longer be used, and joining additional nodes requires a new token.

[root@k8s-master ~]# kubeadm token create
c4jjui.bpppj490ggpnmi3u
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
c4jjui.bpppj490ggpnmi3u   22h       2020-07-21T14:37:12+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Get the SHA-256 hash of the CA certificate:

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt|openssl rsa -pubin -outform der 2>/dev/null|openssl dgst -sha256 -hex|awk '{print $NF}'
c1df6d1ad77fbc0cbdf2bb3dccd5d87eac41b936a5f3fb944f2c14b79af4de55
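Rather than assembling the token and CA hash by hand, kubeadm can also print a ready-to-run join command in one step (a sketch):

kubeadm token create --print-join-command    //creates a new token and prints the full 'kubeadm join ...' line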

2. Join both nodes to the cluster

kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
        --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb

Note:


  When a node joins the cluster you may hit the same error as during initialization (error: Get "http://localhost:10248/healthz"); apply the same fix described above.

[root@k8s-ndoe1 ~]# kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
>         --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "k8s-ndoe1" could not be reached
        [WARNING Hostname]: hostname "k8s-ndoe1": lookup k8s-ndoe1 on 192.168.1.1:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node2 ~]# kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
>         --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

3. Check the cluster status from the master

[root@k8s-master ~]# kubectl get nodes     //check the status of the master and the worker nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3m53s   v1.22.1
k8s-ndoe1    NotReady   <none>                 119s    v1.22.1
k8s-node2    NotReady   <none>                 111s    v1.22.1

 All nodes in the cluster show NotReady; they will only become Ready after the Flannel network add-on is installed.

[root@k8s-master ~]# kubectl get pod -n kube-system   //list the pods in the kube-system namespace
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f6cbbb7b8-4j4x2             0/1     Pending   0          10m
coredns-7f6cbbb7b8-86j9t             0/1     Pending   0          10m
etcd-k8s-master                      1/1     Running   1          10m
kube-apiserver-k8s-master            1/1     Running   1          10m
kube-controller-manager-k8s-master   1/1     Running   1          10m
kube-proxy-5jjqg                     1/1     Running   0          10m
kube-proxy-fdq25                     1/1     Running   0          8m51s
kube-proxy-lmntm                     1/1     Running   0          8m43s
kube-scheduler-k8s-master            1/1     Running   1          10m
--the flannel add-on is not installed yet at this point; install it next--

How Flannel Works


 Flannel is a network fabric that the CoreOS team designed for Kubernetes. In short, it gives Docker containers created on different cluster nodes virtual IP addresses that are unique across the whole cluster. Under the default Docker configuration, each node's Docker daemon allocates IPs for its own containers independently: containers on the same node can reach one another, but containers on different nodes cannot communicate across hosts.


 Flannel's purpose is to re-plan how IP addresses are used across all nodes in the cluster, so that containers on different nodes receive non-overlapping addresses that belong to "one flat internal network", and so that containers on different nodes can communicate directly over these internal IPs.


 Flannel uses etcd to store its configuration and the subnet assignments. When flanneld starts, it retrieves the configuration and the list of subnets already in use, picks an available subnet, and tries to register it. etcd also stores the host IP that corresponds to each subnet. flanneld uses etcd's watch mechanism to monitor changes under /coreos.com/network/subnets and maintains a routing table from that data. For performance, Flannel optimizes the universal TAP/TUN device and proxies IP fragmentation between the TUN device and UDP.

[Figure: Flannel cross-node packet flow]

The figure illustrates how Flannel moves a packet:

1. A packet leaving the source container is forwarded by the host's docker0 virtual bridge to the flannel0 virtual NIC, a point-to-point device with the flanneld daemon listening on the other end.

2. Through etcd, Flannel maintains an inter-node routing table that records the subnet of every node in the cluster.

3. The flanneld daemon on the source host encapsulates the original packet in UDP and, based on its routing table, delivers it to the flanneld daemon on the destination node. There the packet is unwrapped, enters the destination node's flannel0 device, is forwarded to that host's docker0 bridge, and is finally routed to the target container just like local container traffic.


Install the Flannel Network Add-on


 The kube-flannel.yaml manifest below can be copied and used as-is.

[root@k8s-master ~]# cat kube-flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
[root@k8s-master ~]# kubectl apply -f kube-flannel.yaml       //apply the manifest to install the flannel add-on
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
--the deprecation warning above can be ignored; the nodes now pull the flannel image (an Init status means the pull is still in progress)--
[root@k8s-master ~]# kubectl get pod  -n kube-system     //list the pods in the kube-system namespace
NAMESPACE     NAME                                 READY   STATUS     RESTARTS   AGE
kube-system   coredns-7f6cbbb7b8-4j4x2             0/1     Pending    0          111m
kube-system   coredns-7f6cbbb7b8-86j9t             0/1     Pending    0          111m
kube-system   etcd-k8s-master                      1/1     Running    1          111m
kube-system   kube-apiserver-k8s-master            1/1     Running    1          111m
kube-system   kube-controller-manager-k8s-master   1/1     Running    1          111m
kube-system   kube-flannel-ds-amd64-clh6j          0/1     Init:0/1   0          2m45s
kube-system   kube-flannel-ds-amd64-ljs2t          0/1     Init:0/1   0          2m45s
kube-system   kube-flannel-ds-amd64-mw748          0/1     Init:0/1   0          2m45s
kube-system   kube-proxy-5jjqg                     1/1     Running    0          111m
kube-system   kube-proxy-fdq25                     1/1     Running    0          110m
kube-system   kube-proxy-lmntm                     1/1     Running    0          109m
kube-system   kube-scheduler-k8s-master            1/1     Running    1          111m

 The image pull can be very slow; you can pull the image manually or fetch it from somewhere else. (You can also docker save the flannel image on a node that pulled it successfully, then docker load it on the nodes where the pull failed.)


 Example:


      - Export: docker save -o my_ubuntu.tar runoob/ubuntu:v3

      - Import: docker load < my_ubuntu.tar
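Applied to this cluster, a hedged sketch of the same transfer for the Flannel image (the hostnames and file names are this guide's assumptions):

docker save -o flannel-v0.14.0.tar quay.io/coreos/flannel:v0.14.0   //export on a node that has the image
scp flannel-v0.14.0.tar root@k8s-node2:/root/                       //copy it to a node where the pull failed
docker load < /root/flannel-v0.14.0.tar                             //then import it on that node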

[root@k8s-master ~]# kubectl  describe pod  kube-flannel-ds-amd64-clh6j -n kube-system      //show details for this pod in the kube-system namespace
Name:         kube-flannel-ds-amd64-clh6j   //pod name
Namespace:    kube-system         //namespace
Priority:     0
Node:         k8s-ndoe1/192.168.1.19   //scheduled on the k8s-ndoe1/192.168.1.19 node
Start Time:   Fri, 27 Aug 2021 20:37:44 +0800
Labels:       app=flannel
              controller-revision-hash=76ccd4ff4f
              pod-template-generation=1
              tier=node
Annotations:  <none>
Status:       Pending
IP:           192.168.1.19
...
...
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m56s  default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-clh6j to k8s-ndoe1
  Normal  Pulling    6m55s  kubelet            Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
--> The events above show that node k8s-ndoe1 is pulling the flannel image.

Check the Flannel add-on status

[root@k8s-master ~]# kubectl get pod  -n kube-system
NAME                                 READY   STATUS    RESTARTS        AGE
coredns-7f6cbbb7b8-4j4x2             1/1     Running   2 (2d11h ago)   10d
coredns-7f6cbbb7b8-86j9t             1/1     Running   2 (2d11h ago)   10d
etcd-k8s-master                      1/1     Running   2 (2d11h ago)   10d
kube-apiserver-k8s-master            1/1     Running   2 (2d11h ago)   10d
kube-controller-manager-k8s-master   1/1     Running   3 (2d11h ago)   6d1h
kube-flannel-ds-kngqd                1/1     Running   1 (14h ago)     10d
kube-flannel-ds-lzdpk                1/1     Running   0               10d
kube-flannel-ds-r7f4v                1/1     Running   2 (14h ago)     10d
kube-proxy-5jjqg                     1/1     Running   1 (2d11h ago)   10d
kube-proxy-fdq25                     1/1     Running   0               10d
kube-proxy-lmntm                     1/1     Running   1 (14h ago)     10d
kube-scheduler-k8s-master            1/1     Running   3 (2d11h ago)   6d1h

As shown above, once the flannel network add-on is installed on every node, all pods in the kube-system namespace are in Running state.


Check the node status again:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   10d   v1.22.1
k8s-ndoe1    Ready    <none>                 10d   v1.22.1
k8s-node2    Ready    <none>                 10d   v1.22.1

All nodes are now in Ready state, and you can start creating and publishing your own services on the cluster!
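As a quick smoke test (a sketch; the deployment name and port are arbitrary), you can run an nginx Deployment and expose it on a NodePort:

kubectl create deployment nginx --image=nginx              //run one nginx pod
kubectl expose deployment nginx --port=80 --type=NodePort  //expose it outside the cluster
kubectl get pod,svc -o wide                                //note the assigned node port, then curl any node IP on that port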


Note:


 Clusters deployed with kubeadm may report abnormal status for the kube-scheduler and kube-controller-manager components; refer to the article mentioned above (on abnormal kube-scheduler and kube-controller-manager component status in kubeadm-installed clusters) for a fix.
