Installing a Kubernetes Cluster with kubeadm

Summary: installing a Kubernetes cluster with kubeadm


Installing Ubuntu 22.04 in a virtual machine

Building the base VM

Installing the operating system: omitted.

This walkthrough uses Ubuntu 22.04. The VM should normally be given two network adapters: one with internet access (either NAT or bridged), and one host-only adapter. The host-only adapter gets a fixed IP so that SSH clients such as Xshell can always reach the VM at the same address, even after a reboot or a change of network environment.
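As a concrete sketch, the host-only adapter's fixed IP can be set with a netplan file like the one generated below. The interface name enp0s8 and the 192.168.56.0/24 range are assumptions typical of a VirtualBox host-only network; adjust them to your environment.

```shell
# Generate a netplan file for the host-only adapter in the current directory,
# then copy it into place and apply it (enp0s8 / 192.168.56.101 are assumptions).
cat > 60-hostonly.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp0s8:
      dhcp4: false
      addresses: [192.168.56.101/24]
EOF
# sudo cp 60-hostonly.yaml /etc/netplan/ && sudo netplan apply
```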

System configuration:

# install net-tools so that ifconfig is available
sudo apt install net-tools

Cloning three nodes

  1. Select the base VM and clone it.

  2. In the clone dialog, set the VM name and storage path, and regenerate the MAC address.

  3. Repeat until you have three VMs for building the Kubernetes cluster.

  4. Restart networking on each node:

# ubuntu22.04
sudo systemctl restart systemd-networkd

Hardware overview

This walkthrough uses one master node and two worker nodes. The master node should have at least 2 GB of RAM and 2 CPUs.

In the author's environment, the master and worker nodes each have 2 GB of RAM and 4 CPUs, all running Ubuntu 22.04.

Host environment adjustments

Disable the firewall

The iptables firewall filters and forwards all network traffic; on machines inside a trusted internal network it is often disabled outright so it cannot hurt network performance. Kubernetes itself, however, relies on iptables for IP forwarding and rewriting, so whether disabling is safe depends on the network mode you use; with a network plugin that does not need the firewall, it can simply be turned off.

Note that on Ubuntu 22.04 the default firewall frontend is ufw (firewalld is the RHEL-family equivalent):

sudo ufw disable
sudo systemctl disable ufw

Disable SELinux

SELinux is a security-hardening component, but it is error-prone and hard to troubleshoot, so it is often disabled right after the OS is installed. Note that Ubuntu ships AppArmor rather than SELinux by default, so on Ubuntu this step can usually be skipped; the commands below only matter on systems where SELinux is actually active.

# check the SELinux status
sudo apt install selinux-utils
getenforce
# switch to permissive mode for the current boot
sudo setenforce 0

Disable swap

When memory runs low, Linux automatically moves part of it into the swap area on disk, which degrades performance; kubelet also refuses to run with swap enabled by default, so turn it off.

# check the swap area
free
# disable swap for the current boot
sudo swapoff -a
# then edit /etc/fstab and comment out the swap entry so it stays off after a reboot:
#/swap.img      none    swap    sw      0       0
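The fstab edit can also be scripted; here is a sketch using GNU sed that comments out any uncommented line containing the word swap (a simple heuristic, so review /etc/fstab afterwards):

```shell
# Comment out active swap entries in an fstab-style file (GNU sed).
disable_fstab_swap() {
  sed -i 's|^\([^#].*\bswap\b.*\)|#\1|' "$1"
}
# usage: sudo swapoff -a, then run disable_fstab_swap on /etc/fstab as root
```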

Set the hostname

  1. Add hostname-to-IP mappings for all nodes to /etc/hosts.
  2. Set the system hostname:
# set the hostname
sudo hostnamectl set-hostname k8s-master1
# check the hostname
hostname
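For step 1, the /etc/hosts additions look like this on every node (the IPs are this walkthrough's assumed host-only addresses; use your own):

```text
192.168.56.101 k8s-master1
192.168.56.102 k8s-node1
192.168.56.103 k8s-node2
```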

Forward IPv4 and let iptables see bridged traffic

Verify that the br_netfilter module is loaded by running lsmod | grep br_netfilter.

To load it explicitly, run sudo modprobe br_netfilter.

For a Linux node's iptables to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# apply the sysctl parameters without rebooting
sudo sysctl --system

Install a CRI runtime

cri-dockerd

Prerequisite: Docker is already installed.

Download cri-dockerd

Either download an official release:

https://github.com/Mirantis/cri-dockerd/releases/tag/v0.2.5

or clone the source and build it yourself:

cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket

cri-dockerd service configuration

  1. Create /etc/systemd/system/cri-docker.socket:
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
# %t/cri-dockerd.sock is where the socket file is created
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
  2. Create /etc/systemd/system/cri-docker.service:
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
# The start command line:
# - mind the path to the cri-dockerd binary (a source build installs to /usr/local/bin)
# - mind the network plugin and pause-image settings
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
  3. Start and verify the service:
# reload unit files
systemctl daemon-reload
# enable and start the socket and the service
systemctl enable --now cri-docker.socket
systemctl enable --now cri-docker
# check the service status
systemctl status cri-docker

containerd

Installing from the binary release

Install containerd

Download `containerd-<VERSION>-<OS>-<ARCH>.tar.gz` from https://github.com/containerd/containerd/releases and extract it:

$ cd /usr/local
$ tar Cxzvf /usr/local containerd-1.6.2-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress

Configure /usr/lib/systemd/system/containerd.service:

# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable --now containerd

Install runc

Download the `runc.<ARCH>` binary from https://github.com/opencontainers/runc/releases and install it:

# assuming runc.amd64 was downloaded into the current directory
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

Install the CNI plugins

Download the `cni-plugins-<OS>-<ARCH>-<VERSION>.tgz` archive from https://github.com/containernetworking/plugins/releases and extract it under /opt/cni/bin:

$ mkdir -p /opt/cni/bin
$ tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth

Adjust the containerd configuration

mkdir /etc/containerd
# generate the default config file
containerd config default > /etc/containerd/config.toml

Then edit /etc/containerd/config.toml:

# override the sandbox (pause) image
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
# set the cgroup driver to systemd
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

# restart containerd
sudo systemctl restart containerd
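The two edits can be applied non-interactively with sed. This is a sketch that assumes the key names as they appear in containerd 1.6's default config; verify against your generated file before using it:

```shell
# Rewrite sandbox_image and flip SystemdCgroup in a containerd config file (GNU sed).
edit_containerd_config() {
  sed -i \
    -e 's|sandbox_image = ".*"|sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"|' \
    -e 's|SystemdCgroup = false|SystemdCgroup = true|' \
    "$1"
}
# usage (as root): edit_containerd_config /etc/containerd/config.toml && systemctl restart containerd
```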

crictl connects to unix:///var/run/dockershim.sock by default, so its config file needs to be pointed at containerd instead:

cat <<EOF> /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Pull an image to verify

crictl pull nginx:1.20.2
crictl images

kubeadm

  1. Install the prerequisite packages:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
  2. Download the GPG key (here from the Alibaba Cloud mirror):
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg \
https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
  3. Add the Kubernetes package repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]  \
https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee \
/etc/apt/sources.list.d/kubernetes.list
  4. Update the apt index and check which versions are available:
sudo apt-get update
apt-cache madison kubelet kubeadm kubectl
  5. Install a specific version:
sudo apt-get install -y kubelet=<VERSION_STRING> kubeadm=<VERSION_STRING> kubectl=<VERSION_STRING>
For example:
sudo apt-get install -y kubelet=1.24.1-00 kubeadm=1.24.1-00 kubectl=1.24.1-00
  6. Check:
# kubeadm
kubeadm version
# kubectl
kubectl version
# kubelet
systemctl status kubelet

Note: right after installation kubelet sits in a restart loop, being restarted every 10 s; it stays that way until the cluster has been initialized, so do not worry that kubelet is not running cleanly yet.

Initialization configuration (master node only)

Generate the default config file

kubeadm config print init-defaults > init.default.yaml

Edit the config file

# set the advertise address to the node's IP
localAPIEndpoint.advertiseAddress: 192.168.56.101
# set the CRI socket; required when using cri-dockerd
nodeRegistration.criSocket: unix:///var/run/cri-dockerd.sock
# set the node name
nodeRegistration.name: master1
# switch the image repository to a domestic mirror
imageRepository: registry.aliyuncs.com/google_containers
# set the version
kubernetesVersion: 1.24.1
# add podSubnet: the flannel network plugin installed later requires the pod CIDR
# to be set at cluster-init time; 10.244.0.0/16 is flannel's default podSubnet, and
# the cluster config must match the network plugin's config
podSubnet: 10.244.0.0/16
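Put together, the edited parts of init.default.yaml come out roughly as below (kubeadm 1.24 uses the v1beta3 config API; the address and node name are this walkthrough's assumptions, and unrelated defaults are omitted):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.101
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: master1
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.24.1
networking:
  podSubnet: 10.244.0.0/16
```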

Pull the required images

sudo kubeadm config images pull --config=init.default.yaml

Initialize the cluster

# initialize from the config file
sudo kubeadm init --config=init.default.yaml
# or initialize with flags
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.24.1 --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.239.142 \
  --cri-socket unix:///var/run/cri-dockerd.sock

If you are working as a regular (non-root) user, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are working as root, set the environment variable instead:

# append to the end of /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf
# apply it immediately
source /etc/profile

Check the node status

# kubectl get node
NAME      STATUS     ROLES           AGE   VERSION
master1   NotReady   control-plane   89s   v1.24.1

The node status is NotReady at this point.

Check the kubelet status

# systemctl status kubelet
  Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Sun 2022-09-04 17:18:25 CST; 5min ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 2146 (kubelet)
      Tasks: 16 (limit: 2236)
     Memory: 34.8M
        CPU: 14.635s
     CGroup: /system.slice/kubelet.service
             └─2146 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:>

Sep 04 17:23:05 master1 kubelet[2146]: E0904 17:23:05.440384    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:10 master1 kubelet[2146]: E0904 17:23:10.441932    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:15 master1 kubelet[2146]: E0904 17:23:15.443737    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:20 master1 kubelet[2146]: E0904 17:23:20.445438    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:25 master1 kubelet[2146]: E0904 17:23:25.447628    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:30 master1 kubelet[2146]: E0904 17:23:30.451519    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:35 master1 kubelet[2146]: E0904 17:23:35.454570    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:40 master1 kubelet[2146]: E0904 17:23:40.459534    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:45 master1 kubelet[2146]: E0904 17:23:45.465543    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>
Sep 04 17:23:50 master1 kubelet[2146]: E0904 17:23:50.468645    2146 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not i>

The logs show that the network plugin is not ready.

Listing all pods shows the coredns pods are not ready either:

kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-74586cf9b6-22qc5          0/1     Pending   0          6m3s
kube-system   coredns-74586cf9b6-qx9ql          0/1     Pending   0          6m3s
kube-system   etcd-master1                      1/1     Running   0          6m7s
kube-system   kube-apiserver-master1            1/1     Running   0          6m7s
kube-system   kube-controller-manager-master1   1/1     Running   0          6m7s
kube-system   kube-proxy-dgmcn                  1/1     Running   0          6m3s
kube-system   kube-scheduler-master1            1/1     Running   0          6m7s

Install the network plugin

Kubernetes defines the CNI standard, and there are many network plugins to choose from; here I pick the most widely used one, Flannel. Documentation is in its GitHub repository (https://github.com/flannel-io/flannel/). Installation is simple: just apply the project's kube-flannel.yml to the cluster.

Download

You can fetch it with curl:

curl https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml >> flannel.yml

If you set a podSubnet earlier, edit the net-conf.json field of the file and change Network to the address range you passed to kubeadm as --pod-network-cidr, for example:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Then apply it:

kubectl apply -f flannel.yml
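The net-conf.json edit can be scripted too; a sketch with GNU sed, assuming the single "Network" key present in the default kube-flannel.yml:

```shell
# Point flannel's Network at the cluster podSubnet (10.244.0.0/16 in this walkthrough).
set_flannel_network() {
  sed -i 's|"Network": "[^"]*"|"Network": "10.244.0.0/16"|' "$1"
}
# usage: set_flannel_network flannel.yml && kubectl apply -f flannel.yml
```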

Checking again, the master node is now Ready:

# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
master1   Ready    control-plane   21m   v1.24.1

All pods are healthy as well; note that the kube-flannel pod may take a little while to initialize.

# kubectl get pod --all-namespaces
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-8x697             1/1     Running   0          2m12s
kube-system    coredns-74586cf9b6-4vqhq          1/1     Running   0          9m26s
kube-system    coredns-74586cf9b6-6s6mk          1/1     Running   0          9m26s
kube-system    etcd-master1                      1/1     Running   1          9m40s
kube-system    kube-apiserver-master1            1/1     Running   1          9m40s
kube-system    kube-controller-manager-master1   1/1     Running   0          9m40s
kube-system    kube-flannel-ds-thzgs             1/1     Running   0          5m59s
kube-system    kube-proxy-8f28v                  1/1     Running   0          9m26s
kube-system    kube-scheduler-master1            1/1     Running   1          9m40s

Enable kube-proxy's ipvs mode

# edit the mode
kubectl edit cm kube-proxy -n kube-system
# change it to: mode: "ipvs"
# delete the existing kube-proxy pods so they come back in the new mode
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Worker nodes

Generate the join command on the master node

sudo kubeadm token create --print-join-command

Note: if the container runtime is not containerd, append the CRI socket URL to the command; for cri-dockerd that is --cri-socket unix:///var/run/cri-dockerd.sock.
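For example, a join on a worker running cri-dockerd looks like this (the token and hash are placeholders; use the values printed by the command above):

```shell
sudo kubeadm join 192.168.56.101:6443 --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<HASH> \
  --cri-socket unix:///var/run/cri-dockerd.sock
```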

Resetting a node

If a node is no longer needed, or has to be removed because something went wrong, reset it as follows:

  • Run kubeadm reset:
sudo kubeadm reset
  • Remove the related files:
sudo rm -rf /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni \
  /etc/cni/net.d $HOME/.kube/config
  • Clear the ipvs rules:
sudo ipvsadm --clear
  • Delete the CNI network interface:
sudo ifconfig cni0 down
sudo ip link delete cni0

Deploy a web application

Create the workload

  1. Create myhello-rc.yaml:
apiVersion: v1
kind: ReplicationController # replication controller (RC)
metadata:
  name: myhello-rc # RC name, unique within the cluster
  labels:
    name: myhello-rc
spec:
  replicas: 5 # desired number of Pod replicas
  selector:
    name: myhello-pod
  template:
    metadata:
      labels:
        name: myhello-pod
    spec:
      containers: # Pod content definition
      - name: myhello # container name
        image: xlhmzch/hello:1.0.0 # Docker image for the container
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env: # environment variables injected into the container
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"

Create the resource:

sudo kubectl create -f myhello-rc.yaml

Create the Service

  1. Create myhello-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myhello-svc
  labels:
    name: myhello-svc
spec:
  type: NodePort # expose the port on every node
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30000
  selector:
    name: myhello-pod
  2. Create the resource:
sudo kubectl create -f myhello-svc.yaml

Verify

curl http://192.168.1.9:30000/ping

Troubleshooting

CNI fails to load when a node joins

  1. Check the kubelet status:
systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Sun 2022-09-04 21:05:48 CST; 3min 39s ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 3310 (kubelet)
      Tasks: 16 (limit: 2236)
     Memory: 33.1M
        CPU: 5.174s
     CGroup: /system.slice/kubelet.service
             └─3310 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:>

Sep 04 21:08:09 node2 kubelet[3310]: E0904 21:08:09.146947    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
Sep 04 21:08:14 node2 kubelet[3310]: E0904 21:08:14.148254    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
Sep 04 21:08:19 node2 kubelet[3310]: E0904 21:08:19.151768    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
Sep 04 21:08:24 node2 kubelet[3310]: E0904 21:08:24.221484    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
Sep 04 21:08:29 node2 kubelet[3310]: E0904 21:08:29.223015    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
Sep 04 21:08:34 node2 kubelet[3310]: E0904 21:08:34.242096    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
Sep 04 21:08:39 node2 kubelet[3310]: E0904 21:08:39.242534    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini>
Sep 04 21:08:44 node2 kubelet[3310]: E0904 21:08:44.243459    3310 kubelet.go:2344] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not ini

The CNI network plugin failed to load.

Fixes:

  1. The CNI config files are missing; copy them over from the master:
sudo mkdir -p /run/flannel/
sudo scp root@master1:/run/flannel/subnet.env /run/flannel/subnet.env
sudo mkdir -p /etc/cni/net.d
sudo scp root@master1:/etc/cni/net.d/10-flannel.conflist  /etc/cni/net.d/
  2. containerd cannot pull images:
# crictl pull quay.io/coreos/flannel:v0.9.1-amd64
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"

Fix:

 cat <<EOF> /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Takeaways

  1. At first I installed cri-dockerd without installing Docker, and later switched to containerd.
  2. With containerd there is no socket to override, which removes one pitfall.
  3. I forgot to verify that the other nodes could pull images, which led to the socket failure[^1].

[^1]: See "containerd cannot pull images" in Troubleshooting above.
