k8s 1.26.2 based on containerd 1.6.18

Summary: installing Kubernetes 1.26.2 with containerd 1.6.18 as the container runtime.

Environment

Server specs:

CentOS Linux release 7.9.2009 (Core)

4 vCPU, 8 GB RAM


Firewall: disabled

SELinux: SELINUX=disabled


Software versions:

docker: 20.10.22

docker-compose: 2.15.1

kubeadm: 1.26.2; kubelet: 1.26.2; kubectl: 1.26.2

containerd: 1.6.18

flannel: v0.20.0


I. Environment

1. Hostname

hostnamectl set-hostname tenxun-jing
vim /etc/hosts
127.0.0.1 tenxun-jing
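A quick, optional sanity check that the new hostname is set and resolves locally:

hostnamectl status | grep "Static hostname"
ping -c 1 tenxun-jing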

2. System settings (firewall, SELinux, swap, kernel parameters, time sync)

1. Disable firewalld and SELinux
systemctl stop firewalld && \
systemctl disable firewalld && \
setenforce 0 && \
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
2. Verify
getenforce && \
cat /etc/selinux/config |grep "^SELINUX=" && \
systemctl status firewalld |grep -B 1 'Active'
3. Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
4. Set kernel parameters (a note on persistence follows this section)
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
5. Synchronize time (optional)
yum -y install ntpdate
# Check whether a manual sync works first; if ntpdate is missing, install it, and if the address is unreachable, open network access or switch to an internal NTP server
ntpdate ntp.aliyun.com
# Configure periodic sync via cron
echo '*/15 * * * * ntpdate ntp.aliyun.com > /dev/null 2>&1' >> /var/spool/cron/root
crontab -l
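A follow-up on step 4: modprobe only loads br_netfilter for the current boot. A minimal sketch, assuming the standard /etc/modules-load.d mechanism on CentOS 7, to make the module load persistent and verify the settings:

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward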

3. Install dependencies

yum -y update
yum -y install lrzsz device-mapper-persistent-data lvm2 wget net-tools nfs-utils gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet nc

4. Install Docker (optional)

# Step 1: install required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the Docker CE repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start the Docker service
sudo service docker start
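An optional check that Docker came up. If you keep Docker alongside containerd, note that its cgroup driver is separate from the one kubelet and containerd use:

docker version
docker info | grep -i "cgroup driver"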

II. Install Kubernetes

1. Install kubeadm, kubelet, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet-1.26.2 kubeadm-1.26.2 kubectl-1.26.2
systemctl enable --now kubelet
systemctl is-active kubelet
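A quick sanity check of the installed versions; before kubeadm init runs, kubelet keeps restarting because it has no configuration yet, so is-active reporting "activating" is expected:

kubeadm version -o short
kubelet --version
kubectl version --client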

2. Install containerd and configure crictl

2.1. Install containerd

1. Install
# Skip if the repository was already added above
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list available |grep containerd
yum install -y containerd.io-1.6.18
2. Generate the default config file
containerd config default > /etc/containerd/config.toml
3. Edit the config file
# Switch the cgroup driver to systemd
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
# Change the pause image address (sandbox_image)
# containerd.io-1.6.18 defaults to registry.k8s.io
# containerd.io-1.6.6 defaults to k8s.gcr.io
sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#' /etc/containerd/config.toml
# Update the pause tag
# Check the pause version first: containerd defaults to 3.6, while kubeadm 1.26.2 expects 3.9
sed -i 's#pause:3.6#pause:3.9#' /etc/containerd/config.toml
# Optionally move the container storage root to a path with more free space (a sketch follows)
# Default: root = "/var/lib/containerd"
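The original gives no command for relocating the storage root; a minimal sketch, assuming the new path is /data/containerd (adjust to your own disk layout):

mkdir -p /data/containerd
sed -i 's#^root = "/var/lib/containerd"#root = "/data/containerd"#' /etc/containerd/config.toml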
################## Supplement: systemd cgroup driver (begin) ##################
4. Check the cluster's cgroup driver (these kubectl checks only work once the cluster is up)
# k8s 1.26.2 defaults to the systemd driver
# List the ConfigMaps and find kubelet-config
kubectl get cm -n kube-system
# Check the value of cgroupDriver; it should read cgroupDriver: systemd
kubectl edit cm kubelet-config -n kube-system
5. Check kubelet's default driver
# Locate the kubelet config file
# With the yum-installed kubelet 1.26.2 the default is /var/lib/kubelet/config.yaml
systemctl status kubelet.service |grep 'config'
# Check the setting; the default is systemd
cat /var/lib/kubelet/config.yaml|grep "cgroupDriver"
# Output
cgroupDriver: systemd
6. The driver can also be set at kubeadm init time
# Print the init defaults to see the default cgroupDriver
kubeadm config print init-defaults --component-configs KubeletConfiguration
# The default is systemd; to change it, generate kubeadm.yml and set the value under kind: KubeletConfiguration
cgroupDriver: systemd
# For example, append the following to the end of kubeadm.yml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
### Note: the driver settings above must all match
################## Supplement: systemd cgroup driver (end) ##################
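After editing, it is worth confirming that the changes from step 3 actually landed in the file; a quick check:

grep -E "SystemdCgroup|sandbox_image" /etc/containerd/config.toml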

Reference: containerd proxy configuration

This assumes you already have a proxy; if you do not, skip this.

1. Add the proxy
vim /lib/systemd/system/containerd.service
# Add under the [Service] section
Environment="http_proxy=http://127.0.0.1:7890"
Environment="https_proxy=http://127.0.0.1:7890"
Environment="ALL_PROXY=socks5://127.0.0.1:7891"
Environment="all_proxy=socks5://127.0.0.1:7891"
2. Restart
systemctl daemon-reload && \
systemctl restart containerd

2.2. Configure crictl

# The config file is /etc/crictl.yaml; set the socket address
cat <<EOF> /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

2.3. Start the service

systemctl enable containerd && \
systemctl daemon-reload && \
systemctl restart containerd
systemctl status containerd
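With containerd running, crictl should now be able to talk to it over the configured socket; a quick check:

crictl version
crictl images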

3. Initialize the cluster with kubeadm

Initialization via command-line flags

1. Official registry
kubeadm init \
--apiserver-advertise-address=10.0.4.12 \
--image-repository registry.k8s.io \
--kubernetes-version v1.26.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket /var/run/containerd/containerd.sock \
--ignore-preflight-errors=all
2. Aliyun mirror
kubeadm init \
--apiserver-advertise-address=10.0.4.12 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.26.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket /var/run/containerd/containerd.sock \
--ignore-preflight-errors=all


3.1. Generate kubeadm.yml

kubeadm config print init-defaults > kubeadm.yml
vim kubeadm.yml
Edit the following settings: the node hostname, the host IP, the pod CIDR, the service CIDR, the Kubernetes version, and the image repository.
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: tenxun-jing
  taints: null
....
localAPIEndpoint:
  advertiseAddress: 172.22.109.126
  bindPort: 6443
....
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12

The full modified configuration:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.99
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: tenxun-jing
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.26.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

3.2. Initialize with kubeadm.yml

# List the required images
kubeadm config images list --config ./kubeadm.yml
# Pull the images
kubeadm config images pull --config ./kubeadm.yml
# Run kubeadm init from the config file
kubeadm init --config=./kubeadm.yml --upload-certs --v=6
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
cat >> /etc/profile << EOF
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /etc/profile
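At this point the control plane should be up. The node will stay NotReady until the CNI plugin (flannel, installed below) is deployed:

kubectl get nodes
kubectl get pods -n kube-system -o wide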

3.3. kubectl completion

1. Install the bash-completion package
yum install bash-completion -y
Otherwise you will get the error:
-bash: _get_comp_words_by_ref: command not found
2. Source bash_completion
source /usr/share/bash-completion/bash_completion
3. Load kubectl completion
source <(kubectl completion bash) 
# Enable completion for the current shell (requires the bash-completion package)
echo "source <(kubectl completion bash)" >> ~/.bashrc 
# Permanently enable completion for your bash shell
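If you also use a short k alias for kubectl, completion can be wired to the alias as well; a small sketch using the completion function installed above:

echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc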

4. Install the flannel network plugin

1. Download
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
2. Edit the manifest
# Change the images if needed (the default flannelcni images on Docker Hub may be rate limited)
- name: install-cni-plugin
  #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
  image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
  command:
  - cp
  args:
  - -f
  - /flannel
  - /opt/cni/bin/flannel
  volumeMounts:
  - name: cni-plugin
    mountPath: /opt/cni/bin
- name: install-cni
  #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
  image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
3. Make sure the network matches the podSubnet set at kubeadm init
# Check: kubectl get configmap kubeadm-config -n kube-system -o yaml |grep podSubnet
# Output
      podSubnet: 10.244.0.0/16
# Check kube-flannel.yml
grep -A 3 "net-conf.json" kube-flannel.yml|grep "Network"
# Output
      "Network": "10.244.0.0/16",
4. Apply
kubectl apply -f kube-flannel.yml
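After a short while the flannel pods should be Running and the node should flip to Ready; a quick check (recent manifests put flannel in the kube-flannel namespace, older ones in kube-system):

kubectl get pods -A | grep flannel
kubectl get nodes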

5. Add worker nodes

1. Get the join command (run on the master node)
# Tokens expire (the default TTL is 24 hours); this creates a fresh token and prints the join command
kubeadm token create --print-join-command
2. Join (run on the worker node)
kubeadm join 172.16.8.31:6443 --token whihg6.utknhvj4dg3ndsv1     --discovery-token-ca-cert-hash sha256:5d2939c6d23cde6507e621cf21d550a7e083efd4331a245c2250209bdb110b89
3. Verify
Check that the node joined successfully (run on the master node)
kubectl get nodes -o wide

III. Troubleshooting

1. Fix "Error registering network: failed to acquire lease: node "master" pod cidr not assigned"

Problem description:


After deploying the flannel network plugin, the flannel pod stayed in CrashLoopBackOff; its logs showed that no pod CIDR had been assigned to the node.

1. Edit
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
Add the parameters:
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
2. Restart
systemctl restart kubelet
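Once the controller manager comes back up, the node should be assigned a CIDR and flannel should recover; a quick check:

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'; echo
kubectl get pods -A | grep flannel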


IV. Miscellaneous

1. Taints
1. View
kubectl describe nodes k8s-master |grep Taints
2. Remove
kubectl taint node k8s-master gameble-
kubectl taint node k8s-master node-role.kubernetes.io/control-plane:NoSchedule-
# Remove all taints from a node in one command
kubectl taint node tenxun-jing $(kubectl describe node tenxun-jing |grep Taints|awk '{print $2}')-
3. Add (a taint needs an effect such as NoSchedule)
kubectl taint node k8s-master gameble=true:NoSchedule

2. Reset script

#!/bin/bash
# premise: touch k8s_reset_init.sh && chmod +x k8s_reset_init.sh
# usage: source k8s_reset_init.sh && [init1|init2]
function init1(){
kubeadm reset -f && \
kubeadm init \
--apiserver-advertise-address=10.0.4.12 \
--image-repository registry.k8s.io \
--kubernetes-version v1.26.2 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket /var/run/containerd/containerd.sock \
--ignore-preflight-errors=all
}
function init2(){
kubeadm reset -f && \
kubeadm init --config=./kubeadm.yml --upload-certs --v=6
}

3. Proxy script

This assumes you already have a proxy; if you do not, skip this.
#!/usr/bin/env bash
# premise: touch start_containerd_env.sh && chmod +x start_containerd_env.sh
# implement: source start_containerd_env.sh && [env_start|env_stop|env_status]
containerd_file="/lib/systemd/system/containerd.service"
proxy_port="7890"
socks5_port="7891"
proxy_ip="127.0.0.1"
# list
proxy_str_list=(
                'Environment="http_proxy=http:\/\/'${proxy_ip}':'${proxy_port}'"' \
                'Environment="https_proxy=http:\/\/'${proxy_ip}':'${proxy_port}'"' \
                'Environment="ALL_PROXY=socks5:\/\/'${proxy_ip}':'${socks5_port}'"' \
                'Environment="all_proxy=socks5:\/\/'${proxy_ip}':'${socks5_port}'"' \
        )
list_len=$((${#proxy_str_list[@]} - 1))
function env_create(){
  [[ ! -f ${containerd_file} ]] && echo "[error] ${containerd_file} not exist" && return
  for ((i=0;i <= ${list_len};i++));do
  grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null
  [[ $? != "0" ]] && sed -ri "/${proxy_str_list[${i}]}/d" ${containerd_file} && sed -ri "/\[Service\]/a${proxy_str_list[${i}]}" ${containerd_file}
  done
  proxy_str_num=$(grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}" ${containerd_file}|wc -l)
  [[ "${proxy_str_num}" != "${#proxy_str_list[@]}" ]] && echo "[error] not create containerd proxy in ${containerd_file}" && return
}
function env_delete(){
  [[ ! -f ${containerd_file} ]] && echo "[error] ${containerd_file} not exist" && return
        for ((i=0;i <= ${list_len};i++));do
  grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null && sed -ri "s/(^${proxy_str_list[${i}]})/#\1/g" ${containerd_file}
  grep -on "^${proxy_str_list[${i}]}" ${containerd_file} &>/dev/null && echo "[error] failed to comment out ${proxy_str_list[${i}]}" && return
  done
}
function env_start(){
  echo "==[env_start]== BEGIN"
  env_create
  systemctl daemon-reload && systemctl restart containerd
  [[ "$(systemctl is-active containerd)" != "active" ]] && echo "[error] containerd restart error" && return
  [[ $(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l) == "4" ]] && echo "[success] start containerd proxy" && systemctl show --property=Environment containerd |grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}" || echo "[error] not set containerd proxy env"
  echo "==[env_start]== END"
}
function env_stop(){
  echo "==[env_stop]== BEGIN"
  grep "^Environment=" ${containerd_file}|grep "${proxy_ip}" &>/dev/null
  if [[ $? == "0" ]];then
  env_delete
  systemctl daemon-reload && systemctl restart containerd
  [[ "$(systemctl is-active containerd)" != "active" ]] && echo "[error] containerd restart error" && return
  else
  echo "[warning] not operation, not set containerd proxy"
  fi
  systemctl show --property=Environment containerd | grep "Environment="
  [[ $(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l) != "4" ]] && echo "[success] stop containerd proxy"
  echo "==[env_stop]== END"
}
function env_status(){
  systemctl show --property=Environment containerd | grep -o "http://${proxy_ip}:${proxy_port}\|socks5://${proxy_ip}:${socks5_port}"
  [[ "$(systemctl show --property=Environment containerd|grep -o "${proxy_ip}"|wc -l)" != "4" ]] && echo "[error] not set containerd proxy env" 
}
msg="==[error]==input error, please try: source xx.sh && [env_start|env_stop|env_status]"
[[ ! "$1" ]] || echo ${msg}

4. Change the default NodePort port range

Official docs: https://kubernetes.io/zh-cn/docs/concepts/services-networking/service/


With NodePort services, the default port range is 30000-32767.


NodePort type

If the type field is set to NodePort, the Kubernetes control plane allocates a port from the range specified by the --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field.


Edit /etc/kubernetes/manifests/kube-apiserver.yaml

[root@node-1 manifests]# vim /etc/kubernetes/manifests/kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.235.21
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-admission-plugins=PodPreset             # from the original example; PodPreset was removed in k8s 1.20, do not add it on 1.26
    - --runtime-config=settings.k8s.io/v1alpha1=true   # only relevant to PodPreset; not needed here
    - --service-node-port-range=1-65535                # the setting to add
    ...


After the change, wait roughly 10 seconds: kubelet notices the modified kube-apiserver.yaml and restarts the static pod to reload the configuration. You can run kubectl get pod in the meantime; once pod information displays normally, the restart has finished. The new port range may still not take effect at this point, in which case also run:

[root@node-0 manifests]# systemctl daemon-reload
[root@node-0 manifests]# systemctl restart kubelet

Then recreate the Service, and a Service with the desired nodePort can be created successfully.
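As a quick test (the names below are hypothetical), a Service requesting a nodePort below 30000 should now be accepted:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-test
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 8080
EOF
kubectl get svc nginx-nodeport-test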
