Deploying k8s-v1.23.3 with kubeadm + containerd (with certificate renewal)


Preface

Differences between kubeadm and binary deployment
  • kubeadm

    • Pros:

      • Deployment is very convenient; two commands cover standing up the cluster and joining nodes

        1. kubeadm init initializes the node
        2. kubeadm join joins a node to the cluster
    • Cons:

      1. Cluster certificates are only valid for one year; you either patch around it or keep upgrading the k8s version
  • Binary deployment

    • Pros:

      1. Certificate validity is customizable (ten years is typical)
      2. Every component detail can be tailored before deployment
      3. Deploying this way gives a much better understanding of how the k8s components fit together
    • Cons:

      1. Deployment is considerably more involved than with kubeadm
Life is short, and this article goes with kubeadm.

Environment

IP            Role    OS / kernel
192.168.91.8  master  centos7.6 / 3.10.0-957.el7.x86_64
192.168.91.9  work    centos7.6 / 3.10.0-957.el7.x86_64

Promise me: disable the firewall on every node

systemctl disable firewalld
systemctl stop firewalld

Promise me: disable selinux on every node

setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config

Promise me: disable swap on every node

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
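A quick sanity check, if you want it: confirm swap really is gone before moving on.

free -h | grep -i swap    # the Swap line should show 0B everywhere
swapon -s                 # no output means no active swap devices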

Promise me: load the kernel modules on every node

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
modprobe nf_conntrack_ipv4
modprobe br_netfilter
modprobe overlay

Promise me: enable the module auto-load service on every node

cat > /etc/modules-load.d/k8s-modules.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay
EOF
Promise me: restart the service and enable it at boot
systemctl enable systemd-modules-load
systemctl restart systemd-modules-load
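To verify the modules actually loaded (optional, but cheap insurance):

lsmod | grep -E 'ip_vs|nf_conntrack|br_netfilter|overlay'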

Promise me: apply the kernel tuning on every node

cat <<EOF > /etc/sysctl.d/kubernetes.conf
# enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# let iptables see bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# disable tcp_tw_recycle; it conflicts with NAT and breaks connectivity
net.ipv4.tcp_tw_recycle=0
# do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# upper limit of the socket listen() backlog
net.core.somaxconn=32768
# max tracked connections, default is nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# avoid swapping; use swap only when the system is about to OOM
vm.swappiness=0
# max number of memory map areas a process may have
vm.max_map_count=655360
# max number of file handles the kernel can allocate
fs.file-max=6553600
# TCP keepalive tuning for long-lived connections
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
EOF
Promise me: make the settings take effect
sysctl -p /etc/sysctl.d/kubernetes.conf
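You can spot-check the values that matter most to k8s (sysctl accepts several keys at once):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.netfilter.nf_conntrack_max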

Promise me: flush the iptables rules on every node

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

Install containerd

Install it on all nodes

Configure the docker repo (it also carries containerd)

wget -O /etc/yum.repos.d/docker.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Find the containerd package name
yum search containerd
Install containerd
yum install -y containerd.io
Edit the containerd configuration file

root is the container storage path; point it at a partition with plenty of free space

sandbox_image is the pause image name and tag (it must be pullable, otherwise kubelet keeps restarting during cluster initialization)

bin_dir is where the cni plugins live; a yum-installed containerd defaults to /opt/cni/bin
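If you would rather not paste the whole file below, you can regenerate the stock defaults and then change just the three values above; note that the config.toml shipped by the containerd.io rpm disables the cri plugin, so regenerating it is a sensible starting point:

containerd config default > /etc/containerd/config.toml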

cat <<EOF > /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/approot1/data/containerd"
state = "/run/containerd"
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = "/etc/cni/net.d/cni-default.conf"
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
EOF
Start the containerd service and enable it at boot
systemctl enable containerd
systemctl restart containerd
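Before moving on, it is worth confirming that the sandbox_image configured above really is pullable; a quick check with ctr (shipped with containerd), pulling into the k8s.io namespace that the cri plugin uses:

ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6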

Configure the kubernetes repo

All nodes need this
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list shows the stable version currently in the repo; at the time of writing it is 1.23.3-0
yum list kubeadm kubelet
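To list every version the repo carries instead of just the latest:

yum list --showduplicates kubeadm kubelet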

Install kubeadm and kubelet

Install them on all nodes

yum install without a version pin installs the current stable release; to keep this document reproducible the version is pinned here (the kubeadm package pulls in kubectl as a dependency)

yum install -y kubelet-1.23.3-0 kubeadm-1.23.3-0

Configure shell completion

Do this on all nodes
yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> $HOME/.bashrc
echo 'source <(kubeadm completion bash)' >> $HOME/.bashrc
source $HOME/.bashrc

Start the kubelet service

On all nodes (kubelet will crash-loop until the node is initialized or joined; that is expected)
systemctl enable kubelet
systemctl restart kubelet

Deploy the master node with kubeadm

Run these steps on the master node

Inspect the kubeadm init defaults

kubeadm config print init-defaults
vim kubeadm.yaml
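If you also want the kubelet and kube-proxy sections pre-populated instead of appending them by hand, kubeadm can print them together with the init defaults; a sketch:

kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration > kubeadm.yaml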
kubeadm configuration (v1beta3)

advertiseAddress must be changed to the ip of the current master node

bindPort is the port the apiserver listens on; it can be customized

criSocket sets the container runtime socket; the default is dockershim, so change it here to the containerd socket file, which you can find in config.toml

imagePullPolicy sets the image pull policy: IfNotPresent pulls only when the image is missing locally; Always always re-pulls; Never never pulls, and if the image is missing locally, kubelet fails to start the pod (mind the camelCase; do not lowercase these values)

certificatesDir sets where certificates are stored; leave it unless you have special requirements

controlPlaneEndpoint is the stable access endpoint; for an HA setup, put the vip here

dataDir is the etcd data path, /var/lib/etcd by default; before deploying, make sure the disk it lives on has enough space

imageRepository is the image registry, k8s.gcr.io by default; if you change it, make sure every image really is pullable from it, since all images come from this registry

kubernetesVersion is the image version and must match the image tag

podSubnet is the pod network range; it must not overlap with serviceSubnet or the host network

serviceSubnet is the k8s service ip range; check that it does not overlap with the host network

cgroupDriver sets the cgroup driver, cgroupfs by default

mode sets the kube-proxy forwarding mode, either iptables or ipvs

name is the node name; if you use a hostname it must be resolvable (this is the name shown by kubectl get nodes)

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.91.8
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: 192.168.91.8
  taints: null

---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.91.8:6443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.22.0.0/16
scheduler: {}

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
cgroupsPerQOS: true

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
Initialize the cluster
kubeadm init --config kubeadm.yaml
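Optionally, pull the images beforehand (the init output below suggests the same); this keeps the init itself fast and surfaces registry problems early:

kubeadm config images pull --config kubeadm.yaml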
Below is the output of kubeadm init:
[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [192.168.91.8 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.91.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [192.168.91.8 localhost] and IPs [192.168.91.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [192.168.91.8 localhost] and IPs [192.168.91.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.504586 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node 192.168.91.8 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node 192.168.91.8 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964
Pick one of the two options below

Without --kubeconfig, kubectl looks for $HOME/.kube/config by default. If you don't create that directory and copy the credentials there, you must either export the KUBECONFIG environment variable or pass --kubeconfig on every kubectl invocation; otherwise kubectl cannot find the cluster.

# Option 1: copy the admin kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Option 2: set an environment variable instead
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
source ~/.bashrc
Check the k8s components
kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-65c54cc984-cglz9               0/1     Pending   0          12s
coredns-65c54cc984-qwd5b               0/1     Pending   0          12s
etcd-192.168.91.8                      1/1     Running   0          27s
kube-apiserver-192.168.91.8            1/1     Running   0          21s
kube-controller-manager-192.168.91.8   1/1     Running   0          21s
kube-proxy-zwdlm                       1/1     Running   0          12s
kube-scheduler-192.168.91.8            1/1     Running   0          27s
coredns stays Pending because no network plugin has been installed yet

Install flannel

Running this on the master node is enough

The Network field must use the same ip range as podSubnet in the kubeadm config above

cat <<EOF > flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.22.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF
kubectl apply -f flannel.yaml
Wait 2-3 minutes for the flannel pod to become Running (actual time depends on image pull speed), then check again with kubectl get pods -n kube-system:
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-65c54cc984-cglz9               1/1     Running   0          2m7s
coredns-65c54cc984-qwd5b               1/1     Running   0          2m7s
etcd-192.168.91.8                      1/1     Running   0          2m22s
kube-apiserver-192.168.91.8            1/1     Running   0          2m16s
kube-controller-manager-192.168.91.8   1/1     Running   0          2m16s
kube-flannel-ds-26drg                  1/1     Running   0          100s
kube-proxy-zwdlm                       1/1     Running   0          2m7s
kube-scheduler-192.168.91.8            1/1     Running   0          2m22s
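If you prefer to watch the rollout rather than poll, the DaemonSet pods carry the app=flannel label:

kubectl -n kube-system get pods -l app=flannel -w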

Join a work node to the cluster

The join command was already printed when the master finished initializing

Just copy it and run it on the work node

--node-name sets the node name; if you use a hostname it must be resolvable (this is the name shown by kubectl get nodes)

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
--node-name 192.168.91.9
What if you forgot to save it, or need to add nodes later?

Just run the command below (--ttl=0 creates a token that never expires; omit it for the default 24h):

kubeadm token create --print-join-command --ttl=0
The join output is short; once it finishes, run kubectl get nodes on the master to check the node status
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
How long the node takes to become Ready depends on how fast the work node pulls the flannel image

You can check whether flannel is Running with kubectl get pod -n kube-system

NAME           STATUS   ROLES                  AGE     VERSION
192.168.91.8   Ready    control-plane,master   9m34s   v1.23.3
192.168.91.9   Ready    <none>                 6m11s   v1.23.3

Add another master node to the cluster

First obtain the CA key hash from one of the existing master nodes

This value was also printed when kubeadm init completed

If you changed certificatesDir during kubeadm init, adjust the /etc/kubernetes/pki/ca.crt path below accordingly

Use the hash in the form: sha256:<hash>

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
You can also simply create a new token; the command prints the hash along with a ready-made join command, to which you only need to append the --certificate-key and --control-plane flags

kubeadm token create --print-join-command --ttl=0

kubeadm join 192.168.91.8:6443 --token 352obx.dw7rqphzxo6cvz9r --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964
Re-upload the control-plane certificates as a secret and print the certificate key

The printed key is the value for the kubeadm join flag --certificate-key

kubeadm init phase upload-certs --upload-certs
On the master node being added, run kubeadm join to bring it into the cluster

--node-name sets the node name; if you use a hostname it must be resolvable (this is the name shown by kubectl get nodes)

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
--certificate-key a7a12fb565bf94c768f0097898926e4d0805eb7ecc1477b48fdaaf4d27eb26b0 \
--control-plane \
--node-name 192.168.91.10
Check the nodes

kubectl get nodes

NAME            STATUS   ROLES                  AGE    VERSION
192.168.91.10   Ready    control-plane,master   96m    v1.23.3
192.168.91.8    Ready    control-plane,master   161m   v1.23.3
192.168.91.9    Ready    <none>                 158m   v1.23.3
Check the master components

kubectl get pod -n kube-system | egrep -v 'flannel|dns'

NAME                                    READY   STATUS    RESTARTS      AGE
etcd-192.168.91.10                      1/1     Running   0             97m
etcd-192.168.91.8                       1/1     Running   0             162m
kube-apiserver-192.168.91.10            1/1     Running   0             97m
kube-apiserver-192.168.91.8             1/1     Running   0             162m
kube-controller-manager-192.168.91.10   1/1     Running   0             97m
kube-controller-manager-192.168.91.8    1/1     Running   0             162m
kube-proxy-6cczc                        1/1     Running   0             158m
kube-proxy-bfmzz                        1/1     Running   0             97m
kube-proxy-zwdlm                        1/1     Running   0             162m
kube-scheduler-192.168.91.10            1/1     Running   0             97m
kube-scheduler-192.168.91.8             1/1     Running   0             162m

Renewing the k8s component certificates

Check the current expiry dates
kubeadm certs check-expiration
The root CAs are actually valid for 10 years; only the component certificates are limited to 1 year
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 17, 2023 02:45 UTC   364d            ca                      no
apiserver                  Feb 17, 2023 02:45 UTC   364d            ca                      no
apiserver-etcd-client      Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Feb 17, 2023 02:45 UTC   364d            ca                      no
controller-manager.conf    Feb 17, 2023 02:45 UTC   364d            ca                      no
etcd-healthcheck-client    Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
etcd-peer                  Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
etcd-server                Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
front-proxy-client         Feb 17, 2023 02:45 UTC   364d            front-proxy-ca          no
scheduler.conf             Feb 17, 2023 02:45 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   9y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   9y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   9y              no

Renew for one year with kubeadm

This assumes the certificates have already expired

Here the system clock is set forward with date -s 2023-2-18 to simulate expiry

kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2023-02-18T00:00:15+08:00 is after 2023-02-17T05:34:40Z

Once the certificates expire you get the error above. Renew them for another year with the commands below, then restart kubelet as well as the etcd, kube-apiserver, kube-controller-manager and kube-scheduler components

Do this on every master node, or run it on one master and copy /etc/kubernetes/admin.conf to the other masters, replacing the old file

cp -r /etc/kubernetes/pki{,.old}
kubeadm certs renew all
systemctl restart kubelet
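Restarting kubelet does not by itself restart the control-plane static pods. One common way to bounce etcd, kube-apiserver, kube-controller-manager and kube-scheduler is to move their manifests out of the watched directory and back (a sketch, assuming the default manifest path):

mkdir -p /tmp/k8s-manifests
mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests/
sleep 20    # give kubelet time to tear the static pods down
mv /tmp/k8s-manifests/*.yaml /etc/kubernetes/manifests/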
Run kubeadm certs check-expiration again and the expiry dates now read 2024:
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 17, 2024 16:01 UTC   364d            ca                      no
apiserver                  Feb 17, 2024 16:01 UTC   364d            ca                      no
apiserver-etcd-client      Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Feb 17, 2024 16:01 UTC   364d            ca                      no
controller-manager.conf    Feb 17, 2024 16:01 UTC   364d            ca                      no
etcd-healthcheck-client    Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
etcd-peer                  Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
etcd-server                Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
front-proxy-client         Feb 17, 2024 16:01 UTC   364d            front-proxy-ca          no
scheduler.conf             Feb 17, 2024 16:01 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   8y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   8y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   8y              no

Compile kubeadm for a ten-year pact

Compiling kubeadm requires a Go toolchain, so install Go first

Official Go downloads: https://go.dev/dl/

wget https://go.dev/dl/go1.17.7.linux-amd64.tar.gz
tar xvf go1.17.7.linux-amd64.tar.gz -C /usr/local/
echo 'PATH=$PATH:/usr/local/go/bin' >> $HOME/.bashrc
source $HOME/.bashrc
go version
Download the k8s source tarball; the version must match the running cluster

wget https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.23.3.tar.gz
tar xvf v1.23.3.tar.gz
cd kubernetes-1.23.3/
vim staging/src/k8s.io/client-go/util/cert/cert.go
change duration365d * 10 to duration365d * 100, so the CA validity goes from 10 to 100 years:
now.Add(duration365d * 100).UTC(),
vim cmd/kubeadm/app/constants/constants.go
change CertificateValidity = time.Hour * 24 * 365 to time.Hour * 24 * 3650, so the leaf certificates go from 1 to 10 years:
CertificateValidity = time.Hour * 24 * 3650
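The two edits can also be scripted; a hypothetical sed equivalent (check the result with git diff before building):

sed -i 's/duration365d \* 10)/duration365d * 100)/' staging/src/k8s.io/client-go/util/cert/cert.go
sed -i 's/time.Hour \* 24 \* 365$/time.Hour * 24 * 3650/' cmd/kubeadm/app/constants/constants.go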
Build kubeadm
make WHAT=cmd/kubeadm GOFLAGS=-v
Renew the certificates
cp -r /etc/kubernetes/pki{,.old}
_output/bin/kubeadm certs renew all
systemctl restart kubelet
Check the expiry dates
_output/bin/kubeadm certs check-expiration
Ten years now:
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 15, 2032 07:08 UTC   9y              ca                      no
apiserver                  Feb 15, 2032 07:08 UTC   9y              ca                      no
apiserver-etcd-client      Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
apiserver-kubelet-client   Feb 15, 2032 07:08 UTC   9y              ca                      no
controller-manager.conf    Feb 15, 2032 07:08 UTC   9y              ca                      no
etcd-healthcheck-client    Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
etcd-peer                  Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
etcd-server                Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
front-proxy-client         Feb 15, 2032 07:08 UTC   9y              front-proxy-ca          no
scheduler.conf             Feb 15, 2032 07:08 UTC   9y              ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   9y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   9y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   9y              no
Replace the kubeadm binary; with multiple master nodes, distribute the new binary to each of them as well
mv /usr/bin/kubeadm{,-oneyear}
cp _output/bin/kubeadm /usr/bin/
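With several masters, a small loop saves some typing (the host list here is illustrative; back up each remote binary first if you want to keep it):

for host in 192.168.91.10; do
  scp _output/bin/kubeadm root@${host}:/usr/bin/kubeadm
done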
If you access the cluster via the $HOME/.kube/config file, replace it with the new admin.conf

If you set the KUBECONFIG environment variable instead, no replacement is needed

mv $HOME/.kube/config{,-oneyear}
cp /etc/kubernetes/admin.conf $HOME/.kube/config