Deploying Kubernetes v1.23.3 with kubeadm + containerd (including certificate renewal)

Summary: deploy Kubernetes v1.23.3 with kubeadm and containerd, then extend the cluster certificates.

Preface

Differences between kubeadm and binary deployment
  • kubeadm

    • Pros:

      • Deployment is very convenient; two commands cover creating the cluster and joining nodes

        1. kubeadm init initializes the control-plane node
        2. kubeadm join adds a node to the cluster
    • Cons:

      1. The cluster certificates are only valid for one year; you either hack around it or keep upgrading the k8s version
  • Binary deployment

    • Pros:

      1. Certificate validity is under your control (ten years is typical)
      2. Every component detail can be customized before deployment
      3. Deploying by hand gives a much better understanding of how the k8s components relate to each other
    • Cons:

      1. Deployment is considerably more complex than with kubeadm
Life is short; I would normally pick binary deployment, but this post walks through kubeadm.

Environment preparation

IP            Role    OS / Kernel
192.168.91.8  master  CentOS 7.6 / 3.10.0-957.el7.x86_64
192.168.91.9  worker  CentOS 7.6 / 3.10.0-957.el7.x86_64

Promise me: disable the firewall on all nodes

systemctl disable firewalld
systemctl stop firewalld

Promise me: disable SELinux on all nodes

setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config

Promise me: disable swap on all nodes

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
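Optionally double-check that swap is really off; the Swap line from free should be all zeros and swapon -s should print nothing:

free -m
swapon -s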

Promise me: load the required kernel modules on all nodes

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
modprobe nf_conntrack_ipv4
modprobe br_netfilter
modprobe overlay

Promise me: enable the module auto-load service on all nodes

cat > /etc/modules-load.d/k8s-modules.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay
EOF
Promise me: restart the service and enable it at boot
systemctl enable systemd-modules-load
systemctl restart systemd-modules-load
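A quick, optional check that the modules were actually loaded:

lsmod | egrep 'ip_vs|nf_conntrack|br_netfilter|overlay'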

Promise me: apply the kernel tuning on all nodes

cat <<EOF > /etc/sysctl.d/kubernetes.conf
# enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# let iptables see bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# disable tcp_tw_recycle; it conflicts with NAT and breaks connectivity
net.ipv4.tcp_tw_recycle=0
# do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# upper limit of the socket listen() backlog
net.core.somaxconn=32768
# maximum number of tracked connections, default nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# avoid using swap; it is only touched when the system is about to OOM
vm.swappiness=0
# maximum number of memory map areas a process may have
vm.max_map_count=655360
# maximum number of file handles the kernel will allocate
fs.file-max=6553600
# TCP keepalive tuning for long-lived connections
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
EOF
Promise me: apply the configuration
sysctl -p /etc/sysctl.d/kubernetes.conf
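Optionally confirm that the key settings took effect (both should print 1):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables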

Promise me: flush the iptables rules on all nodes

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

Install containerd

Install it on all nodes.

Configure the Docker repo (the Docker repo also ships containerd):

wget -O /etc/yum.repos.d/docker.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Find the exact name of the containerd package:
yum search containerd
Install containerd:
yum install -y containerd.io
Edit the containerd configuration file

root: the container storage path; point it at a path with enough free disk space

sandbox_image: the name and tag of the pause image (it must actually be pullable, otherwise kubelet keeps failing and restarting during cluster initialization)

bin_dir: where the CNI plugins live; a yum-installed containerd expects them in /opt/cni/bin by default

cat <<EOF > /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/approot1/data/containerd"
state = "/run/containerd"
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = "/etc/cni/net.d/cni-default.conf"
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
EOF
Start the containerd service and enable it at boot
systemctl enable containerd
systemctl restart containerd
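A quick sanity check that containerd is up and its CRI plugin is loaded (ctr ships with containerd; crictl only becomes available later, once the kubeadm dependencies are installed):

ctr version
ctr plugins ls | grep cri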

Configure the Kubernetes repo

Configure it on all nodes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list shows the current stable version in this repo; at the time of writing it is 1.23.3-0
yum list kubeadm kubelet
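If you want to pin a different release, you can also list every version the repo provides (optional):

yum list --showduplicates kubeadm kubelet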

Install kubeadm and kubelet

Install them on all nodes

yum install without a version installs the current stable release; to keep the rest of this document reproducible, the version is pinned here

yum install -y kubelet-1.23.3-0 kubeadm-1.23.3-0

Configure command-line completion

Install it on all nodes
yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> $HOME/.bashrc
echo 'source <(kubeadm completion bash)' >> $HOME/.bashrc
source $HOME/.bashrc

Start the kubelet service

Do this on all nodes
systemctl enable kubelet
systemctl restart kubelet

Deploy the master node with kubeadm

Note: run these steps on the master node

Inspect the kubeadm init defaults

kubeadm config print init-defaults
vim kubeadm.yaml
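A convenient way to create the kubeadm.yaml edited below is to dump the defaults, including the kubelet and kube-proxy component configs, into a file and then adjust it; this is just a sketch using standard kubeadm flags:

kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration > kubeadm.yaml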
kubeadm configuration (v1beta3)

advertiseAddress must be changed to the IP of the current master node

bindPort is the port the apiserver listens on; it can be customized

criSocket selects the container runtime socket; the default is dockershim, so it must be changed to the containerd socket here (the path can be found in config.toml)

imagePullPolicy sets the image pull policy: IfNotPresent pulls only when the image is missing locally; Always always re-pulls; Never never pulls, and kubelet fails to start the Pod if the image is missing locally (mind the camelCase spelling; do not lowercase it)

certificatesDir is where the certificates are stored; it can stay at the default unless you have special requirements

controlPlaneEndpoint is the stable access endpoint; for an HA setup put the VIP here

dataDir is the etcd data directory, /var/lib/etcd by default; before deploying, make sure the disk it lives on has enough space

imageRepository is the image registry prefix, k8s.gcr.io by default; if you change it, make sure every image is actually pullable from that registry, because all control-plane images come from it

kubernetesVersion is the image version and must match the image tags

podSubnet is the Pod network CIDR; it must not overlap with serviceSubnet or with the host network

serviceSubnet is the Service network CIDR; check that it does not overlap with the host network

cgroupDriver selects the cgroup driver; the default is cgroupfs (systemd is used below)

mode selects the kube-proxy forwarding mode, either iptables or ipvs

name is the node name; if it is a hostname it must be resolvable (this is the name shown by kubectl get nodes)

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.91.8
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: 192.168.91.8
  taints: null

---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.91.8:6443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.22.0.0/16
scheduler: {}

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
cgroupsPerQOS: true

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
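Optionally, pre-pull the control-plane images before initializing so that kubeadm init itself runs faster; this uses the kubeadm.yaml written above:

kubeadm config images pull --config kubeadm.yaml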
Cluster initialization
kubeadm init --config kubeadm.yaml
The kubeadm init output looks like this:
[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [192.168.91.8 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.91.8]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [192.168.91.8 localhost] and IPs [192.168.91.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [192.168.91.8 localhost] and IPs [192.168.91.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.504586 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node 192.168.91.8 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node 192.168.91.8 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964
Choose one of the following two options

Without the --kubeconfig flag, kubectl looks for $HOME/.kube/config by default. If you do not create that directory and copy the admin kubeconfig into it, you must either export the KUBECONFIG environment variable or pass --kubeconfig with the kubeconfig file on every kubectl invocation; otherwise kubectl cannot find the cluster.

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> $HOME/.bashrc
source ~/.bashrc
Check that the k8s components are running
kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-65c54cc984-cglz9               0/1     Pending   0          12s
coredns-65c54cc984-qwd5b               0/1     Pending   0          12s
etcd-192.168.91.8                      1/1     Running   0          27s
kube-apiserver-192.168.91.8            1/1     Running   0          21s
kube-controller-manager-192.168.91.8   1/1     Running   0          21s
kube-proxy-zwdlm                       1/1     Running   0          12s
kube-scheduler-192.168.91.8            1/1     Running   0          27s
coredns stays Pending because no network plugin has been installed yet

Install the flannel network plugin

Running this on the master node is enough

The Network CIDR below must match the podSubnet in the kubeadm configuration above

cat <<EOF > flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "172.22.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF
kubectl apply -f flannel.yaml
Wait 2-3 minutes for the flannel Pod to become Running (the exact time depends on image download speed), then check again:
kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-65c54cc984-cglz9               1/1     Running   0          2m7s
coredns-65c54cc984-qwd5b               1/1     Running   0          2m7s
etcd-192.168.91.8                      1/1     Running   0          2m22s
kube-apiserver-192.168.91.8            1/1     Running   0          2m16s
kube-controller-manager-192.168.91.8   1/1     Running   0          2m16s
kube-flannel-ds-26drg                  1/1     Running   0          100s
kube-proxy-zwdlm                       1/1     Running   0          2m7s
kube-scheduler-192.168.91.8            1/1     Running   0          2m22s

Join the worker node to the cluster

The join parameters were already printed when the master node finished initializing

Just copy them and run the command on the worker node

--node-name sets the node name; if it is a hostname it must be resolvable (this is the name shown by kubectl get nodes)

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
--node-name 192.168.91.9
What if you forgot to record it, or need to add more nodes later?

Just run the following command

kubeadm token create --print-join-command --ttl=0
The join output is short; at this point just run kubectl get nodes on the master node to check the node status
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
How long the node takes to become Ready depends on how fast the worker node pulls the flannel image

You can check whether flannel is Running with kubectl get pod -n kube-system

NAME           STATUS   ROLES                  AGE     VERSION
192.168.91.8   Ready    control-plane,master   9m34s   v1.23.3
192.168.91.9   Ready    <none>                 6m11s   v1.23.3

Join an additional master node to the cluster

First obtain the CA key hash from one of the existing master nodes

This value was also printed to the terminal when kubeadm init finished

If certificatesDir was changed during kubeadm init, remember to adjust the /etc/kubernetes/pki/ca.crt path accordingly

Use the resulting hash in the form sha256:<hash>

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Alternatively, create a new token directly; the command prints a ready-made join command including the hash, to which you only need to append the --certificate-key and --control-plane parameters

kubeadm token create --print-join-command --ttl=0

Example output:

kubeadm join 192.168.91.8:6443 --token 352obx.dw7rqphzxo6cvz9r --discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964
Upload the control-plane certificates again and print the key that decrypts the certificates Secret created by kubeadm init

It maps to the --certificate-key parameter of kubeadm join

kubeadm init phase upload-certs --upload-certs
On the master node being added, run kubeadm join to join the cluster

--node-name sets the node name; if it is a hostname it must be resolvable (this is the name shown by kubectl get nodes)

kubeadm join 192.168.91.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5e2387403e698e95b0eab7197837f2425f7b8610e7b400e54d81c27f3c6f1964 \
--certificate-key a7a12fb565bf94c768f0097898926e4d0805eb7ecc1477b48fdaaf4d27eb26b0 \
--control-plane \
--node-name 192.168.91.10
Check the nodes

kubectl get nodes

NAME            STATUS   ROLES                  AGE    VERSION
192.168.91.10   Ready    control-plane,master   96m    v1.23.3
192.168.91.8    Ready    control-plane,master   161m   v1.23.3
192.168.91.9    Ready    <none>                 158m   v1.23.3
Check the master components

kubectl get pod -n kube-system | egrep -v 'flannel|dns'

NAME                                    READY   STATUS    RESTARTS      AGE
etcd-192.168.91.10                      1/1     Running   0             97m
etcd-192.168.91.8                       1/1     Running   0             162m
kube-apiserver-192.168.91.10            1/1     Running   0             97m
kube-apiserver-192.168.91.8             1/1     Running   0             162m
kube-controller-manager-192.168.91.10   1/1     Running   0             97m
kube-controller-manager-192.168.91.8    1/1     Running   0             162m
kube-proxy-6cczc                        1/1     Running   0             158m
kube-proxy-bfmzz                        1/1     Running   0             97m
kube-proxy-zwdlm                        1/1     Running   0             162m
kube-scheduler-192.168.91.10            1/1     Running   0             97m
kube-scheduler-192.168.91.8             1/1     Running   0             162m

Renewing the k8s component certificates

Check the current certificate expiry dates
kubeadm certs check-expiration
The root CAs are actually valid for 10 years; only the component certificates are limited to 1 year
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 17, 2023 02:45 UTC   364d            ca                      no
apiserver                  Feb 17, 2023 02:45 UTC   364d            ca                      no
apiserver-etcd-client      Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Feb 17, 2023 02:45 UTC   364d            ca                      no
controller-manager.conf    Feb 17, 2023 02:45 UTC   364d            ca                      no
etcd-healthcheck-client    Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
etcd-peer                  Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
etcd-server                Feb 17, 2023 02:45 UTC   364d            etcd-ca                 no
front-proxy-client         Feb 17, 2023 02:45 UTC   364d            front-proxy-ca          no
scheduler.conf             Feb 17, 2023 02:45 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   9y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   9y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   9y              no

Renew for one year with kubeadm

This walkthrough assumes the certificates have already expired

Here the system time is changed with date -s 2023-2-18 to simulate expired certificates

kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2023-02-18T00:00:15+08:00 is after 2023-02-17T05:34:40Z

Because the certificates have expired, kubectl produces the error shown above. Run the commands below to renew everything for another year, then restart kubelet and restart the etcd, kube-apiserver, kube-controller-manager and kube-scheduler components (one way to restart these static Pods is sketched after the commands).

Do this on every master node; alternatively, after finishing on one master node, copy the /etc/kubernetes/admin.conf file to the other master nodes to replace the old one

cp -r /etc/kubernetes/pki{,.old}
kubeadm certs renew all
systemctl restart kubelet
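Restarting kubelet alone does not restart the static control-plane Pods. One common way (a sketch, not the only option) is to move the static Pod manifests out of the manifests directory for a moment and then move them back; kubelet stops the old Pods and recreates them with the renewed certificates:

mkdir -p /tmp/k8s-manifests-backup
mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests-backup/
sleep 30
mv /tmp/k8s-manifests-backup/*.yaml /etc/kubernetes/manifests/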
Run kubeadm certs check-expiration again; the certificate expiry dates have moved to 2024:
kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 17, 2024 16:01 UTC   364d            ca                      no
apiserver                  Feb 17, 2024 16:01 UTC   364d            ca                      no
apiserver-etcd-client      Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Feb 17, 2024 16:01 UTC   364d            ca                      no
controller-manager.conf    Feb 17, 2024 16:01 UTC   364d            ca                      no
etcd-healthcheck-client    Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
etcd-peer                  Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
etcd-server                Feb 17, 2024 16:01 UTC   364d            etcd-ca                 no
front-proxy-client         Feb 17, 2024 16:01 UTC   364d            front-proxy-ca          no
scheduler.conf             Feb 17, 2024 16:01 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   8y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   8y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   8y              no

Build a custom kubeadm for a ten-year contract

Building kubeadm requires a Go toolchain, so install Go first.

Go official download page: https://go.dev/dl/

wget https://go.dev/dl/go1.17.7.linux-amd64.tar.gz
tar xvf go1.17.7.linux-amd64.tar.gz -C /usr/local/
echo 'PATH=$PATH:/usr/local/go/bin' >> $HOME/.bashrc
source $HOME/.bashrc
go version
Download the Kubernetes source tarball; the version must match the current cluster version (from the GitHub releases page):

wget https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.23.3.tar.gz
tar xvf v1.23.3.tar.gz
cd kubernetes-1.23.3/
vim staging/src/k8s.io/client-go/util/cert/cert.go
Change duration365d * 10 to duration365d * 100, so the line becomes:
now.Add(duration365d * 100).UTC(),
vim cmd/kubeadm/app/constants/constants.go
Change CertificateValidity = time.Hour * 24 * 365 to:
CertificateValidity = time.Hour * 24 * 3650
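If you prefer to make these two edits non-interactively, something like the following sed commands should work, assuming the source lines still match the patterns above (check the files afterwards):

sed -i 's/duration365d \* 10/duration365d * 100/' staging/src/k8s.io/client-go/util/cert/cert.go
sed -i 's/CertificateValidity = time.Hour \* 24 \* 365$/CertificateValidity = time.Hour * 24 * 3650/' cmd/kubeadm/app/constants/constants.go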
Compile kubeadm
make WHAT=cmd/kubeadm GOFLAGS=-v
Renew the certificates with the freshly built kubeadm
cp -r /etc/kubernetes/pki{,.old}
_output/bin/kubeadm certs renew all
systemctl restart kubelet
Check the certificate expiry dates
_output/bin/kubeadm certs check-expiration
Ten years now:
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Feb 15, 2032 07:08 UTC   9y              ca                      no
apiserver                  Feb 15, 2032 07:08 UTC   9y              ca                      no
apiserver-etcd-client      Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
apiserver-kubelet-client   Feb 15, 2032 07:08 UTC   9y              ca                      no
controller-manager.conf    Feb 15, 2032 07:08 UTC   9y              ca                      no
etcd-healthcheck-client    Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
etcd-peer                  Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
etcd-server                Feb 15, 2032 07:08 UTC   9y              etcd-ca                 no
front-proxy-client         Feb 15, 2032 07:08 UTC   9y              front-proxy-ca          no
scheduler.conf             Feb 15, 2032 07:08 UTC   9y              ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Feb 15, 2032 02:45 UTC   9y              no
etcd-ca                 Feb 15, 2032 02:45 UTC   9y              no
front-proxy-ca          Feb 15, 2032 02:45 UTC   9y              no
Replace the kubeadm binary; if there are multiple master nodes, distribute the new binary to them and replace it there as well
mv /usr/bin/kubeadm{,-oneyear}
cp _output/bin/kubeadm /usr/bin/
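To confirm the binary on PATH is the rebuilt one, check its version string (optional):

kubeadm version -o short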
If kubectl reads $HOME/.kube/config, replace it with the renewed admin.conf

If you exported KUBECONFIG to point at /etc/kubernetes/admin.conf instead, no replacement is needed

mv $HOME/.kube/config{,-oneyear}
cp /etc/kubernetes/admin.conf $HOME/.kube/config