Deploying Kubernetes v1.26.3 from Binaries with cri-dockerd (Part 3)

Deploy the scheduler component
Configure the scheduler certificate

Remember to replace the IPs below with your own node IPs.

  • As with the etcd component, include the IP of every scheduler node here
  • Likewise, mind the placement of the JSON commas when editing the IP list (see the example after the heredoc below)
cat << EOF > ${work_dir}/tmp/ssl/kube-scheduler-csr.json
{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.91.19"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-scheduler",
        "OU": "System"
      }
    ]
}
EOF
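For reference, a rough sketch of what the hosts block might look like in an HA setup with three scheduler nodes (the extra IPs are placeholders for your own); note that the last entry carries no trailing comma:

    "hosts": [
      "127.0.0.1",
      "192.168.11.147",
      "192.168.11.148",
      "192.168.11.149"
    ],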
Create the scheduler certificate

${work_dir}/bin/cfssl gencert \
-ca=${work_dir}/tmp/ssl/ca.pem \
-ca-key=${work_dir}/tmp/ssl/ca-key.pem \
-config=${work_dir}/tmp/ssl/ca-config.json \
-profile=kubernetes ${work_dir}/tmp/ssl/kube-scheduler-csr.json | \
${work_dir}/bin/cfssljson -bare ${work_dir}/tmp/ssl/kube-scheduler
Create the kubeconfig

Set the cluster parameters

  • --server is the apiserver address (if you have a highly available address, be sure to use it)
  • Replace it with your own IP and the port set by the --secure-port flag in the apiserver service file (with an HA address, use the HA port)
  • Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig
${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-cluster kubernetes \
--certificate-authority=${work_dir}/tmp/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.11.147:6443 \
--kubeconfig=${work_dir}/tmp/ssl/kube-scheduler.kubeconfig

Set the client authentication parameters

${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-credentials system:kube-scheduler \
--client-certificate=${work_dir}/tmp/ssl/kube-scheduler.pem \
--client-key=${work_dir}/tmp/ssl/kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${work_dir}/tmp/ssl/kube-scheduler.kubeconfig

Set the context parameters

${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=${work_dir}/tmp/ssl/kube-scheduler.kubeconfig

Set the default context

${work_dir}/bin/kubernetes/server/bin/kubectl config \
use-context system:kube-scheduler \
--kubeconfig=${work_dir}/tmp/ssl/kube-scheduler.kubeconfig
Configure the scheduler as a systemd service

kube-scheduler parameter reference

Create the systemd unit template

cat << EOF > ${work_dir}/tmp/service/kube-scheduler.service.template
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=##k8sBin##/kube-scheduler \\
  --authentication-kubeconfig=##configPath##/kube-scheduler.kubeconfig \\
  --authorization-kubeconfig=##configPath##/kube-scheduler.kubeconfig \\
  --bind-address=0.0.0.0 \\
  --kubeconfig=##configPath##/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --v=2
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

Generate the systemd unit file for each scheduler node

k8sBin='/data/kubernetes/bin'; \
sslPath='/etc/kubernetest/ssl'; \
configPath='/etc/kubernetest'; \
sed -e "s|##k8sBin##|${k8sBin}|g" \
-e "s|##sslPath##|${sslPath}|g" \
-e "s|##configPath##|${configPath}|g" ${work_dir}/tmp/service/kube-scheduler.service.template \
> ${work_dir}/tmp/service/kube-scheduler.service
Distribute the files and start the scheduler cluster

Remember to replace the IPs below with your own node IPs.

ip_head='192.168.11';for i in 147 148 149;do \
k8sHost="${ip_head}.${i}"; \
ssh ${k8sHost} "mkdir -p ${k8sBin} ${sslPath}"; \
scp ${work_dir}/tmp/ssl/kube-scheduler.kubeconfig ${k8sHost}:${configPath}/; \
scp ${work_dir}/tmp/service/kube-scheduler.service ${k8sHost}:/usr/lib/systemd/system/; \
scp ${work_dir}/bin/kubernetes/server/bin/kube-scheduler ${k8sHost}:${k8sBin}/; \
ssh ${k8sHost} "systemctl enable kube-scheduler && systemctl start kube-scheduler --no-block"; \
done
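Optionally, a quick loop like the one below confirms the service came up on every node (adjust the IPs to your environment):

ip_head='192.168.11';for i in 147 148 149;do \
ssh ${ip_head}.${i} "systemctl is-active kube-scheduler"; \
done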

At this point the master node deployment is complete. You can check the status of the master components with the command below.

${work_dir}/bin/kubernetes/server/bin/kubectl get componentstatus

All components should report Healthy. The Warning below only means that the v1 ComponentStatus API has been deprecated since v1.19.

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}
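Since ComponentStatus is deprecated, a more future-proof alternative is to query the apiserver health endpoints directly, for example:

${work_dir}/bin/kubernetes/server/bin/kubectl get --raw='/readyz?verbose'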

Deploy the worker components

Deploy the docker component
Prepare the docker binaries

I downloaded the docker binaries in advance and put them under ${work_dir}/bin; adjust the path to match wherever you placed yours.

cd ${work_dir}/bin && \
tar xf docker-23.0.1.tgz
Configure docker as a systemd service

Create the systemd unit template

cat <<EOF > ${work_dir}/tmp/service/docker.service.template
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd \\
          -H unix:///var/run/docker.sock \\
          --data-root=##dataRoot##
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

Prepare the daemon.json file

cat <<EOF > ${work_dir}/tmp/service/daemon.json.template
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
Distribute the files and start the docker component

Remember to replace the IPs below with your own node IPs.

ip_head='192.168.11';for i in 147 148 149;do \
dataRoot='/data/docker'; \
sed "s|##dataRoot##|${dataRoot}|g" ${work_dir}/tmp/service/docker.service.template \
> ${work_dir}/tmp/service/docker.service; \
ssh ${ip_head}.${i} "mkdir -p /etc/docker"; \
scp ${work_dir}/tmp/service/docker.service ${ip_head}.${i}:/usr/lib/systemd/system/; \
scp ${work_dir}/tmp/service/daemon.json.template ${ip_head}.${i}:/etc/docker/daemon.json; \
scp ${work_dir}/bin/docker/* ${ip_head}.${i}:/usr/bin/; \
ssh ${ip_head}.${i} "systemctl enable docker && systemctl start docker --no-block"; \
done
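Optionally, verify on each node that docker is running with the systemd cgroup driver and the expected data root (CgroupDriver and DockerRootDir are standard docker info fields):

ip_head='192.168.11';for i in 147 148 149;do \
ssh ${ip_head}.${i} "docker info --format '{{.CgroupDriver}} {{.DockerRootDir}}'"; \
done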
Deploy the cri-dockerd component
Prepare the cri-dockerd binaries

I downloaded the cri-dockerd binary in advance and put it under ${work_dir}/bin; adjust the path to match wherever you placed yours.

cd ${work_dir}/bin && \
tar xf cri-dockerd-0.3.1.amd64.tgz
Configure cri-dockerd as a systemd service

There is a standard service file on GitHub

  • Some parameters in the official template need to be changed
  • --cni-bin-dir defaults to /opt/cni/bin; change it if you need to. To keep things simple and avoid forgetting about it later, I stick with the default path here
  • --container-runtime-endpoint defaults to unix:///var/run/cri-dockerd.sock, so this flag can simply be dropped from the official template
  • --cri-dockerd-root-directory defaults to /var/lib/cri-dockerd; as with docker, you can move it elsewhere to avoid running out of space on the default path
  • --pod-infra-container-image defaults to registry.k8s.io/pause:3.6; switch it to the Aliyun mirror, otherwise pulling the k8s image will fail without access to servers outside China
  • The remaining parameters can be listed with cri-dockerd --help

Create the systemd unit template

cat <<EOF > ${work_dir}/tmp/service/cri-dockerd.service.template
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
# I did not bother setting up cri-docker.socket, so the line below is commented out; otherwise startup fails with "Unit not found"
# Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd \\
          --cri-dockerd-root-directory=##criData## \\
          --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Distribute the files and start the cri-dockerd component

Remember to replace the IPs below with your own node IPs.

  • The following steps require network access
  • conntrack-tools is installed via yum
  • The subnet must match the controller-manager --cluster-cidr flag; remember to adjust the subNet variable accordingly
  • I did not change the cri-dockerd CNI plugin directory (default /opt/cni/bin); if you did, replace the cni_dir variable below
  • I did not change the CNI network config directory either (default /etc/cni/net.d); if you did, replace the cni_conf variable below
ip_head='192.168.11';for i in 147 148 149;do \
criData='/data/cri-dockerd'; \
sed "s|##criData##|${criData}|g" ${work_dir}/tmp/service/cri-dockerd.service.template \
> ${work_dir}/tmp/service/cri-dockerd.service; \
scp ${work_dir}/tmp/service/cri-dockerd.service ${ip_head}.${i}:/usr/lib/systemd/system/; \
scp ${work_dir}/bin/cri-dockerd/* ${ip_head}.${i}:/usr/bin/; \
ssh ${ip_head}.${i} "yum install -y conntrack-tools"; \
ssh ${ip_head}.${i} "systemctl enable cri-dockerd && systemctl start cri-dockerd --no-block"; \
done
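As a quick sanity check, confirm that cri-dockerd is active and that its sock file exists on every node:

ip_head='192.168.11';for i in 147 148 149;do \
ssh ${ip_head}.${i} "systemctl is-active cri-dockerd && ls /var/run/cri-dockerd.sock"; \
done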
Deploy the kubelet component
Configure the kubelet certificates
cat << EOF > ${work_dir}/tmp/ssl/kubelet-csr.json.template
{
    "CN": "system:node:##nodeHost##",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "##nodeHost##"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:nodes",
        "OU": "System"
      }
    ]
}
EOF
Create the kubelet certificates

Remember to replace the IPs below with your own node IPs.

ip_head='192.168.11';for i in 147 148 149;do \
nodeHost="${ip_head}.${i}"; \
sed "s|##nodeHost##|${nodeHost}|g" ${work_dir}/tmp/ssl/kubelet-csr.json.template \
> ${work_dir}/tmp/ssl/kubelet-csr.${nodeHost}.json
${work_dir}/bin/cfssl gencert \
-ca=${work_dir}/tmp/ssl/ca.pem \
-ca-key=${work_dir}/tmp/ssl/ca-key.pem \
-config=${work_dir}/tmp/ssl/ca-config.json \
-profile=kubernetes ${work_dir}/tmp/ssl/kubelet-csr.${nodeHost}.json | \
${work_dir}/bin/cfssljson -bare ${work_dir}/tmp/ssl/kubelet.${nodeHost};
done
Create the kubeconfig files

Set the cluster parameters

  • --server is the apiserver address (if you have a highly available address, be sure to use it)
  • Replace it with your own IP and the port set by the --secure-port flag in the apiserver service file (with an HA address, use the HA port)
  • Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig
ip_head='192.168.11';for i in 147 148 149;do \
nodeHost="${ip_head}.${i}"; \
${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-cluster kubernetes \
--certificate-authority=${work_dir}/tmp/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.11.147:6443 \
--kubeconfig=${work_dir}/tmp/ssl/kubelet.kubeconfig.${nodeHost};
done

Set the client authentication parameters

ip_head='192.168.11';for i in 147 148 149;do \
nodeHost="${ip_head}.${i}"; \
${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-credentials system:node:${nodeHost} \
--client-certificate=${work_dir}/tmp/ssl/kubelet.${nodeHost}.pem \
--client-key=${work_dir}/tmp/ssl/kubelet.${nodeHost}-key.pem \
--embed-certs=true \
--kubeconfig=${work_dir}/tmp/ssl/kubelet.kubeconfig.${nodeHost};
done

Set the context parameters

ip_head='192.168.11';for i in 147 148 149;do \
nodeHost="${ip_head}.${i}"; \
${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-context system:node:${nodeHost} \
--cluster=kubernetes \
--user=system:node:${nodeHost} \
--kubeconfig=${work_dir}/tmp/ssl/kubelet.kubeconfig.${nodeHost};
done

Set the default context

ip_head='192.168.11';for i in 147 148 149;do \
nodeHost="${ip_head}.${i}"; \
${work_dir}/bin/kubernetes/server/bin/kubectl config \
use-context system:node:${nodeHost} \
--kubeconfig=${work_dir}/tmp/ssl/kubelet.kubeconfig.${nodeHost};
done
Configure the kubelet configuration file

Note the clusterDNS IP: it must be in the same network segment as the apiserver --service-cluster-ip-range, but must not collide with the kubernetes Service IP. Conventionally the kubernetes Service takes the first IP of the range and clusterDNS takes the second; in this guide the Service IP is 10.88.0.1 and clusterDNS is 10.88.0.2.

cat << EOF > ${work_dir}/tmp/service/kubelet-config.yaml.template
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: ##sslPath##/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.88.0.2
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 300Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth
healthzBindAddress: 0.0.0.0
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: ##sslPath##/kubelet.pem
tlsPrivateKeyFile: ##sslPath##/kubelet-key.pem
EOF
Configure kubelet as a systemd service
  • In v1.26 the --container-runtime flag is deprecated and only accepts remote (the old docker value was removed along with dockershim); the container runtime is selected via --container-runtime-endpoint, which points at the runtime's sock file
  • Here it is set to /var/run/cri-dockerd.sock so that kubelet talks to cri-dockerd
  • --pod-infra-container-image is switched to the Aliyun mirror, the same as for cri-dockerd

kubelet parameter reference

Create the systemd unit template

cat << EOF > ${work_dir}/tmp/service/kubelet.service.template
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
WorkingDirectory=##dataRoot##/kubelet
ExecStart=##k8sBin##/kubelet \\
  --config=##dataRoot##/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock \\
  --hostname-override=##nodeHost## \\
  --kubeconfig=##configPath##/kubelet.kubeconfig \\
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 \\
  --root-dir=##dataRoot##/kubelet \\
  --v=2
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Distribute the files and start the kubelet component

Remember to replace the IPs below with your own node IPs.

ip_head='192.168.11';for i in 147 148 149;do \
nodeHost="${ip_head}.${i}"; \
k8sBin='/data/kubernetes/bin'; \
sslPath='/etc/kubernetest/ssl'; \
configPath='/etc/kubernetest'; \
dataRoot='/data/kubernetes/data'
sed "s|##sslPath##|${sslPath}|g" ${work_dir}/tmp/service/kubelet-config.yaml.template \
> ${work_dir}/tmp/service/kubelet-config.yaml;
sed -e "s|##dataRoot##|${dataRoot}|g" \
    -e "s|##k8sBin##|${k8sBin}|g" \
    -e "s|##configPath##|${configPath}|g" \
    -e "s|##nodeHost##|${nodeHost}|g" ${work_dir}/tmp/service/kubelet.service.template \
    > ${work_dir}/tmp/service/kubelet.service.${nodeHost}
ssh ${nodeHost} "mkdir -p ${k8sBin} ${sslPath} ${configPath} ${dataRoot}/kubelet"; \
scp ${work_dir}/tmp/ssl/ca*.pem ${nodeHost}:${sslPath}/; \
scp ${work_dir}/tmp/ssl/kubelet.${nodeHost}.pem ${nodeHost}:${sslPath}/kubelet.pem; \
scp ${work_dir}/tmp/ssl/kubelet.${nodeHost}-key.pem ${nodeHost}:${sslPath}/kubelet-key.pem; \
scp ${work_dir}/tmp/ssl/kubelet.kubeconfig.${nodeHost} ${nodeHost}:${configPath}/kubelet.kubeconfig; \
scp ${work_dir}/tmp/service/kubelet.service.${nodeHost} ${nodeHost}:/usr/lib/systemd/system/kubelet.service; \
scp ${work_dir}/tmp/service/kubelet-config.yaml ${nodeHost}:${dataRoot}/kubelet/kubelet-config.yaml; \
scp ${work_dir}/bin/kubernetes/server/bin/kubelet ${nodeHost}:${k8sBin}/; \
ssh ${nodeHost} "systemctl enable kubelet && systemctl start kubelet --no-block"; \
done
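Once the kubelets are up they register themselves with the apiserver. The nodes will stay NotReady until the calico network plugin (deployed below) is running, but they should already be listed:

${work_dir}/bin/kubernetes/server/bin/kubectl get nodes -o wide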
Deploy the kube-proxy component
Configure the kube-proxy certificate
cat << EOF > ${work_dir}/tmp/ssl/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [],
    "names": [
      {
        "C": "CN",
        "ST": "ShangHai",
        "L": "ShangHai",
        "O": "system:kube-proxy",
        "OU": "System"
      }
    ]
}
EOF

Create the kube-proxy certificate
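Following the same pattern used for the scheduler certificate, generate the kube-proxy certificate and key with cfssl:

${work_dir}/bin/cfssl gencert \
-ca=${work_dir}/tmp/ssl/ca.pem \
-ca-key=${work_dir}/tmp/ssl/ca-key.pem \
-config=${work_dir}/tmp/ssl/ca-config.json \
-profile=kubernetes ${work_dir}/tmp/ssl/kube-proxy-csr.json | \
${work_dir}/bin/cfssljson -bare ${work_dir}/tmp/ssl/kube-proxy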

Create the kubeconfig

Set the cluster parameters

  • --server is the apiserver address (if you have a highly available address, be sure to use it)
  • Replace it with your own IP and the port set by the --secure-port flag in the apiserver service file (with an HA address, use the HA port)
  • Be sure to include the https:// scheme, otherwise kubectl will not be able to reach the apiserver with the generated kubeconfig
${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-cluster kubernetes \
--certificate-authority=${work_dir}/tmp/ssl/ca.pem \
--embed-certs=true \
--server=https://192.168.11.147:6443 \
--kubeconfig=${work_dir}/tmp/ssl/kube-proxy.kubeconfig

Set the client authentication parameters

${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-credentials system:kube-proxy \
--client-certificate=${work_dir}/tmp/ssl/kube-proxy.pem \
--client-key=${work_dir}/tmp/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${work_dir}/tmp/ssl/kube-proxy.kubeconfig

Set the context parameters

${work_dir}/bin/kubernetes/server/bin/kubectl config \
set-context system:kube-proxy \
--cluster=kubernetes \
--user=system:kube-proxy \
--kubeconfig=${work_dir}/tmp/ssl/kube-proxy.kubeconfig

Set the default context

${work_dir}/bin/kubernetes/server/bin/kubectl config \
use-context system:kube-proxy \
--kubeconfig=${work_dir}/tmp/ssl/kube-proxy.kubeconfig
Configure the kube-proxy configuration file
  • clusterCIDR must match the controller-manager --cluster-cidr flag
  • hostnameOverride must match the kubelet --hostname-override flag, otherwise you will see node not found errors
cat << EOF > ${work_dir}/tmp/service/kube-proxy-config.yaml.template
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: "##configPath##/kube-proxy.kubeconfig"
clusterCIDR: "172.20.0.0/16"
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "##nodeHost##"
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
EOF

Configure kube-proxy as a systemd service

cat << EOF > ${work_dir}/tmp/service/kube-proxy.service.template
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
# kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic
## when --cluster-cidr or --masquerade-all is specified,
## kube-proxy SNATs requests that access Service IPs
WorkingDirectory=##dataRoot##/kube-proxy
ExecStart=##k8sBin##/kube-proxy \\
  --config=##dataRoot##/kube-proxy/kube-proxy-config.yaml
Restart=always
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Distribute the files and start the kube-proxy component

Remember to replace the IPs below with your own node IPs.

ip_head='192.168.11';for i in 147 148 149;do \
nodeHost="${ip_head}.${i}"; \
k8sBin='/data/kubernetes/bin'; \
configPath='/etc/kubernetest'; \
dataRoot='/data/kubernetes/data'
sed -e "s|##configPath##|${configPath}|g" \
    -e "s|##nodeHost##|${nodeHost}|g" ${work_dir}/tmp/service/kube-proxy-config.yaml.template \
    > ${work_dir}/tmp/service/kube-proxy-config.yaml.${nodeHost};
sed -e "s|##dataRoot##|${dataRoot}|g" \
    -e "s|##k8sBin##|${k8sBin}|g" ${work_dir}/tmp/service/kube-proxy.service.template \
    > ${work_dir}/tmp/service/kube-proxy.service
ssh ${nodeHost} "mkdir -p ${k8sBin} ${configPath} ${dataRoot}/kube-proxy"; \
scp ${work_dir}/tmp/ssl/kube-proxy.kubeconfig ${nodeHost}:${configPath}/; \
scp ${work_dir}/tmp/service/kube-proxy.service ${nodeHost}:/usr/lib/systemd/system/; \
scp ${work_dir}/tmp/service/kube-proxy-config.yaml.${nodeHost} ${nodeHost}:${dataRoot}/kube-proxy/kube-proxy-config.yaml; \
scp ${work_dir}/bin/kubernetes/server/bin/kube-proxy ${nodeHost}:${k8sBin}/; \
ssh ${nodeHost} "systemctl enable kube-proxy && systemctl start kube-proxy --no-block"; \
done
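A quick loop can confirm that kube-proxy is active on every node:

ip_head='192.168.11';for i in 147 148 149;do \
ssh ${ip_head}.${i} "systemctl is-active kube-proxy"; \
done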
Deploy the calico network plugin

Download the calico yaml manifest

wget -O ${work_dir}/calico.yaml --no-check-certificate https://docs.tigera.io/archive/v3.25/manifests/calico.yaml

Configure the pod CIDR: find the CALICO_IPV4POOL_CIDR field, remove the comment markers in front of it, and change the network to match the controller-manager --cluster-cidr flag.

- name: CALICO_IPV4POOL_CIDR
  value: "172.20.0.0/16"

Create the calico components

${work_dir}/bin/kubernetes/server/bin/kubectl apply -f ${work_dir}/calico.yaml
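Wait for the calico pods to become Running; once the calico-node pod is up on a node, that node should switch to Ready:

${work_dir}/bin/kubernetes/server/bin/kubectl get pods -n kube-system -o wide
${work_dir}/bin/kubernetes/server/bin/kubectl get nodes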
Deploy the coredns component

The clusterIP parameter must match the clusterDNS value in the kubelet configuration file.

cat << EOF > ${work_dir}/coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.9.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.88.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

Apply the coredns manifest

${work_dir}/bin/kubernetes/server/bin/kubectl apply -f ${work_dir}/coredns.yaml
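Check that the coredns pod is Running and that the kube-dns Service got the expected clusterIP:

${work_dir}/bin/kubernetes/server/bin/kubectl get pods -n kube-system -l k8s-app=kube-dns
${work_dir}/bin/kubernetes/server/bin/kubectl get svc -n kube-system kube-dns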

Test coredns

cat<<EOF | ${work_dir}/bin/kubernetes/server/bin/kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28.3
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Use the nslookup command inside the busybox container to resolve the kubernetes domain name

${work_dir}/bin/kubernetes/server/bin/kubectl exec busybox -- nslookup kubernetes

Output similar to the following means DNS resolution is working

Server:    10.88.0.2
Address 1: 10.88.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.88.0.1 kubernetes.default.svc.cluster.local
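Once the lookup succeeds, the test pod can be removed:

${work_dir}/bin/kubernetes/server/bin/kubectl delete pod busybox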

