Kubernetes Binary Installation (Part 3)


11. Deploy kube-scheduler


All operations in step 11 are performed on master01.

Overview of the highly available kube-scheduler
This lab deploys a three-instance kube-scheduler cluster. After startup the instances compete in a leader election; the winner becomes the leader and the other instances stay blocked (standby). When the leader becomes unavailable, the standby instances hold a new election to produce a new leader, keeping the service available.
To secure communication, we first generate an x509 certificate and private key. kube-scheduler uses this certificate in two cases:
• communicating with kube-apiserver's secure port;
• serving Prometheus-format metrics on its secure port (https, 10259).
# Create the kube-scheduler certificate and private key; adjust the IPs to your environment
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.100.202",
    "192.168.100.203",
    "192.168.100.204"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF 
# Explanation:
The hosts list contains the IPs of all kube-scheduler nodes;
CN and O are both system:kube-scheduler; the Kubernetes built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
# Generate the certificate and key
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
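# Optional check (a sketch, assuming openssl is available in the work directory): the subject of the issued certificate should show CN and O as system:kube-scheduler, and the SAN list should contain the master IPs
[root@master01 work]# openssl x509 -in kube-scheduler.pem -noout -subject
[root@master01 work]# openssl x509 -in kube-scheduler.pem -noout -text | grep -A1 "Subject Alternative Name"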
# Distribute the certificate and private key
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
# Create and distribute the kubeconfig
kube-scheduler accesses the apiserver through a kubeconfig file that provides the apiserver address, the embedded CA certificate, and the kube-scheduler certificate.
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig
[root@master01 work]#  kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig
[root@master01 work]# kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
[root@master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.kubeconfig root@${master_ip}:/etc/kubernetes/
  done
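# Optional check (a sketch): confirm the kubeconfig points at the expected apiserver and embeds the certificates
[root@master01 work]# kubectl config view --kubeconfig=kube-scheduler.kubeconfig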
# Create the kube-scheduler configuration file
[root@master01 work]# cat > kube-scheduler.yaml.template <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
bindTimeoutSeconds: 600
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
  qps: 100
enableContentionProfiling: false
enableProfiling: true
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 127.0.0.1:10251
leaderElection:
  leaderElect: true
metricsBindAddress: 127.0.0.1:10251
EOF
# Explanation:
--kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: clustered mode with leader election enabled; the elected leader does the scheduling work while the other instances stay blocked.
# Distribute the configuration file
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.yaml.template root@${master_ip}:/etc/kubernetes/kube-scheduler.yaml
  done
# Create the kube-scheduler systemd unit
[root@master01 work]# cat > kube-scheduler.service.template <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
WorkingDirectory=${K8S_DIR}/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --port=0 \\
  --secure-port=10259 \\
  --bind-address=127.0.0.1 \\
  --config=/etc/kubernetes/kube-scheduler.yaml \\
  --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="system:metrics-server" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
# Distribute the systemd unit
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.service.template root@${master_ip}:/etc/systemd/system/kube-scheduler.service
  done
# Start the kube-scheduler service; the working directory must be created before the service is started
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
  done    
# Check the kube-scheduler service
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-scheduler | grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 12:30:52 CST; 22s ago
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 12:30:53 CST; 22s ago
# View the exported metrics
kube-scheduler listens on ports 10251 and 10259:
• 10251: accepts http requests; insecure port, no authentication or authorization required;
• 10259: accepts https requests; secure port, authentication and authorization required.
• Both ports serve /metrics and /healthz.
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "netstat -lnpt | grep kube-sch"
done
# Test the insecure port
[root@master01 work]# curl -s http://127.0.0.1:10251/metrics | head
# Test the secure port
[root@master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://127.0.0.1:10259/metrics | head
# View the current leader
[root@master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
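# Optional: extract just the leader identity from the leader-election annotation (a sketch; assumes the default endpoints-based lock, which records the holder in the control-plane.alpha.kubernetes.io/leader annotation)
[root@master01 work]# kubectl -n kube-system get endpoints kube-scheduler \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'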


12. Deploy the worker kubelet


All operations in step 12 are performed on master01.

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
When kubelet starts it automatically registers node information with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage.
For security, this deployment disables kubelet's insecure http port and authenticates and authorizes every request, rejecting unauthorized access (e.g., from apiserver or heapster).
# Note: the binaries were already downloaded on master01 and can be distributed directly to the worker nodes.
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kubernetes/server/bin/kubelet root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done
# Create a bootstrap kubeconfig for each node
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${all_name} \
      --kubeconfig ~/.kube/config)
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
  done
# Explanation:
What is written into the kubeconfig is a token; after bootstrapping completes, kube-controller-manager creates the client and server certificates for kubelet.
The token is valid for 1 day; once expired it can no longer be used to bootstrap kubelet and will be cleaned up by kube-controller-manager's tokencleaner;
When kube-apiserver receives kubelet's bootstrap token, it sets the request's user to system:bootstrap:<Token ID> and its group to system:bootstrappers; a ClusterRoleBinding will be created for this group later.
# View the tokens kubeadm created for each node
[root@master01 work]# kubeadm token list --kubeconfig ~/.kube/config
## View the Secret associated with each token
[root@master01 work]# kubectl get secrets  -n kube-system|grep bootstrap-token
# Distribute the bootstrap kubeconfigs
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    scp kubelet-bootstrap-${all_name}.kubeconfig root@${all_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done
# Create the kubelet configuration file
[root@master01 work]# cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##ALL_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##ALL_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
# Distribute the kubelet configuration file
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    sed -e "s/##ALL_IP##/${all_ip}/" kubelet-config.yaml.template > kubelet-config-${all_ip}.yaml.template
    scp kubelet-config-${all_ip}.yaml.template root@${all_ip}:/etc/kubernetes/kubelet-config.yaml
  done
# Create the kubelet systemd unit
[root@master01 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cgroup-driver=cgroupfs \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##ALL_NAME## \\
  --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.2 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF
# Explanation:
• If --hostname-override is set, kube-proxy must be given the same value, otherwise the Node will not be found;
• --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the user name and token in it to send a TLS Bootstrapping request to kube-apiserver;
• After K8S approves kubelet's CSR, the certificate and private key are written into the --cert-dir directory and then referenced from the --kubeconfig file;
• --pod-infra-container-image avoids Red Hat's pod-infrastructure:latest image, which cannot reap zombie processes in containers.
# Distribute the kubelet systemd unit
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    sed -e "s/##ALL_NAME##/${all_name}/" kubelet.service.template > kubelet-${all_name}.service
    scp kubelet-${all_name}.service root@${all_name}:/etc/systemd/system/kubelet.service
  done
# Authorization
On startup kubelet checks whether the file referenced by --kubeconfig exists; if it does not, kubelet uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR, it authenticates the token; on success it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers. This process is called Bootstrap Token Auth.
By default this user and group have no permission to create CSRs, so kubelet would fail to start. Create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:
[root@master01 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
# Start kubelet
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${all_name} "/usr/sbin/swapoff -a"
    ssh root@${all_name} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
After kubelet starts, it uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for kubelet, and the --kubeconfig file is written.
# Note: kube-controller-manager must be configured with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise no certificate and key will be created for TLS Bootstrap.
# Note: the working directory must be created before starting the service;
# swap must be turned off, otherwise kubelet will fail to start.
# Note: this step only needs to be performed on master01.
# Check the kubelet service
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "systemctl status kubelet|grep active"
  done
>>> master01
   Active: active (running) since 五 2021-08-06 12:41:14 CST; 1min 9s ago  # all should show running
>>> master02
   Active: active (running) since 五 2021-08-06 12:41:14 CST; 1min 8s ago
>>> worker01
   Active: active (running) since 五 2021-08-06 12:41:15 CST; 1min 8s ago
>>> worker02
   Active: active (running) since 五 2021-08-06 12:41:16 CST; 1min 7s ago
[root@master01 work]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
csr-mkkcx   92s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:hrdvi4   Pending
csr-pgx7g   94s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:32syq8   Pending
csr-v6cws   93s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:0top7h   Pending
csr-vhq62   93s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bf9va5   Pending
[root@master01 work]# kubectl get nodes  # no nodes registered yet, because the CSRs are still Pending
No resources found in default namespace.   
# Automatically approve CSR requests
Create three ClusterRoleBindings, used to automatically approve client certificates and to renew client and server certificates respectively.
[root@master01 work]# cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF
[root@master01 work]# kubectl apply -f csr-crb.yaml
# Explanation:
auto-approve-csrs-for-group: automatically approves a node's first CSR; note that for the first CSR the requesting group is system:bootstrappers;
node-client-cert-renewal: automatically approves renewal of a node's expiring client certificates; the automatically generated certificates carry the group system:nodes;
node-server-cert-renewal: automatically approves renewal of a node's expiring server certificates; the automatically generated certificates carry the group system:nodes.
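# Optional check (a sketch): confirm the bindings and the custom ClusterRole exist after applying
[root@master01 work]# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
[root@master01 work]# kubectl get clusterrole approve-node-server-renewal-csr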
# Check the kubelet CSR status
[root@master01 work]#  kubectl get csr | grep boot
csr-mkkcx   4m8s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:hrdvi4   Pending
csr-pgx7g   4m10s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:32syq8   Pending
csr-v6cws   4m9s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:0top7h   Pending
csr-vhq62   4m9s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bf9va5   Pending
# After waiting a while (1-10 minutes), the CSRs of all nodes are automatically approved
[root@master01 work]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2313 8月   6 12:41 /etc/kubernetes/kubelet.kubeconfig
[root@master01 work]# ls -l /etc/kubernetes/cert/ | grep kubelet
-rw------- 1 root root 1224 8月   6 12:48 kubelet-client-2021-08-06-12-48-03.pem
lrwxrwxrwx 1 root root   59 8月   6 12:48 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2021-08-06-12-48-03.pem
# Manually approve the server cert CSRs
For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; these must be approved manually.
[root@master01 work]# kubectl get csr | grep node  # still Pending at this point
csr-874vr   23s     kubernetes.io/kubelet-serving                 system:node:master01      Pending
csr-dsc6z   22s     kubernetes.io/kubelet-serving                 system:node:master02      Pending
csr-t6wvz   22s     kubernetes.io/kubelet-serving                 system:node:worker02      Pending
csr-wc84r   22s     kubernetes.io/kubelet-serving                 system:node:worker01      Pending
[root@master01 work]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/csr-874vr approved
certificatesigningrequest.certificates.k8s.io/csr-dsc6z approved
certificatesigningrequest.certificates.k8s.io/csr-t6wvz approved
certificatesigningrequest.certificates.k8s.io/csr-wc84r approved
[root@master01 work]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1224 8月   6 12:48 /etc/kubernetes/cert/kubelet-client-2021-08-06-12-48-03.pem
lrwxrwxrwx 1 root root   59 8月   6 12:48 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2021-08-06-12-48-03.pem
-rw------- 1 root root 1261 8月   6 12:48 /etc/kubernetes/cert/kubelet-server-2021-08-06-12-48-53.pem
lrwxrwxrwx 1 root root   59 8月   6 12:48 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2021-08-06-12-48-53.pem
[root@master01 work]# kubectl get csr | grep node
csr-874vr   64s     kubernetes.io/kubelet-serving                 system:node:master01      Approved,Issued
csr-dsc6z   63s     kubernetes.io/kubelet-serving                 system:node:master02      Approved,Issued
csr-t6wvz   63s     kubernetes.io/kubelet-serving                 system:node:worker02      Approved,Issued
csr-wc84r   63s     kubernetes.io/kubelet-serving                 system:node:worker01      Approved,Issued
[root@master01 work]# kubectl get nodes  # check cluster node status; all nodes should be Ready
NAME       STATUS   ROLES    AGE    VERSION
master01   Ready    <none>   103s   v1.18.3
master02   Ready    <none>   102s   v1.18.3
worker01   Ready    <none>   101s   v1.18.3
worker02   Ready    <none>   101s   v1.18.3
# API endpoints exposed by kubelet
[root@master01 work]# netstat -lnpt | grep kubelet  # check the listening ports
tcp        0      0 127.0.0.1:35257         0.0.0.0:*               LISTEN      20914/kubelet       
tcp        0      0 192.168.100.202:10248   0.0.0.0:*               LISTEN      20914/kubelet       
tcp        0      0 192.168.100.202:10250   0.0.0.0:*               LISTEN      20914/kubelet 
# Explanation:
• 10248: healthz http service;
• 10250: https service; requests to this port require authentication and authorization (even for /healthz);
• the read-only port 10255 is not opened;
• since K8S v1.10 the --cadvisor-port flag (default port 4194) has been removed, so the cAdvisor UI & API are not accessible.
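# Quick check of these two ports (a sketch; substitute your node IP): /healthz on 10248 answers plain http without credentials, while even /healthz on 10250 requires authentication
[root@master01 work]# curl -s http://192.168.100.202:10248/healthz            # expected: ok
[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.100.202:10250/healthz   # expected: Unauthorized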
# kubelet API authentication and authorization
kubelet is configured with the following authentication parameters:
• authentication.anonymous.enabled: set to false; anonymous access to port 10250 is not allowed;
• authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
• authentication.webhook.enabled=true: enables HTTPS bearer token authentication.
It is also configured with the following authorization parameter:
authorization.mode=Webhook: enables RBAC authorization.
When kubelet receives a request, it verifies the certificate signature against clientCAFile or checks whether the bearer token is valid. If neither check passes, the request is rejected with Unauthorized; in the two examples below, neither passes.
[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.100.202:10250/metrics
Unauthorized
[root@master01 work]#  curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.100.202:10250/metrics
Unauthorized
Once authenticated, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has RBAC permission to operate on the requested resource.
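# You can preview that authorization decision from the apiserver side with kubectl auth can-i (an optional sketch); this mirrors the Forbidden response shown below
[root@master01 work]# kubectl auth can-i get nodes/metrics --as=system:kube-controller-manager   # expected: no
[root@master01 work]# kubectl auth can-i get nodes/metrics                                       # as the current admin user, expected: yes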
# Certificate authentication and authorization; the default permissions are insufficient
[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.100.202:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
## Use the admin certificate, which has the highest privileges
[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://192.168.100.202:10250/metrics | head
# Explanation:
The values of --cacert, --cert, and --key must be file paths; for a file in the current directory such as ./admin.pem, the ./ cannot be omitted, otherwise 401 Unauthorized is returned.


# Bearer token authentication and authorization
[root@master01 work]#  kubectl create sa kubelet-api-test
serviceaccount/kubelet-api-test created
[root@master01 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
clusterrolebinding.rbac.authorization.k8s.io/kubelet-api-test created
[root@master01 work]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@master01 work]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@master01 work]# echo ${TOKEN}   # view the token value
eyJhbGciOiJSUzI1NiIsImtpZCI6IjBMT1lOUFgycEpIOWpQajFoQUpNNHlWMkRxZDNmdUttcVBNVHpyajdTN2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tNGI4Y3YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQ2NDhkNzQ5LTg1NWUtNDI4YS1iZGE1LTJiMmQ5NmQwYjNjOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.t_h8MprNlDiSKXBUN_oTCm9KNXxxKbiqHBwBsrx4q7KvyPS10QCZ03nkNhbsjmrboXxSgkj7Ll7yBY_-DaXGI0-bULLA4v8fjK_c0UCWEC3jKHLpsCBIYIS9WKJliZNZb3NgXXGY33n8MEQpZccVz1IyTih0kFPgV4JgQxwYeqIH60mq4KieZv1gAEnnXg9rhU_AXm1bqB7QQEQafWMFO3XHfOo7HYvoVlFa0BzlfJgN8SAExdF6-V6BC6AqY6oRC6BIt7rPlnGIS9iHLEqH8rCm-lDgXlW3LQbn7sbSGZa-kGMnBwI10v7xOdxd18g_FfoNFO-Rcnpb6KbGQsPVbw


[root@master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.100.202:10250/metrics | head



13. Deploy kube-proxy on all nodes


All operations in step 13 are performed on master01.

#kube-proxy runs on all nodes. It watches the apiserver for changes to services and endpoints and creates routing rules that provide service IPs and load balancing.
Install kube-proxy
# Note: the binaries were already downloaded on master01 and can be distributed directly to the worker nodes.
# Distribute kube-proxy
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kubernetes/server/bin/kube-proxy root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done
# Create the kube-proxy certificate and private key
[root@master01 work]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF 
# Explanation:
• CN: sets the certificate's User to system:kube-proxy;
• the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs;
• this certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.
# Generate the certificate and key
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Create and distribute the kubeconfig
kube-proxy accesses the apiserver through a kubeconfig file that provides the apiserver address, the embedded CA certificate, and the kube-proxy certificate:
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
[root@master01 work]# kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
[root@master01 work]#  kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
[root@master01 work]#  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kube-proxy.kubeconfig root@${all_ip}:/etc/kubernetes/
  done
# Create the kube-proxy configuration file
Since v1.10, some kube-proxy parameters can be set in a configuration file, which can be generated with the --write-config-to option.
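# For reference, you could dump kube-proxy's built-in defaults and compare them with the template below (a sketch; the output path is arbitrary)
[root@master01 work]# /opt/k8s/bin/kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml
[root@master01 work]# head /tmp/kube-proxy-defaults.yaml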
[root@master01 work]# cat > kube-proxy-config.yaml.template <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##ALL_IP##
healthzBindAddress: ##ALL_IP##:10256
metricsBindAddress: ##ALL_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##ALL_NAME##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF
# Explanation:
• bindAddress: listen address;
• clientConnection.kubeconfig: kubeconfig file used to connect to the apiserver;
• clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
• hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find the Node after starting and will not create any ipvs rules;
• mode: use ipvs mode (a quick module check is sketched below).
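# Optional pre-check for ipvs mode (a sketch): verify the ipvs kernel modules are available on every node before starting the service
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "lsmod | grep -E 'ip_vs|nf_conntrack' || echo 'ipvs modules not loaded yet'"
  done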
# Distribute the configuration files
[root@master01 work]# for (( i=0; i < 4; i++ ))
  do
    echo ">>> ${ALL_NAMES[i]}"
    sed -e "s/##ALL_NAME##/${ALL_NAMES[i]}/" -e "s/##ALL_IP##/${ALL_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${ALL_NAMES[i]}.yaml.template
    scp kube-proxy-config-${ALL_NAMES[i]}.yaml.template root@${ALL_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done
# Create the kube-proxy systemd unit
[root@master01 work]# cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
# Distribute the kube-proxy systemd unit
[root@master01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    scp kube-proxy.service root@${all_name}:/etc/systemd/system/
  done    
# Start the kube-proxy service; the working directory must be created before the service is started
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${all_ip} "modprobe ip_vs_rr"
    ssh root@${all_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done
# Check the kube-proxy service
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl status kube-proxy | grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 13:03:59 CST; 42s ago  # all should show running
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 13:04:00 CST; 42s ago
>>> 192.168.100.205
   Active: active (running) since 五 2021-08-06 13:04:00 CST; 41s ago
>>> 192.168.100.206
   Active: active (running) since 五 2021-08-06 13:04:01 CST; 41s ago
# Check the listening ports
kube-proxy listens on ports 10249 and 10256:
• 10249: serves /metrics;
• 10256: serves /healthz.
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "sudo netstat -lnpt | grep kube-prox"
  done
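# Optionally query the two endpoints directly (a sketch; substitute one of your node IPs)
[root@master01 work]# curl -s http://192.168.100.202:10249/metrics | head
[root@master01 work]# curl -s http://192.168.100.202:10256/healthz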
# View the ipvs routing rules
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "/usr/sbin/ipvsadm -ln"
  done
# As shown, all https requests to the K8S Service kubernetes are forwarded to port 6443 on the kube-apiserver nodes.


