Installing Kubernetes from Binaries (Part 2)


6. Deploy flannel


All operations in step 6 are performed on master01.

Kubernetes requires that all nodes in the cluster (including the master nodes) be able to reach each other over the Pod network. flannel uses VXLAN to build an interconnected Pod network across the nodes, using UDP port 8472.
When flanneld starts for the first time, it reads the configured Pod CIDR from etcd, allocates an unused subnet for the local node, and then creates the flannel.1 network interface (the name may differ, e.g. flannel1).
flannel writes the Pod subnet assigned to the node into the /run/flannel/docker file; docker later uses the environment variables in this file to configure the docker0 bridge, so that all Pod containers on the node are assigned IPs from this subnet.
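For reference, once flanneld is running on a node, the file written by mk-docker-opts.sh typically looks roughly like the sketch below (the exact --bip subnet and MTU depend on what that node was allocated; this is an illustration, not captured output from this lab):
cat /run/flannel/docker
#expected content, roughly:
DOCKER_OPT_BIP="--bip=10.10.136.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.10.136.1/21 --ip-masq=false --mtu=1450"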
#Download flannel
[root@master01 ~]# cd /opt/k8s/work/
[root@master01 work]# mkdir flannel
[root@master01 work]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@master01 work]# ll
总用量 79816
-rw-r--r-- 1 root    root       388 8月   6 10:12 ca-config.json
-rw-r--r-- 1 root    root      1005 8月   6 10:15 ca.csr
-rw-r--r-- 1 root    root       310 8月   6 10:13 ca-csr.json
-rw------- 1 root    root      1679 8月   6 10:15 ca-key.pem
-rw-r--r-- 1 root    root      1367 8月   6 10:15 ca.pem
-rw-r--r-- 1 root    root       567 8月   6 10:12 config.json
-rw-r--r-- 1 root    root       287 8月   6 10:12 csr.json
drwxrwxr-x 2    1000  1000      138 6月  22 2020 docker
-rw-r--r-- 1 root    root  60741087 7月   1 2020 docker-19.03.12.tgz
-rw-r--r-- 1 root    root       413 8月   6 10:38 docker-daemon.json
-rw-r--r-- 1 root    root       487 8月   6 10:37 docker.service
-rw-r--r-- 1 root    root      1383 8月   6 10:26 etcd-192.168.100.202.service
-rw-r--r-- 1 root    root      1383 8月   6 10:26 etcd-192.168.100.203.service
-rw-r--r-- 1 root    root      1058 8月   6 10:23 etcd.csr
-rw-r--r-- 1 root    root       354 8月   6 10:21 etcd-csr.json
-rw------- 1 root    root      1679 8月   6 10:23 etcd-key.pem
-rw-r--r-- 1 root    root      1436 8月   6 10:23 etcd.pem
-rw-r--r-- 1 root    root      1382 8月   6 10:25 etcd.service.template
drwxr-xr-x 3 6810230 users      123 10月 11 2018 etcd-v3.3.10-linux-amd64
-rw-r--r-- 1 root    root  11353259 3月  25 2020 etcd-v3.3.10-linux-amd64.tar.gz
-rw-r--r-- 1 root    root   9565743 3月  25 2020 flannel-v0.11.0-linux-amd64.tar.gz
[root@master01 work]# tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz -C flannel
#Distribute flannel
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp flannel/{flanneld,mk-docker-opts.sh} root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done
#Create the flannel certificate and key; first create the flanneld CA certificate signing request (CSR) file
[root@master01 work]# cat > flanneld-csr.json <<EOF
{
    "CN": "flanneld",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
#Explanation:
This certificate is only used by flanneld as a client certificate (when talking to etcd), so the hosts field is empty.
#Generate the key and certificate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
> -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
> -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
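If you want to sanity-check the generated certificate before distributing it (an optional step, not part of the original run), openssl can print its subject and validity:
openssl x509 -noout -subject -dates -in flanneld.pem
#expect CN=flanneld and O=k8s, matching flanneld-csr.json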
#Distribute the certificate and private key
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "mkdir -p /etc/flanneld/cert"
    scp flanneld*.pem root@${all_ip}:/etc/flanneld/cert
  done  
# Create the flanneld systemd unit
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \\
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \\
  -iface=${IFACE} \\
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
#Explanation:
mk-docker-opts.sh: this script writes the Pod subnet assigned to flanneld into /run/flannel/docker; docker uses the environment variables in this file to configure the docker0 bridge when it starts;
flanneld: communicates with other nodes over the interface of the system default route; on nodes with multiple interfaces (e.g. internal and public), use the -iface parameter to specify the interface;
flanneld: requires root privileges at runtime;
-ip-masq: flanneld sets up SNAT rules for traffic leaving the Pod network and, at the same time, sets the --ip-masq variable passed to Docker (in /run/flannel/docker) to false, so Docker no longer creates its own SNAT rules. When Docker's --ip-masq is true, the SNAT rules it creates are rather blunt: every request from a local Pod to anything other than docker0 is SNATed, so requests to Pods on other nodes arrive with the flannel.1 interface IP as the source, and the destination Pod cannot see the real source Pod IP. The SNAT rules created by flanneld are gentler: only requests destined outside the Pod CIDR are SNATed (see the iptables check below).
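To see the effect of -ip-masq on any node, inspect the nat table; a minimal optional check (not part of the original run):
iptables -t nat -S POSTROUTING | grep -E 'MASQUERADE|RETURN'
#flanneld's rules should MASQUERADE only traffic leaving the Pod CIDR (10.10.0.0/16 in this lab) and RETURN intra-Pod-network traffic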
#Distribute the flanneld systemd unit
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp flanneld.service root@${all_ip}:/etc/systemd/system/
  done
#Start and verify
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
  done
#Check that flanneld started
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "systemctl status flanneld|grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 10:52:31 CST; 26s ago #all four should show running
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 10:52:32 CST; 24s ago
>>> 192.168.100.205
   Active: active (running) since 五 2021-08-06 10:52:33 CST; 23s ago
>>> 192.168.100.206
   Active: active (running) since 五 2021-08-06 10:52:35 CST; 22s ago
#Check the Pod network information; view the cluster Pod CIDR (/16)
[root@master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config
{"Network":"10.10.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}  #会输出这个网段信息
#List the allocated Pod subnets (/21)
[root@master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets
/kubernetes/network/subnets/10.10.152.0-21
/kubernetes/network/subnets/10.10.136.0-21
/kubernetes/network/subnets/10.10.192.0-21
/kubernetes/network/subnets/10.10.168.0-21
#View the node IP and flannel interface address for a given Pod subnet
[root@master01 work]# etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/10.10.168.0-21
{"PublicIP":"192.168.100.205","BackendType":"vxlan","BackendData":{"VtepMAC":"66:73:0e:a2:bc:4e"}} #接口信息
#解释输出信息:
10.10.168.0/21 被分配给节点 worker01(192.168.100.205);
VtepMAC 为 worker01 节点的 flannel.1 网卡 MAC 地址。
#Check the flannel network interfaces
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
do 
echo ">>> ${all_ip}"
ssh root@${all_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0" 
done
>>> 192.168.100.202
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether ea:7b:96:a4:6a:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.10.136.0/32 scope global flannel.1   #each node's flannel.1 subnet matches the Pod subnet listed above
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:1d:99:54:e1 brd ff:ff:ff:ff:ff:ff
    inet 10.10.136.1/21 brd 10.10.143.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.203
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether f2:74:0d:58:8e:48 brd ff:ff:ff:ff:ff:ff
    inet 10.10.192.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:1c:98:5d:f3 brd ff:ff:ff:ff:ff:ff
    inet 10.10.192.1/21 brd 10.10.199.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.205
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 66:73:0e:a2:bc:4e brd ff:ff:ff:ff:ff:ff
    inet 10.10.168.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:dd:1f:ba:ba brd ff:ff:ff:ff:ff:ff
    inet 10.10.168.1/21 brd 10.10.175.255 scope global docker0
       valid_lft forever preferred_lft forever
>>> 192.168.100.206
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 12:a4:53:8b:d3:4f brd ff:ff:ff:ff:ff:ff
    inet 10.10.152.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:44:74:0d:84 brd ff:ff:ff:ff:ff:ff
    inet 10.10.152.1/21 brd 10.10.159.255 scope global docker0
       valid_lft forever preferred_lft forever
#Explanation: the flannel.1 interface address is the first IP (.0) of the Pod subnet allocated to the node, and it is a /32 address.
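The VXLAN parameters behind flannel.1 (VNI, local VTEP IP, UDP port 8472) can be inspected with iproute2; an optional check:
ip -d link show flannel.1
#the detailed line should read roughly: vxlan id 1 local <node IP> dev ens32 ... dstport 8472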
#View routing information
[root@master01 work]# ip route show |grep flannel.1
10.10.152.0/21 via 10.10.152.0 dev flannel.1 onlink 
10.10.168.0/21 via 10.10.168.0 dev flannel.1 onlink 
10.10.192.0/21 via 10.10.192.0 dev flannel.1 onlink 
#Explanation:
Requests to the Pod subnets of other nodes are all forwarded to the flannel.1 interface;
flanneld uses the subnet entries in etcd, such as ${FLANNEL_ETCD_PREFIX}/subnets/10.10.168.0-21, to decide which node's interconnect IP a request should be forwarded to (see the FDB check below).
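The MAC-to-node mapping those routes rely on lives in the VXLAN forwarding database, which flanneld populates; it can be dumped with (optional check, output varies per node):
bridge fdb show dev flannel.1
#expect one permanent entry per remote node, e.g. 66:73:0e:a2:bc:4e dst 192.168.100.205 self permanent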
#Verify flannel on every node
After flannel is deployed on the nodes, check that the flannel interface was created (its name may be flannel0, flannel.0, flannel.1, etc.):
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "/usr/sbin/ip addr show flannel.1 | grep -w inet"
  done
>>> 192.168.100.202
    inet 10.10.136.0/32 scope global flannel.1
>>> 192.168.100.203
    inet 10.10.192.0/32 scope global flannel.1
>>> 192.168.100.205
    inet 10.10.168.0/32 scope global flannel.1
>>> 192.168.100.206
    inet 10.10.152.0/32 scope global flannel.1
#On every node, ping all of the flannel interface IPs and make sure they are reachable; note that the IPs you ping must match the subnet information printed above
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh ${all_ip} "ping -c 1 10.10.136.0"
    ssh ${all_ip} "ping -c 1 10.10.152.0"
    ssh ${all_ip} "ping -c 1 10.10.192.0"
    ssh ${all_ip} "ping -c 1 10.10.168.0"
  done
#As long as the output shows the pings succeed, this step is done

7. Deploy master node high availability


All of step 7 is performed on master01. This lab uses keepalived plus an nginx proxy to implement high availability.

#Install Keepalived; create the keepalived directories
[root@master01 work]# cd
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh ${master_ip} "mkdir -p /opt/k8s/kube-keepalived/"
    ssh ${master_ip} "mkdir -p /etc/keepalived/"
  done
[root@master01 ~]# cd /opt/k8s/work  
[root@master01 work]# wget http://down.linuxsb.com:8888/software/keepalived-2.0.20.tar.gz
[root@master01 work]# ll | grep keepalived
-rw-r--r-- 1 root    root   1036063 7月   1 2020 keepalived-2.0.20.tar.gz
[root@master01 work]# tar -zxvf keepalived-2.0.20.tar.gz
[root@master01 work]# cd keepalived-2.0.20/ && ./configure --sysconf=/etc --prefix=/opt/k8s/kube-keepalived/ && make && make install
#Distribute the Keepalived binaries
[root@master01 keepalived-2.0.20]# cd ..
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp -rp /opt/k8s/kube-keepalived/ root@${master_ip}:/opt/k8s/
    scp -rp /usr/lib/systemd/system/keepalived.service  root@${master_ip}:/usr/lib/systemd/system/
    ssh ${master_ip} "systemctl daemon-reload && systemctl enable keepalived"
  done            
# Install Nginx
[root@master01 work]# wget http://nginx.org/download/nginx-1.19.0.tar.gz
[root@master01 work]# ll | grep nginx
-rw-r--r-- 1 root    root   1043748 7月   1 2020 nginx-1.19.0.tar.gz
[root@master01 work]# tar -xzvf nginx-1.19.0.tar.gz
[root@master01 work]# cd /opt/k8s/work/nginx-1.19.0/
[root@master01 nginx-1.19.0]# mkdir nginx-prefix
[root@master01 nginx-1.19.0]# ./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
[root@master01 nginx-1.19.0]# make && make install
#Explanation:
--with-stream: enables layer-4 transparent forwarding (TCP proxy);
--without-xxx: disables all other features, so the resulting dynamically linked binary has minimal dependencies.
[root@master01 nginx-1.19.0]# ./nginx-prefix/sbin/nginx -v
nginx version: nginx/1.19.0  #check the version
#Verify the compiled Nginx; view the libraries nginx is dynamically linked against
[root@master01 nginx-1.19.0]#  ldd ./nginx-prefix/sbin/nginx
  linux-vdso.so.1 =>  (0x00007ffe23cdd000)
  libdl.so.2 => /lib64/libdl.so.2 (0x00007f5c49436000)
  libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f5c4921a000)
  libc.so.6 => /lib64/libc.so.6 (0x00007f5c48e56000)
  /lib64/ld-linux-x86-64.so.2 (0x0000559ddee42000)
#Tip:
Since only layer-4 transparent forwarding is enabled, the binary depends only on core OS libraries such as libc and has no dependency on other libraries (libz, libssl, etc.), which keeps the build minimal.
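To confirm which modules actually made it into the build, nginx can print its configure arguments (optional check, run from the nginx-1.19.0 directory):
./nginx-prefix/sbin/nginx -V
#the 'configure arguments:' line should list --with-stream and --without-http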
#Distribute the Nginx binary
[root@master01 nginx-1.19.0]# cd ..
[root@master01 work]# source /root/environment.sh
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}"
    scp /opt/k8s/work/nginx-1.19.0/nginx-prefix/sbin/nginx root@${master_ip}:/opt/k8s/kube-nginx/sbin/kube-nginx
    ssh root@${master_ip} "chmod a+x /opt/k8s/kube-nginx/sbin/*"
  done    
#Configure the Nginx systemd unit
[root@master01 work]# cat > kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
#Distribute the Nginx systemd unit
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-nginx.service  root@${master_ip}:/etc/systemd/system/
    ssh ${master_ip} "systemctl daemon-reload && systemctl enable kube-nginx.service"
  done
#Create the configuration files
[root@master01 work]# ll | grep config
-rw-r--r--  1 root    root       388 8月   6 10:12 ca-config.json
drwxr-xr-x  6 root    root        92 8月   6 11:25 config   #the uploaded config directory
-rw-r--r--  1 root    root       567 8月   6 10:12 config.json
[root@master01 work]# vim binngkek8s.sh   #adjust the master node IPs, the VIP, and the master node NIC names as needed
#!/bin/sh
echo """
    请记得一定要先上传,config目录
"""
if [ ! -d config ]
then
    sleep 30
    echo "请看上边输出..."
  exit 1
fi
#######################################
# set variables below to create the config files, all files will create at ./config directory
#######################################
# master keepalived virtual ip address
export K8SHA_VIP=192.168.100.204
# master01 ip address
export K8SHA_IP1=192.168.100.202
# master02 ip address
export K8SHA_IP2=192.168.100.203
# master01 hostname
export K8SHA_HOST1=master01
# master02 hostname
export K8SHA_HOST2=master02
# master01 network interface name
export K8SHA_NETINF1=ens32
# master02 network interface name
export K8SHA_NETINF2=ens32
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0
# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0
##############################
# please do not modify anything below
##############################
mkdir -p config/$K8SHA_HOST1/{keepalived,nginx-lb}
mkdir -p config/$K8SHA_HOST2/{keepalived,nginx-lb}
mkdir -p config/keepalived
mkdir -p config/nginx-lb
# create all keepalived files
chmod u+x config/keepalived/check_apiserver.sh
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST1/keepalived
cp config/keepalived/check_apiserver.sh config/$K8SHA_HOST2/keepalived
sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF1}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP1}/g" \
-e "s/K8SHA_KA_PRIO/102/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST1/keepalived/keepalived.conf
sed \
-e "s/K8SHA_KA_STATE/BACKUP/g" \
-e "s/K8SHA_KA_INTF/${K8SHA_NETINF2}/g" \
-e "s/K8SHA_IPLOCAL/${K8SHA_IP2}/g" \
-e "s/K8SHA_KA_PRIO/101/g" \
-e "s/K8SHA_VIP/${K8SHA_VIP}/g" \
-e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \
config/keepalived/k8s-keepalived.conf.tpl > config/$K8SHA_HOST2/keepalived/keepalived.conf
echo "create keepalived files success. config/$K8SHA_HOST1/keepalived/"
echo "create keepalived files success. config/$K8SHA_HOST2/keepalived/"
# create all nginx-lb files
sed \
-e "s/K8SHA_IP1/$K8SHA_IP1/g" \
-e "s/K8SHA_IP2/$K8SHA_IP2/g" \
-e "s/K8SHA_IP3/$K8SHA_IP3/g" \
config/nginx-lb/bink8s-nginx-lb.conf.tpl > config/nginx-lb/nginx-lb.conf
echo "create nginx-lb files success. config/nginx-lb/nginx-lb.conf"
# cp all file to node
scp -rp config/nginx-lb/nginx-lb.conf root@$K8SHA_HOST1:/opt/k8s/kube-nginx/conf/kube-nginx.conf
scp -rp config/nginx-lb/nginx-lb.conf root@$K8SHA_HOST2:/opt/k8s/kube-nginx/conf/kube-nginx.conf
scp -rp config/$K8SHA_HOST1/keepalived/* root@$K8SHA_HOST1:/etc/keepalived/
scp -rp config/$K8SHA_HOST2/keepalived/* root@$K8SHA_HOST2:/etc/keepalived/
# chmod *.sh
chmod u+x config/*.sh
#save and exit
[root@master01 work]# chmod u+x *.sh
[root@master01 work]# ./binngkek8s.sh
#Explanation:
The above only needs to be performed on master01. Running the binngkek8s.sh script automatically generates the following configuration files:
• keepalived: the keepalived configuration files, placed in the /etc/keepalived directory on each master node
• nginx-lb: the nginx-lb load-balancer configuration, written to /opt/k8s/kube-nginx/conf/kube-nginx.conf on each master node
#Confirm the high-availability configuration
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    echo ">>>> check check sh"
    ssh root@${master_ip} "ls -l /etc/keepalived/check_apiserver.sh"
    echo ">>> check Keepalived config"
    ssh root@${master_ip} "cat /etc/keepalived/keepalived.conf"
    echo ">>> check Nginx config"
    ssh root@${master_ip} "cat /opt/k8s/kube-nginx/conf/kube-nginx.conf"
  done  
#Check the high-availability configuration; the nginx and keepalived configuration files are printed. Things to check include (pay attention to which node each block of output belongs to, don't mix them up):
mcast_src_ip       #the IP of this node
virtual_ipaddress  #the VIP address
upstream apiserver #the nginx load-balancing upstream (see the sample config below)
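For orientation, the rendered kube-nginx.conf is a plain TCP stream proxy; with the templates used here it should look roughly like the sketch below (upstream name, hash method and timeouts may differ depending on the template shipped in the config directory):
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    upstream apiserver {
        hash $remote_addr consistent;
        server 192.168.100.202:6443 max_fails=3 fail_timeout=30s;
        server 192.168.100.203:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 16443;
        proxy_connect_timeout 1s;
        proxy_pass apiserver;
    }
}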
#Start the services
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl restart keepalived.service && systemctl enable keepalived.service"
    ssh root@${master_ip} "systemctl restart kube-nginx.service && systemctl enable kube-nginx.service"
    ssh root@${master_ip} "systemctl status keepalived.service | grep Active"
    ssh root@${master_ip} "systemctl status kube-nginx.service | grep Active"
    ssh root@${master_ip} "netstat -tlunp | grep 16443"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 11:31:55 CST; 292ms ago  #all should be running and nginx should be listening
   Active: active (running) since 五 2021-08-06 11:31:55 CST; 255ms ago
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      14274/nginx: master 
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 11:31:55 CST; 342ms ago
   Active: active (running) since 五 2021-08-06 11:31:55 CST; 309ms ago
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      8300/nginx: master  
#Confirm and verify. Wait a moment before pinging; the IP pinged below is the VIP, so use whatever your VIP is
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh root@${all_ip} "ping -c1 192.168.100.204"
  done        
#Confirm that every node can ping the VIP

8. Deploy kubectl on the masters


All operations in step 8 are performed on master01.


#Download kubectl
[root@master01 work]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.3/kubernetes-client-linux-amd64.tar.gz
[root@master01 work]# ll | grep kubernetes-client
-rw-r--r--  1 root    root  13233170 7月   1 2020 kubernetes-client-linux-amd64.tar.gz
[root@master01 work]# tar -xzvf kubernetes-client-linux-amd64.tar.gz
#Distribute kubectl
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kubernetes/client/bin/kubectl root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
  done
#Create the admin certificate and key; first create the admin CA certificate signing request file
[root@master01 work]# cat > admin-csr.json <<EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF
#Explanation:
O is system:masters: when kube-apiserver receives this certificate it sets the request's Group to system:masters;
the predefined ClusterRoleBinding cluster-admin binds Group system:masters to the ClusterRole cluster-admin, which grants permission on all APIs;
this certificate is only used by kubectl as a client certificate, so the hosts field is empty.
#Generate the key and certificate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin
#Create the kubeconfig file
kubectl reads the kube-apiserver address and authentication information from ~/.kube/config by default. This only needs to be generated once on a master node; the resulting kubeconfig file is generic and can be copied to any machine that needs to run kubectl and saved as ~/.kube/config.
# Set cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubectl.kubeconfig
# Set client authentication parameters
[root@master01 work]# kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig
#Set context parameters
[root@master01 work]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
#Set the default context
[root@master01 work]# kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
#Explanation:
--certificate-authority: the root certificate used to verify the kube-apiserver certificate;
--client-certificate, --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver;
--embed-certs=true: embeds the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (by default only the certificate file paths are written, in which case the certificate files would later have to be copied to other machines along with the kubeconfig).
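A quick optional check that the file is really self-contained is to view it; embedded certificates show up as DATA+OMITTED / REDACTED instead of file paths:
kubectl config view --kubeconfig=kubectl.kubeconfig
#confirm 'server:' points at ${KUBE_APISERVER} and the certificate fields are embedded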
#Distribute the kubeconfig
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ~/.kube"
    scp kubectl.kubeconfig root@${master_ip}:~/.kube/config
    ssh root@${master_ip} "echo 'export KUBECONFIG=\$HOME/.kube/config' >> ~/.bashrc"
    ssh root@${master_ip} "echo 'source <(kubectl completion bash)' >> ~/.bashrc"
  done


9. Deploy kube-apiserver


All of step 9 is performed on master01.

#Master node services
The kubernetes master nodes run the following components:
• kube-apiserver
• kube-scheduler
• kube-controller-manager
• kube-nginx
kube-apiserver, kube-scheduler and kube-controller-manager all run as multiple instances:
kube-scheduler and kube-controller-manager automatically elect a leader instance while the other instances block; when the leader fails, a new leader is elected, which keeps the service available.
kube-apiserver is stateless and is accessed through the kube-nginx proxy, which keeps the service available.
#Install Kubernetes
[root@master01 work]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.3/kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# ll | grep kubernetes-server
-rw-r--r--  1 root    root  363654483 7月   1 2020 kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# cd kubernetes
[root@master01 kubernetes]# tar -xzvf kubernetes-src.tar.gz
#Distribute Kubernetes
[root@master01 kubernetes]# cd ..
[root@master01 work]# source /root/environment.sh
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp -rp kubernetes/server/bin/{apiextensions-apiserver,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
  done  
Introduction to the highly available apiserver
This lab deploys a two-instance kube-apiserver cluster; the instances are accessed through the kube-nginx proxy, which keeps the service available.
#Create the kube-apiserver certificate: create the Kubernetes certificate and private key. Make sure the IPs in the hosts field are correct; they are master01, master02 and the VIP.
[root@master01 work]# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.100.202",
    "192.168.100.203",
    "192.168.100.204",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF 
#Explanation:
The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs and the IP and domain names of the kubernetes service;
the kubernetes service IP is created automatically by the apiserver and is normally the first IP of the CIDR specified by --service-cluster-ip-range; it can later be retrieved with:
kubectl get svc kubernetes
#Generate the key and certificate
[root@master01 work]#  cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
#Distribute the certificate and private key
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /etc/kubernetes/cert"
    scp kubernetes*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
#Configure kube-apiserver auditing
#Create the encryption configuration file
[root@master01 work]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
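Once kube-apiserver is running later in this step, you can verify that Secrets really are stored encrypted; a hedged optional check (the secret name enc-test is just an example) is to create a Secret and read it straight from etcd, expecting the stored value to begin with the k8s:enc:aescbc:v1:key1 prefix rather than plaintext:
kubectl create secret generic enc-test --from-literal=foo=bar
ETCDCTL_API=3 etcdctl --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/opt/k8s/work/etcd.pem \
  --key=/opt/k8s/work/etcd-key.pem \
  get /registry/secrets/default/enc-test | hexdump -C | head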
#Distribute the encryption configuration file
[root@master01 work]#  for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp encryption-config.yaml root@${master_ip}:/etc/kubernetes/
  done
#Create the audit policy file
 cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch
  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get
  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update
  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get
  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list
  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'
  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events
  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch
  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get repsonses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch
  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF
#Distribute the audit policy file
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp audit-policy.yaml root@${master_ip}:/etc/kubernetes/audit-policy.yaml
  done
#Configure metrics-server: create the metrics-server CA certificate signing request file
[root@master01 work]#  cat > proxy-client-csr.json <<EOF
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
#Explanation:
The CN must appear in the kube-apiserver --requestheader-allowed-names parameter, otherwise later access to the metrics endpoints will be rejected as unauthorized.
#Generate the key and certificate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client 
#Distribute the certificate and private key
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp proxy-client*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
#Create the kube-apiserver systemd unit
[root@master01 work]# cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --insecure-port=0 \\
  --secure-port=6443 \\
  --bind-address=##MASTER_IP## \\
  --advertise-address=##MASTER_IP## \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --feature-gates=DynamicAuditing=true \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --audit-dynamic-configuration \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-truncate-enabled=true \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --enable-bootstrap-token-auth=true \\
  --requestheader-allowed-names="system:metrics-server" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-https=true \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
——————————————————————————————————————————————————————————————————————————————————————————————————————————————————
#Pass the file containing the audit policy to kube-apiserver with the --audit-policy-file flag. If the flag is not set, no events are logged.
#Explanation:
• --advertise-address: the IP the apiserver advertises to the cluster (the backend node IP of the kubernetes service);
• --default-*-toleration-seconds: thresholds related to node unavailability;
• --max-*-requests-inflight: maximum in-flight request thresholds;
• --etcd-*: the certificates and server addresses used to access etcd;
• --encryption-provider-config: the configuration used to encrypt secrets stored in etcd;
• --bind-address: the IP that https listens on; it must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
• --secure-port: the https listening port;
• --insecure-port=0: disables the insecure http port (8080);
• --tls-*-file: the certificate, private key and CA file used by the apiserver;
• --audit-*: parameters for the audit policy and the audit log file;
• --client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
• --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
• --requestheader-*: parameters for the kube-apiserver aggregation layer, required by proxy-client and the HPA;
• --requestheader-client-ca-file: the CA used to sign the certificates specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
• --requestheader-allowed-names: must not be empty; a comma-separated list of CN names of the --proxy-client-cert-file certificate, set here to "system:metrics-server";
• --service-account-key-file: the public key used to sign ServiceAccount tokens; kube-controller-manager's --service-account-private-key-file specifies the matching private key, and the two are used as a pair;
• --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
• --authorization-mode=Node,RBAC and --anonymous-auth=false: enable the Node and RBAC authorization modes and reject unauthenticated requests;
• --enable-admission-plugins: enables some plugins that are off by default;
• --allow-privileged: allows running privileged containers;
• --apiserver-count=3: the number of apiserver instances;
• --event-ttl: how long events are kept;
• --kubelet-*: if set, kubelet APIs are accessed over https; RBAC rules must be defined for the user the certificate maps to (the kubernetes*.pem certificate above maps to user kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
• --proxy-client-*: the certificate the apiserver uses to access metrics-server;
• --service-cluster-ip-range: the Service cluster IP range;
• --service-node-port-range: the NodePort port range.
#Tip: if the kube-apiserver machine does not run kube-proxy, you also need to add the --enable-aggregator-routing=true parameter.
#Note: the CA certificate specified by requestheader-client-ca-file must have both client auth and server auth usages;
if --requestheader-allowed-names is empty, or the CN of the --proxy-client-cert-file certificate is not in allowed-names, later queries for node or pod metrics will fail with an error such as:
[root@master01 ~]# kubectl top nodes       #no action needed; this just demonstrates the error message
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope
———————————————————————————————————————————————————————————————————————————————————————————————————————————————————
#Distribute the systemd units
[root@master01 work]# for (( i=0; i < 2; i++ ))
  do
    sed -e "s/##MASTER_NAME##/${MASTER_NAMES[i]}/" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${MASTER_IPS[i]}.service
  done
[root@master01 work]# ls kube-apiserver*.service    #check that the corresponding IP of each of the two master nodes has been substituted in
kube-apiserver-192.168.100.202.service  kube-apiserver-192.168.100.203.service
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-apiserver-${master_ip}.service root@${master_ip}:/etc/systemd/system/kube-apiserver.service
  done      
#Start the kube-apiserver service
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
  done
#Check the kube-apiserver service
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-apiserver |grep 'Active:'"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 12:05:06 CST; 31s ago  #both should be running
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 12:05:18 CST; 20s ago
#View the data kube-apiserver has written to etcd
[root@master01 work]# ETCDCTL_API=3 etcdctl \
    --endpoints=${ETCD_ENDPOINTS} \
    --cacert=/opt/k8s/work/ca.pem \
    --cert=/opt/k8s/work/etcd.pem \
    --key=/opt/k8s/work/etcd-key.pem \
get /registry/ --prefix --keys-only
#Check the cluster information
[root@master01 work]# kubectl cluster-info  #cluster IP
Kubernetes master is running at https://192.168.100.204:16443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master01 work]# kubectl get all --all-namespaces  
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.20.0.1    <none>        443/TCP   107s
[root@master01 work]# kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                                                                           
etcd-1               Healthy     {"health":"true"}                                                                           
[root@master01 work]# netstat -lnpt|grep 6443
tcp        0      0 192.168.100.202:6443    0.0.0.0:*               LISTEN      17172/kube-apiserve 
tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      14274/nginx: master 
#Tips
When kubectl get componentstatuses is executed, the apiserver sends the health checks to 127.0.0.1 by default. controller-manager and scheduler are not deployed yet, so they show Unhealthy;
6443: the secure port that accepts https requests; every request is authenticated and authorized;
16443: the Nginx reverse-proxy listening port;
since the insecure port is disabled, nothing listens on 8080.
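To confirm the proxy path end to end, you can also hit /healthz through the VIP on port 16443 with the admin certificate (an optional check modeled on the curl used later in this guide):
curl -s --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://192.168.100.204:16443/healthz
#expect the plain response: ok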
#Authorization
Grant kube-apiserver permission to access the kubelet API.
When commands such as kubectl exec, run and logs are executed, the apiserver forwards the request to the kubelet's https port. This lab defines an RBAC rule that authorizes the user of the certificate the apiserver uses (kubernetes.pem, CN: kubernetes) to access the kubelet API:
[root@master01 ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created  #reports created
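If you want to double-check what was just granted (optional), describe the binding:
kubectl describe clusterrolebinding kube-apiserver:kubelet-apis
#expect ClusterRole system:kubelet-api-admin bound to User kubernetes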

10. Deploy kube-controller-manager


All of step 10 is performed on master01.

Introduction to the highly available kube-controller-manager
This lab deploys a two-instance kube-controller-manager cluster. After startup the instances elect a leader through a competitive election and the other instances block; when the leader becomes unavailable, the blocked instances run a new election and produce a new leader, which keeps the service available.
To secure communication, this document first generates an x509 certificate and private key; kube-controller-manager uses the certificate in two cases:
• communicating with the secure port of kube-apiserver;
• serving prometheus-format metrics on its secure port (https, 10257).
#Create the kube-controller-manager certificate and private key: create the kube-controller-manager CA certificate signing request file; remember to adjust the IPs
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > kube-controller-manager-csr.json <<EOF 
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.100.202",
    "192.168.100.203",
    "192.168.100.204"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-controller-manager",
      "OU": "System"
    }
  ]
}
EOF
#Explanation:
the hosts list contains all kube-controller-manager node IPs;
CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to do its job.
#Generate
[root@master01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
#Distribute the certificate and private key
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-controller-manager*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
#Create and distribute the kubeconfig
kube-controller-manager uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate and the kube-controller-manager certificate:
[root@master01 work]# kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
[root@master01 work]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
[root@master01 work]#  kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
[root@master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-controller-manager.kubeconfig root@${master_ip}:/etc/kubernetes/
  done
#Create the kube-controller-manager systemd unit
[root@master01 work]# cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --secure-port=10257 \\
  --bind-address=127.0.0.1 \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials\\
  --concurrent-service-syncs=2 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="system:metrics-server" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=87600h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#Distribute the systemd unit
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-controller-manager.service.template root@${master_ip}:/etc/systemd/system/kube-controller-manager.service
  done
#Start the kube-controller-manager service
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
  done
#Check the kube-controller-manager service
[root@master01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-controller-manager|grep Active"
  done
>>> 192.168.100.202
   Active: active (running) since 五 2021-08-06 12:17:44 CST; 25s ago  #all should be running
>>> 192.168.100.203
   Active: active (running) since 五 2021-08-06 12:17:45 CST; 25s ago
#View the exposed metrics
[root@master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://127.0.0.1:10257/metrics | head
#View permissions
[root@master01 work]# kubectl describe clusterrole system:kube-controller-manager
#Tips
The ClusterRole system:kube-controller-manager has very limited permissions; it can only create resources such as secrets and serviceaccounts. The permissions of the individual controllers are spread across the ClusterRoles system:controller:XXX.
When the --use-service-account-credentials=true parameter is added to the kube-controller-manager startup arguments, the main controller creates a ServiceAccount XXX-controller for each controller, and the built-in ClusterRoleBinding system:controller:XXX grants each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX.
[root@master01 ~]# kubectl get clusterrole | grep controller
#View the current leader
[root@master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
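In the returned Endpoints object the leader is recorded in an annotation; expect something roughly like the following (the holder identity and timestamps will differ in your run):
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master01_<uuid>","leaseDurationSeconds":15,"acquireTime":"...","renewTime":"...","leaderTransitions":0}'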

