1.2. Deploy the etcd cluster
- All master nodes need etcd (here we reuse the master nodes; you can also deploy etcd on three dedicated nodes, as long as the kubernetes cluster can reach them)
1.2.0. Download the etcd binaries
```bash
k8s-01:~ # cd /opt/k8s/packages/
k8s-01:/opt/k8s/packages # wget https://github.com/etcd-io/etcd/releases/download/v3.4.12/etcd-v3.4.12-linux-amd64.tar.gz
k8s-01:/opt/k8s/packages # tar xf etcd-v3.4.12-linux-amd64.tar.gz
```
1.2.1. Create the etcd certificate signing request
```bash
k8s-01:~ # cd /opt/k8s/ssl
k8s-01:/opt/k8s/ssl # cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.72.39",
    "192.168.72.40",
    "192.168.72.41"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "k8s",
      "OU": "bandian"
    }
  ]
}
EOF
```
- The `hosts` field lists the etcd node IPs or domain names authorized to use this certificate; all 3 nodes of the etcd cluster must be included in it
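Once `cfssl` has signed the certificate (next step), you can confirm that every IP from the `hosts` field actually landed in the certificate's Subject Alternative Name list. A minimal sketch: since the real `etcd.pem` lives on the cluster host, it fabricates a throwaway self-signed certificate with the same SANs and then inspects it; against the real file you would run only the last command on `/etc/etcd/cert/etcd.pem`.

```shell
# Illustration only: create a throwaway cert carrying the same SANs as
# the etcd CSR above (requires OpenSSL 1.1.1+ for -addext) ...
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=etcd" \
  -keyout /tmp/etcd-demo-key.pem -out /tmp/etcd-demo.pem \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.72.39,IP:192.168.72.40,IP:192.168.72.41"
# ... then print its SAN extension; all four IPs should appear
openssl x509 -in /tmp/etcd-demo.pem -noout -ext subjectAltName
```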
1.2.2. Generate the etcd certificate and private key
```bash
k8s-01:/opt/k8s/ssl # cfssl gencert -ca=/opt/k8s/ssl/ca.pem \
  -ca-key=/opt/k8s/ssl/ca-key.pem \
  -config=/opt/k8s/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
```
1.2.3. Manage etcd with systemd
```bash
k8s-01:~ # cd /opt/k8s/conf/
k8s-01:/opt/k8s/conf # source /opt/k8s/bin/k8s-env.sh
k8s-01:/opt/k8s/conf # cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=${ETCD_DATA_DIR}
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=${ETCD_DATA_DIR} \\
  --wal-dir=${ETCD_WAL_DIR} \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000 \\
  --enable-v2=true
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
- `WorkingDirectory`, `--data-dir`: etcd's working directory and data store, set to `${ETCD_DATA_DIR}`; this directory must exist before etcd starts (a later step creates it)
- `--wal-dir`: the WAL directory; for better performance this is usually an SSD on a different disk from `--data-dir`
- `--name`: the node name; when `--initial-cluster-state` is `new`, the value of `--name` must appear in the `--initial-cluster` list
- `--cert-file`, `--key-file`: certificate and private key used for etcd server-to-client communication
- `--trusted-ca-file`: the CA certificate that signed the client certificates, used to verify them
- `--peer-cert-file`, `--peer-key-file`: certificate and private key used for etcd peer-to-peer communication
- `--peer-trusted-ca-file`: the CA certificate that signed the peer certificates, used to verify them
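The template and the scripts below reference `${ETCD_DATA_DIR}`, `${ETCD_WAL_DIR}`, `${MASTER_NAMES[@]}`, `${ETCD_IPS[@]}` and `${ETCD_NODES}` from `/opt/k8s/bin/k8s-env.sh`, which is not shown in this section. A minimal sketch of what it could contain; the variable names match the scripts in this guide, but the paths and values are assumptions to adapt to your environment.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of /opt/k8s/bin/k8s-env.sh (paths are assumptions)
MASTER_NAMES=(k8s-01 k8s-02 k8s-03)
ETCD_IPS=(192.168.72.39 192.168.72.40 192.168.72.41)
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
# --initial-cluster expects comma-separated name=peer-url pairs
ETCD_NODES=""
for (( i = 0; i < ${#MASTER_NAMES[@]}; i++ )); do
  ETCD_NODES+="${MASTER_NAMES[i]}=https://${ETCD_IPS[i]}:2380,"
done
ETCD_NODES="${ETCD_NODES%,}"   # strip the trailing comma
echo "${ETCD_NODES}"
# prints: k8s-01=https://192.168.72.39:2380,k8s-02=https://192.168.72.40:2380,k8s-03=https://192.168.72.41:2380
```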
1.2.4. Distribute the etcd certificates and unit files to the other etcd nodes
```bash
#!/usr/bin/env bash
source /opt/k8s/bin/k8s-env.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" \
      -e "s/##NODE_IP##/${ETCD_IPS[i]}/" \
      /opt/k8s/conf/etcd.service.template > /opt/k8s/conf/etcd-${ETCD_IPS[i]}.service
done
for host in ${ETCD_IPS[@]}
do
  printf "\e[1;34m${host}\e[0m\n"
  scp /opt/k8s/packages/etcd-v3.4.12-linux-amd64/etcd* ${host}:/opt/k8s/bin/
  scp /opt/k8s/conf/etcd-${host}.service ${host}:/etc/systemd/system/etcd.service
  scp /opt/k8s/ssl/etcd*.pem ${host}:/etc/etcd/cert/
done
```
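The `sed` substitution in the first loop is easy to sanity-check in isolation. A one-line stand-in for the template shows what each generated `etcd-<ip>.service` ends up containing for a given node:

```shell
# Same ##NODE_NAME##/##NODE_IP## substitution as the loop above,
# applied to a one-line stand-in for etcd.service.template
echo '--name=##NODE_NAME## --listen-peer-urls=https://##NODE_IP##:2380' \
  | sed -e 's/##NODE_NAME##/k8s-01/' -e 's/##NODE_IP##/192.168.72.39/'
# prints: --name=k8s-01 --listen-peer-urls=https://192.168.72.39:2380
```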
1.2.5. Configure and start the etcd service
```bash
#!/usr/bin/env bash
source /opt/k8s/bin/k8s-env.sh
for host in ${ETCD_IPS[@]}
do
  printf "\e[1;34m${host}\e[0m\n"
  ssh root@${host} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
  ssh root@${host} "chmod 700 ${ETCD_DATA_DIR}"
  ssh root@${host} "systemctl daemon-reload && \
                    systemctl enable etcd && \
                    systemctl restart etcd && \
                    systemctl status etcd | grep Active"
done
```
- If the first node's output is failed, don't rush to cancel; as long as the next two nodes report running, everything is fine. This is how the cluster forms: the first member waits for its peers to join. Output like the following is normal:
```
192.168.72.39
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xe" for details.
192.168.72.40
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
   Active: active (running) since Sat 2021-02-13 00:27:24 CST; 16ms ago
192.168.72.41
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
   Active: active (running) since Sat 2021-02-13 00:27:25 CST; 6ms ago
```
1.2.6. Verify the etcd cluster status
- Running this on k8s-01 is enough; since it is a cluster, the same information can be fetched from any node
```bash
#!/usr/bin/env bash
source /opt/k8s/bin/k8s-env.sh
for host in ${ETCD_IPS[@]}
do
  printf "\e[1;34m${host}\e[0m\n"
  ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
    --endpoints=https://${host}:2379 \
    --cacert=/etc/kubernetes/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem endpoint health
done
```
- If the output shows `successfully committed proposal`, the cluster is healthy:
```
192.168.72.39
https://192.168.72.39:2379 is healthy: successfully committed proposal: took = 9.402229ms
192.168.72.40
https://192.168.72.40:2379 is healthy: successfully committed proposal: took = 10.247073ms
192.168.72.41
https://192.168.72.41:2379 is healthy: successfully committed proposal: took = 11.01422ms
```
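Beyond `endpoint health`, `etcdctl` can also report which member is the leader and each member's database size. A sketch using the same certificate paths as above; it needs the live cluster, so run it on one of the etcd nodes:

```
ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.72.39:2379,https://192.168.72.40:2379,https://192.168.72.41:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  endpoint status -w table
```

The `IS LEADER` column of the table should show `true` for exactly one of the three endpoints.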