Kubernetes HA Cluster Binary Deployment (1): Host Preparation and Load Balancer Installation
Kubernetes HA Cluster Binary Deployment (2): Deploying the etcd Cluster
Kubernetes HA Cluster Binary Deployment (3): Deploying kube-apiserver
Kubernetes HA Cluster Binary Deployment (4): Deploying kubectl, kube-controller-manager and kube-scheduler
Kubernetes HA Cluster Binary Deployment (5): kubelet, kube-proxy, Calico, CoreDNS
Kubernetes HA Cluster Binary Deployment (6): Adding Nodes to the Kubernetes Cluster
1. Download the Kubernetes packages
Download the Kubernetes server package on master1:
[root@k8s-master1 k8s-work]# wget https://dl.k8s.io/v1.21.10/kubernetes-server-linux-amd64.tar.gz
If the network is unreliable, retry a few times, or download the file locally and upload it to the server.
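To confirm the download is intact, you can compare it against the published checksum. A minimal sketch, assuming a .sha256 file (containing just the hex digest) is published next to the tarball on dl.k8s.io:

# Assumption: dl.k8s.io serves a .sha256 file alongside the tarball
wget -q https://dl.k8s.io/v1.21.10/kubernetes-server-linux-amd64.tar.gz.sha256
echo "$(cat kubernetes-server-linux-amd64.tar.gz.sha256)  kubernetes-server-linux-amd64.tar.gz" | sha256sum --check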
2. Install the Kubernetes binaries
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
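A quick way to confirm the binaries are in place and executable is to print their versions:

kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version
kubectl version --client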
3. Distribute the Kubernetes binaries
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin/
Distribute the binaries to the worker nodes. Only one server (k8s-worker1) is planned as a worker node here; in practice, the masters are often also used as worker nodes to save resources.
scp kubelet kube-proxy k8s-master1:/usr/local/bin
scp kubelet kube-proxy k8s-master2:/usr/local/bin
scp kubelet kube-proxy k8s-master3:/usr/local/bin
scp kubelet kube-proxy k8s-worker1:/usr/local/bin
In production, if the master servers (control plane) will not also be used as worker nodes (data plane), there is no need to copy kubelet and kube-proxy to them. With more nodes, the scp commands above can also be wrapped in a loop, as sketched below.
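A minimal sketch, assuming passwordless SSH to the host names used in this series:

# Control-plane binaries to the other masters
for host in k8s-master2 k8s-master3; do
  scp kube-apiserver kube-controller-manager kube-scheduler kubectl ${host}:/usr/local/bin/
done

# Node binaries to every node that will run workloads
for host in k8s-master1 k8s-master2 k8s-master3 k8s-worker1; do
  scp kubelet kube-proxy ${host}:/usr/local/bin/
done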
4. Create directories on the cluster nodes
Run this on every node except the load balancer nodes, i.e. the three masters plus worker1:
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl        # certificates used by the cluster
mkdir -p /var/log/kubernetes        # logs of the components on this node
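Instead of logging in to each node, the directories can also be created from master1 over SSH; a sketch, again assuming passwordless SSH:

for host in k8s-master1 k8s-master2 k8s-master3 k8s-worker1; do
  ssh ${host} "mkdir -p /etc/kubernetes/ssl /var/log/kubernetes"
done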
5. Deploy kube-apiserver
5.1 Create the apiserver certificate signing request file
Run on master1:
cd /data/k8s-work

cat > kube-apiserver-csr.json << "EOF"
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.10.103",
    "192.168.10.104",
    "192.168.10.105",
    "192.168.10.106",
    "192.168.10.107",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110",
    "192.168.10.111",
    "192.168.10.100",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF

JSON does not allow comments, so the annotations are kept outside the file: 192.168.10.106-111 are spare addresses reserved so that nodes can be added to the cluster later, 192.168.10.100 is the virtual IP on the load balancer, and 10.96.0.1 is the first IP of the cluster's service network.
Note: if the hosts field is not empty, it must list every IP (including the VIP) and domain name authorized to use this certificate. Because the certificate is used throughout the cluster, include the IPs of all nodes, and add a few reserved IPs to make later expansion easier. Also include the first IP of the service network (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.96.0.1).
5.2 Generate the apiserver certificate and token file
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
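Optionally confirm that the generated certificate really contains every IP and domain from the hosts list; openssl prints them under "X509v3 Subject Alternative Name":

openssl x509 -in kube-apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"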
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Note: this creates the token required for TLS bootstrapping. TLS bootstrapping: once the apiserver has TLS authentication enabled, kubelet and kube-proxy on the nodes must present valid, CA-signed certificates to communicate with kube-apiserver. When there are many nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster more complex. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: kubelet registers with the apiserver as a low-privilege user, and the apiserver signs its certificate dynamically. This approach is strongly recommended on the nodes; it is currently used mainly for kubelet, while kube-proxy still uses a certificate that we issue ourselves.
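token.csv is a plain CSV file with four fields: token, user name, user ID, and group (the static token file format that --token-auth-file expects). A quick sanity check that the file has the expected shape:

awk -F, '{printf "token=%s  user=%s  uid=%s  group=%s\n", $1, $2, $3, $4}' token.csv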
5.3 Create the apiserver service configuration file
cat > /etc/kubernetes/kube-apiserver.conf << "EOF"
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.10.103 \
  --secure-port=6443 \
  --advertise-address=192.168.10.103 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=api \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.10.103:2379,https://192.168.10.104:2379,https://192.168.10.105:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
EOF

Note: comments cannot follow the trailing backslashes, so the annotations are kept out of the file. --bind-address and --advertise-address use the IP of the current host (master1); --secure-port=6443 matches the backend port configured for the load balancer (HAProxy) in part one; --token-auth-file points at the token.csv created in the previous step; the TLS certificate/key, client CA, and kubelet client certificate/key come from /etc/kubernetes/ssl; --etcd-servers lists the etcd cluster endpoints.
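Several of the flags above point at certificate and key files (the etcd certificates were created in part two of this series). A small sketch that pulls every /etc/... path out of the config and checks that it exists can catch typos before the service is started:

grep -oE '/etc/[^ "\\]+\.(pem|csv)' /etc/kubernetes/kube-apiserver.conf | sort -u | while read -r f; do
  [ -f "$f" ] && echo "OK       $f" || echo "MISSING  $f"
done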
cd /etc/kubernetes
5.4 Create the apiserver service management file (systemd unit)
cat > /etc/systemd/system/kube-apiserver.service << "EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
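Before anything is started, the unit file can be checked for obvious mistakes; any output from the command below points at something worth looking at:

systemd-analyze verify /etc/systemd/system/kube-apiserver.service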
5.5 Sync files to the cluster master nodes
[root@k8s-master1 k8s-work]# cd /data/k8s-work/
cp ca*.pem /etc/kubernetes/ssl/
cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
scp /etc/kubernetes/token.csv k8s-master2:/etc/kubernetes
scp /etc/kubernetes/token.csv k8s-master3:/etc/kubernetes

scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl

scp /etc/kubernetes/ssl/ca*.pem k8s-master2:/etc/kubernetes/ssl
scp /etc/kubernetes/ssl/ca*.pem k8s-master3:/etc/kubernetes/ssl
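After copying, it is worth confirming that all three masters hold identical certificates and the same token file; a sketch that compares checksums (assuming passwordless SSH from master1):

md5sum /etc/kubernetes/ssl/*.pem /etc/kubernetes/token.csv
for host in k8s-master2 k8s-master3; do
  echo "== ${host} =="
  ssh ${host} "md5sum /etc/kubernetes/ssl/*.pem /etc/kubernetes/token.csv"
done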
scp /etc/kubernetes/kube-apiserver.conf k8s-master2:/etc/kubernetes/kube-apiserver.conf
On master2, edit the configuration file so that --bind-address and --advertise-address use master2's IP (192.168.10.104):
vim /etc/kubernetes/kube-apiserver.conf
# cat /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.10.104 \
  --secure-port=6443 \
  --advertise-address=192.168.10.104 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=api \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.10.103:2379,https://192.168.10.104:2379,https://192.168.10.105:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
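Only --bind-address and --advertise-address differ between the masters, so the edit can also be scripted; a sketch for master2 (run the same command with 192.168.10.105 on master3):

# Rewrite only the bind/advertise addresses; every other flag stays identical across masters
sed -ri 's/(--(bind|advertise)-address=)[0-9.]+/\1192.168.10.104/' /etc/kubernetes/kube-apiserver.conf
grep -E -e '--(bind|advertise)-address' /etc/kubernetes/kube-apiserver.conf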
scp /etc/kubernetes/kube-apiserver.conf k8s-master3:/etc/kubernetes/kube-apiserver.conf
On master3, edit the configuration file so that --bind-address and --advertise-address use master3's IP (192.168.10.105):
vim /etc/kubernetes/kube-apiserver.conf
# cat /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.10.105 \
  --secure-port=6443 \
  --advertise-address=192.168.10.105 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=api \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.10.103:2379,https://192.168.10.104:2379,https://192.168.10.105:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
Distribute the service management file (systemd unit):
scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/kube-apiserver.service
scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/kube-apiserver.service
5.6 Start the apiserver service
Run on all three master nodes:
systemctl daemon-reload
systemctl enable --now kube-apiserver
systemctl status kube-apiserver

# Test
curl --insecure https://192.168.10.103:6443/
curl --insecure https://192.168.10.104:6443/
curl --insecure https://192.168.10.105:6443/
curl --insecure https://192.168.10.100:6443/    # virtual IP on the load balancer
These requests are unauthenticated, so the apiserver responds with 401, which is enough to confirm that the service started correctly and is reachable.
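If a node does not respond, two quick places to look are the listening socket and the service log:

ss -tlnp | grep 6443                             # kube-apiserver should be listening on 6443
journalctl -u kube-apiserver -n 50 --no-pager    # recent log entries from the service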