Deploying a Highly Available Kubernetes Cluster (v1.18) on CentOS 7 from Binaries


Kubernetes Cluster Architecture and Components

Master Components

  • kube-apiserver

The Kubernetes API server is the unified entry point of the cluster and the coordinator of all components. It exposes a RESTful API; all create, delete, update, query, and watch operations on resource objects go through the API server and are then persisted to etcd.

  • kube-controller-manager

Handles the routine background tasks of the cluster. Each resource has a corresponding controller, and the controller-manager is responsible for managing these controllers.

  • kube-scheduler

Selects a Node for newly created Pods according to its scheduling algorithm. It can be deployed anywhere: on the same node as other components or on a separate one.

  • etcd

A distributed key-value store used to save cluster state data, such as Pod and Service objects.

Node Components

  • kubelet

The kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on the local machine: creating containers, mounting volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.

  • kube-proxy

Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.

  • docker or rkt

The container engine that runs the containers.

Production K8s Platform Planning

Single-Master cluster

Multi-Master cluster (HA)

Lab Environment

Host          OS           IP address        Roles          Components (versions)
k8s-master1   CentOS 7.4   192.168.43.205    master, node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker
k8s-master2   CentOS 7.4   192.168.43.206    master, node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker
k8s-node1     CentOS 7.4   192.168.43.207    node           kubelet, kube-proxy, docker, etcd

System Initialization

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap

swapoff -a
echo 'swapoff -a ' >> /etc/rc.d/rc.local
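
Note that /etc/rc.d/rc.local is not executable by default on CentOS 7, so the rc.local entry above only takes effect after the file is made executable. A small sketch of that fix plus a common alternative (assuming swap is declared in /etc/fstab):

#make rc.local executable so the swapoff line above runs at boot
chmod +x /etc/rc.d/rc.local
#or disable swap permanently by commenting out the swap entry in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab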

Set the hostname

hostnamectl set-hostname <hostname>

Add local hosts entries for all nodes

cat >> /etc/hosts << EOF
x.x.x.x hostname1
y.y.y.y hostname2
...
EOF

Enable kernel network support

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

Configure passwordless SSH

Generate a key pair on the master1 node and distribute the public key to all of the other hosts.

[root@k8s-master1 ~]# ssh-keygen -t rsa -b 1200
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:OoMw1dARsWhbJKAQL2hUxwnM4tLQJeLynAQHzqNQs5s root@localhost.localdomain
The key's randomart image is:
+---[RSA 1200]----+
|*=X=*o*+         |
|OO.*.O..         |
|BO= + +          |
|**o* o           |
|o E .   S        |
|   o . .         |
|    . +          |
|       o         |
|                 |
+----[SHA256]-----+

Distribute the public key

[root@k8s-master1 ~]# for i in k8s-master2 k8s-node1;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i;done

Install the CFSSL Tools

CFSSL is an open-source PKI/TLS toolkit from CloudFlare. It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and is written in Go.

GitHub: https://github.com/cloudflare/cfssl  Official site: https://pkg.cfssl.org/

On one of the nodes (usually master1), run the following commands to install it directly:

curl -s -L -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o /usr/local/bin/cfssljson  https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o /usr/local/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /usr/local/bin/cfssl*

If the environment has no internet access, download the latest cfssl_linux-amd64, cfssljson_linux-amd64, and cfssl-certinfo_linux-amd64 from the official site, upload them to the /root directory of one of the nodes (usually master1), and install cfssl with the following commands:

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
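
To confirm the tools were installed correctly (a quick optional sanity check, not part of the original procedure), print the cfssl version:

cfssl version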

Deploy the etcd Cluster

etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and concurrency control (such as leader election and distributed locks). Kubernetes uses etcd to store all of its runtime data.

GitHub: https://github.com/etcd-io/etcd  Official site: https://etcd.io/

Generate self-signed certificates for etcd with cfssl

On the node where cfssl is installed, run the following steps to create a CA for etcd and issue its self-signed certificate.

Create the working directories

mkdir -p /opt/kubernetes/{bin,cfg,log,ssl}

bin holds the Kubernetes-related binaries

cfg holds the configuration files of the Kubernetes components

log holds the log files of the Kubernetes components

ssl holds the Kubernetes-related certificate files
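
The same directory layout is also needed on k8s-master2 and k8s-node1, because files are copied into /opt/kubernetes on those hosts later. A small sketch (assuming the passwordless SSH configured earlier) that creates it remotely:

for i in k8s-master2 k8s-node1; do
  ssh root@$i "mkdir -p /opt/kubernetes/{bin,cfg,log,ssl}"
done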

Create the JSON files used to generate the certificates

In the ssl directory, create ca-csr.json, ca-config.json, and etcd-csr.json.

The content of ca-csr.json is as follows:

{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

The content of ca-config.json is as follows:

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

ca-config.json: can define multiple profiles with different expiry times, usage scenarios, and other parameters; here we define only one profile, www.

signing: indicates that the certificate can be used to sign other certificates.

server auth: indicates that a client can use this CA to verify certificates presented by servers.

client auth: indicates that a server can use this CA to verify certificates presented by clients.

The content of etcd-csr.json is as follows:

{
    "CN": "etcd",
    "hosts": [
    "127.0.0.1",
    "192.168.43.205",
    "192.168.43.206",
    "192.168.43.207"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

The hosts field in etcd-csr.json must include the IP addresses of all etcd cluster nodes.

Generate the CA certificate and private key

Run the following command in the ssl directory:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

After it finishes, two .pem files are generated in the ssl directory: ca.pem and ca-key.pem.

[root@k8s-master1 ssl]# ls ca*.pem
ca-key.pem  ca.pem
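
If you want to double-check what was issued (optional, a sketch using the cfssl-certinfo tool installed earlier), you can dump the CA certificate:

cfssl-certinfo -cert ca.pem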

Generate the self-signed certificate for etcd

Again in the ssl directory, run the following command:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www etcd-csr.json | cfssljson -bare etcd

After it finishes, two .pem files are generated in the ssl directory: etcd.pem and etcd-key.pem.

[root@k8s-master1 ssl]# ls etc*.pem
etcd-key.pem  etcd.pem

Deploy etcd 3.3

Go to https://github.com/etcd-io/etcd/releases and download the etcd 3.3 binary package (this article uses 3.3.24), upload it to the /root directory of one of the etcd nodes, then run the following commands to extract it and create the etcd directories and configuration files.

Download etcd 3.3.24

[root@k8s-master1 ~]# cd /root/
[root@k8s-master1 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.24/etcd-v3.3.24-linux-amd64.tar.gz
[root@k8s-master1 ~]# ls
anaconda-ks.cfg  etcd-v3.3.24-linux-amd64.tar.gz

Extract etcd

[root@k8s-master1 ~]# tar -zxvf etcd-v3.3.24-linux-amd64.tar.gz
[root@k8s-master1 ~]# ls
anaconda-ks.cfg  etcd-v3.3.24-linux-amd64  etcd-v3.3.24-linux-amd64.tar.gz
[root@k8s-master1 ~]#

Copy the binaries to /opt/kubernetes/bin

[root@k8s-master1 ~]# cp -a etcd-v3.3.24-linux-amd64/etcd* /opt/kubernetes/bin/
[root@k8s-master1 ~]# ls -l /opt/kubernetes/bin/etcd*
-rwxr-xr-x 1 root root 22820544 8月  25 10:46 /opt/kubernetes/bin/etcd
-rwxr-xr-x 1 root root 18389632 8月  25 10:46 /opt/kubernetes/bin/etcdctl

Create the etcd configuration file

In /opt/kubernetes/cfg, create etcd.conf with the following content:

#[Member]
#Name of this etcd node; must be unique within the cluster
ETCD_NAME="etcd01"
#Directory where etcd stores its data
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#Address used for peer communication between members
ETCD_LISTEN_PEER_URLS="https://192.168.43.205:2380"
#Addresses on which etcd serves client requests
ETCD_LISTEN_CLIENT_URLS="https://192.168.43.205:2379,https://127.0.0.1:2379"
#[Clustering]
#Peer URL of this member, advertised to the rest of the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.43.205:2380"
#Client URLs of this member, advertised to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.43.205:2379"
#Information about all nodes in the cluster
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.43.205:2380,etcd02=https://192.168.43.206:2380,etcd03=https://192.168.43.207:2380"
#Token used to create the cluster; must be unique per cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#Set to new for all members during initial static or DNS bootstrapping; if set to existing, etcd tries to join an existing cluster
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit file

In /usr/lib/systemd/system, create etcd.service with the following content:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/etcd.conf
ExecStart=/opt/kubernetes/bin/etcd \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/etcd.pem \
--key-file=/opt/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/etcd.pem \
--peer-key-file=/opt/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Deploy the remaining etcd nodes

Copy the etcd and etcdctl binaries from /opt/kubernetes/bin, etcd.conf from /opt/kubernetes/cfg, the systemd unit file, and ca.pem, ca-key.pem, etcd.pem, and etcd-key.pem from /opt/kubernetes/ssl to the remaining etcd cluster nodes, then adjust the name and IP addresses in each node's etcd configuration file.

Copy the etcd binaries, configuration file, certificate files, and etcd.service to the remaining etcd cluster nodes

for i in  k8s-master2 k8s-node1;do scp /opt/kubernetes/bin/{etcd,etcdctl} root@$i:/opt/kubernetes/bin/;done
 for i in  k8s-master2 k8s-node1;do scp /opt/kubernetes/cfg/etcd.conf root@$i:/opt/kubernetes/cfg/;done
 for i in  k8s-master2 k8s-node1;do scp /opt/kubernetes/ssl/{etc*.pem,ca*.pem} root@$i:/opt/kubernetes/ssl/;done
 for i in  k8s-master2 k8s-node1;do scp /usr/lib/systemd/system/etcd.service root@$i:/usr/lib/systemd/system/;done

Modify the etcd configuration on the other nodes

On k8s-master2, modify etcd.conf:

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.43.206:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.43.206:2379,https://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.43.206:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.43.206:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.43.205:2380,etcd02=https://192.168.43.206:2380,etcd03=https://192.168.43.207:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

On k8s-node1, modify etcd.conf:

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.43.207:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.43.207:2379,https://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.43.207:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.43.207:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.43.205:2380,etcd02=https://192.168.43.206:2380,etcd03=https://192.168.43.207:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the etcd service on each node

On all etcd cluster nodes, start etcd and enable it at boot:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

On the first node, the start command will appear to hang and not return to the prompt; it is waiting for the other members to come up. Simply continue starting the remaining nodes.
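
If you want to watch what the first member is doing while it waits (optional), follow its logs from another terminal:

journalctl -u etcd -f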

Check the etcd cluster status

On each node, add the binaries in /opt/kubernetes/bin to the system PATH so they can be called directly later:

[root@k8s-node1 ~]# echo "PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile
[root@k8s-node1 ~]# source /etc/profile

On any etcd node, run the following command to check the cluster status. If all members report healthy, the etcd cluster has been deployed successfully.

[root@k8s-node1 ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem  --key-file=/opt/kubernetes/ssl/etcd-key.pem  --endpoints="https://192.168.43.205:2379,https://192.168.43.206:2379,https://192.168.43.207:2379"   cluster-health
member 766d46413c399dca is healthy: got healthy result from https://192.168.43.206:2379
member 8a541d3634ab80cf is healthy: got healthy result from https://192.168.43.207:2379
member b935d4e1b5e06e68 is healthy: got healthy result from https://192.168.43.205:2379
cluster is healthy
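
The command above uses the etcd v2 API that etcdctl 3.3 speaks by default. If you prefer the v3 API, an equivalent check looks roughly like this (a sketch; the flag names differ):

ETCDCTL_API=3 etcdctl \
  --cacert=/opt/kubernetes/ssl/ca.pem \
  --cert=/opt/kubernetes/ssl/etcd.pem \
  --key=/opt/kubernetes/ssl/etcd-key.pem \
  --endpoints="https://192.168.43.205:2379,https://192.168.43.206:2379,https://192.168.43.207:2379" \
  endpoint health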

Deploy the Master Components

Generate self-signed certificates for the apiserver, metrics-server, and kube-proxy with cfssl

In the /opt/kubernetes/ssl directory, create kubernetes-csr.json with the following content:

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.43.205",
    "192.168.43.206",
    "192.168.43.200",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing"
    }
  ]
}

The hosts field must include the IP addresses of all master nodes and load-balancer nodes, the VIP, and the first IP address of the planned service-cluster-ip-range (configured in kube-apiserver.conf and kube-controller-manager.conf; 10.1.0.1 in this example). Do not modify the 127.0.0.1 and kubernetes.* entries.

The VIP in this example is 192.168.43.200.

In the /opt/kubernetes/ssl directory, create kube-proxy-csr.json with the following content:

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing"
    }
  ]
}

In the /opt/kubernetes/ssl directory, create metrics-server-csr.json with the following content:

{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing"
    }
  ]
}

Generate the self-signed certificate for the apiserver

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www kubernetes-csr.json | cfssljson -bare kubernetes

After it finishes, two .pem files are generated in the ssl directory: kubernetes.pem and kubernetes-key.pem.

[root@k8s-master1 ssl]# ls -l kubernetes*.pem
-rw------- 1 root root 1675 8月  22 17:56 kubernetes-key.pem
-rw-r--r-- 1 root root 1627 8月  22 17:56 kubernetes.pem
[root@k8s-master1 ssl]#

Generate the self-signed certificate for kube-proxy

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www kube-proxy-csr.json | cfssljson -bare kube-proxy

After it finishes, two .pem files are generated in the ssl directory: kube-proxy.pem and kube-proxy-key.pem.

[root@k8s-master1 ssl]# ls -l kube-proxy*.pem
-rw------- 1 root root 1675 8月  22 15:22 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 8月  22 15:22 kube-proxy.pem

Generate the self-signed certificate for metrics-server

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www metrics-server-csr.json | cfssljson -bare metrics-server

After it finishes, two .pem files are generated in the ssl directory: metrics-server.pem and metrics-server-key.pem.

[root@k8s-master1 ssl]# ls -l metrics-server*.pem
-rw------- 1 root root 1675 8月  21 22:44 metrics-server-key.pem
-rw-r--r-- 1 root root 1407 8月  21 22:44 metrics-server.pem

Deploy the apiserver, controller-manager, and scheduler

Go to https://github.com/kubernetes/kubernetes/releases, download the Kubernetes 1.18.8 binary package, and upload it to the /root directory of one of the master nodes.

[root@k8s-master1 ~]# wget https://storage.googleapis.com/kubernetes-release/release/v1.18.8/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 ~]# ls -l kubernetes-*
-rw-r--r-- 1 root root 363943527 8月  21 21:51 kubernetes-server-linux-amd64.tar.gz

Extract the package and copy the relevant binaries to /opt/kubernetes/bin.

[root@k8s-master1 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 ~]# cp -a  /root/kubernetes/server/bin/{kube-proxy,kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} \
/opt/kubernetes/bin/

Create the kube-apiserver configuration file

In /opt/kubernetes/cfg, create kube-apiserver.conf with the following content:

KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/log \
--etcd-servers=https://192.168.43.205:2379,https://192.168.43.206:2379,https://192.168.43.207:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.43.205 \
--allow-privileged=true \
--service-cluster-ip-range=10.1.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/bootstrap-token.csv \
--service-node-port-range=30000-50000 \
--kubelet-client-certificate=/opt/kubernetes/ssl/kubernetes.pem \
--kubelet-client-key=/opt/kubernetes/ssl/kubernetes-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \
--runtime-config=api/all=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-truncate-enabled=true \
--audit-log-path=/opt/kubernetes/log/k8s-audit.log"

--logtostderr: whether to write logs to standard error

--v=2: log level 0-8; the higher the number, the more verbose the logs

--log-dir: directory where logs are stored

--etcd-servers: URLs of the etcd servers

--bind-address: address the apiserver listens on

--secure-port: port the apiserver listens on, 6443 by default

--advertise-address: advertised address that other nodes use to connect to the apiserver

--allow-privileged: allow privileged containers

--service-cluster-ip-range: virtual IP range of Services in the cluster, in CIDR notation, for example 169.169.0.0/16; it must not overlap with the IP addresses of the host machines

--enable-admission-plugins: admission-control settings of the cluster; the control modules take effect in turn as plugins

--authorization-mode: authorization mode

--enable-bootstrap-token-auth: enable bootstrap token authentication

--service-node-port-range: port range that Services may use for NodePorts, 30000-32767 by default

--kubelet-client-certificate, --kubelet-client-key: certificate and private key used to connect to the kubelet

--tls-cert-file, --tls-private-key-file, --client-ca-file, --service-account-key-file: certificates and keys used by the apiserver for HTTPS

--etcd-cafile, --etcd-certfile, --etcd-keyfile: certificates used to connect to etcd

--audit-log-maxage, --audit-log-maxbackup, --audit-log-maxsize, --audit-log-path: audit log rotation settings and log path

Create the kube-controller-manager configuration file

In /opt/kubernetes/cfg, create kube-controller-manager.conf with the following content:

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=2 \
--log-dir=/opt/kubernetes/log \
--master=127.0.0.1:8080 \
--leader-elect=true \
--bind-address=127.0.0.1 \
--service-cluster-ip-range=10.1.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s \
--feature-gates=RotateKubeletServerCertificate=true \
--feature-gates=RotateKubeletClientCertificate=true \
--allocate-node-cidrs=true \
--cluster-cidr=10.2.0.0/16 \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"

--leader-elect: enable automatic leader election

--master: address for connecting to the apiserver; 127.0.0.1:8080 is the apiserver's default insecure listener, through which other local components connect

--bind-address: address the controller-manager listens on; it does not need to be reachable externally

--allocate-node-cidrs: allow CNI plugins and automatic IP allocation

--cluster-cidr: the Pod IP range of the cluster; it must match the CNI plugin's IP range

--service-cluster-ip-range: the Service cluster IP range; keep it consistent with the apiserver configuration

--cluster-signing-cert-file, --cluster-signing-key-file: CA certificate and key used to sign certificates for the cluster

--root-ca-file, --service-account-private-key-file: certificate and key used to sign service accounts

--experimental-cluster-signing-duration: validity period of issued certificates

Create the kube-scheduler configuration file

In /opt/kubernetes/cfg, create kube-scheduler.conf with the following content:

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--log-dir=/opt/kubernetes/log \
--v=2 \
--master=127.0.0.1:8080 \
--address=127.0.0.1 \
--leader-elect"

--leader-elect: enable automatic leader election

Create the kube-apiserver systemd unit file

In /usr/lib/systemd/system/, create kube-apiserver.service with the following content:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Create the kube-controller-manager systemd unit file

In /usr/lib/systemd/system/, create kube-controller-manager.service with the following content:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Create the kube-scheduler systemd unit file

In /usr/lib/systemd/system/, create kube-scheduler.service with the following content:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Enable kube-apiserver, kube-controller-manager, and kube-scheduler at boot and start them

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler

Check the status of each component

On the master node, run kubectl get cs to check whether each server-side component reports Healthy:

[root@k8s-master1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
[root@k8s-master1 ~]#

At this point the single-Master deployment is complete.

Deploy the Node Components

In this setup the two master nodes are also node (worker) nodes.

The kubelet.kubeconfig file is generated automatically through the TLS Bootstrapping mechanism.

Create the Node kubeconfig files

Create the TLS Bootstrapping token

Generate a random 32-character string and use it to create the bootstrap-token.csv file; this file corresponds to the token-auth-file setting in the kube-apiserver configuration.

token=`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
echo "$token,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/cfg/bootstrap-token.csv

Running the commands above produces a bootstrap-token.csv file.

[root@k8s-master1 cfg]# cat bootstrap-token.csv
91bda8cbe3822abf9f9201160fea7fab,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s-master1 cfg]#

The token configured for the apiserver here (the 32-character random string) must match the token in the node's bootstrap.kubeconfig configured below.

Create the kubelet kubeconfig

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.43.205:6443 \
  --kubeconfig=bootstrap.kubeconfig
 
 # Set client authentication parameters
 kubectl config set-credentials kubelet-bootstrap \
  --token=91bda8cbe3822abf9f9201160fea7fab \
  --kubeconfig=bootstrap.kubeconfig
  
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
 # Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Running the commands above generates a file named bootstrap.kubeconfig in the current directory.
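
As a quick consistency check (optional), confirm that the token embedded in bootstrap.kubeconfig matches the one in bootstrap-token.csv:

grep token bootstrap.kubeconfig
cat /opt/kubernetes/cfg/bootstrap-token.csv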

Create the kube-proxy kubeconfig

# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.43.205:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Running the commands above generates a file named kube-proxy.kubeconfig in the current directory.
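
The kubelet and kube-proxy configuration files created later reference these kubeconfigs under /opt/kubernetes/cfg, so copy them there and push them to the other nodes as well (a sketch, assuming they were generated in the current working directory):

cp -a bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/
for i in k8s-master2 k8s-node1; do
  scp /opt/kubernetes/cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} root@$i:/opt/kubernetes/cfg/
done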

Install Docker

This article installs Docker from binaries, using Docker 19.03.12.

Binary package download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.12.tgz

Extract the package and copy the binaries to the target directory

[root@k8s-master1 ~]# cd /root/
[root@k8s-master1 ~]# tar zxf docker-19.03.12.tgz
[root@k8s-master1 ~]# cp -a docker/* /usr/bin/
[root@k8s-master1 ~]# chmod 755 /usr/bin/{containerd,containerd-shim,ctr,docker,dockerd,docker-init,docker-proxy,runc}

Create the Docker systemd unit file

In /usr/lib/systemd/system, create docker.service with the following content:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/dockerd 
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target

Enable Docker at boot and start it

[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start docker
[root@k8s-master1 ~]# systemctl enable docker

Deploy kubelet and kube-proxy

Copy the kubelet and kube-proxy binaries to the target directory

[root@k8s-master1 ~]# cd kubernetes/server/bin/
[root@k8s-master1 bin]# cp -a kube-proxy kubelet /opt/kubernetes/bin/

Create the kubelet configuration file

In /opt/kubernetes/cfg, create kubelet.conf with the following content:

KUBELET_OPTS="--logtostderr=true \
--v=2 \
--hostname-override=k8s-master1 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/etc/cni/bin \
--pod-infra-container-image=yangpeng2468/google_containers-pause-amd64:3.2"

The kubelet.kubeconfig referenced by --kubeconfig is generated automatically when the kubelet starts.

The content of kubelet-config.yml is as follows:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
  - 10.1.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
rotateCertificates: true
featureGates:
  RotateKubeletServerCertificate: true
  RotateKubeletClientCertificate: true
maxOpenFiles: 1000000
maxPods: 110

Create the kube-proxy configuration file

In /opt/kubernetes/cfg, create kube-proxy.conf with the following content:

KUBE_PROXY_OPTS="--logtostderr=true \
--v=2 \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

The content of kube-proxy-config.yml is as follows:

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.2.0.0/16
mode: iptables

bindAddress: listen address

metricsBindAddress: metrics endpoint address; monitoring systems scrape metrics from here

hostnameOverride: the node name registered with Kubernetes; it must be unique

clusterCIDR: the Pod network CIDR (it must match the controller-manager's --cluster-cidr); kube-proxy uses it to distinguish in-cluster traffic from external traffic

Create the kubelet and kube-proxy systemd unit files

In /usr/lib/systemd/system, create kubelet.service with the following content:

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target

In /usr/lib/systemd/system, create kube-proxy.service with the following content:

[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Enable kubelet and kube-proxy at boot and start them

systemctl daemon-reload
systemctl enable kubelet
systemctl enable kube-proxy
systemctl start kubelet
systemctl start kube-proxy

After starting them, the kubelet log keeps reporting errors like the following:

Sep 29 11:47:17 master1 kubelet: I0929 11:47:17.958425   11720 csi_plugin.go:945] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master1" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Sep 29 11:47:17 master1 kubelet: E0929 11:47:17.993463   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.095008   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.195935   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.296599   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.397716   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.497910   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.598863   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.699174   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.800122   11720 kubelet.go:2268] node "master1" not found
Sep 29 11:47:18 master1 kubelet: E0929 11:47:18.900452   11720 kubelet.go:2268] node "master1" not found

The node "master1" not found errors are caused by the kubelet-bootstrap user not having been granted permission yet; the other error is caused by the CNI network plugin not being installed.

Run the following command to grant the kubelet-bootstrap user permission to request certificates:

[root@master1 cfg]# kubectl create clusterrolebinding  kubelet-bootstrap --clusterrole=system:node-bootstrapper  --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

After that, running kubectl get csr shows the node's certificate signing request:

[root@k8s-master1 cfg]# kubectl get csr
NAME        AGE    SIGNERNAME                                    REQUESTOR           CONDITION
csr-ljwbs   2m9s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
[root@k8s-master1 cfg]#

Approve the certificate for the Node

Run the following commands to approve issuing the certificate to the Node.

[root@k8s-master1 cfg]# kubectl certificate approve csr-ljwbs
certificatesigningrequest.certificates.k8s.io/csr-ljwbs approved
[root@k8s-master1 cfg]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
csr-ljwbs   7m46s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
[root@k8s-master1 cfg]# kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   NotReady    <none>   14s   v1.18.8

The NotReady status here is because the CNI plugin has not been installed yet; the details can be found in the kubelet logs.
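
For example (optional), follow the kubelet logs to confirm the cause:

journalctl -u kubelet --no-pager | grep -iE 'cni|network plugin'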

Deploy the CNI Network and the flannel Plugin

The CNI network and the flannel plugin must be installed on every node. This article uses master1 as the example; repeat the same steps on the other nodes.

Deploy the flannel network plugin

Download and extract flannel

[root@k8s-master1 ~]# wget https://github.com/coreos/flannel/releases/download/v0.12.0/flannel-v0.12.0-linux-amd64.tar.gz
[root@k8s-master1 ~]# tar  -zxvf flannel-v0.12.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md

Copy the files to the target directory

[root@k8s-master1 ~]# cp -a {flanneld,mk-docker-opts.sh} /opt/kubernetes/bin/

Create the flannel configuration file

In /opt/kubernetes/cfg, create flannel.conf with the following content:

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.43.205:2379,https://192.168.43.206:2379,https://192.168.43.207:2379 \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/etcd.pem \
-etcd-keyfile=/opt/kubernetes/ssl/etcd-key.pem \
-etcd-prefix=/kubernetes/network"

Create the network configuration in etcd

Run the following command on any etcd node:

[root@k8s-master1 ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem \
 --cert-file=/opt/kubernetes/ssl/etcd.pem \
 --key-file=/opt/kubernetes/ssl/etcd-key.pem \
 --endpoints="https://192.168.43.205:2379,https://192.168.43.206:2379,https://192.168.43.207:2379" \
 set /kubernetes/network/config '{"Network": "10.2.0.0/16", "SubnetLen": 24,  "Backend": {"Type": "vxlan"}}'

Note: the /kubernetes/network/ prefix must match the -etcd-prefix value in the flannel configuration file.
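
You can read the key back to confirm it was written (optional):

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem \
 --cert-file=/opt/kubernetes/ssl/etcd.pem \
 --key-file=/opt/kubernetes/ssl/etcd-key.pem \
 --endpoints="https://192.168.43.205:2379,https://192.168.43.206:2379,https://192.168.43.207:2379" \
 get /kubernetes/network/config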

Create the flannel systemd unit file

In /usr/lib/systemd/system, create flannel.service with the following content:

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flannel.conf
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker_opts.env
Restart=on-failure
[Install]
WantedBy=multi-user.target

Modify the Docker systemd unit file

Modify /usr/lib/systemd/system/docker.service to add the docker_opts.env environment file, so that Docker reads the subnet allocated by flannel when it starts.

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
EnvironmentFile=/run/flannel/docker_opts.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target

Start the flannel service and enable it at boot

[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start flannel.service
[root@k8s-master1 ~]# systemctl enable flannel.service
#Restart the docker service so it picks up the flannel subnet
[root@k8s-master1 ~]# systemctl restart docker.service

After flannel starts successfully, the server has an extra virtual interface named flannel.1, whose subnet matches the docker0 subnet.

[root@k8s-master1 ~]# ip add
...omitted...
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 76:90:1b:57:5f:a2 brd ff:ff:ff:ff:ff:ff
    inet 10.2.41.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::7490:1bff:fe57:5fa2/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
    link/ether 02:42:bc:65:1f:4e brd ff:ff:ff:ff:ff:ff
    inet 10.2.41.1/24 brd 10.2.41.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:bcff:fe65:1f4e/64 scope link
       valid_lft forever preferred_lft forever

Deploy the CNI network

Download the CNI binaries

Download URL: https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz

[root@k8s-master1 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz

Create the CNI directories

Create the CNI directories under /etc; their locations must match the cni-conf-dir and cni-bin-dir values in the kubelet configuration file.

[root@k8s-master1 ~]# mkdir -p {/etc/cni/bin,/etc/cni/net.d}
[root@k8s-master1 ~]# ls /etc/cni/
bin  net.d
[root@k8s-master1 ~]#

Extract the CNI plugins into the target directory

[root@k8s-master1 ~]# tar  -zxvf cni-plugins-linux-amd64-v0.8.7.tgz  -C /etc/cni/bin/
[root@k8s-master1 ~]# ls /etc/cni/bin/
bandwidth  bridge  dhcp  firewall  flannel  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sbr  static  tuning  vlan
[root@k8s-master1 ~]#

Create the CNI configuration file

In /etc/cni/net.d, create 10-default.conf with the following content:

{
        "cniVersion": "0.2.0",
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}
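
With flannel running and the CNI configuration in place, restarting the kubelet should bring the node to Ready (a quick optional check, not shown in the original steps):

systemctl restart kubelet
kubectl get node   # the node should now report Ready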

Deploy CoreDNS

Download the CoreDNS YAML file and deploy it

GitHub: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns

DNS is deployed mainly to provide name resolution for Kubernetes Services, so that applications can reach a Service by its name.

  • The DNS service watches the Kubernetes API and creates DNS records for every Service.
  • ClusterIP A record format: <service-name>.<namespace>.svc.cluster.local, for example: mysvc.my-namespace.svc.cluster.local

Download the coredns.yaml.base file from the GitHub address above to the /root/ directory of any master node, rename it to coredns.yaml, and then modify the parameters listed below (they can be applied with sed, as sketched after the list).

  • Replace __MACHINE_GENERATED_WARNING__ with "This is a file generated from the base underscore template file: coredns.yaml.base"
  • Replace __PILLAR__DNS__DOMAIN__ with cluster.local. It is normally left unchanged; if you do change it, it must match the clusterDomain value in kubelet-config.yml on the nodes, and the hosts field of the apiserver certificate must be adjusted and the certificate regenerated.
  • Replace __PILLAR__DNS__MEMORY__LIMIT__ with 170Mi; this memory limit can be adjusted to the resources of your environment.
  • Replace __PILLAR__DNS__SERVER__ with 10.1.0.2; this IP must match the clusterDNS field configured in /opt/kubernetes/cfg/kubelet-config.yml on the nodes.
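
One way to apply these replacements (a sketch; the values are exactly those listed above):

cd /root
sed -i 's/__MACHINE_GENERATED_WARNING__/This is a file generated from the base underscore template file: coredns.yaml.base/' coredns.yaml
sed -i 's/__PILLAR__DNS__DOMAIN__/cluster.local/g' coredns.yaml
sed -i 's/__PILLAR__DNS__MEMORY__LIMIT__/170Mi/g' coredns.yaml
sed -i 's/__PILLAR__DNS__SERVER__/10.1.0.2/g' coredns.yaml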

The final file used in this example, after the replacements, is as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image:  docker.io/fengyunpan/coredns:1.2.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          procMount: Default
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: 10.1.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns

On the master node, run the following commands to deploy it:

[root@k8s-master1 yaml]# kubectl apply -f /home/k8s/yaml/coredns.yaml
[root@k8s-master1 yaml]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7f89866bbb-5x4bz          1/1     Running   3          4h9m

When the STATUS shows Running, the deployment has succeeded.

Test and verify

On any master node, run the following steps to create a dig container and, inside it, resolve a Service name. If the name resolves to an IP address and is reachable, the DNS service has been deployed successfully.

In the /root directory, create dig.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: dig
  namespace: default
spec:
  containers:
  - name: dig
    image:  docker.io/azukiapp/dig
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always

Run the following command to deploy the pod:

[root@k8s-master1 yaml]# kubectl apply -f dig.yaml
pod/dig created
#Confirm the pod is running
[root@k8s-master1 yaml]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
dig    1/1     Running   0          25s

Enter the container and verify that DNS resolution works:

[root@k8s-master1 yaml]# kubectl exec -it dig sh
/ # nslookup kubernetes
Server:         10.1.0.2
Address:        10.1.0.2#53
Name:   kubernetes.default.svc.cluster.local
Address: 10.1.0.1

Deploy metrics-server

Download the metrics-server deployment file

The latest version at the time of writing is 0.3.7; download the deployment file from GitHub. Perform these steps on k8s-master1.

[root@master1 ~]# mkdir -p /home/k8s/ymal/
cd /home/k8s/ymal/
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml -O metrics-server.yaml

Modify the deployment YAML to fix cluster-specific issues

Issue 1: by default metrics-server uses the node hostname to fetch data from the kubelet on port 10250, but CoreDNS has no records for those hostnames and cannot resolve them. Adding --kubelet-preferred-address-types=InternalIP to the metrics-server start command makes it use the node IP address directly.

Issue 2: the kubelet's port 10250 uses HTTPS and the connection requires TLS verification. Adding --kubelet-insecure-tls to the metrics-server start command skips verification of the kubelet's certificate.

Issue 3: the image address in the YAML file, k8s.gcr.io/metrics-server/metrics-server:v0.3.7, is not reachable from every network. Most mirror registries currently only carry 0.3.6 and the new 0.3.7 is hard to find; I synced an unmodified copy from k8s.gcr.io, so either pull it and retag it, or change the image parameter in the YAML accordingly.

After fixing the three issues above, the args section of the deployment file looks as follows; nothing else was modified.

args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --metric-resolution=30s
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
          - --logtostderr

Deploy and start metrics-server

[root@master1 ymal]# kubectl apply -f metrics-server.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master1 ymal]# kubectl get pod  -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-686c5455-pq8wz            1/1     Running   0          18h
metrics-server-79848c7b88-gxsln   1/1     Running   0          26s

Check the API resources

[root@master1 ymal]# kubectl api-versions
...omitted...
metrics.k8s.io/v1beta1 # this entry is new
...omitted...

Check cluster node resource usage (CPU, memory)

[root@master1 ymal]# kubectl top node
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master1   717m         17%    1894Mi          50%
master2   519m         12%    1407Mi          37%
node1     298m         7%     879Mi           23%
[root@master1 ymal]#

Master High Availability

Next, this article adds a second Master node and implements a highly available load balancer with Nginx and Keepalived.

Deploy the Master components (same as on master1)

Copy the binaries and configuration files required by the Master components from the /opt/kubernetes working directory on master1, along with the Master components' systemd unit files, to the corresponding directories on the new Master node.

#Copy the binaries required by the Master components
[root@k8s-master1 ~]# cd /opt/kubernetes/bin/
[root@k8s-master1 bin]# scp {kube-apiserver,kubectl,kube-controller-manager,kube-scheduler} root@k8s-master2:/opt/kubernetes/bin/
kube-apiserver                                                                                                                                            100%  115MB  20.6MB/s   00:05
kubectl                                                                                                                                                   100%   42MB  14.4MB/s   00:02
kube-controller-manager                                                                                                                                   100%  105MB  18.4MB/s   00:05
kube-scheduler                                                                                                                                            100%   41MB  15.2MB/s   00:02
[root@k8s-master1 bin]#
#Copy the configuration files required by the Master components
[root@k8s-master1 bin]# cd ../cfg/
[root@k8s-master1 cfg]# scp bootstrap-token.csv kube-apiserver.conf kube-controller-manager.conf kube-scheduler.conf  root@k8s-master2:/opt/kubernetes/cfg/
kube-apiserver.conf                                                                                                                                       100% 1775   473.1KB/s   00:00
kube-controller-manager.conf                                                                                                                              100%  683   264.5KB/s   00:00
kube-scheduler.conf                                                                                                                                       100%  146    78.6KB/s   00:00
bootstrap-token.csv                                                                                                                                       100%   84    49.1KB/s   00:00
[root@k8s-master1 cfg]#
#Copy the certificate files required by the Master components
[root@k8s-master1 ssl]# scp  metrics-server*.pem ca*.pem kubernetes*.pem  root@k8s-master2:/opt/kubernetes/ssl/
metrics-server-key.pem                                                                                                                                    100% 1675   538.3KB/s   00:00
metrics-server.pem                                                                                                                                        100% 1407   350.7KB/s   00:00
ca-key.pem                                                                                                                                                100% 1675   557.0KB/s   00:00
ca.pem                                                                                                                                                    100% 1359     1.2MB/s   00:00
kubernetes-key.pem                                                                                                                                        100% 1675   796.8KB/s   00:00
kubernetes.pem                                                                                                                                            100% 1627   968.4KB/s   00:00
#Copy the systemd unit files required by the Master components
[root@k8s-master1 cfg]# scp /usr/lib/systemd/system/{kube-apiserver.service,kube-controller-manager.service,kube-scheduler.service} root@k8s-master2:/usr/lib/systemd/system/
kube-apiserver.service                                                                                                                                    100%  287    66.4KB/s   00:00
kube-controller-manager.service                                                                                                                           100%  322   153.4KB/s   00:00
kube-scheduler.service                                                                                                                                    100%  286   156.8KB/s   00:00
[root@k8s-master1 cfg]#

On the new Master node, change the --advertise-address parameter in the apiserver configuration file to the local IP:

KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/log \
--etcd-servers=https://192.168.43.205:2379,https://192.168.43.206:2379,https://192.168.43.207:2379 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.43.206 \
--allow-privileged=true \
--service-cluster-ip-range=10.1.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/bootstrap-token.csv \
--service-node-port-range=30000-50000 \
--kubelet-client-certificate=/opt/kubernetes/ssl/kubernetes.pem \
--kubelet-client-key=/opt/kubernetes/ssl/kubernetes-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \
--runtime-config=api/all=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-truncate-enabled=true \
--audit-log-path=/opt/kubernetes/log/k8s-audit.log"

Enable kube-apiserver, kube-controller-manager, and kube-scheduler at boot and start them

[root@k8s-master2 ~]# systemctl daemon-reload
[root@k8s-master2 ~]# systemctl start kube-apiserver.service
[root@k8s-master2 ~]# systemctl start kube-controller-manager.service
[root@k8s-master2 ~]# systemctl start kube-scheduler.service
[root@k8s-master2 ~]# systemctl enable kube-apiserver.service
[root@k8s-master2 ~]# systemctl enable kube-controller-manager.service
[root@k8s-master2 ~]# systemctl enable kube-scheduler.service

If the following command on the new Master node returns the node information, the new Master node has been added successfully:

[root@k8s-master2 ~]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   4h59m   v1.18.8

Deploy the Nginx Load Balancer

Nginx RPM download: http://nginx.org/packages/rhel/7/x86_64/RPMS/. This article uses Nginx 1.18.

Download the Nginx package, upload it to the /root directory of the machines planned for Nginx, and install it.

#Download Nginx 1.18.0
[root@k8s-master1 ~]# wget http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.18.0-1.el7.ngx.x86_64.rpm
[root@k8s-master1 ~]# ls nginx-1.18.0-1.el7.ngx.x86_64.rpm -l
-rw-r--r-- 1 root root 790284 4月  21 23:19 nginx-1.18.0-1.el7.ngx.x86_64.rpm
[root@k8s-master1 ~]#
#Install Nginx
[root@k8s-master1 ~]# rpm -ivh nginx-1.18.0-1.el7.ngx.x86_64.rpm

Configure Nginx

Edit /etc/nginx/nginx.conf and append the following at the end:

...omitted...
stream {
        log_format main '$remote_addr $upstream_addr - [$time_local] $status  $upstream_bytes_sent';
        access_log /var/log/nginx/k8s-access.log main;
        upstream k8s-apiserver {
                 server 192.168.43.205:6443;
                 server 192.168.43.206:6443;
        }
        server {
                 listen 8443;
                 proxy_pass k8s-apiserver;
        }
}

List the IP:Port of every master node in the upstream block.

For the listen directive: if Nginx runs on the same servers as the apiserver, it must use a port other than 6443 (this article uses 8443) to avoid a port conflict; if it runs on separate machines, the default 6443 can be used.
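
After editing, it is worth validating the configuration and, once Nginx is running, confirming the listener (optional):

nginx -t
ss -lntp | grep 8443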

Enable Nginx at boot and start it

[root@k8s-master1 ~]# systemctl restart nginx.service
[root@k8s-master1 ~]# systemctl enable nginx.service
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@k8s-master1 ~]#

Deploy Nginx on master2 in the same way.

Deploy Keepalived

Primary node (master1 in this example)

Install Keepalived with yum

[root@k8s-master1 ~]# yum install keepalived
[root@k8s-master1 ~]# cd /etc/keepalived/
[root@k8s-master1 keepalived]# mv keepalived.conf keepalived.conf.bk
[root@k8s-master1 keepalived]#

Create the Nginx health-check script

vim /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
 exit 1
else
 exit 0
fi
#Make the Nginx health-check script executable
chmod +x /etc/keepalived/check_nginx.sh

Configure keepalived

global_defs {
         notification_email {
                 admin@admin.com
         }
         notification_email_from admin@admin.co
         smtp_server 127.0.0.1
         smtp_connect_timeout 30
         router_id NGINX_MASTER
}
vrrp_script check_nginx {
     script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
         state MASTER
         interface ens33 # interface name
         virtual_router_id 51 # VRRP router ID; must be unique per instance
         priority 100 # priority; the backup server uses 90
         advert_int 1 # VRRP advertisement interval, 1 second by default
         authentication {
            auth_type PASS
            auth_pass 1111
         }
         virtual_ipaddress {
            192.168.43.200
         }
         track_script {
            check_nginx
         }
}

Enable Keepalived at boot and start it

[root@k8s-master1 keepalived]# systemctl restart keepalived.service
[root@k8s-master1 keepalived]# systemctl enable keepalived.service
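
At this point the VIP should be bound on the primary node; you can confirm it (optional, using the interface name from the keepalived configuration):

ip addr show ens33 | grep 192.168.43.200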

Backup node (master2 in this example)

Install Keepalived with yum

[root@k8s-master2 ~]# yum install keepalived
[root@k8s-master2 ~]# cd /etc/keepalived/
[root@k8s-master2 keepalived]# mv keepalived.conf keepalived.conf.bk
[root@k8s-master2 keepalived]#

Create the Nginx health-check script

vim /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
 exit 1
else
 exit 0
fi
#Make the Nginx health-check script executable
chmod +x /etc/keepalived/check_nginx.sh

Configure keepalived

global_defs {
         notification_email {
                 admin@admin.com
         }
         notification_email_from admin@admin.co
         smtp_server 127.0.0.1
         smtp_connect_timeout 30
         router_id NGINX_MASTER
}
vrrp_script check_nginx {
     script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
         state BACKUP
         interface ens33 # interface name
         virtual_router_id 51 # VRRP router ID; must be unique per instance
         priority 90 # priority; the backup server uses 90
         advert_int 1 # VRRP advertisement interval, 1 second by default
         authentication {
            auth_type PASS
            auth_pass 1111
         }
         virtual_ipaddress {
            192.168.43.200
         }
         track_script {
            check_nginx
         }
}

Enable Keepalived at boot and start it

[root@k8s-master2 keepalived]# systemctl restart keepalived.service
[root@k8s-master2 keepalived]# systemctl enable keepalived.service

Point the Nodes at the VIP

On all node machines, change the server field in bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig: replace the IP with the VIP and the port with the port Nginx listens on, then restart kubelet and kube-proxy.

[root@k8s-master1 cfg]# sed -i "s/192.168.43.205:6443/192.168.43.200:8443/g" /opt/kubernetes/cfg/*
[root@k8s-master1 cfg]# systemctl restart kubelet.service
[root@k8s-master1 cfg]# systemctl restart kube-proxy.service

Confirm that the nodes are in the Ready state:

[root@k8s-master1 cfg]# kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   10h   v1.18.8
k8s-master2   Ready    <none>   14s   v1.18.8
k8s-node1     Ready    <none>   28s   v1.18.8
[root@k8s-master1 cfg]#

Test that the VIP works

On any node, run the following command to call the API through the VIP and check that the version information is returned. Replace the token with the token value from bootstrap-token.csv, the IP with the VIP, and the port with the port Nginx listens on.

If the VIP works, you can also stop one of the Nginx instances, confirm that the VIP fails over to the backup node, and then call the API again to verify that failover behaves as expected (a sketch follows the curl output below).

[root@k8s-node1 cfg]# curl -k --header "Authorization: Bearer 91bda8cbe3822abf9f9201160fea7fab"  https://192.168.43.200:8443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.8",
  "gitCommit": "9f2892aab98fe339f3bd70e3c470144299398ace",
  "gitTreeState": "clean",
  "buildDate": "2020-08-13T16:04:18Z",
  "goVersion": "go1.13.15",
  "compiler": "gc",
  "platform": "linux/amd64"
}
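
A minimal failover test, assuming master1 currently holds the VIP:

# on master1: stop nginx so the keepalived check script fails
systemctl stop nginx
# on master2: the VIP should now appear here
ip addr show ens33 | grep 192.168.43.200
# from any node: the API should still answer through the VIP
curl -k --header "Authorization: Bearer 91bda8cbe3822abf9f9201160fea7fab" https://192.168.43.200:8443/version
# on master1: restore nginx afterwards
systemctl start nginx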