kubeadm One-Master Two-Worker Cluster Setup (v1.22.4)


Version requirements

1. CentOS 7.9
2. Docker 20.10.11
3. Kubernetes v1.22.4

Step 1: Basic server configuration

1. Name resolution (edit /etc/hosts):
[root@wzj-k8s-master ~]# more /etc/hosts
172.28.149.3 k8s-0001
172.28.149.5 k8s-0002
172.28.149.4 k8s-0003

To append the entries in one command (the IPs and hostnames match the hosts file shown above):

cat >> /etc/hosts << EOF
172.28.149.3 k8s-0001
172.28.149.5 k8s-0002
172.28.149.4 k8s-0003
EOF

2. Time synchronization: install chrony on all three servers to keep their clocks in sync
# yum install chrony -y
# systemctl  start chronyd
# sed -i -e '/^server/s/^/#/' -e '1a server ntp.aliyun.com iburst' /etc/chrony.conf
# systemctl  restart chronyd
# timedatectl set-timezone Asia/Shanghai
# timedatectl
      Local time: Fri 2020-11-27 16:06:42 CST
Universal time: Fri 2020-11-27 08:06:42 UTC
RTC time: Fri 2020-11-27 08:06:42
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a
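
To confirm chrony is actually syncing against the Aliyun server configured above, the source list can be checked as well:

# chronyc sources -v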

3. Disable iptables and firewalld (Kubernetes and Docker create a large number of iptables rules at runtime; shut down the system's own firewall so the rule sets don't interfere with each other)
[root@wzj-k8s-master ~]# systemctl stop firewalld
[root@wzj-k8s-master ~]# systemctl disable firewalld
[root@wzj-k8s-master ~]# systemctl stop iptables
[root@wzj-k8s-master ~]# systemctl disable iptables 

4. Disable SELinux

[root@wzj-k8s-master ~]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled 
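
The change above can be made from the shell instead of an editor; a minimal sketch (the config edit takes effect after reboot, while setenforce 0 disables enforcement immediately for the current session):

# setenforce 0    # switch to permissive mode right away (errors if SELinux is already disabled)
# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config    # persist across reboots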

5. Disable the swap partition

[root@wzj-k8s-master ~]# more /etc/fstab
# /etc/fstab
# Created by anaconda on Sun Jul 11 15:11:03 2021
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=66d8c51a-8b6f-4e76-b687-53eaaae260b3 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
Takes effect after reboot:
[root@wzj-k8s-master ~]# free -m
total        used        free      shared  buff/cache   available
Mem:           3777         717         191          57        2868        2730
Swap:             0           0           0 
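
The commented-out swap entry above can also be produced without hand-editing; a sketch (swapoff takes effect immediately, the sed keeps swap off across reboots):

# swapoff -a                           # disable all swap devices now
# sed -ri 's/.*swap.*/#&/' /etc/fstab  # comment out every swap line in fstab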

6. Tune Linux kernel parameters
Configure the sysctl kernel parameters (note: this overwrites the existing /etc/sysctl.conf):
$ cat > /etc/sysctl.conf <<EOF
vm.max_map_count=262144
net.ipv4.ip_forward = 1
### the two bridge settings below are optional here; they are added in section 6.1
##net.bridge.bridge-nf-call-ip6tables = 1
##net.bridge.bridge-nf-call-iptables = 1
EOF
Apply the settings:
$ sysctl -p
Raise the Linux resource limits: increase the maximum open files for user sessions (ulimit) and for systemd-managed services:
$ echo "* soft nofile 655360" >> /etc/security/limits.conf
$ echo "* hard nofile 655360" >> /etc/security/limits.conf
$ echo "* soft nproc 655360"  >> /etc/security/limits.conf
$ echo "* hard nproc 655360"  >> /etc/security/limits.conf
$ echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
$ echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
$ echo "DefaultLimitNOFILE=1024000"  >> /etc/systemd/system.conf
$ echo "DefaultLimitNPROC=1024000"  >> /etc/systemd/system.conf

6.1 Enable bridge filtering and IP forwarding

[root@wzj-k8s-master ~]# more /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

To create the file in one command:

cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

6.2 Reload the configuration (sysctl -p with no file argument only reads /etc/sysctl.conf, so point it at the drop-in file explicitly):

[root@wzj-k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
6.3 Load the bridge filter module:

[root@wzj-k8s-master ~]# modprobe br_netfilter
6.4 Verify the bridge filter module is loaded:

[root@wzj-k8s-master ~]# lsmod |grep br_net
br_netfilter           22256  0
bridge                151336  1 br_netfilter

6.5 Configure SSH mutual trust between nodes
With mutual trust in place the nodes can reach each other without passwords, which makes later automated deployment easier (see the loop sketch below):
# ssh-keygen     # run on every machine; just press Enter through the prompts
# ssh-copy-id  node      # on the master, copy the public key to each other node; answer yes and enter the password
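
Using the hostnames from the /etc/hosts example above, the key distribution can be wrapped in a loop on the master (a sketch; each host still prompts once for its password):

# for host in k8s-0001 k8s-0002 k8s-0003; do ssh-copy-id root@$host; done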

7. Enable IPVS

Kubernetes supports two proxy modes for Services: one based on iptables and one based on IPVS. IPVS performs noticeably better, but using it requires loading the IPVS kernel modules manually.

7.1 Install ipset and ipvsadm

yum install ipset ipvsadm -y
7.2 Write the modules to load into a script file

[root@wzj-k8s-master ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

To create the script in one command:

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF


nf_conntrack_ipv4 applies to kernels older than 4.19; on kernel 4.19 and later the module was merged into nf_conntrack, so load nf_conntrack instead.
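
If the same script must run on mixed kernel versions, the module choice can be made conditional; a sketch, assuming 4.19 as the cutoff:

# load the conntrack module that matches the running kernel
if [ "$(uname -r | awk -F. '{printf "%d%02d", $1, $2}')" -ge 419 ]; then
    modprobe -- nf_conntrack
else
    modprobe -- nf_conntrack_ipv4
fi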

7.3 Make the script executable

[root@wzj-k8s-master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
7.4 Run the script

[root@wzj-k8s-master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
7.5 Verify the modules loaded successfully

[root@wzj-k8s-master ~]# lsmod |grep -e ip_vs -e nf_conntrack
nf_conntrack_ipv6      18935  8
nf_defrag_ipv6         35104  1 nf_conntrack_ipv6
nf_conntrack_netlink    36396  0
nfnetlink              14519  3 nf_tables,nf_conntrack_netlink
nf_conntrack_ipv4      15053  10
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack 

8. Reboot the server

Step 2: Install Docker

1. Switch to the Aliyun package mirror

[root@wzj-k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Check the Docker versions available from the mirror (add --showduplicates to list every version; otherwise only the latest is shown)

[root@wzj-k8s-master ~]# yum list docker-ce
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

Loading mirror speeds from cached hostfile
* base: ftp.sjtu.edu.cn
* extras: ftp.sjtu.edu.cn
* updates: mirror.lzu.edu.cn
  Installed Packages
  docker-ce.x86_64                                                   3:20.10.11-3.el7                                                   @docker-ce-st
3. Install docker-ce

yum install docker-ce.x86_64 -y

TODO: Docker's data directory can be moved to a larger disk if needed.

4. Add the daemon configuration (Docker uses the cgroupfs cgroup driver by default, while Kubernetes recommends systemd instead) and configure registry mirrors at the same time.

Create the directory first: mkdir /etc/docker
[root@wzj-k8s-master ~]# more /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn/",
    "https://hub-mirror.c.163.com",
    "https://registry.docker-cn.com",
    "https://kn0t2bca.mirror.aliyuncs.com"
  ]
}

To create the file in one command (use > rather than >>; appending to an existing JSON file would corrupt it):

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn/",
    "https://hub-mirror.c.163.com",
    "https://registry.docker-cn.com",
    "https://kn0t2bca.mirror.aliyuncs.com"
  ]
}
EOF

5. Start Docker

[root@wzj-k8s-master ~]# systemctl start docker
[root@wzj-k8s-master ~]# systemctl enable docker
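
To confirm the daemon picked up the systemd cgroup driver set in daemon.json (a quick check; the exact wording can vary between Docker versions):

# docker info | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd
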
6. Check the Docker version

[root@wzj-k8s-master ~]# docker version
Client: Docker Engine - Community
Version:           20.10.11
API version:       1.41
Go version:        go1.16.9
Git commit:        dea9396
Built:             Thu Nov 18 00:38:53 2021
OS/Arch:           linux/amd64
Context:           default
Experimental:      true

Server: Docker Engine - Community
Engine:
Version:          20.10.11
API version:      1.41 (minimum version 1.12)
Go version:       go1.16.9
Git commit:       847da18
Built:            Thu Nov 18 00:37:17 2021
OS/Arch:          linux/amd64
Experimental:     false
containerd:
Version:          1.4.12
GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version:          1.0.2
GitCommit:        v1.0.2-0-g52b36a2
docker-init:
Version:          0.19.0
GitCommit:        de40ad0 

Step 3: Install the Kubernetes components

1. Update the YUM source. The Kubernetes packages are hosted abroad and download slowly, so switch to a domestic mirror: edit /etc/yum.repos.d/kubernetes.repo and add the following:

[root@wzj-k8s-master ~]# more /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

To create the file in one command:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install the three components: kubeadm, kubelet and kubectl

[root@wzj-k8s-master ~]# yum list | grep kube
cri-tools.x86_64                            1.19.0-0                   @kubernetes
kubeadm.x86_64                              1.22.3-0                   @kubernetes
kubectl.x86_64                              1.22.3-0                   @kubernetes
kubelet.x86_64                              1.22.3-0                   @kubernetes
[root@wzj-k8s-master ~]# yum install kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 -y
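
The listing above shows 1.22.3-0 as the newest package in the repo at the time of capture; to match the v1.22.4 target exactly, the versions can be pinned instead (a sketch, assuming the mirror already carries the 1.22.4-0 builds):

# yum install -y kubeadm-1.22.4-0 kubelet-1.22.4-0 kubectl-1.22.4-0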
3. Configure the kubelet cgroup driver and proxy mode

[root@wzj-k8s-master ~]# more /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

cat > /etc/sysconfig/kubelet << EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF

4. Enable kubelet at boot

[root@wzj-k8s-master ~]# systemctl enable kubelet

Step 4: Deploy the cluster

1. List the images required to deploy the cluster; since the cluster is deployed with kubeadm, they can be checked with:

[root@wzj-k8s-master sysconfig]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.4
k8s.gcr.io/kube-controller-manager:v1.22.4
k8s.gcr.io/kube-scheduler:v1.22.4
k8s.gcr.io/kube-proxy:v1.22.4
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
2. Download the images. kubeadm pulls from Google's official registry by default, which may be unreachable; instead, pull each image from the Aliyun mirror first, then re-tag it with the official name kubeadm recognizes. The commands for every image follow (a single-loop version is sketched after them):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4 k8s.gcr.io/kube-controller-manager:v1.22.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4 k8s.gcr.io/kube-apiserver:v1.22.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4 k8s.gcr.io/kube-scheduler:v1.22.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4 k8s.gcr.io/kube-proxy:v1.22.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0

docker pull coredns/coredns:1.8.4
docker tag coredns/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker rmi coredns/coredns:1.8.4

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
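
As noted above, the per-image commands can be collapsed into a single loop; a sketch (coredns is handled separately because kubeadm expects it under the nested coredns/coredns path, and the mirror tag lacks the v prefix):

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.22.4 kube-controller-manager:v1.22.4 \
           kube-scheduler:v1.22.4 kube-proxy:v1.22.4 etcd:3.5.0-0 pause:3.5; do
    docker pull $MIRROR/$img                  # pull from the Aliyun mirror
    docker tag $MIRROR/$img k8s.gcr.io/$img   # re-tag with the name kubeadm expects
    docker rmi $MIRROR/$img                   # drop the mirror tag
done
docker pull coredns/coredns:1.8.4
docker tag coredns/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker rmi coredns/coredns:1.8.4
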
3. Check the local Docker images and confirm everything kubeadm needs is present:

[root@wzj-k8s-master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-apiserver            v1.22.4   8a5cc299272d   34 hours ago   128MB
k8s.gcr.io/kube-controller-manager   v1.22.4   0ce02f92d3e4   34 hours ago   122MB
k8s.gcr.io/kube-scheduler            v1.22.4   721ba97f54a6   34 hours ago   52.7MB
k8s.gcr.io/kube-proxy                v1.22.4   edeff87e4802   34 hours ago   104MB
k8s.gcr.io/etcd                      3.5.0-0   004811815584   5 months ago   295MB
k8s.gcr.io/coredns/coredns           v1.8.4    8d147537fb7d   5 months ago   47.6MB
k8s.gcr.io/pause                     3.5       ed210e3e4a5b   8 months ago   683kB

4. Initialize the cluster on the master node

[root@wzj-k8s-master ~]# kubeadm reset

[root@wzj-k8s-master ~]# kubeadm init \
> --kubernetes-version=v1.22.4 \
> --pod-network-cidr=10.244.0.0/16 \
> --service-cidr=10.96.0.0/12 \
> --apiserver-advertise-address=172.28.149.3

Note: --pod-network-cidr must be set here (and must match the flannel network applied in Step 5); without it, pods later fail with: open /run/flannel/subnet.env: no such file or directory.


Flag reference:
--apiserver-advertise-address: the IP address the API server advertises it is listening on; 0.0.0.0 means all of the host's addresses.
--apiserver-bind-port: the port the API server binds to (default: 6443).
--cert-dir: the directory to load certificates from (default: /etc/kubernetes/pki).
--config: path to a kubeadm configuration file. Warning: the configuration file format is still experimental.
--ignore-preflight-errors: a list of preflight checks whose errors are shown as warnings and ignored, e.g. IsPrivilegedUser,Swap; the value all ignores errors from every check.
--pod-network-cidr: the IP address range for the pod network. If set, the control plane automatically allocates CIDRs to each node.
--service-cidr: the IP address range for service VIPs (default: 10.96.0.0/12).


Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.28.149.6:6443 --token g0f4db.sin9azlvvhnihj3n --discovery-token-ca-cert-hash sha256:f5c9d504a850a6de70637270572b0ff6752b0d247b016241b37eefb530aea435
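
The token embedded in that command expires after 24 hours by default; if it has lapsed before the workers join, a fresh join command can be printed on the master:

# kubeadm token create --print-join-command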

5. After a successful deployment, run the following commands as the init output instructs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

6. Join each worker node to the cluster using the kubeadm join command from your own init output:

[root@wzj-k8s-node1 ~]# kubeadm join 172.16.20.100:6443 --token q66eda.qcd3f081ojwf7t0a \
>         --discovery-token-ca-cert-hash sha256:e6d23168ebea43a2cd70b97210adae378e677df481241967b3ebab9f0a898bbe
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...

[root@wzj-k8s-master ~]# kubectl get nodes
NAME             STATUS     ROLES                  AGE     VERSION
wzj-k8s-master   NotReady   control-plane,master   4m33s   v1.22.3
wzj-k8s-node1    NotReady   <none>                 10s     v1.22.3 

Step 5: Install the network plugin

Kubernetes supports several network plugins, such as flannel, calico and canal; flannel is used here. (Run the following on the master node only.)

1. Download the flannel manifest

[root@wzj-k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

--2021-11-19 10:13:57--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5177 (5.1K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[=============================================================================================================>] 5,177       --.-K/s   in 0.003s

2021-11-19 10:13:58 (1.71 MB/s) - ‘kube-flannel.yml’ saved [5177/5177]

2. Apply the manifest to start flannel

[root@wzj-k8s-master ~]# kubectl apply -f kube-flannel.yml

Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
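
Before checking node status it can help to confirm that the flannel DaemonSet pod reached Running on every node (searching all namespaces, since the namespace differs between flannel manifest versions):

# kubectl get pods -A -o wide | grep flannel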

3. Once the network plugin is running the nodes turn Ready; this can take a little while:

[root@wzj-k8s-master ~]# kubectl get nodes
NAME             STATUS   ROLES                  AGE   VERSION
wzj-k8s-master   Ready    control-plane,master   66m   v1.22.3
wzj-k8s-node1    Ready    <none>                 61m   v1.22.3
wzj-k8s-node2    Ready    <none>                 61m   v1.22.3 

Step 6: Test the cluster

Deploy an Nginx application on the cluster to verify that it works end to end.

1. Deploy Nginx

[root@wzj-k8s-master ~]# kubectl create deployment nginx --image=nginx:1.14-alpine
deployment.apps/nginx created
2. Expose the port

[root@wzj-k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
3. Check the Pod and Service status:

[root@wzj-k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65c4bffcb6-5d6hr   1/1     Running   0          93s

[root@wzj-k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        71m
nginx        NodePort    10.96.94.199   <none>        80:30678/TCP   56s 
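
With the NodePort from the service output above (30678 here; the port is assigned randomly per cluster), the deployment can be reached from any machine that can reach the nodes, e.g. via the master IP from the hosts file:

# curl http://172.28.149.3:30678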
