K8S (v1.10.1) High-Availability Cluster, Super-Detailed Guide (including Dashboard and Rancher)

IP            Hostname  CPU  RAM (GB)  Role
192.168.1.10  node01    2    4         Master and etcd
192.168.1.20  node02    2    4         Master and etcd
192.168.1.30  node03    2    4         Master and etcd
192.168.1.40  node04    1    2         node
192.168.1.50  node05    1    2         node
192.168.1.60  node06    1    2         node

Software               Version
kubernetes components  v1.10.1
docker                 v1.13.1

① Environment initialization

Set the hostname on each of the six hosts:

hostnamectl set-hostname node01
hostnamectl set-hostname node02
hostnamectl set-hostname node03
hostnamectl set-hostname node04
hostnamectl set-hostname node05
hostnamectl set-hostname node06

Configure the host mappings:

echo '192.168.1.10 node01
192.168.1.20 node02
192.168.1.30 node03
192.168.1.40 node04
192.168.1.50 node05
192.168.1.60 node06' >> /etc/hosts

Configure passwordless SSH login from node01:

ssh-keygen  # just press Enter at every prompt
ssh-copy-id  -i node01
ssh-copy-id  -i node02
ssh-copy-id  -i node03
ssh-copy-id  -i node04
ssh-copy-id  -i node05
ssh-copy-id  -i node06
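
To confirm the keys were copied correctly, a quick check from node01 is to loop over all hosts (a minimal sketch; it assumes the /etc/hosts entries above are already in place):

for h in node01 node02 node03 node04 node05 node06; do
  ssh -o BatchMode=yes "$h" hostname   # should print each hostname without asking for a password
done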

On all six hosts: stop the firewall, disable swap, disable SELinux, set kernel parameters, install dependency packages, and configure NTP (a reboot is recommended once this is done).

systemctl stop firewalld
systemctl disable firewalld

swapoff -a 
sed -i 's/.*swap.*/#&/' /etc/fstab

setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config  

modprobe br_netfilter
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge


yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl 

systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
 
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf

After the reboot, it is recommended to run this once more:
sysctl -p /etc/sysctl.d/k8s.conf
If it reports errors like:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
run the following commands again:
modprobe br_netfilter
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
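
The errors above appear because the br_netfilter module is not loaded automatically after a reboot. A minimal sketch to make it load at boot (assumes a systemd-based host; the file name under /etc/modules-load.d/ is arbitrary):

echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load the module on every boot
systemctl restart systemd-modules-load.service                # load it right now without rebooting
ls /proc/sys/net/bridge                                        # the bridge-nf-call-* entries should now exist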

Another way to set these flags, and the difference between the two approaches:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

1. Temporary (written to memory only, lost after a reboot):
echo "1" > /proc/sys/net/ipv4/ip_forward

2. Permanent (persisted in the sysctl configuration):
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf 
sysctl -p   

② Create the etcd certificates (run on node01 only)

Set up the cfssl environment:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

Create the CA configuration files (the IPs configured below are the etcd node IPs):

mkdir /root/ssl
cd /root/ssl
cat >  ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat >  ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.10",
    "192.168.1.20",
    "192.168.1.30"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
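
Optionally, the cfssl-certinfo tool installed earlier can be used to sanity-check the generated certificate, for example that the hosts list and expiry look right:

cfssl-certinfo -cert etcd.pem   # verify the SANs (127.0.0.1, 192.168.1.10/20/30) and the not_after date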

Distribute the etcd certificates from node01 to node02 and node03:

mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n node02 "mkdir -p /etc/etcd/ssl && exit"
ssh -n node03 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem node02:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem node03:/etc/etcd/ssl/

Install and configure etcd (on the three master nodes)
Install etcd:

yum install etcd -y
mkdir -p /var/lib/etcd

etcd.service on node01:

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name node01 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.1.10:2380 \
  --listen-peer-urls https://192.168.1.10:2380 \
  --listen-client-urls https://192.168.1.10:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.10:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster node01=https://192.168.1.10:2380,node02=https://192.168.1.20:2380,node03=https://192.168.1.30:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

etcd.service on node02:

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name node02 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.1.20:2380 \
  --listen-peer-urls https://192.168.1.20:2380 \
  --listen-client-urls https://192.168.1.20:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.20:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster node01=https://192.168.1.10:2380,node02=https://192.168.1.20:2380,node03=https://192.168.1.30:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

etcd.service on node03:

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name node03 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.1.30:2380 \
  --listen-peer-urls https://192.168.1.30:2380 \
  --listen-client-urls https://192.168.1.30:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.1.30:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster node01=https://192.168.1.10:2380,node02=https://192.168.1.20:2380,node03=https://192.168.1.30:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Enable automatic startup (the etcd cluster needs at least 2 nodes up before it can start; if startup reports errors, check the messages log):

 mv /etc/systemd/system/etcd.service /usr/lib/systemd/system/
 systemctl daemon-reload
 systemctl enable etcd
 systemctl start etcd
 systemctl status etcd

On the three etcd nodes, run the following command to check cluster health:

etcdctl --endpoints=https://192.168.1.10:2379,https://192.168.1.20:2379,https://192.168.1.30:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem  cluster-health

③ Install and configure Docker on all nodes

Because the Kubernetes version deployed here is 1.10.1, which is relatively old, do not install too new a Docker version (19.03 was tested and is not compatible).

yum install docker -y
systemctl start docker && systemctl enable docker

A default install gives version 1.13 (the latest build of this package). Do not install docker-ce; docker-ce would install 19.03 (the latest release at the time of writing).
Check the Docker cgroup driver:

docker info

Kubernetes 1.10.1 uses cgroupfs as its cgroup driver; only from version 1.14 onward is systemd the recommended driver. Docker and Kubernetes must use the same cgroup driver, otherwise kubelet will report errors.

If the cgroup driver shown is not cgroupfs, it needs to be changed:

vim /usr/lib/systemd/system/docker.service

Change the cgroup driver setting in that unit file; you can also add an Aliyun registry mirror at this point. Both changes are sketched below.
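
A minimal sketch of both changes (it assumes the stock CentOS docker-1.13.1 unit file, where the driver is passed to dockerd via --exec-opt; the mirror URL below is a placeholder, so substitute your own Aliyun accelerator address):

# Switch dockerd from the systemd cgroup driver to cgroupfs
sed -i 's/native.cgroupdriver=systemd/native.cgroupdriver=cgroupfs/' /usr/lib/systemd/system/docker.service

# Optional: add an Aliyun registry mirror (placeholder URL, replace with your own)
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://xxxxxxxx.mirror.aliyuncs.com"]
}
EOF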

After making the changes, reload the configuration and restart Docker:

systemctl daemon-reload && systemctl restart docker

Check the Docker version:

# docker --version
Docker version 1.13.1, build 0be3e21/1.13.1

④ Install kubeadm, kubectl and kubelet

Download the required offline packages first. Otherwise, kubeadm will automatically pull the required images during initialization, and some of those images are only reachable from outside mainland China.

Package download link:

Link: https://pan.baidu.com/s/1tPGxcqUkepbGnVV934bOpQ
Extraction code: r0cz

Install the packages; this step must be performed on all 6 machines.
Note: this guide keeps all Kubernetes files under /root; adjust the paths in the commands below to match your actual location.

cd /root/kubernetes-1.10
tar -xvf kube-packages-1.10.1.tar
cd kube-packages-1.10.1
rpm -Uvh * --force --nodeps

On all Kubernetes nodes, set kubelet to use cgroupfs so that it matches dockerd; otherwise kubelet will fail to start.
By default kubelet is configured with cgroup-driver=systemd.

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Suggestion: add the following line to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"

systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet

Import the images (only the essential ones are included; if you end up managing many images later, consider deploying a Harbor registry to host them). This step needs to be performed on all machines:

cd /root/kubernetes-1.10/
docker load -i k8s-images-1.10.tar.gz

Enable kubectl command completion:

yum -y  install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

⑤ Initialize the cluster

Add the cluster initialization configuration file on node01, node02 and node03 (the file is identical on all three):

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.1.10:2379
  - https://192.168.1.20:2379
  - https://192.168.1.30:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.1
api:
  advertiseAddress: "192.168.1.10"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- node01
- 192.168.1.10
featureGates:
  CoreDNS: true
imageRepository: "registry.cn-beijing.aliyuncs.com/k8sct"

Save this configuration as config.yaml on each of the three master nodes.
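
A minimal sketch of writing it out with a heredoc (run in /root; paste the complete YAML shown above between the EOF markers):

cd /root
cat <<'EOF' > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
# ... paste the remainder of the configuration shown above here ...
EOF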

First, initialize the cluster on node01.

The configuration file defines the pod network as 10.244.0.0/16.

As kubeadm init --help shows, the default service subnet is 10.96.0.0/12.

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf uses the default DNS address cluster-dns=10.96.0.10.

kubeadm init --config config.yaml 

If initialization fails, clean up as follows:

kubeadm reset
rm -rf $HOME/.kube
# or
rm -rf $HOME/.kube
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -aq | xargs docker rm -f
systemctl  stop kubelet

A successful initialization ends with output like the following:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.10:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:7e234163db10f31e0fbb0c383410b81b8bd32f89fae1b947ab3f4ca75bd2f058

Run the following commands on node01:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
 

Distribute the certificate files generated by kubeadm to node02 and node03 (they will be added as master nodes in a moment):

scp -r /etc/kubernetes/pki  node02:/etc/kubernetes/
scp -r /etc/kubernetes/pki  node03:/etc/kubernetes/

Deploy the flannel network; this only needs to be run on node01:

cd /root/kubernetes-1.10
kubectl apply -f kube-flannel.yml

Check the status of the Kubernetes nodes:

[root@node01 kubernetes-1.10]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    3m        v1.10.1
[root@node01 kubernetes-1.10]# kubectl get pods --all-namespaces 
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-85wds         1/1       Running   0          2m
kube-system   coredns-7997f8864c-9wjtx         1/1       Running   0          2m
kube-system   kube-apiserver-node01            1/1       Running   0          1m
kube-system   kube-controller-manager-node01   1/1       Running   0          2m
kube-system   kube-flannel-ds-ls2hp            1/1       Running   0          2m
kube-system   kube-proxy-77zkv                 1/1       Running   0          2m
kube-system   kube-scheduler-node01            1/1       Running   0          1m

Run the same initialization on node02 and node03:

kubeadm init --config config.yaml

The output on node02 and node03 should be the same as on node01; at this point they have joined the cluster as masters.
On node04, node05 and node06, run the join command to join the cluster as worker nodes (see the command below).
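
For reference, the join command is the one printed by kubeadm init on node01 above (use the token and hash from your own init output):

kubeadm join 192.168.1.10:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:7e234163db10f31e0fbb0c383410b81b8bd32f89fae1b947ab3f4ca75bd2f058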

Check the cluster status from node01:

[root@node01 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    10m       v1.10.1
node02    Ready     master    4m        v1.10.1
node03    Ready     master    4m        v1.10.1
node04    Ready     <none>    39s       v1.10.1
node05    Ready     <none>    27s       v1.10.1
node06    Ready     <none>    17s       v1.10.1
[root@node01 ~]# kubectl get pods --all-namespaces 
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-85wds         1/1       Running   0          13m
kube-system   coredns-7997f8864c-9wjtx         1/1       Running   0          13m
kube-system   kube-apiserver-node01            1/1       Running   0          12m
kube-system   kube-apiserver-node02            1/1       Running   0          7m
kube-system   kube-apiserver-node03            1/1       Running   0          7m
kube-system   kube-controller-manager-node01   1/1       Running   0          12m
kube-system   kube-controller-manager-node02   1/1       Running   0          7m
kube-system   kube-controller-manager-node03   1/1       Running   0          7m
kube-system   kube-flannel-ds-4dmg7            1/1       Running   0          7m
kube-system   kube-flannel-ds-8whpg            1/1       Running   1          3m
kube-system   kube-flannel-ds-g66s5            1/1       Running   0          3m
kube-system   kube-flannel-ds-j5dk6            1/1       Running   0          4m
kube-system   kube-flannel-ds-ls2hp            1/1       Running   0          12m
kube-system   kube-flannel-ds-s4vcz            1/1       Running   0          7m
kube-system   kube-proxy-4vm9g                 1/1       Running   0          4m
kube-system   kube-proxy-5mpng                 1/1       Running   0          7m
kube-system   kube-proxy-77zkv                 1/1       Running   0          13m
kube-system   kube-proxy-f67wb                 1/1       Running   0          7m
kube-system   kube-proxy-n4tlk                 1/1       Running   0          3m
kube-system   kube-proxy-q8sbm                 1/1       Running   0          3m
kube-system   kube-scheduler-node01            1/1       Running   0          12m
kube-system   kube-scheduler-node02            1/1       Running   0          7m
kube-system   kube-scheduler-node03            1/1       Running   0          7m

⑥ Deploy the Dashboard

The deployment can be completed directly with the three .yaml files from the offline package:

cd /root/kubernetes-1.10
kubectl apply -f kubernetes-dashboard-http.yaml -f admin-role.yaml -f kubernetes-dashboard-admin.rbac.yaml

Once done, check whether the host is listening on port 31000. If it is, the Kubernetes dashboard UI can be opened at http://<node-ip>:31000.

[root@node01 kubernetes-1.10]# netstat -tunlp | grep 31000
tcp6       0      0 :::31000                :::*                    LISTEN      56139/kube-proxy    
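
The same check can be done with kubectl (a sketch; the exact service name and namespace depend on the kubernetes-dashboard-http.yaml in the offline package, and kubernetes-dashboard in kube-system is assumed here):

kubectl -n kube-system get svc kubernetes-dashboard   # should show a NodePort service exposing 31000
kubectl -n kube-system get pods | grep dashboard      # the dashboard pod should be Running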


⑦ Deploy Rancher

Install Rancher with Docker:

docker run -d --name rancher --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:v2.2.4

To access the UI, simply browse to the host's IP address and follow the setup wizard; when importing an existing cluster, Rancher displays the registration commands referred to below.
In testing, the first (authorization) command can be skipped and the import still succeeds. If the cluster has certificates, just execute the third command directly:

[root@node01 kubernetes-1.10]# curl --insecure -sfL https://192.168.1.10/v3/import/5s2xmsfnbrj89thgf4m25pm9j4s6mcczsg7rszwsc95zm49m6ndtm5.yaml | kubectl apply -f -
namespace "cattle-system" created
serviceaccount "cattle" created
clusterrolebinding.rbac.authorization.k8s.io "cattle-admin-binding" created
secret "cattle-credentials-664b64d" created
clusterrole.rbac.authorization.k8s.io "cattle-admin" created
deployment.extensions "cattle-cluster-agent" created
daemonset.extensions "cattle-node-agent" created

Check the Rancher agents:

[root@node01 ~]# kubectl  get pod -n cattle-system
NAME                                    READY     STATUS    RESTARTS   AGE
cattle-cluster-agent-6559655864-p44qs   1/1       Running   0          1m
cattle-node-agent-9tnvs                 1/1       Running   0          39s
cattle-node-agent-c9x69                 1/1       Running   0          51s
cattle-node-agent-pt2r6                 1/1       Running   0          59s

Back in the browser, the cluster now shows as imported and works normally.
