Deploying a Highly Available Kubernetes Cluster with kubeasz

Summary: deploying a highly available Kubernetes cluster with kubeasz

Environment Preparation

IP HOSTNAME SYSTEM
192.168.131.145 master CentOS 7.6
192.168.131.146 node1 CentOS 7.6
192.168.131.147 node2 CentOS 7.6

[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@localhost ~]# sestatus
SELinux status:                 disabled
[root@localhost ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Configure the Template Machine

A kernel of 4.x or later is recommended. To make it easy to clone VMs later, create a template machine first; in environments where cloning is not possible, upgrade the kernel on each host manually.
----------------------------------------------------------------------------------------------
Update: February 6, 2021
This script is no longer suitable for upgrading the kernel: the default elrepo repository now ships 5.x kernels, and a 4.x kernel has to be compiled by hand. Do not run it as-is.
# The script below upgrades the kernel and installs Ansible; DNS is set to Alibaba Cloud's servers
#!/bin/bash
# k8s cluster template machine
# from 半癫
uname=$(uname -r | awk -F '.' '{printf $1}')   # kernel major version, e.g. 3 or 4
check_network() {
printf "\e[1;32m########### Checking network connectivity ###########\e[0m\n"
ping -c1 www.baidu.com > /dev/null 2>&1
if [ $? -ne 0 ]
then
    printf "\e[1;32m########### Network unreachable, please check your connection ###########\e[0m\n"
    exit 1
else
    printf "\e[1;32m########### Network OK, continuing, please wait ###########\e[0m\n"
fi
}
check_network
set_network() {
printf "\e[1;32m########### Configuring network ###########\e[0m\n"
NETMASK_DEFAULT="255.255.255.0"   # fallback used when no netmask is entered
echo "Enter the hostname:"
read HOST_NAME
if [ -z "$HOST_NAME" ];then
  echo "Hostname will not be changed."
fi
echo "Enter the IP address:"
read NETWORK_IP
if [ -z "$NETWORK_IP" ];then
  echo "You need input IP here. Please try again."
  exit
fi
echo "Enter the gateway: "
read GATEWAY_CUSTOM
if [ -z "$GATEWAY_CUSTOM" ];then
  echo "You need input GATEWAY here. Please try again."
  exit
else
  NETWORK_GATEWAY=$GATEWAY_CUSTOM
fi
echo "Enter the netmask (empty for $NETMASK_DEFAULT):"
read NETMASK_CUSTOM
if [ -z "$NETMASK_CUSTOM" ];then
  NETWORK_NETMASK=$NETMASK_DEFAULT
else
  NETWORK_NETMASK=$NETMASK_CUSTOM
fi
echo "$NETWORK_IP $HOST_NAME" >> /etc/hosts
sed -i '/^ONBOOT/s/=.*/=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i '/^BOOTPROTO/s/=.*/=none/' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i '/^IPADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i '/^NETMASK/d' /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i '/^GATEWAY/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo "IPADDR=$NETWORK_IP
NETMASK=$NETWORK_NETMASK
GATEWAY=$NETWORK_GATEWAY">> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "DNS1=223.5.5.5" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "DNS2=223.6.6.6" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "$HOST_NAME" > /etc/hostname
systemctl restart network
check_network
}
elrepo() {
printf "\e[1;32m########### Upgrading kernel ###########\e[0m\n"
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt.x86_64 -y
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt-tools* -y
grub2-set-default 0
printf "\e[1;32m########### Kernel upgrade finished ###########\e[0m\n"
}
init_yum() {
printf "\e[1;32m########### Configuring yum repos ###########\e[0m\n"
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
if [ $? -eq 0 ]
then
    printf "\e[1;32m########### yum repos configured successfully ###########\e[0m\n"
else
    printf "\e[1;32m########### yum repo configuration failed ###########\e[0m\n"
    exit 3
fi
}
install_ansible() {
printf "\e[1;32m########### Installing Ansible ###########\e[0m\n"
yum install git python-pip -y
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install ansible==2.6.12 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install netaddr -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
if [ $? -eq 0 ]    # note: only the last pip command's status is checked
then
    printf "\e[1;32m########### Ansible installed successfully ###########\e[0m\n"
else
    printf "\e[1;32m########### Ansible installation failed ###########\e[0m\n"
    exit 4
fi
}
swappoff() {
swapoff -a && sysctl -w vm.swappiness=0
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
}
selinuxstatus() {
systemctl disable --now firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
}
if [ "$uname" -ge 4 ]    # -ge, not -eq: kernels 5.x and later need no upgrade either
then
    printf "\e[1;32m########### Kernel is 4.x or newer, no upgrade needed ###########\e[0m\n"
    selinuxstatus
    set_network
    init_yum
    install_ansible
    swappoff
    /usr/sbin/init 0   # power off so the template can be cloned
else
    printf "\e[1;32m########### Kernel is 3.x, preparing kernel upgrade ###########\e[0m\n"
    selinuxstatus
    set_network
    init_yum
    install_ansible
    swappoff
    elrepo
    sleep 3
    /usr/sbin/init 0   # power off so the template can be cloned
fi
# After cloning a machine, remember to change its IP and hostname
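A freshly cloned VM still carries the template's IP and hostname. A minimal fix-up sketch (the ifcfg path and eth0 device follow the template script above; `set_clone_ip` is a made-up helper):

```shell
#!/bin/bash
# set_clone_ip FILE NEW_IP — rewrite the IPADDR= line of an ifcfg-style file.
# Hypothetical helper; the file layout matches the template script above.
set_clone_ip() {
  local cfg="$1" new_ip="$2"
  sed -i "s/^IPADDR=.*/IPADDR=${new_ip}/" "$cfg"
}

# On a real clone (e.g. node1) you would run something like:
#   set_clone_ip /etc/sysconfig/network-scripts/ifcfg-eth0 192.168.131.146
#   echo node1 > /etc/hostname
#   systemctl restart network
```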

Configure hosts Resolution

[root@master ~]# sed -i 's/192.168.131.144.*//g' /etc/hosts   # delete the entry left over from the template machine
[root@master ~]# cat >> /etc/hosts <<EOF
192.168.131.145 master
192.168.131.146 node1
192.168.131.147 node2
EOF
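To confirm the entries are in place on a host, a small check over the hosts file can help (a sketch; `check_hosts_entry` is a hypothetical helper, and it only matches the first hostname on each line):

```shell
#!/bin/bash
# check_hosts_entry FILE NAME — succeed if NAME appears as a hostname in FILE.
check_hosts_entry() {
  awk -v n="$2" '$2 == n { found = 1 } END { exit !found }' "$1"
}

for node in master node1 node2; do
  check_hosts_entry /etc/hosts "$node" && echo "$node: ok" || echo "$node: missing"
done
```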

Configure Passwordless SSH Login

[root@master ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[root@master ~]# yum -y install sshpass
[root@master ~]# cat ssh-key.sh
for node in master node1 node2
do
  sshpass -p '123.com' ssh-copy-id -o StrictHostKeyChecking=no ${node}
  scp /etc/hosts ${node}:/etc/hosts
  if [ $? -eq 0 ];then    # note: this checks the scp, not ssh-copy-id
    echo "${node} key copy complete"
  else
    echo "${node} key copy failed"
  fi
done
[root@master ~]# sh ssh-key.sh
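To verify the keys actually work, `BatchMode=yes` disables password prompts, so any host still asking for a password fails immediately instead of hanging (a sketch; `key_login_ok` is a made-up helper):

```shell
#!/bin/bash
# key_login_ok HOST — succeed only if passwordless SSH to HOST works.
key_login_ok() {
  ssh -o BatchMode=yes -o ConnectTimeout=3 "$1" true 2>/dev/null
}

for node in master node1 node2; do
  key_login_ok "$node" && echo "$node: key login OK" || echo "$node: key login FAILED"
done
```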

Preparing the kubeasz Deployment

  • The kubeasz project on GitHub has since been updated and no longer matches what this post was written against. A copy of the older kubeasz was kept locally and has been uploaded to Baidu Pan; the new version has not been explored in depth yet.
  • Link: https://pan.baidu.com/s/1rFscCCLHhD4O3os_9yKqEQ
    Extraction code: o1bs
[root@master ~]# git clone https://github.com/easzlab/kubeasz.git
[root@master ~]# cd kubeasz/
[root@master kubeasz]# ll
total 88
-rw-r--r--  1 root root   395 Sep  7 16:56 01.prepare.yml
-rw-r--r--  1 root root    58 Sep  7 16:56 02.etcd.yml
-rw-r--r--  1 root root   149 Sep  7 16:56 03.containerd.yml
-rw-r--r--  1 root root   137 Sep  7 16:56 03.docker.yml
-rw-r--r--  1 root root   470 Sep  7 16:56 04.kube-master.yml
-rw-r--r--  1 root root   140 Sep  7 16:56 05.kube-node.yml
-rw-r--r--  1 root root   408 Sep  7 16:56 06.network.yml
-rw-r--r--  1 root root    77 Sep  7 16:56 07.cluster-addon.yml
-rw-r--r--  1 root root  3686 Sep  7 16:56 11.harbor.yml
-rw-r--r--  1 root root   431 Sep  7 16:56 22.upgrade.yml
-rw-r--r--  1 root root  1975 Sep  7 16:56 23.backup.yml
-rw-r--r--  1 root root   113 Sep  7 16:56 24.restore.yml
-rw-r--r--  1 root root  1752 Sep  7 16:56 90.setup.yml
-rw-r--r--  1 root root  1127 Sep  7 16:56 91.start.yml
-rw-r--r--  1 root root  1120 Sep  7 16:56 92.stop.yml
-rw-r--r--  1 root root   337 Sep  7 16:56 99.clean.yml
-rw-r--r--  1 root root 10283 Sep  7 16:56 ansible.cfg
drwxr-xr-x  2 root root    23 Sep  7 16:56 bin
drwxr-xr-x  2 root root    23 Sep  7 16:56 dockerfiles
drwxr-xr-x  8 root root    92 Sep  7 16:56 docs
drwxr-xr-x  2 root root    25 Sep  7 16:56 down
drwxr-xr-x  2 root root    52 Sep  7 16:56 example
drwxr-xr-x 14 root root   218 Sep  7 16:56 manifests
drwxr-xr-x  2 root root   322 Sep  7 16:56 pics
-rw-r--r--  1 root root  5653 Sep  7 16:56 README.md
drwxr-xr-x 23 root root  4096 Sep  7 16:56 roles
drwxr-xr-x  2 root root   294 Sep  7 16:56 tools
[root@master kubeasz]# tools/easzup -D     
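`easzup -D` downloads the kubeasz playbooks, binaries, and offline images into /etc/ansible. A quick sanity check over that workspace can be sketched like this (`workspace_ok` is a hypothetical helper; the expected entries follow the directory listing in this post):

```shell
#!/bin/bash
# workspace_ok DIR — verify the kubeasz workspace contains the expected entries.
workspace_ok() {
  local base="$1" f
  for f in 90.setup.yml ansible.cfg bin down example roles; do
    [ -e "$base/$f" ] || { echo "missing: $base/$f"; return 1; }
  done
  echo "workspace looks complete"
}

# Usage after the download finishes:
#   workspace_ok /etc/ansible
```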

Configure the Host Inventory

[root@master kubeasz]# cd /etc/ansible/   # once easzup finishes, all required files have been downloaded here
[root@master ansible]# ll
total 92
-rw-rw-r--  1 root root   395 May 28 21:11 01.prepare.yml
-rw-rw-r--  1 root root    58 May 28 21:11 02.etcd.yml
-rw-rw-r--  1 root root   149 May 28 21:11 03.containerd.yml
-rw-rw-r--  1 root root   137 May 28 21:11 03.docker.yml
-rw-rw-r--  1 root root   470 May 28 21:11 04.kube-master.yml
-rw-rw-r--  1 root root   140 May 28 21:11 05.kube-node.yml
-rw-rw-r--  1 root root   408 May 28 21:11 06.network.yml
-rw-rw-r--  1 root root    77 May 28 21:11 07.cluster-addon.yml
-rw-rw-r--  1 root root  3686 May 28 21:11 11.harbor.yml
-rw-rw-r--  1 root root   431 May 28 21:11 22.upgrade.yml
-rw-rw-r--  1 root root  1975 May 28 21:11 23.backup.yml
-rw-rw-r--  1 root root   113 May 28 21:11 24.restore.yml
-rw-rw-r--  1 root root  1752 May 28 21:11 90.setup.yml
-rw-rw-r--  1 root root  1127 May 28 21:11 91.start.yml
-rw-rw-r--  1 root root  1120 May 28 21:11 92.stop.yml
-rw-rw-r--  1 root root   337 May 28 21:11 99.clean.yml
-rw-rw-r--  1 root root 10283 May 28 21:11 ansible.cfg
drwxrwxr-x  2 root root  4096 Sep  7 16:59 bin
drwxrwxr-x  2 root root    23 May 29 09:15 dockerfiles
drwxrwxr-x  8 root root    92 May 29 09:15 docs
drwxrwxr-x  2 root root   292 Sep  7 17:01 down
drwxrwxr-x  2 root root    52 May 29 09:15 example
drwxrwxr-x 14 root root   218 May 29 09:15 manifests
drwxrwxr-x  2 root root   322 May 29 09:15 pics
-rw-rw-r--  1 root root  5653 May 28 21:11 README.md
drwxrwxr-x 23 root root  4096 May 29 09:15 roles
drwxrwxr-x  2 root root   294 May 29 09:15 tools
[root@master ansible]# cp example/hosts.multi-node ./hosts
[root@master ansible]# vim hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
192.168.131.145 NODE_NAME=etcd1
# master node(s)
[kube-master]
192.168.131.145
# work node(s)
[kube-node]
192.168.131.146
192.168.131.147
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
# 'SELF_SIGNED_CERT': 'no' you need put files of certificates named harbor.pem and harbor-key.pem in directory 'down'
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes
# [optional] loadbalance for accessing k8s from outside
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
192.168.131.145
[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"
# NodePort Range
NODE_PORT_RANGE="20000-40000"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
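The inventory requires that SERVICE_CIDR and CLUSTER_CIDR not overlap the host network. A pure-bash overlap check, as a sketch (`ip2int` and `net_contains` are made-up helpers; the IPs and CIDRs are the ones from this inventory):

```shell
#!/bin/bash
# ip2int A.B.C.D — convert a dotted-quad IP to a 32-bit integer.
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

# net_contains CIDR IP — succeed if IP falls inside CIDR.
net_contains() {
  local net=${1%/*} bits=${1#*/} mask
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$net") & mask )) -eq $(( $(ip2int "$2") & mask )) ]
}

# The node IPs (192.168.131.x) must not fall inside either CIDR:
net_contains 10.68.0.0/16  192.168.131.145 && echo "SERVICE_CIDR overlaps hosts!" || echo "SERVICE_CIDR ok"
net_contains 172.20.0.0/16 192.168.131.145 && echo "CLUSTER_CIDR overlaps hosts!" || echo "CLUSTER_CIDR ok"
```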

Test that ansible can reach all hosts:
[root@master ansible]# ansible all -m ping
/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
  from cryptography.exceptions import InvalidSignature
192.168.131.145 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.131.147 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.131.146 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Deploy the Cluster

[root@master ansible]# ansible-playbook 90.setup.yml
PLAY RECAP ******************************************************************************************
192.168.131.145            : ok=99   changed=84   unreachable=0    failed=0
192.168.131.146            : ok=104  changed=91   unreachable=0    failed=0
192.168.131.147            : ok=99   changed=86   unreachable=0    failed=0
localhost                  : ok=37   changed=33   unreachable=0    failed=0
[root@master ansible]# echo "source <(kubectl completion bash)" >> ~/.bashrc # enable kubectl bash completion
[root@master ansible]# kubectl get nodes
NAME              STATUS                     ROLES    AGE     VERSION
192.168.131.145   Ready,SchedulingDisabled   master   6m49s   v1.18.3
192.168.131.146   Ready                      node     5m41s   v1.18.3
192.168.131.147   Ready                      node     5m41s   v1.18.3
[root@master ansible]# kubectl top nodes
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.131.145   202m         10%    1354Mi          105%
192.168.131.146   83m          8%     485Mi           176%
192.168.131.147   60m          6%     473Mi           172%
[root@master ansible]# kubectl get pods -A
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7fdc86d8ff-96nlk     1/1     Running   0          91s
kube-system   calico-node-2jc2n                            1/1     Running   0          91s
kube-system   calico-node-dft74                            1/1     Running   0          91s
kube-system   calico-node-fdgwc                            1/1     Running   0          91s
kube-system   coredns-65dbdb44db-8tlpg                     1/1     Running   0          59s
kube-system   dashboard-metrics-scraper-545bbb8767-hd7gj   1/1     Running   0          22s
kube-system   kubernetes-dashboard-65665f84db-qtx8b        1/1     Running   0          23s
kube-system   metrics-server-869ffc99cd-6w2n8              1/1     Running   0          51s  
[root@master ansible]# kubectl get svc -n kube-system   # the dashboard NodePort is 39915
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
dashboard-metrics-scraper   ClusterIP   10.68.124.158   <none>        8000/TCP                 7m16s
kube-dns                    ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   7m52s
kubernetes-dashboard        NodePort    10.68.224.242   <none>        443:39915/TCP            7m16s
metrics-server              ClusterIP   10.68.152.27    <none>        443/TCP                  7m44s
[root@master ansible]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-cjl42
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: dd7fb3fb-a95d-4f51-9df4-efcd7bf9bbc3
Type:  kubernetes.io/service-account-token
Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjZwUjlZNUYzd0pCRzFrQVBHLTNYeFhuVEpyZi1rNU1MbnhIT2VWaG5BSXcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNqbDQyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZDdmYjNmYi1hOTVkLTRmNTEtOWRmNC1lZmNkN2JmOWJiYzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.maXSb5FGfutG1fLQwCM6L1sNYPU9lOSr0hIE7xQOxvpDW-oDFPWWrLkVyHQgfG1bwZxqu-M3YfPy6cYSqhuGB7-UAR18TaG3rXNDmaa6QnMlLch65ZyoaUkHb_X_woa3ZL_TOd9NnckuZ4lo5e-PudDWRGmUJSmlXTG-O10kmi_RQ_txjD4wXa4XGl-GER7JXTc78Nhbacj1uyzm2SDk4xTsT2tN6C3sQt_5hfhxTxmBhM-9kw12_a9a6FVxLi8CB6GOoAqxmckPU-FMbgSOO_VOc6idN4D9OMbZtSuqXvop-SxL6PkcOUEoc9tK12U81pekBlwKPIcVjmcKnHnMQQ
ca.crt:     1350 bytes
# Open in a browser: https://192.168.131.145:39915
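Instead of copying the token out of the full describe output by hand, the token line can be extracted directly (a sketch; `extract_token` is a made-up helper that parses the output shown above):

```shell
#!/bin/bash
# extract_token — read `kubectl describe secret` output on stdin and print
# only the bearer token used for the dashboard login.
extract_token() { awk '$1 == "token:" { print $2 }'; }

# Usage on the master:
#   kubectl -n kube-system describe secret \
#     $(kubectl -n kube-system get secret | awk '/admin-user/{print $1}') | extract_token
```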


【6月更文挑战第11天】Kubernetes 与 Havenask 集群结合,打造高效智能的数据处理解决方案。Kubernetes 如指挥家精准调度资源,Havenask 快速响应查询,简化复杂任务,优化资源管理。通过搭建 Kubernetes 环境并配置 Havenask,实现高可扩展性和容错性,保障服务连续性。开发者因此能专注业务逻辑,享受自动化基础设施管理带来的便利。这项创新技术组合引领未来,开启数据处理新篇章。拥抱技术新时代!