Installing a Highly Available Kubernetes 1.26.3 Cluster with kubeadm on Docker

Summary: installing a highly available Kubernetes 1.26.3 cluster with kubeadm, using Docker as the container runtime.

Preface

Highly available clusters come in two flavors: the stacked etcd topology and the external etcd topology.


In the stacked etcd topology, the etcd member is coupled with the control plane on the same node: each control-plane node runs its own etcd, kube-apiserver, kube-scheduler, and kube-controller-manager instance. In a control plane deployed this way with kubeadm, each kube-apiserver instance talks only to the etcd member on its local node, and each kube-scheduler and kube-controller-manager instance talks only to the kube-apiserver on its local node.


In the external etcd design, the etcd cluster and the control plane run independently of each other; each follows its own topology requirements and member count, for example an etcd cluster with 3 members alongside a control plane with 4 members. The kube-apiservers usually reach any etcd member through a dedicated domain name, and the kube-controller-manager and kube-scheduler instances can likewise reach any kube-apiserver instance through a dedicated domain name and an external load balancer. This decouples the kube-apiserver from etcd and from the other control-plane components on the local node.


Each topology has its pros and cons. The first needs fewer nodes and suits small to medium production clusters; the second needs more nodes but offers better capacity and fault isolation, making it a better fit for medium to large production clusters.


This walkthrough uses the stacked topology. If you need to deploy an external etcd cluster with kubeadm, refer to the separate blog post on that topic.


Create and configure the virtual machines

Configuration details

The operating system is CentOS 7.9 (I originally wanted Ubuntu, since Ubuntu 20.04 ships a fairly recent kernel, but that would have meant installing the OS from scratch, so I simply cloned five CentOS VMs). Each VM has 2 vCPUs and 4 GB of RAM:

[root@template ~]# cat /etc/redhat-release 
CentOS Linux release 7.7.1908 (Core)
[root@template ~]# free -hm
              total        used        free      shared  buff/cache   available
Mem:           3.8G        155M        2.9G         11M        786M        3.5G
Swap:          2.0G          0B        2.0G
[root@template ~]# lscpu |grep -i "^cpu(s)"
CPU(s):                2


Configure the IP address

Adjust the values to match your own plan: replace the placeholder address (nmcli accepts CIDR notation, e.g. 192.168.123.201/24), gateway, and DNS server.

[root@template ~]# nmcli connection modify ens33 ipv4.addresses 1.1.1.1 ipv4.gateway 1.1.1.1 ipv4.dns 223.5.5.5 ipv4.method manual autoconnect yes
[root@template ~]# nmcli connection up ens33
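
A quick sanity check that the address and default route took effect (ens33 is the interface name used above):

[root@template ~]# ip -4 addr show ens33      # the configured address should appear here
[root@template ~]# ip route | grep default    # the configured gateway should appear here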


Record an environment configuration file

This simplifies some of the later steps. If you follow my deployment, remember to adjust the values (mainly the IP addresses and the root password).

cat <<EOF > /opt/k8s_env.sh
#!/bin/bash
# Subnet of the k8s nodes, used for chronyd time sync
NODEIPS=192.168.123.0/24
# All nodes of the k8s cluster
HOSTS=(master1 master2 master3 node1 node2)
# k8s control-plane nodes
MASTERS=(master1 master2 master3)
# k8s worker nodes
WORKS=(master1 master2 master3 node1 node2)
# IP address of each node
master1=192.168.123.201
master2=192.168.123.202
master3=192.168.123.203
node1=192.168.123.204
node2=192.168.123.205
# High-availability endpoint
HAVIP=kubernetes-vip:8443
# root password of the nodes, used for the passwordless SSH setup
export SSHPASS=1
# kubectl command completion (optional)
#source <(kubeadm completion bash)
#source <(kubectl completion bash)
#source <(crictl completion bash)
# Service CIDR: unroutable before deployment, reachable as IP:Port inside the cluster afterwards
SERVICE_CIDR="10.100.0.0/16"
# clusterDNS address: must lie inside the Service CIDR; the 10th address is the usual choice
CLUSTER_KUBERNETES_SVC_IP="10.100.0.10"
# Pod CIDR (Cluster CIDR): unroutable before deployment, routable afterwards (handled by flanneld)
CLUSTER_CIDR="172.31.0.0/16"
### If you use the stacked etcd topology as I do, the variables below are not needed
# etcd client endpoint list (reusing the 3 master nodes by default)
ETCD_ENDPOINTS="https://\$master1:2379,https://\$master2:2379,https://\$master3:2379"
# etcd peer list (reusing the 3 master nodes by default)
ETCD_CLUSTERS="master1=https://\$master1:2380,master2=https://\$master2:2380,master3=https://\$master3:2380"
EOF
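
A quick way to make sure the file is usable is to source it and echo a few of the variables (the names come from the file above):

source /opt/k8s_env.sh
echo "masters: ${MASTERS[@]}"
echo "master1=$master1  vip=$HAVIP"
echo "service cidr: $SERVICE_CIDR  pod cidr: $CLUSTER_CIDR"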


Configure yum repositories and install the required packages

Configure the CentOS 7 base and EPEL repositories (base packages)

mkdir /opt/yum_bak && mv /etc/yum.repos.d/* /opt/yum_bak/ # back up the existing repo files
curl -o /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo
yum -y install epel-release
sed -i "s/#baseurl/baseurl/g" /etc/yum.repos.d/epel.repo
sed -i "s/metalink/#metalink/g" /etc/yum.repos.d/epel.repo
sed -i "s@https\?://download.fedoraproject.org/pub@https://repo.huaweicloud.com@g" /etc/yum.repos.d/epel.repo
# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
# Alibaba's mirrors seem to be rate-limited; they have been noticeably slow lately


Configure the ELRepo repository (kernel update)

yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y 
sed -i "s@mirrorlist@#mirrorlist@g" /etc/yum.repos.d/elrepo.repo 
sed -i "s@elrepo.org/linux@mirrors.tuna.tsinghua.edu.cn/elrepo@g" /etc/yum.repos.d/elrepo.repo


Configure the Docker repository (Docker installation)

yum install -y yum-utils device-mapper-persistent-data lvm2
curl -o /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo


Configure the Kubernetes repository (I would have preferred not to use Alibaba's mirror, but it is the only one carrying the latest packages)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install base packages

yum clean all && yum makecache fast
[root@template ~]# yum -y install wget psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git tar curl ipvsadm ipset sysstat conntrack libseccomp chrony  gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel make sshpass

Update the kernel to the latest LT (long-term) release; see the dedicated blog post for details.

[root@template ~]# yum -y --enablerepo=elrepo-kernel install kernel-lt
[root@template ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (5.4.238-1.el7.elrepo.x86_64) 7 (Core)   # the newly installed kernel
CentOS Linux (3.10.0-1062.el7.x86_64) 7 (Core)        # the stock kernel
CentOS Linux (0-rescue-50b098c445fa47c8a7b598eb96857747) 7 (Core)
[root@template ~]# grub2-set-default 0     # switch the default boot kernel
[root@template ~]# init 6                  # reboot to take effect
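
After the reboot, confirm the node is running the new kernel:

[root@template ~]# uname -r
5.4.238-1.el7.elrepo.x86_64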

Install Docker

[root@template ~]# yum -y install docker-ce --disableexcludes=docker-ce
[root@template ~]# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
EOF
# registry-mirrors: fill in your own image accelerator address
# live-restore: keeps containers running while the docker daemon restarts
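
Once Docker is running (it is enabled together with cri-docker in a later step), the settings above can be verified from docker info; restart the daemon first if it was already running so daemon.json is re-read:

[root@template ~]# systemctl restart docker
[root@template ~]# docker info | grep -iE "cgroup driver|live restore|mirror"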

Install the Kubernetes packages

[root@template ~]# yum -y install kubeadm-1.26.3-0 kubectl-1.26.3-0 kubelet-1.26.3-0
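
A quick check that the expected versions were installed:

[root@template ~]# kubeadm version -o short
v1.26.3
[root@template ~]# kubelet --version
Kubernetes v1.26.3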

Install nginx (the local load-balancing proxy for the apiservers)

Compile and install

[root@template ~]# wget http://nginx.org/download/nginx-1.20.1.tar.gz
[root@template ~]# tar -xf nginx-1.20.1.tar.gz
[root@template ~]# cd nginx-1.20.1/
[root@template nginx-1.20.1]# ./configure --prefix=/opt/nginx --with-stream --without-http --without-http_uwsgi_module && make -j 2  && make install
[root@template ~]# mkdir /etc/nginx/logs -p
[root@template ~]# cp /opt/nginx/sbin/nginx /usr/local/bin/
[root@template ~]# nginx -v
nginx version: nginx/1.20.1

Edit the nginx configuration file

[root@template ~]# source /opt/k8s_env.sh
[root@template ~]# cat > /opt/nginx.conf <<EOF
worker_processes auto;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        hash \$remote_addr consistent;
        server $master1:6443        max_fails=3 fail_timeout=30s;
        server $master2:6443        max_fails=3 fail_timeout=30s;
        server $master3:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen *:8443;                # proxy the apiservers' port 6443 via port 8443
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
[root@template ~]# cp /opt/nginx.conf /etc/nginx/
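
The configuration can be syntax-checked with nginx itself, using the same -c and -p options that the systemd unit in the next step will use:

[root@template ~]# nginx -t -c /etc/nginx/nginx.conf -p /etc/nginx
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful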


Edit the systemd service file

[root@template ~]# source /opt/k8s_env.sh
[root@template ~]#  cat > /opt/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStartPre=/usr/local/bin/nginx -c /etc/nginx/nginx.conf -p /etc/nginx -t
ExecStart=/usr/local/bin/nginx -c /etc/nginx/nginx.conf  -p /etc/nginx
ExecReload=/usr/local/bin/nginx -c /etc/nginx/nginx.conf  -p /etc/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
[root@template ~]# cp /opt/kube-nginx.service /etc/systemd/system
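
After copying the unit file, reload systemd. Starting the proxy here is optional since it is enabled on every node in a later step; once it is running, the 8443 listener can be checked:

[root@template ~]# systemctl daemon-reload
[root@template ~]# systemctl enable --now kube-nginx    # also done for all nodes later on
[root@template ~]# ss -lntp | grep 8443                 # nginx should be listening on *:8443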


Configure file descriptor limits

[root@template ~]# cat <<EOF > /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF


Configure IPVS modules and kernel parameters

ipvs

[root@template ~]# cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
overlay
br_netfilter
EOF
# On kernels older than 4.18, replace nf_conntrack with nf_conntrack_ipv4 in the file above.
# Keep the module list free of inline comments, otherwise module loading may fail.


Kernel parameters

[root@template ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
vm.swappiness = 0
EOF


Restart the services for the settings to take effect

systemctl restart systemd-modules-load.service
sysctl --system
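
You can confirm that the modules loaded and that the key sysctl values are in effect:

lsmod | grep -e ip_vs -e nf_conntrack | head
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1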


Other preparation

Add /etc/hosts entries

source /opt/k8s_env.sh
for host in ${HOSTS[@]}
do echo "$(eval echo "$"$host) $host" >> /etc/hosts
done
echo "127.0.0.1 kubernetes-vip" >> /etc/hosts
# remember to edit the /opt/k8s_env.sh configuration file before running these commands
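
A quick check that the entries were written and resolve as expected:

tail -n 6 /etc/hosts
getent hosts master1 kubernetes-vip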


Change the hostname, disable the firewall and SELinux, and turn off swap

[root@template ~]# systemctl disable --now firewalld
[root@template ~]# setenforce 0 
[root@template ~]# sed -ri '/^SELINUX=/cSELINUX=disabled' /etc/selinux/config 
[root@template ~]# sed -i 's@.*swap.*@#&@g' /etc/fstab 
[root@template ~]# swapoff -a
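
Verify that swap is off and SELinux is no longer enforcing (the config file change becomes permanent after the next reboot):

[root@template ~]# free -h | grep -i swap      # should show 0B total and 0B used
[root@template ~]# getenforce                  # Permissive now, Disabled after the next reboot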


Download the cri-dockerd package

The project address is:

https://github.com/Mirantis/cri-dockerd

[root@template ~]# yum localinstall -y cri-dockerd-0.3.1-3.el7.x86_64.rpm
[root@template ~]# sed -i "s#^ExecStart.*#ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9#"  /usr/lib/systemd/system/cri-docker.service
# without this change the pause image cannot be pulled later on
[root@template ~]# systemctl enable --now docker cri-docker
# if this produces no errors the CRI is fine; otherwise investigate


Check the status of docker and cri-docker

[root@template ~]# systemctl is-active docker cri-docker
active
active


Final preparation before installation

Passwordless SSH

sshpass was installed earlier, so this can be done from the first node only; this step sets up passwordless SSH and sets each node's hostname at the same time.

[root@template ~]# source /opt/k8s_env.sh; for host in ${HOSTS[@]};do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $host; ssh $host "hostnamectl set-hostname $host"; done
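
Note that ssh-copy-id needs an existing key pair, and sshpass -e reads the password from the SSHPASS variable exported in /opt/k8s_env.sh. A minimal check afterwards:

[root@template ~]# [ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # a key pair must exist before ssh-copy-id
[root@template ~]# for host in ${HOSTS[@]}; do ssh $host hostname; done                # should print every hostname without a password prompt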


Configure time synchronization

master1 syncs against Alibaba's NTP servers and all other nodes sync against master1. This step is only performed on master1.

If every node syncs directly against Alibaba's NTP servers, this can be done before cloning.

cat > /etc/chrony.conf <<EOF
server ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow $NODEIPS
local stratum 10
keyfile /etc/chrony/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
cat > /opt/chrony.conf <<EOF
server $master1 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
local stratum 10
keyfile /etc/chrony/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF


Distribute the chrony configuration

[root@master1 ~]# for host in ${HOSTS[@]};do ssh $host "ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime"; if [[ $host == $(hostname) ]];then ssh $host "systemctl restart chronyd"; continue; fi; scp /opt/chrony.conf  $host:/etc/chrony.conf ; ssh $host " systemctl restart chronyd"; done

After the changes, run the following on each node to check that time synchronization works:

[root@master1 ~]# chronyc sources -v
210 Number of sources = 1
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17    50  -1162us[-1076us] +/-   26ms


Install Kubernetes

All the preparation is done; now the Kubernetes installation itself begins.

Enable the services and configure command tab completion

source /opt/k8s_env.sh
for i in ${HOSTS[@]};do ssh $i systemctl enable --now kubelet docker cri-docker kube-nginx chronyd ;done
# enable the services at boot so everything comes back up after a restart
cat >> .bashrc <<EOF
source <(kubeadm completion bash)
source <(kubectl completion bash)
EOF
source .bashrc


Generate and edit the init configuration file

kubeadm config print init-defaults  > kubeadm_init.yaml
sed -i "s/1.2.3.4/$master1/" kubeadm_init.yaml  # set master1's IP address
sed -i "s#name: node#name: $HOSTNAME#" kubeadm_init.yaml  # set the node name
sed -i "s#^imageRepository.*#imageRepository: registry.aliyuncs.com/google_containers#" kubeadm_init.yaml  # set the image repository
sed -i "s#^kubernetesVersion.*#kubernetesVersion: 1.26.3#" kubeadm_init.yaml # set the k8s version
sed -i "s#unix:///var/run/containerd/containerd.sock#unix:///var/run/cri-dockerd.sock#" kubeadm_init.yaml  # set the CRI socket
sed -i "s#^controllerManager.*#controlPlaneEndpoint: "$HAVIP"#" kubeadm_init.yaml # set controlPlaneEndpoint
sed -i "s#10.96.0.0/12#$SERVICE_CIDR#" kubeadm_init.yaml # set the Service CIDR
sed -i "s#^scheduler.*#  podSubnet: $CLUSTER_CIDR#" kubeadm_init.yaml # set the Pod CIDR (appended to the networking block)
cat >> kubeadm_init.yaml <<EOF     # switch kube-proxy to ipvs mode
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
# kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm_init.yaml
# use the command above if you want a more detailed configuration file; remember to adjust the generated file accordingly
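
Before initializing, it is worth grepping the rendered file to confirm the edits landed (the field names are the standard kubeadm v1beta3 ones):

grep -E "advertiseAddress|criSocket|imageRepository|kubernetesVersion|controlPlaneEndpoint|serviceSubnet|podSubnet|mode:" kubeadm_init.yaml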


Start the initialization

Method 1: init with the configuration file

[root@template yaml]# kubeadm init --config=kubeadm_init.yaml


Method 2: init with command-line flags (here --cri-socket must be passed explicitly, since both the containerd and cri-dockerd sockets exist on the node)

kubeadm init \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.26.3 \
 --control-plane-endpoint kubernetes-vip:8443 \
 --apiserver-advertise-address $master1 \
 --pod-network-cidr $CLUSTER_CIDR \
 --service-cidr $SERVICE_CIDR \
 --cri-socket unix:///var/run/cri-dockerd.sock


init complete

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
  kubeadm join kubernetes-vip:8443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:89e4c4fe97e89df98cd986f209bb3cc14b4b911611e70eba8bccb19eb0538409 \
  --control-plane 
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join kubernetes-vip:8443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:89e4c4fe97e89df98cd986f209bb3cc14b4b911611e70eba8bccb19eb0538409 
# output like this means the initialization succeeded
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Add control-plane nodes

For the other master nodes, the kubeadm join command has to be assembled from the join command generated above plus the newly uploaded certificate key.

kubeadm init phase upload-certs --upload-certs --config kubeadm_init.yaml  # upload the certificates so additional control-plane nodes can join
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d4712cdfe6638ef65587526362d0ea76110f3059884088ec5249aac250e910d2
[root@master2 ~]# kubeadm join kubernetes-vip:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:89e4c4fe97e89df98cd986f209bb3cc14b4b911611e70eba8bccb19eb0538409 --control-plane --certificate-key d4712cdfe6638ef65587526362d0ea76110f3059884088ec5249aac250e910d2  --cri-socket unix:///run/cri-dockerd.sock

Remember to replace the token and certificate key with your own values.

[root@master3 ~]# kubeadm join kubernetes-vip:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:89e4c4fe97e89df98cd986f209bb3cc14b4b911611e70eba8bccb19eb0538409 --control-plane --certificate-key d4712cdfe6638ef65587526362d0ea76110f3059884088ec5249aac250e910d2  --cri-socket unix:///run/cri-dockerd.sock

Remember to replace the token and certificate key with your own values.


Add worker nodes

kubeadm join kubernetes-vip:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:89e4c4fe97e89df98cd986f209bb3cc14b4b911611e70eba8bccb19eb0538409 --cri-socket unix:///run/cri-dockerd.sock
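
The bootstrap token printed by kubeadm init is only valid for 24 hours. If it has expired by the time you add a node, a fresh join command can be generated on master1; just remember to append the cri-dockerd socket as in the commands above:

[root@master1 ~]# kubeadm token create --print-join-command
# append --cri-socket unix:///run/cri-dockerd.sock to the printed command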

Verify that the nodes have joined

[root@template ~]# kubectl get node -owide
NAME       STATUS     ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master2    NotReady   control-plane   5m40s   v1.26.3   192.168.123.202   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
master3    NotReady   control-plane   6m35s   v1.26.3   192.168.123.203   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
node1      NotReady   <none>          60s     v1.26.3   192.168.123.204   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
node2      NotReady   <none>          53s     v1.26.3   192.168.123.205   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
master1    Ready      control-plane   29m     v1.26.3   192.168.123.201   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1


Install the network plugin

A brief overview of the mainstream network plugins:

flannel: vxlan mode (which also includes a direct-routing option) and host-gw mode

calico: IPIP mode and BGP mode

This time we install flannel: https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml

Open the link, copy the YAML, and save it as flannel.yaml.

sed -i "s#10.244.0.0/16#$CLUSTER_CIDR#" flannel.yaml  # 修改cidr
sed -i "s/vxlan/host-gw/" flannel.yaml # 修改模式
kubectl apply -f flannel.yaml

Check that the pods are running and the nodes are Ready

[root@master1 yaml]# kubectl get pod -n kube-flannel 
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-c4md4   1/1     Running   0          84s
kube-flannel-ds-csd6s   1/1     Running   0          84s
kube-flannel-ds-h8xvv   1/1     Running   0          84s
kube-flannel-ds-hh858   1/1     Running   0          84s
kube-flannel-ds-lrksn   1/1     Running   0          84s
[root@master1 yaml]# kubectl get nodes -owide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master2    Ready    control-plane   16m   v1.26.3   192.168.123.202   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
master3    Ready    control-plane   17m   v1.26.3   192.168.123.203   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
node1      Ready    <none>          11m   v1.26.3   192.168.123.204   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
node2      Ready    <none>          11m   v1.26.3   192.168.123.205   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1
master1    Ready    control-plane   40m   v1.26.3   192.168.123.201   <none>        CentOS Linux 7 (Core)   5.4.238-1.el7.elrepo.x86_64   docker://23.0.1


Verify that the cluster works

The following output shows that pods can be created and DNS resolution works:

kubectl run client --image=busybox:1.28 --rm -it -- /bin/sh
/ # nslookup kubernetes
Server:    10.100.0.10
Address 1: 10.100.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.100.0.1 kubernetes.default.svc.cluster.local
/ # nslookup kube-dns.kube-system.svc.cluster.local
Server:    10.100.0.10
Address 1: 10.100.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kube-dns.kube-system.svc.cluster.local
Address 1: 10.100.0.10 kube-dns.kube-system.svc.cluster.local
# 
[root@master1 yaml]# kubectl get pod -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-c4md4              1/1     Running   0             25m
kube-flannel   kube-flannel-ds-csd6s              1/1     Running   0             25m
kube-flannel   kube-flannel-ds-h8xvv              1/1     Running   0             25m
kube-flannel   kube-flannel-ds-hh858              1/1     Running   0             25m
kube-flannel   kube-flannel-ds-lrksn              1/1     Running   0             25m
kube-system    coredns-5bbd96d687-5lt6b           1/1     Running   0             64m
kube-system    coredns-5bbd96d687-9zgxk           1/1     Running   0             64m
kube-system    etcd-master2                       1/1     Running   0             40m
kube-system    etcd-master3                       1/1     Running   0             41m
kube-system    etcd-master1                       1/1     Running   0             64m
kube-system    kube-apiserver-master2             1/1     Running   0             40m
kube-system    kube-apiserver-master3             1/1     Running   0             41m
kube-system    kube-apiserver-master1             1/1     Running   0             64m
kube-system    kube-controller-manager-master2    1/1     Running   0             39m
kube-system    kube-controller-manager-master3    1/1     Running   0             40m
kube-system    kube-controller-manager-master1    1/1     Running   0             64m
kube-system    kube-proxy-ctt6d                   1/1     Running   0             41m
kube-system    kube-proxy-jswdm                   1/1     Running   0             35m
kube-system    kube-proxy-nrdmt                   1/1     Running   0             64m
kube-system    kube-proxy-qpxg4                   1/1     Running   0             40m
kube-system    kube-proxy-t9xlj                   1/1     Running   0             35m
kube-system    kube-scheduler-master2             1/1     Running   0             39m
kube-system    kube-scheduler-master3             1/1     Running   0             41m
kube-system    kube-scheduler-master1             1/1     Running   0             64m
#
[root@master1 yaml]# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP                  65m
kube-system   kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP,9153/TCP   65m

