Cloud Native | Kubernetes | Deploying a Highly Available Cluster with kubeadm (1): Using an External etcd Cluster (Part 2)

Summary: Cloud Native | Kubernetes | Deploying a Highly Available Cluster with kubeadm (1): Using an External etcd Cluster

The complete initialization config file:

[root@master ~]# cat kubeadm-init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: "0"
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.217.19
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  external:
    endpoints:     # the custom external etcd cluster endpoints are listed below
    - https://192.168.217.19:2379
    - https://192.168.217.20:2379
    - https://192.168.217.21:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem
    certFile: /etc/kubernetes/pki/etcd/apiserver-etcd-client.pem
    keyFile: /etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.22.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: "10.244.0.0/16"
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: "k8s-master"
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: ""
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
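
Before running the real initialization, the file can be sanity-checked without touching the node. A minimal sketch, assuming kubeadm v1.22 is on the PATH: --dry-run renders everything kubeadm would do without changing the host, and config images pull pre-fetches the images from the aliyuncs mirror configured above (the same action the preflight output later suggests):

kubeadm init --config=kubeadm-init.yaml --dry-run      # render manifests only, no changes to the host
kubeadm config images pull --config kubeadm-init.yaml  # pre-pull the control-plane images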

Handling the certificates for the external etcd cluster:

(Create the directory on all three nodes; after the files are in place on the master node, scp them to worker nodes 20 and 21.)
mkdir -p /etc/kubernetes/pki/etcd/ 
cp /opt/etcd/ssl/ca.pem /etc/kubernetes/pki/etcd/
cp /opt/etcd/ssl/server.pem  /etc/kubernetes/pki/etcd/apiserver-etcd-client.pem
cp /opt/etcd/ssl/server-key.pem  /etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem
scp /etc/kubernetes/pki/etcd/*  node1:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/*  node2:/etc/kubernetes/pki/etcd/
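
Before initializing, it is worth confirming that the renamed client certificate actually authenticates against all three etcd members. A quick check, assuming the etcdctl v3 client from the etcd deployment in part one is available:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.217.19:2379,https://192.168.217.20:2379,https://192.168.217.21:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/apiserver-etcd-client.pem \
  --key=/etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem \
  endpoint health

All three endpoints should report healthy; if they do not, fix the certificates before initializing the cluster.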

Run kubeadm init with the config file:

kubeadm init --config=kubeadm-init.yaml

The output of this command is shown below (key excerpts first, then the complete log):

The following lines in the output show that the external etcd cluster has been picked up, what the official docs call External etcd mode. Accordingly, kubeadm skips generating all of the etcd-related certificates:

[certs] External etcd mode: Skipping etcd/ca certificate authority generation

[certs] External etcd mode: Skipping etcd/server certificate generation

[certs] External etcd mode: Skipping etcd/peer certificate generation

[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation

[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation

Next, the admin kubeconfig is copied for the current user so that the kubectl command can talk to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
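
With the kubeconfig in place, a quick check confirms that kubectl can reach the API server (the node stays NotReady until a network plugin is installed, as noted later):

kubectl get nodes                  # master shows NotReady until a CNI plugin is deployed
kubectl get pods -n kube-system    # note: no etcd Pod here, etcd runs externally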

This is the command for joining worker nodes to the cluster; copy it and run it on each worker node:

kubeadm join 192.168.217.19:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:8245100f694c48a3171fd473fc2b9a1c6696394c89ff4ac902d4fde95c4740f1
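
If this command is ever lost, it can be reconstructed at any time on the master. These are standard kubeadm and openssl invocations:

kubeadm token create --print-join-command

# or recompute the CA certificate hash by hand:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'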

The two essential network-related add-ons, CoreDNS and kube-proxy, are applied to the cluster (unlike the control-plane components they are not static Pods: kube-proxy runs as a DaemonSet and CoreDNS as a Deployment):

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
The complete log:

[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.217.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.008300 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.217.19:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:8245100f694c48a3171fd473fc2b9a1c6696394c89ff4ac902d4fde95c4740f1 

4. Joining the worker nodes (run on both servers, 20 and 21)


[root@node1 ~]# kubeadm join 192.168.217.19:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:8245100f694c48a3171fd473fc2b9a1c6696394c89ff4ac902d4fde95c4740f1
[preflight] Running pre-flight checks
  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Once both servers have finished, you can use kubectl to view nodes and Pods. The nodes will report NotReady until a network plugin is installed; any one of flannel, calico, weave, canal, and so on will do.
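
For reference, a minimal flannel deployment sketch: the podSubnet of 10.244.0.0/16 chosen in the config file above matches flannel's default, and the manifest URL below is the upstream one (pin it to a concrete release for production use):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml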

5. Functional testing


A full walkthrough of installing the network plugin is omitted here, since this article focuses on using an external etcd cluster. Assuming flannel has been installed correctly, check the overall cluster status.

The external etcd cluster members are visible from within the cluster. (The scheduler reporting Unhealthy here is a well-known quirk: the deprecated kubectl get cs probes the insecure port 10251, which kube-scheduler no longer serves by default in this version; it does not indicate an actual failure.)

[root@master bin]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                            
etcd-1               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}                                                                             
etcd-0               Healthy     {"health":"true"}                                                                             
[root@master bin]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.217.19:6443
CoreDNS is running at https://192.168.217.19:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
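
It is also easy to confirm that the cluster state really lives in the external etcd cluster by listing a few of the keys the API server writes under the /registry prefix. A sketch reusing the same client certificate as before:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.217.19:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/apiserver-etcd-client.pem \
  --key=/etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem \
  get /registry --prefix --keys-only | head -n 10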

All component Pods are running normally, and there is no etcd among the static Pods:

[root@master bin]# kubectl get po,svc -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-7f6cbbb7b8-4vrj7         1/1     Running   0          171m
kube-system   pod/coredns-7f6cbbb7b8-9x2kg         1/1     Running   0          171m
kube-system   pod/kube-apiserver-master            1/1     Running   0          171m
kube-system   pod/kube-controller-manager-master   1/1     Running   0          171m
kube-system   pod/kube-flannel-ds-c8d2t            1/1     Running   0          168m
kube-system   pod/kube-flannel-ds-cxvxs            1/1     Running   0          168m
kube-system   pod/kube-flannel-ds-v5s85            1/1     Running   0          168m
kube-system   pod/kube-proxy-kvq2z                 1/1     Running   0          170m
kube-system   pod/kube-proxy-rwtbz                 1/1     Running   0          171m
kube-system   pod/kube-proxy-wp2ft                 1/1     Running   0          171m
kube-system   pod/kube-scheduler-master            1/1     Running   0          171m
NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  171m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   171m

DNS resolution works and Pods are created normally, which shows the etcd backend is functioning correctly; this deployment with an external etcd cluster is a complete success:

kubectl run -it --image busybox:1.28.3   dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # nslookup baidu.com
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      baidu.com
Address 1: 39.156.66.10
Address 2: 110.242.68.66
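
As a final smoke test, a throwaway Deployment confirms that scheduling and Pod creation, and therefore writes to the external etcd, work end to end. The name nginx-test is just an example:

kubectl create deployment nginx-test --image=nginx:1.21 --replicas=2
kubectl get pods -l app=nginx-test -o wide   # two Pods should be Running, spread across the workers
kubectl delete deployment nginx-test         # clean up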