Cloud Native | Kubernetes | Deploying a Highly Available Cluster with kubeadm (Part 2): kube-apiserver HA + external etcd cluster + HAProxy + Keepalived


Preface:

The previous article, 云原生|kubernetes|kubeadm部署高可用集群(一)---使用外部etcd集群 (on the 晚风_END CSDN blog), described how to use an external, standalone etcd cluster when deploying with kubeadm, which decouples the overall architecture and improves the cluster's capacity. Building on that, this article describes how to make kube-apiserver highly available as well, resulting in a Kubernetes cluster that is fully usable in production.

Let's get straight to the practical part.

1. Cluster environment overview

This walkthrough uses haproxy + keepalived to build a load balancer in front of kube-apiserver. Why not nginx + keepalived? Mainly because the host machine is short on resources and there are not enough virtual machines to spare; in addition, running nginx as the load balancer would occupy the cluster's default port 6443 and cause unnecessary trouble, while HAProxy is the more specialized proxy for this job.

Inside the Kubernetes cluster there are three kube-apiservers plus an external etcd cluster, so every component is highly available and the setup is suitable for real production use.

The cluster consists of three master nodes and one worker node. Three masters are used to satisfy the odd-number requirement; two or four would not demonstrate high availability as well. Worker nodes are easy to scale out in production, so a single worker node is used here.

The overall cluster configuration is as follows:

Kubernetes HA cluster configuration

| Server IP | Cluster role | Hardware | Installed components | Component versions | Deployment method |
| --- | --- | --- | --- | --- | --- |
| 192.168.217.19 / 192.168.217.20 / 192.168.217.21 | control-plane, master | CPU: 2c2u; RAM: 4G; disk: 100G, single partition / | kubelet, kubeadm (the other key Kubernetes components run as static Pods); load balancer: haproxy and keepalived; etcd cluster member; docker | kubernetes-1.22.2; docker-ce-20.10.5; haproxy-1.5.18-9; keepalived-1.3.5; etcd 3.4.9 | docker and etcd: binaries; everything else: yum |
| 192.168.217.22 | node (worker) | CPU: 2c2u; RAM: 4G; disk: 100G, single partition / | kubelet, kubeadm (the other key Kubernetes components run as static Pods); docker | kubernetes-1.22.2; docker-ce-20.10.5 | docker: binary; everything else: yum |

Note: the etcd cluster has three members (the three masters), so etcd is not installed on the worker node.

VIP: 192.168.217.100 (load-balancer address, created and managed by keepalived)

2. Basic environment setup

(1) Configure the hostnames:

k8s-master1

[root@master1 ~]# cat /etc/hostname 
master1

k8s-master2

[root@master2 manifests]# cat /etc/hostname
master2

k8s-master3

[root@master3 ~]# cat /etc/hostname
master3

node1

[root@node1 ~]# cat /etc/hostname
node1

All servers use the same /etc/hosts:

[root@master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.217.19 master1 k8s-master1
192.168.217.20 master2 k8s-master2
192.168.217.21 master3 k8s-master3
192.168.217.22 node1 k8s-node1

(2) Time server

Install ntp on all nodes and enable/start the ntpd service:

yum install ntp -y
systemctl enable ntpd && systemctl start ntpd

192.168.217.19 acts as the time server; the key part of its /etc/ntp.conf:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 10

NTP configuration on the other nodes:

# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.217.19

Output like the following on any of the other nodes shows that time synchronization is working:

[root@master2 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*master1         LOCAL(0)        11 u   26  128  377    0.390    0.014   0.053
[root@master2 ~]# ntpstat
synchronised to NTP server (192.168.217.19) at stratum 12
   time correct to within 23 ms
   polling server every 128 s

(3) Disable the firewall

systemctl disable firewalld && systemctl stop firewalld

(4) Disable SELinux

Edit the /etc/selinux/config file:

[root@master2 ~]# cat /etc/selinux/config 
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 

The following output confirms SELinux is disabled:

[root@master2 ~]# getenforce 
Disabled

(5) Disable swap

This is basic enough that it is not covered in detail; a minimal sketch follows.
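A minimal sketch of the usual CentOS 7 procedure (assuming the swap entry lives in /etc/fstab):

swapoff -a                               # turn swap off immediately
sed -ri 's/.*swap.*/#&/' /etc/fstab      # comment out the swap line so it stays off after reboot
free -m                                  # the Swap line should now show 0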

(6) Passwordless SSH between the servers

How passwordless SSH works and how to set it up is covered in another post, 科普扫盲---ssh免密登陆(ssh的一些小秘密) (CSDN blog), so it is not repeated in detail here.

Passwordless SSH is needed because certificates are copied between the servers automatically during cluster initialization later on. A minimal key-distribution sketch is shown below.
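A minimal sketch of setting this up from master1 (assuming root password login is still possible at this point; adjust the host list to your environment):

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa     # generate a key pair non-interactively
for host in master2 master3 node1; do
  ssh-copy-id root@$host                             # asks for the root password once per host
done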

(7) Setting up the docker environment

See docker的离线安装以及本地化配置 (CSDN blog). Be sure to complete the local image/registry configuration, otherwise pulling images later will become a nightmare.

After deployment, test the docker environment as described in that post.

(8) Setting up the etcd cluster

See centos7操作系统---ansible剧本离线快速部署etcd集群 (CSDN blog) and deploy the etcd cluster exactly as described there; the cluster initialization work below is based on this etcd cluster.

After deployment, test the etcd cluster as described in that post; a quick health check is sketched below.
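As a quick sanity check, something like the following should report all three members healthy (a sketch: it assumes etcdctl is on the PATH and uses the certificate paths under /opt/etcd/ssl that are also referenced later in this article):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.217.19:2379,https://192.168.217.20:2379,https://192.168.217.21:2379 \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  endpoint health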

(9) Installing kubelet, kubectl and kubeadm

See: 云原生|kubernetes|kubeadm五分钟内部署完成集群(完全离线部署---适用于centos7全系列) (CSDN blog).




Most of the effort in building the cluster actually goes into this basic environment preparation, which is time-consuming; with a good start, the rest is considerably easier.

3. Creating the load balancer (HAProxy + Keepalived)

When there are multiple control-plane nodes there are also multiple kube-apiservers, and tools such as Nginx + Keepalived or HAProxy + Keepalived can provide load balancing and high availability for them.
HAProxy + Keepalived is recommended here because HAProxy provides high-performance layer-4 load balancing, and it is also the combination most people choose.

Load-balancer architecture: keepalived manages the VIP 192.168.217.100, and HAProxy on each of the three masters listens on port 9443 and forwards traffic to the three kube-apiservers on port 6443.

Install haproxy and keepalived on all three master nodes:

yum install haproxy keepalived -y

The haproxy configuration (it binds port 9443 and proxies the three apiservers as one backend; copy this file to the other two master nodes):

cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend apiserver
bind *:9443
mode tcp
option tcplog
default_backend apiserver
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend apiserver
    balance     roundrobin
    server k8s-master1 192.168.217.19:6443 check
    server k8s-master2 192.168.217.20:6443 check
    server k8s-master3 192.168.217.21:6443 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
listen admin_stats  
    bind 0.0.0.0:9188           # address and port of the stats page
    mode http                   # the stats page is served over HTTP
    log 127.0.0.1 local0 err    # error log level
    stats refresh 30s
    stats uri /haproxy-status   # stats page URI, i.e. http://IP:9188/haproxy-status
    stats realm welcome login\ Haproxy 
    stats auth admin:admin123   # username and password for the stats page
    stats hide-version 
    stats admin if TRUE

Copy the haproxy configuration file to the other masters:

scp /etc/haproxy/haproxy.cfg master2:/etc/haproxy/
scp /etc/haproxy/haproxy.cfg master3:/etc/haproxy/

Open any browser at IP:9188 to reach the HAProxy web interface (username: admin, password: admin123); a command-line check is sketched below.
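The same page can also be checked from the command line (a sketch, using the credentials defined in the config above):

curl -u admin:admin123 http://192.168.217.19:9188/haproxy-status
# an HTML page listing the apiserver backend with its three servers means the stats endpoint is working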

The keepalived configuration:

Keepalived configuration on master1:

[root@master1 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id 192.168.217.19
}
vrrp_script check_haproxy {
script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
interval 3
weight -2
fall 3
rise 3
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 50
priority 100
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.217.100
}
track_script {
check_haproxy
}
}

Keepalived configuration on master2:

[root@master2 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id 192.168.217.20
}
vrrp_script check_haproxy {
script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
interval 3
weight -2
fall 3
rise 3
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 50
priority 99
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.217.100
}
track_script {
check_haproxy
}
}

Keepalived configuration on master3:

[root@master3 ~]# cat /etc/keepalived/keepalived.conf 
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id 192.168.217.21
}
vrrp_script check_haproxy {
script "bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi'"
interval 3
weight -2
fall 3
rise 3
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 50
priority 98
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.217.100
}
track_script {
check_haproxy
}
}

Enable and start both services:

systemctl enable keepalived haproxy && systemctl start keepalived haproxy

Testing the load balancer:

Check that the port is listening:

[root@master1 ~]# netstat -antup |grep 9443
tcp        0      0 0.0.0.0:9443            0.0.0.0:*               LISTEN      1011/haproxy 

Check the VIP on master1 (why master1? because its priority, 100, is the highest):

[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5c:1e:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.19/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5c:1e66/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:f3:9c:2f:92 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Stop the haproxy service on master1 and check whether the VIP fails over to master2:

[root@master1 ~]# systemctl stop haproxy

Check on master2:

[root@master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:86:30:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.20/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe86:30e6/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:8a:27:b6:7f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

The failover succeeded. This step must work before moving on.

Restore the haproxy service on master1; the VIP comes back very quickly:

[root@master1 ~]# systemctl start haproxy
[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:5c:1e:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.19/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.217.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5c:1e66/64 scope link 
       valid_lft forever preferred_lft forever

Look at master1's system log to see how the VIP moved:

Oct 27 19:01:28 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:31 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:34 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:37 master1 Keepalived_vrrp[1037]: /usr/bin/bash -c 'if [ $(ss -alnupt |grep 9443|wc -l) -eq 0 ];then exit 1;fi' exited with status 1
Oct 27 19:01:38 master1 systemd: Started HAProxy Load Balancer.
Oct 27 19:01:38 master1 systemd: Starting HAProxy Load Balancer...
Oct 27 19:01:38 master1 haproxy-systemd-wrapper: [WARNING] 299/190138 (31426) : config : 'option forwardfor' ignored for frontend 'apiserver' as it requires HTTP mode.
Oct 27 19:01:46 master1 Keepalived_vrrp[1037]: VRRP_Script(check_haproxy) succeeded
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Changing effective priority from 98 to 100
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) forcing a new MASTER election
Oct 27 19:01:48 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: Sending gratuitous ARP on ens33 for 192.168.217.100
Oct 27 19:01:50 master1 ntpd[757]: Listen normally on 10 ens33 192.168.217.100 UDP 123

The election-related part of the log reads as follows:

  • The check script succeeded
  • Because the script succeeded, the effective priority changed from 98 back to 100
  • A new MASTER election was forced
  • master1 won the election
  • keepalived on master1 entered the MASTER state
Oct 27 19:01:46 master1 Keepalived_vrrp[1037]: VRRP_Script(check_haproxy) succeeded
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Changing effective priority from 98 to 100
Oct 27 19:01:47 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) forcing a new MASTER election
Oct 27 19:01:48 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Transition to MASTER STATE
Oct 27 19:01:49 master1 Keepalived_vrrp[1037]: VRRP_Instance(VI_1) Entering MASTER STATE

4. Distributing the etcd cluster certificates

Using an external etcd cluster means etcd must be installed first and its certificates then distributed to every node; in this example that is all four nodes. The commands are as follows:

Create the target directory on all nodes (run on all four servers):

mkdir -p /etc/kubernetes/pki/etcd/

On master1, copy the etcd certificates into the directory created above:

cp /opt/etcd/ssl/ca.pem /etc/kubernetes/pki/etcd/
cp /opt/etcd/ssl/server.pem  /etc/kubernetes/pki/etcd/apiserver-etcd-client.pem
cp /opt/etcd/ssl/server-key.pem  /etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem

Distribute the etcd certificates to the other nodes:

scp /etc/kubernetes/pki/etcd/*  master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/*  master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/*  node1:/etc/kubernetes/pki/etcd/

5. Initializing the cluster with kubeadm

kubeadm can initialize a cluster in two ways: with command-line flags or with a config file. This article uses the config-file approach:

Generate a template file on 192.168.217.19:

kubeadm config print init-defaults > kubeadm-init-ha.yaml

Edit the template; the final content should look like this:

[root@master1 ~]# cat kubeadm-init-ha.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: "0"
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.217.19
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master1
  taints: null
---
controlPlaneEndpoint: "192.168.217.100"
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  external:
    endpoints:     # addresses of the external etcd cluster
    - https://192.168.217.19:2379
    - https://192.168.217.20:2379
    - https://192.168.217.21:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem
    certFile: /etc/kubernetes/pki/etcd/apiserver-etcd-client.pem
    keyFile: /etc/kubernetes/pki/etcd/apiserver-etcd-client-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.22.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
scheduler: {}

Notes on kubeadm-init-ha.yaml:

name: must match the hostname set in /etc/hosts; since initialization is run on master1, this is master1's hostname
controlPlaneEndpoint: the VIP address; the port can be omitted, in which case the default 6443 is used
imageRepository: k8s.gcr.io is not reachable from mainland China, so the Aliyun mirror registry.aliyuncs.com/google_containers is used instead
podSubnet: the address range must match the network plugin deployed later; flannel is used here, so it is set to 10.244.0.0/16 (deploying flannel is sketched right below)
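The final pod listing at the end of this article shows flannel DaemonSet pods, but the deployment step itself is not shown. A minimal sketch of that step, run on master1 once kubeadm init has finished (the upstream manifest URL is an assumption; in an offline environment you would apply a locally saved copy of kube-flannel.yml instead):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# the Network field inside the manifest (10.244.0.0/16 by default) must match podSubnet above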

 

The initialization command and its output (run on master1):

[root@master1 ~]# kubeadm init --config=kubeadm-init-ha.yaml --upload-certs
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 192.168.217.19 192.168.217.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.508717 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d14a06a73a7aabf5cce805880c085f7427247d507ba01bf2d2f21c294aeb8643
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 192.168.217.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:f9ea1a25922e715f893c360759c9f56a26b1a3760df4ce140e94610a76ca6a9e \
  --control-plane --certificate-key d14a06a73a7aabf5cce805880c085f7427247d507ba01bf2d2f21c294aeb8643
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.217.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:f9ea1a25922e715f893c360759c9f56a26b1a3760df4ce140e94610a76ca6a9e 

Notice that the output contains two join commands, and they serve different purposes.

The first join command:

This one joins master nodes, i.e. nodes that also run an apiserver; in this example it is used on master2 and master3:

kubeadm join 192.168.217.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:f9ea1a25922e715f893c360759c9f56a26b1a3760df4ce140e94610a76ca6a9e \
  --control-plane --certificate-key d14a06a73a7aabf5cce805880c085f7427247d507ba01bf2d2f21c294aeb8643

Running this join command on master2 produces the following output:

[root@master2 ~]# kubeadm join 192.168.217.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:f9ea1a25922e715f893c360759c9f56a26b1a3760df4ce140e94610a76ca6a9e --control-plane --certificate-key d14a06a73a7aabf5cce805880c085f7427247d507ba01bf2d2f21c294aeb8643
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 192.168.217.20 192.168.217.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
To start administering your cluster from this node, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.

The key lines:

At this point master2 has joined the cluster in the control-plane role and has been given the NoSchedule taint.

[control-plane-join] using external etcd - no local stacked instance added
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

Running the same command on master3 gives essentially the same output, so it is not repeated here.

The worker-node join command:

kubeadm join 192.168.217.100:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:f9ea1a25922e715f893c360759c9f56a26b1a3760df4ce140e94610a76ca6a9e 

This command is run on node1. This example has only a single worker node; to add more workers later, the same command is used (see the note on token expiry after the join output below).


The output of this command:

[root@node1 ~]# kubeadm join 192.168.217.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:f9ea1a25922e715f893c360759c9f56a26b1a3760df4ce140e94610a76ca6a9e 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The key lines:

The node has joined the cluster: it sent a certificate signing request to the apiserver, and the request was approved and a response returned to the node.

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received
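A practical note on adding more nodes later: the bootstrap token in these join commands normally expires after 24 hours (the init config above sets ttl: "0", which makes it non-expiring, but that is not the default). If the token is no longer valid, a fresh worker join command can be printed on any existing master:

kubeadm token create --print-join-command
# prints a ready-to-use "kubeadm join ..." line for worker nodes

To join an additional control-plane node after the uploaded certificates have expired (two hours, as the init output notes), also re-run "kubeadm init phase upload-certs --upload-certs" and pass the new certificate key via --control-plane --certificate-key.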



Appendix 1: the kubeconfig environment after the kubeadm deployment

The master initialization log above contains the following:

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf

It is recommended to run these commands on only one node. For example, if cluster administration will be done only from master1, run the following on master1:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

If a worker node is to be used for cluster administration instead, copy /etc/kubernetes/admin.conf from any master node to the same directory on the worker; in this example that is node1:

Copy the file from master1:

[root@master1 ~]# scp /etc/kubernetes/admin.conf node1:/etc/kubernetes/
admin.conf  

Run the following on node1:

[root@node1 ~]#  mkdir -p $HOME/.kube
[root@node1 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node1 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

Cluster administration from node1 now works:

[root@node1 ~]# kubectl get no,po -A
NAME           STATUS   ROLES                  AGE   VERSION
node/master1   Ready    control-plane,master   22h   v1.22.2
node/master2   Ready    control-plane,master   22h   v1.22.2
node/master3   Ready    control-plane,master   21h   v1.22.2
node/node1     Ready    <none>                 21h   v1.22.2
NAMESPACE     NAME                                  READY   STATUS            RESTARTS       AGE
default       pod/dns-test                          0/1     Completed         0              6h11m
default       pod/nginx-7fb9867-jfg2r               1/1     Running           2              6h6m
kube-system   pod/coredns-7f6cbbb7b8-7c85v          1/1     Running           4              20h
kube-system   pod/coredns-7f6cbbb7b8-h9wtb          1/1     Running           4              20h
kube-system   pod/kube-apiserver-master1            1/1     Running           7              22h
kube-system   pod/kube-apiserver-master2            1/1     Running           6              22h
kube-system   pod/kube-apiserver-master3            0/1     Running           5              21h
kube-system   pod/kube-controller-manager-master1   1/1     Running           0              71m
kube-system   pod/kube-controller-manager-master2   1/1     Running           0              67m
kube-system   pod/kube-controller-manager-master3   0/1     Running           0              3s
kube-system   pod/kube-flannel-ds-7dmgh             1/1     Running           6              21h
kube-system   pod/kube-flannel-ds-b99h7             1/1     Running           19 (70m ago)   21h
kube-system   pod/kube-flannel-ds-hmzj7             1/1     Running           5              21h
kube-system   pod/kube-flannel-ds-vvld8             0/1     PodInitializing   4 (140m ago)   21h
kube-system   pod/kube-proxy-nkgdf                  1/1     Running           5              21h
kube-system   pod/kube-proxy-rb9zk                  1/1     Running           5              21h
kube-system   pod/kube-proxy-rvbb7                  1/1     Running           7              22h
kube-system   pod/kube-proxy-xmrp5                  1/1     Running           6              22h
kube-system   pod/kube-scheduler-master1            1/1     Running           0              71m
kube-system   pod/kube-scheduler-master2            1/1     Running           0              67m
kube-system   pod/kube-scheduler-master3            0/1     Running           0              3s

Appendix 2: certificate lifetimes in a kubeadm deployment

[root@master1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 27, 2023 11:19 UTC   364d                                    no      
apiserver                  Oct 27, 2023 11:19 UTC   364d            ca                      no      
apiserver-kubelet-client   Oct 27, 2023 11:19 UTC   364d            ca                      no      
controller-manager.conf    Oct 27, 2023 11:19 UTC   364d                                    no      
front-proxy-client         Oct 27, 2023 11:19 UTC   364d            front-proxy-ca          no      
scheduler.conf             Oct 27, 2023 11:19 UTC   364d                                    no      
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Oct 24, 2032 11:19 UTC   9y              no      
front-proxy-ca          Oct 24, 2032 11:19 UTC   9y              no      

How to change these certificate lifetimes will be covered in a separate post; a renewal sketch is given below in the meantime.
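Note that kubeadm can renew the one-year leaf certificates in place. A sketch (run on each control-plane node; the control-plane static Pods must be restarted afterwards, for example by moving their manifests out of /etc/kubernetes/manifests and back, so the new certificates are picked up):

kubeadm certs renew all         # renews apiserver, apiserver-kubelet-client, front-proxy-client and the *.conf certificates
kubeadm certs check-expiration  # confirm the new expiry dates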

Appendix 3: a kubeadm deployment bug

The component status check of a kubeadm-deployed cluster shows an unhealthy component:

[root@node1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                            
etcd-1               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}                                                                             
etcd-0               Healthy     {"health":"true"}   

Do the following once on each of the three master nodes:

[root@master1 ~]# cd /etc/kubernetes/manifests/
[root@master1 manifests]# ll
total 12
-rw------- 1 root root 3452 Oct 27 19:19 kube-apiserver.yaml
-rw------- 1 root root 2893 Oct 27 19:19 kube-controller-manager.yaml
-rw------- 1 root root 1479 Oct 27 19:19 kube-scheduler.yaml

Edit kube-controller-manager.yaml and kube-scheduler.yaml and delete the --port=0 line. No service needs to be restarted; the cluster status then shows as healthy, as in the output further below. A one-liner for the edit is sketched next.
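A quick way to do this on each master (a sketch: it simply deletes the line containing --port=0 from both manifests; the kubelet detects the change and recreates the static Pods on its own):

sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/manifests/kube-scheduler.yaml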

Note that this command is only a quick way to check the cluster's status and see whether the components are healthy; it also shows which components exist, and here you can see that this cluster includes a three-member etcd cluster.

[root@node1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   

