How to Deploy a Highly Available K8s Cluster in Production with kubeadm (Part 1)

Installing Kubernetes with kubeadm

As you probably know, Kubernetes ships three core tools: kubelet, kubeadm, and kubectl.

Of these:

kubelet is the node agent; it drives the underlying container runtime to manage the containers on each host.

kubectl is the command-line client; we type commands into it to manage Kubernetes resources through the API server.

kubeadm is the bootstrap tool; we use it to set up and manage cluster nodes.

In this article we walk through deploying a highly available Kubernetes cluster with kubeadm, step by step.

1. Basic environment configuration

High-availability architecture:

We use five machines:

10.10.0.220 master01

10.10.0.221 master02

10.10.0.223 master03

10.10.0.224 node01

10.10.0.225 node02

Here etcd is co-located with the masters. In production this is fine if the master machines are powerful enough, but at larger cluster scale it is better to run etcd separately. A production cluster should run at least three master nodes.

etcd talks only to the kube-apiserver; it does not interact with any other component.

The VIP consumes no extra system resources and floats between the masters.

1. First, edit the hosts file on every machine

[root@node02 ~ ]# vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.10.0.220 master01

10.10.0.221 master02

10.10.0.223 master03

10.10.0.224 node01

10.10.0.225 node02

10.10.0.10 master-lb

2. Disable SELinux, firewalld, and swap
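The original lists this step without commands; on CentOS 7 it can be done roughly as follows (a sketch; adjust paths and tools for your distribution):

```shell
# Disable SELinux now and on reboot
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Stop and disable the firewall
systemctl disable --now firewalld

# Turn swap off now, and comment it out of fstab so it stays off after reboot
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
```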

3. Time synchronization

Install ntpdate:

[root@master01 ~ ]# yum install -y ntpdate

[root@master01 ~ ]# systemctl enable ntpdate.service --now
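Enabling the service alone does not force an immediate sync; a minimal sketch of syncing the clock on every node (the choice of Aliyun's public NTP server is an assumption, swap in your own):

```shell
# One-off sync against a reachable NTP server
ntpdate ntp.aliyun.com

# Keep the clock in sync with a periodic cron entry
echo '*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1' >> /var/spool/cron/root
```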

4. Install required tools

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Configure limits on all nodes:

ulimit -SHn 65535

Append the following to the end of /etc/security/limits.conf:

* soft nproc 65535

* hard nproc 65535

* soft nofile 65535

* hard nofile 65535

5. Configure passwordless SSH

Master01 needs key-based SSH access to the other nodes: the configuration files and certificates generated during installation all live on Master01, and the cluster is administered from Master01 as well.

[root@master01 ~ ]# ssh-keygen -t rsa

[root@master01 ~ ]# for i in master02 master03 node01 node02; do ssh-copy-id -i .ssh/id_rsa.pub $i;done

Install ipvsadm on all nodes (kube-proxy will use the IPVS mode for traffic scheduling):

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Load the IPVS modules on all nodes. On kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; on kernels below 4.19, use nf_conntrack_ipv4:

modprobe ip_vs

modprobe ip_vs_rr

modprobe ip_vs_wrr

modprobe ip_vs_sh

modprobe nf_conntrack

[root@master01 ~ ]# vim /etc/modules-load.d/ipvs.conf

ip_vs

ip_vs_lc

ip_vs_wlc

ip_vs_rr

ip_vs_wrr

ip_vs_lblc

ip_vs_lblcr

ip_vs_dh

ip_vs_sh

ip_vs_fo

ip_vs_nq

ip_vs_sed

ip_vs_ftp

nf_conntrack

ip_tables

ip_set

xt_set

ipt_set

ipt_rpfilter

ipt_REJECT

ipip

Then run systemctl enable --now systemd-modules-load.service so the modules load at boot.

Kernel parameters: the br_netfilter module passes bridged traffic through the iptables chains, and forwarding must be enabled for it:

[root@master01 ~]# modprobe br_netfilter

6. Configure kernel parameters

Enable the kernel parameters Kubernetes needs on all nodes:

cat <<EOF > /etc/sysctl.d/k8s.conf

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

fs.may_detach_mounts = 1

net.ipv4.conf.all.route_localnet = 1

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.netfilter.nf_conntrack_max=2310720


net.ipv4.tcp_keepalive_time = 600

net.ipv4.tcp_keepalive_probes = 3

net.ipv4.tcp_keepalive_intvl = 15

net.ipv4.tcp_max_tw_buckets = 36000

net.ipv4.tcp_tw_reuse = 1

net.ipv4.tcp_max_orphans = 327680

net.ipv4.tcp_orphan_retries = 3

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 16384

net.ipv4.ip_conntrack_max = 65536

net.ipv4.tcp_timestamps = 0

net.core.somaxconn = 16384

EOF

sysctl --system

After setting the kernel parameters on all nodes, reboot and verify that the modules are still loaded after the restart:

reboot

[root@master01 ~ ]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack

ip_vs_ftp 16384 0

nf_nat 32768 1 ip_vs_ftp

ip_vs_sed 16384 0

ip_vs_nq 16384 0

ip_vs_fo 16384 0

ip_vs_sh 16384 0

ip_vs_dh 16384 0

ip_vs_lblcr 16384 0

ip_vs_lblc 16384 0

ip_vs_wrr 16384 0

ip_vs_rr 16384 0

ip_vs_wlc 16384 0

ip_vs_lc 16384 0

ip_vs 151552 24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp

nf_conntrack 143360 2 nf_nat,ip_vs

nf_defrag_ipv6 20480 1 nf_conntrack

nf_defrag_ipv4 16384 1 nf_conntrack

libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs

2. Installing the base components

1. Install Docker and the Kubernetes components

Install docker-ce, docker-ce-cli, and the Kubernetes components from the Aliyun mirrors.

Disable the GPG checks:

[root@master01 yum.repos.d ]# vim kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
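The repo file above only covers the Kubernetes packages. Installing docker-ce from the Aliyun mirror can be sketched as follows (the repo URL is Aliyun's published docker-ce mirror; exact package versions are left to yum):

```shell
# Add the Aliyun docker-ce repository and install Docker
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y

# Start Docker now and at boot
systemctl enable --now docker
```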

2. Install the Kubernetes components

yum list kubeadm.x86_64 --showduplicates | sort -r

Install version 1.21 of kubeadm on all nodes; installing kubeadm pulls in kubelet and kubectl as well. (The newest release had initialization problems at the time of writing, hence 1.21.)

yum install kubeadm-1.21* kubelet-1.21* kubectl-1.21* -y

3. Configure a Docker registry mirror

[root@master01 ~ ]# cat /etc/docker/daemon.json

{

"registry-mirrors": ["http://abcd1234.m.daocloud.io"],

"exec-opts": ["native.cgroupdriver=systemd"]

}

The default pause image lives on gcr.io, which may be unreachable from mainland China, so point kubelet at Alibaba Cloud's pause image instead:

[root@node02 ~ ]# DOCKER_CGROUPS=$(docker info | grep Cgroup | head -n1 | awk '{print $3}')

cat > /etc/sysconfig/kubelet <<EOF

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"

EOF

4. Enable kubelet at boot

[root@master01 ~ ]# systemctl daemon-reload

[root@master01 ~ ]# systemctl enable kubelet.service --now

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

3. Installing the high-availability components

The HA components run on the three masters:

10.10.0.220 10.10.0.221 10.10.0.223

All master nodes install nginx and keepalived via yum (command below). First, the port-check script used by keepalived:

[root@jh-221 keepalived ]# cat check_port.sh
#!/bin/bash
# keepalived port-monitoring script
# Usage: reference it from keepalived.conf like this:
# vrrp_script check_port {                          # define a vrrp_script
#     script "/etc/keepalived/check_port.sh 6379"   # port to monitor
#     interval 2                                    # check interval in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cannot Be Empty!"
fi

Make it executable:

[root@master01 keepalived ]# chmod +x /etc/keepalived/check_port.sh

yum install keepalived nginx -y

Configure nginx on all master nodes (the nginx configuration is identical on every master; see the nginx documentation for details):

Add a stream block at the top level of nginx.conf (alongside, not inside, the http block):

[root@master01 ~ ]# cat /etc/nginx/nginx.conf

stream {
    upstream jinghao {
        server 10.10.0.220:6443 max_fails=3 fail_timeout=30s;
        server 10.10.0.221:6443 max_fails=3 fail_timeout=30s;
        server 10.10.0.223:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass jinghao;
    }
}

Configure keepalived on all master nodes. The configuration differs per node, so take care to keep them straight. Note that public clouds generally do not support keepalived.

Master01's configuration:

[root@master01 keepalived ]# cat keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id 10.10.0.220
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 251   # must not collide with any other keepalived instance on this network
    priority 100
    advert_int 1
    mcast_src_ip 10.10.0.220
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.10.0.100
    }
}

Master02's configuration:

[root@master02 keepalived ]# cat keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id 10.10.0.221
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 251
    priority 90
    advert_int 1
    mcast_src_ip 10.10.0.221
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.10.0.100
    }
}

Master03's configuration:

[root@master03 ~ ]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id 10.10.0.223
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 251
    priority 90
    advert_int 1
    mcast_src_ip 10.10.0.223
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.10.0.100
    }
}

[root@master03 keepalived ]# ping 10.10.0.100

PING 10.10.0.100 (10.10.0.100) 56(84) bytes of data.

64 bytes from 10.10.0.100: icmp_seq=1 ttl=64 time=7.31 ms

64 bytes from 10.10.0.100: icmp_seq=2 ttl=64 time=0.546 ms

^C

--- 10.10.0.100 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1002ms

rtt min/avg/max/mdev = 0.546/3.930/7.315/3.385 ms

4. Cluster initialization

Perform the following steps on master01 only.

Create the kubeadm-init.yaml configuration file on Master01 as follows.

Dump kubeadm's default configuration and modify it:

kubeadm config print init-defaults > kubeadm-init.yaml

[root@master01 ~ ]# cat kubeadm-init.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.0.220   # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.10.0.10:7443   # the VIP and load-balancer port
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # domestic mirror registry
kind: ClusterConfiguration
kubernetesVersion: 1.24.2   # target version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12   # pod address range
  serviceSubnet: 192.168.0.0/16   # service address range
scheduler: {}

imageRepository is changed to the Aliyun mirror. For the very latest releases the mirror may not have finished syncing yet; if the pull fails, try daocloud.io/daocloud instead.

The 1.21 version of the file:

[root@master01 ~ ]# cat kubeadm-init.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.0.220
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.10.0.10:7443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.21.14
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 192.168.0.0/16
scheduler: {}

Migrate the config file to the current API version:

[root@master01 ~ ]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

W0707 18:04:34.176233 15690 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!

Pull the images on master01:

[root@master01 ~ ]# kubeadm config images pull --config kubeadm-init.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0

[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1

[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0

[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.0

[root@master01 ~ ]# docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

registry.aliyuncs.com/google_containers/kube-apiserver v1.21.0 4d217480042e 15 months ago 126MB

registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 15 months ago 122MB

registry.aliyuncs.com/google_containers/kube-scheduler v1.21.0 62ad3129eca8 15 months ago 50.6MB

registry.aliyuncs.com/google_containers/kube-controller-manager v1.21.0 09708983cc37 15 months ago 120MB

registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 18 months ago 683kB

registry.aliyuncs.com/google_containers/coredns v1.8.0 296a6d5035e2 20 months ago 42.5MB

Troubleshooting a pull failure:

[root@master01 ~ ]# kubeadm config images pull --config kubeadm-init.yaml

failed to pull image "k8s.gcr.io/kube-apiserver:v1.24.0": output: E0708 09:07:59.782327 52040 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" image="k8s.gcr.io/kube-apiserver:v1.24.0"

time="2022-07-08T09:07:59+08:00" level=fatal msg="pulling image: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"

, error: exit status 1

To see the stack trace of this error execute with --v=5 or higher

Delete the containerd config file and restart containerd:

rm -f /etc/containerd/config.toml

systemctl restart containerd

Also set the image repository to the domestic mirror: imageRepository: registry.aliyuncs.com/google_containers

[root@master ~]# journalctl -xeu kubelet

May 21 21:07:57 master kubelet[15135]: E0521 21:07:57.697075 15135 kubelet.go:2419] "Error getting node" err="node

This error shows up on v1.24; switching to 1.23 or earlier also resolves it.

If initialization fails, reset and initialize again:

kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube

A successful init produces a token that other nodes use when joining, so record the token printed on success:

Generate a new token once the old one expires:

kubeadm token create --print-join-command

Master nodes additionally need a --certificate-key:

kubeadm init phase upload-certs --upload-certs

If the token has not expired, just run the join command directly.

Initialization:

Run kubeadm init to bootstrap the first master:

kubeadm init --config kubeadm-init.yaml --upload-certs

If init fails, inspect the kubelet logs:

journalctl -xeu kubelet

A common fix is pointing kubelet at a reachable pause image:

cat > /etc/sysconfig/kubelet <<EOF

KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"

EOF

The generic form of the init command is:

kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs

Output on successful initialization:

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 10.10.0.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:eac394f46b758da8502c3e25882584432f195c809d29c6038f0fcefc201c8fac \
    --control-plane --certificate-key 3d295c9bb289f67d149674dcb413bec4accc44235873a24fbd2776a8f48eaf52

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!

As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use

"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.0.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:eac394f46b758da8502c3e25882584432f195c809d29c6038f0fcefc201c8fac

Join master02 and master03 to the cluster as control-plane nodes:

kubeadm join 10.10.0.10:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:eac394f46b758da8502c3e25882584432f195c809d29c6038f0fcefc201c8fac \
    --control-plane --certificate-key 3d295c9bb289f67d149674dcb413bec4accc44235873a24fbd2776a8f48eaf52

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
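Once the remaining masters (and, with the worker join command above, the worker nodes) have joined, a quick sanity check from master01 might look like the following sketch; node names follow this article's naming, and nodes stay NotReady until a CNI plugin is installed (covered in part 2):

```shell
# All five nodes should be listed; masters carry the control-plane role
kubectl get nodes -o wide

# Control-plane pods should be Running on each master; coredns stays Pending
# until the CNI network plugin is deployed
kubectl get pods -n kube-system -o wide

# The stacked etcd cluster should show one member pod per master
kubectl get pods -n kube-system -l component=etcd
```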

    Part 2: How to Deploy a Highly Available K8s Cluster in Production with kubeadm: https://developer.aliyun.com/article/1495660

