Building a Kubernetes v1.31.1 Cluster with Calico Networking


I. Introduction

Deploy a Kubernetes cluster on Alibaba Cloud servers: (1) three servers, one master node and two worker nodes, using Calico for networking; (2) deploy an nginx service to the cluster and verify that the service is running.


II. Prerequisites

1. Prepare three servers

172.21.173.7 k8s-master01             CPU: 4 cores, Memory: 8 GB

172.21.173.8 k8s-work01               CPU: 2 cores, Memory: 2 GB

172.21.173.9 k8s-work02               CPU: 2 cores, Memory: 2 GB

2. Software versions

Ubuntu kernel: Linux 6.8.0-40-generic

Kubernetes: v1.31.1

Docker: v27.3.1

containerd: v1.7.22

kubeadm: v1.31.1

kubectl: v1.31.1

kubelet: v1.31.1

Calico: v3.28.2


3. Official documentation for Kubernetes, containerd, Docker, and Calico

https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/

https://github.com/containerd/containerd/blob/main/docs/getting-started.md

https://docs.docker.com/engine/install/ubuntu/

https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart


III. Install containerd (on all servers)

1. Remove old or conflicting packages

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

2. Add Docker's apt repository

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

3. Install Docker (including containerd.io)

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

systemctl enable containerd

4. Configure a Docker registry mirror

Edit /etc/docker/daemon.json:
{
  "registry-mirrors" : ["https://o7klhlze.mirror.aliyuncs.com"]
}
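
Restart the Docker daemon so the mirror configuration takes effect (a standard step, not shown in the original article):

sudo systemctl restart docker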


5. Verify that Docker is installed correctly

sudo docker run hello-world


IV. Install kubectl, kubeadm, and kubelet (on all servers)

1. Disable swap

sudo swapoff -a
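
swapoff -a only disables swap until the next reboot. To keep it disabled permanently, a common approach (an assumption here, not shown in the original) is to comment out the swap entries in /etc/fstab:

sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap entries; .bak keeps a backup of the original file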

2. Update the apt package index and install the packages needed to use the Kubernetes apt repository

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip installing it
sudo apt-get install -y apt-transport-https ca-certificates curl gpg


3. Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories, so you can ignore the version in the URL.

# If the /etc/apt/keyrings directory does not exist, create it before running the curl command:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

4. Add the Kubernetes apt repository. Note that this repository only contains packages for Kubernetes 1.31; for other minor versions, change the minor version in the URL to match the version you plan to install (and check that you are reading the documentation for that version).

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list.
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

5. Update the apt package index, then install kubelet, kubeadm, and kubectl

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
systemctl enable kubelet

6. Check service status

systemctl status kubelet
systemctl status containerd
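
Optionally, confirm that the expected versions were installed (an extra check, not in the original article):

kubeadm version
kubectl version --client
kubelet --version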


7. Configure kubeadm-config.yaml on the master node

Generate the default configuration and then edit it:

kubeadm config print init-defaults > kubeadm-config.yaml

Compared with the defaults, set advertiseAddress to the master's IP, name to the master's hostname, imageRepository to a reachable mirror, kubernetesVersion to 1.31.1, add podSubnet under networking, and append a KubeletConfiguration section with cgroupDriver: systemd. The edited file:

apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.21.173.7
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: k8s-master01
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.31.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
proxy: {}
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
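
Optionally, the control-plane images can be pre-pulled through the configured mirror before initialization; this step is not part of the original article:

kubeadm config images pull --config kubeadm-config.yaml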

8. Configure kubeadm-config.yaml on the worker nodes

kubeadm config print join-defaults > kubeadm-config.yaml

In the generated file, apiServerEndpoint points to kube-apiserver:6443 (resolved through the /etc/hosts entry added in step 9), the token must match the one used on the master, and name must be set to each worker's hostname. Contents of kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta4
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: kube-apiserver:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: k8s-work02  # change to match this worker node's hostname
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s

9. Configure hostnames

Add the following entries to /etc/hosts on every node:

172.21.173.7 k8s-master01

172.21.173.8 k8s-work01

172.21.173.9 k8s-work02

172.21.173.7 kube-apiserver

Set the hostname on each node to match, for example on the second worker:

hostnamectl set-hostname k8s-work02


10. Configure containerd

containerd config default > /etc/containerd/config.toml

Edit the generated file. Change:

sandbox_image = "registry.k8s.io/pause:3.8"

to:

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"

 

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]

           BinaryName = ""

           CriuImagePath = ""

           CriuPath = ""

           CriuWorkPath = ""

           IoGid = 0

           IoUid = 0

           NoNewKeyring = false

           NoPivotRoot = false

           Root = ""

           ShimCgroup = ""

           SystemdCgroup = false 修改为true


Restart the service:

systemctl restart containerd
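
The container-runtime prerequisites in the Kubernetes documentation linked above also call for loading the overlay and br_netfilter kernel modules and enabling IPv4 forwarding. These steps are not shown in the original article; the following sketch follows that documentation and would be run on all servers:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl parameters required by Kubernetes networking; persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system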

11. Initialize the control plane (master node only)

kubeadm init --config kubeadm-config.yaml --upload-certs --v=3 --ignore-preflight-errors=all
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.21.173.7:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:135c555a101094be165b82b891fa14607e339725843cfb2d36c428266566d409 
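
If the bootstrap token expires (the TTL in the configuration above is 24h), a fresh join command can be printed on the master; this is an extra tip, not part of the original steps:

kubeadm token create --print-join-command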


12. Configure .kube/config on the master node

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
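
At this point kubectl should be able to reach the API server; note that nodes usually report NotReady until a Pod network add-on (Calico, installed in section V) is running. A quick sanity check:

kubectl cluster-info
kubectl get nodes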


13. Join the worker nodes to the cluster (worker nodes only)

kubeadm join --config kubeadm-config.yaml --ignore-preflight-errors=all
[preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
        [WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.00277492s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

14. Verify that the worker nodes have joined

kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   20h   v1.31.1
k8s-work01     Ready    <none>          23h   v1.31.1
k8s-work02     Ready    <none>          45m   v1.31.1
kubectl get po -n kube-system
NAME                                   READY   STATUS    RESTARTS       AGE
coredns-855c4dd65d-rtdqw               1/1     Running   0              39h
coredns-855c4dd65d-tbbxl               1/1     Running   0              39h
etcd-k8s-master01                      1/1     Running   9 (39h ago)    34h
kube-apiserver-k8s-master01            1/1     Running   50 (39h ago)   34h
kube-controller-manager-k8s-master01   1/1     Running   16 (39h ago)   34h
kube-proxy-5hzvx                       1/1     Running   0              39h
kube-proxy-7dh52                       1/1     Running   0              14h
kube-proxy-7vt2k                       1/1     Running   1 (13h ago)    37h
kube-scheduler-k8s-master01            1/1     Running   27 (39h ago)   34h



V. Install Calico (master node only)

1. Download the Tigera operator and custom-resources manifests

wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml

wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/custom-resources.yaml

2. Configure the Pod CIDR

Edit custom-resources.yaml:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  registry: m.daocloud.io   # use this mirror; the default docker.io may not be reachable from these servers
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.244.0.0/16    # must match the podSubnet (10.244.0.0/16) in the master's kubeadm config
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}


3. Install Calico

kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml


4. Check Calico status

kubectl get po -n calico-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-6549cbf949-9gnhh   1/1     Running   0             22h
calico-node-g8df6                          1/1     Running   1 (13h ago)   22h
calico-node-jwdqz                          1/1     Running   0             22h
calico-node-wrkf6                          1/1     Running   0             13h
calico-typha-67659f7458-6nlmt              1/1     Running   1 (13h ago)   13h
calico-typha-67659f7458-8zgw7              1/1     Running   0             22h
csi-node-driver-6sq7m                      2/2     Running   0             13h
csi-node-driver-l57xv                      2/2     Running   2 (13h ago)   22h
csi-node-driver-r5tr8                      2/2     Running   0             22h



VI. Deploy an nginx service to the cluster for verification
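
The original article does not include the contents of nginx.yaml. A minimal Pod manifest consistent with the verification output below might look like this; the image tag and label are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx       # assumed label, not specified in the original
spec:
  containers:
  - name: nginx
    image: nginx:latest   # assumed image tag
    ports:
    - containerPort: 80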

kubectl apply -f nginx.yaml 

kubectl get po 

NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          15m
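
As an optional extra check (not in the original), verify that nginx actually serves traffic by forwarding a local port to the pod and requesting the default page:

kubectl port-forward pod/nginx 8080:80 &   # runs in the background
curl -I http://127.0.0.1:8080
kill %1   # stop the background port-forward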

