Installing a Highly Available k8s Master Cluster



Host            Role         Components
172.18.6.101    K8S Master   kubelet, kubectl, cni, etcd
172.18.6.102    K8S Master   kubelet, kubectl, cni, etcd
172.18.6.103    K8S Master   kubelet, kubectl, cni, etcd
172.18.6.104    K8S Worker   kubelet, cni
172.18.6.105    K8S Worker   kubelet, cni
172.18.6.106    K8S Worker   kubelet, cni

etcd installation

To keep the k8s Master highly available, running the etcd cluster in containers is not recommended: a container can die at any moment, while each etcd node's service is stateful. etcd is therefore deployed here from binaries. In production, run an etcd cluster of at least 3 nodes. For the detailed installation steps, see "setting up an etcd cluster as local services".
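For reference, a binary-plus-systemd deployment of one etcd member might look like the sketch below. The node name, IPs, ports, and paths are illustrative assumptions; repeat the unit on each of the three nodes with that node's own name and IP.

```ini
# /etc/systemd/system/etcd.service (sketch for member etcd01 on 172.18.6.101)
[Unit]
Description=etcd key-value store
After=network.target

[Service]
ExecStart=/usr/bin/etcd \
  --name=etcd01 \
  --data-dir=/var/lib/etcd \
  --listen-peer-urls=http://172.18.6.101:2380 \
  --listen-client-urls=http://172.18.6.101:2379,http://127.0.0.1:2379 \
  --initial-advertise-peer-urls=http://172.18.6.101:2380 \
  --advertise-client-urls=http://172.18.6.101:2379 \
  --initial-cluster=etcd01=http://172.18.6.101:2380,etcd02=http://172.18.6.102:2380,etcd03=http://172.18.6.103:2380 \
  --initial-cluster-state=new
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

The client URL on 127.0.0.1:2379 matches the `--etcd-servers=http://127.0.0.1:2379` flag used by kube-apiserver later in this article.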

Required components and certificate installation

CA certificate

Following "certificate generation in Kubernetes", create the CA certificate, then place ca-key.pem and ca.pem under /etc/kubernetes/ssl on every node in the cluster.
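A minimal sketch of that CA step with plain openssl (the CN "kube-ca" and the 10-year validity are assumptions; the referenced guide may use different parameters):

```shell
# Generate the CA private key and a self-signed CA certificate.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 \
  -out ca.pem -subj "/CN=kube-ca"
# ca-key.pem and ca.pem are then copied to /etc/kubernetes/ssl/ on every node.
```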

Worker certificate creation

Following the worker-certificate section of "certificate generation in Kubernetes", generate a certificate for each worker node, and place the certificate for each IP under /etc/kubernetes/ssl on the corresponding worker node.
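A self-contained sketch of signing one worker certificate. The SAN entry, CN, and the inline throwaway CA are illustrative assumptions so the sketch runs standalone; in practice, sign with the cluster CA created above and use the worker's real IP:

```shell
# Throwaway CA so this sketch is self-contained; use the real cluster CA in practice.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca"

# SAN config for one worker (the IP is an example).
cat > worker-openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 172.18.6.104
EOF

# Key, CSR, and CA-signed certificate for the worker.
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr \
  -subj "/CN=worker" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
```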

kubelet.conf installation

Create /etc/kubernetes/kubelet.conf with the following content:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://[load balancer IP]:[apiserver port]
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

cni plugin installation

Download the required cni binaries from the containernetworking cni project and place them under /opt/cni/bin on every node in the cluster.

An rpm package for one-step installation will be provided later.

kubelet service deployment

Note: an rpm package for one-step installation will be provided later.

Place the kubelet binary of the matching version under /usr/bin on every node in the cluster.

Create /etc/systemd/system/kubelet.service with the following content:

# /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.100.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/shenshouer/pause-amd64:3.0"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

Create the following directory layout:

/etc/kubernetes/
|-- kubelet.conf
|-- manifests
`-- ssl
    |-- ca-key.pem
    |-- ca.pem
    |-- worker.csr
    |-- worker-key.pem
    |-- worker-openssl.cnf
    `-- worker.pem
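The directory part of the layout above can be created in one step. The sketch below uses a scratch root so it is side-effect free; on a real node, set ROOT="" to create the paths directly under /:

```shell
ROOT=$(mktemp -d)   # use ROOT="" on a real node
mkdir -p "$ROOT/etc/kubernetes/manifests" "$ROOT/etc/kubernetes/ssl"
```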

Master component installation

Configuring the load balancer

Configure LVS with the VIP 172.18.6.254 pointing at the backends 172.18.6.101, 172.18.6.102, and 172.18.6.103. For a simpler setup, nginx can be used instead as a layer-4 (TCP) load balancer.
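If the nginx option is chosen, a sketch of the stream configuration might look like the following. Port 6443 matches the apiserver's --secure-port below; the file path and upstream name are assumptions:

```nginx
# /etc/nginx/nginx.conf (stream section only; sketch)
stream {
    upstream kube_apiserver {
        server 172.18.6.101:6443;
        server 172.18.6.102:6443;
        server 172.18.6.103:6443;
    }
    server {
        listen 6443;              # clients reach the VIP 172.18.6.254 on this port
        proxy_pass kube_apiserver;
    }
}
```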

Certificate generation

openssl.cnf is as follows:

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
# load-balancer domain that may be used
DNS.5 = test.example.com.cn
IP.1 = 10.96.0.1
# the three Master IPs
IP.2 = 172.18.6.101
IP.3 = 172.18.6.102
IP.4 = 172.18.6.103
# VIP of the LVS load balancer
IP.5 = 172.18.6.254

For the detailed steps, see the Master-certificate and Worker-certificate sections of "certificate generation in Kubernetes". Place the generated certificates on the corresponding paths of all three Master nodes.
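A self-contained sketch of signing the Master (apiserver) certificate with the openssl.cnf above. The inline throwaway CA and the CN are assumptions so the sketch runs standalone; in practice, sign with the real cluster CA:

```shell
# Throwaway CA so this sketch is self-contained; use the real cluster CA in practice.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca"

# The SAN config from the article (DNS names plus the three masters and the VIP).
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = test.example.com.cn
IP.1 = 10.96.0.1
IP.2 = 172.18.6.101
IP.3 = 172.18.6.102
IP.4 = 172.18.6.103
IP.5 = 172.18.6.254
EOF

# Key, CSR, and CA-signed certificate for the apiserver.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
```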

Other component installation

Place the following three files under /etc/kubernetes/manifests on each Master node.

kube-apiserver.manifest:

# /etc/kubernetes/manifests/kube-apiserver.manifest
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-apiserver",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-apiserver",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-apiserver",
        "image": "registry.aliyuncs.com/shenshouer/kube-apiserver:v1.5.2",
        "command": [
          "kube-apiserver",
          "--insecure-bind-address=127.0.0.1",
          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
          "--service-cluster-ip-range=10.96.0.0/12",
          "--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--client-ca-file=/etc/kubernetes/ssl/ca.pem",
          "--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem",
          "--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--secure-port=6443",
          "--allow-privileged",
          "--advertise-address=[current Master node IP]",
          "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
          "--anonymous-auth=false",
          "--etcd-servers=http://127.0.0.1:2379"
        ],
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 8080,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}

kube-controller-manager.manifest


{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-controller-manager",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-controller-manager",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "registry.aliyuncs.com/shenshouer/kube-controller-manager:v1.5.2",
        "command": [
          "kube-controller-manager",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080",
          "--cluster-name=kubernetes",
          "--root-ca-file=/etc/kubernetes/ssl/ca.pem",
          "--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem",
          "--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem",
          "--insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap",
          "--allocate-node-cidrs=true",
          "--cluster-cidr=10.244.0.0/16"
        ],
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10252,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}



kube-scheduler.manifest


{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-scheduler",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-scheduler",
      "tier": "control-plane"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "kube-scheduler",
        "image": "registry.aliyuncs.com/shenshouer/kube-scheduler:v1.5.2",
        "command": [
          "kube-scheduler",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080"
        ],
        "resources": {
          "requests": {
            "cpu": "100m"
          }
        },
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10251,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}

kube-proxy installation

On any master, run kubectl create -f kube-proxy-ds.yaml, where kube-proxy-ds.yaml is as follows:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    component: kube-proxy
    k8s-app: kube-proxy
    kubernetes.io/cluster-service: "true"
    name: kube-proxy
    tier: node
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: kube-proxy
      k8s-app: kube-proxy
      kubernetes.io/cluster-service: "true"
      name: kube-proxy
      tier: node
  template:
    metadata:
      labels:
        component: kube-proxy
        k8s-app: kube-proxy
        kubernetes.io/cluster-service: "true"
        name: kube-proxy
        tier: node
    spec:
      containers:
      - command:
        - kube-proxy
        - --kubeconfig=/run/kubeconfig
        - --cluster-cidr=10.244.0.0/16
        image: registry.aliyuncs.com/shenshouer/kube-proxy:v1.5.2
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/run/dbus
          name: dbus
        - mountPath: /run/kubeconfig
          name: kubeconfig
        - mountPath: /etc/kubernetes/ssl
          name: ssl
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /etc/kubernetes/kubelet.conf
        name: kubeconfig
      - hostPath:
          path: /var/run/dbus
        name: dbus
      - hostPath:
          path: /etc/kubernetes/ssl
        name: ssl

Network component installation

On any master, run kubectl apply -f kube-flannel.yaml, where kube-flannel.yaml is as follows. Note: if running in VMs started by Vagrant, change the flanneld startup arguments so that --iface points at the actual communication interface.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: kube-system
  name: kube-flannel-cfg
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "ipMasq": true,
        "bridge": "cbr0",
        "hairpinMode": true,
        "forceAddress": true,
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: kube-flannel-ds
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth0" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

DNS deployment

On any master, run kubectl create -f skydns.yaml, where skydns.yaml is as follows:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: registry.aliyuncs.com/shenshouer/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        - --federations=myfederation=federation.test
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: registry.aliyuncs.com/shenshouer/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: registry.aliyuncs.com/shenshouer/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: registry.aliyuncs.com/shenshouer/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
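One consistency check worth making before applying this: kube-dns's clusterIP (10.100.0.10, which must equal kubelet's --cluster-dns) has to fall inside the apiserver's --service-cluster-ip-range (10.96.0.0/12). A small shell sketch of that check:

```shell
# Convert a dotted-quad IP to a 32-bit integer.
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

base=$(ip_to_int 10.96.0.0)      # network address of --service-cluster-ip-range
dns=$(ip_to_int 10.100.0.10)     # clusterIP of kube-dns
mask=$(( (0xFFFFFFFF << (32 - 12)) & 0xFFFFFFFF ))   # /12 netmask

if [ $(( dns & mask )) -eq $(( base & mask )) ]; then
  echo "10.100.0.10 is inside 10.96.0.0/12"
else
  echo "clusterIP is OUTSIDE the service CIDR" >&2
fi
```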

Node installation

Docker installation

Create the /etc/kubernetes/ directory:


|-- kubelet.conf
|-- manifests
`-- ssl
    |-- ca-key.pem
    |-- ca.pem
    |-- ca.srl
    |-- worker.csr
    |-- worker-key.pem
    |-- worker-openssl.cnf
    `-- worker.pem

Create the /etc/kubernetes/kubelet.conf configuration; see the kubelet.conf section above.

Create /etc/kubernetes/ssl; for certificate creation, see the worker certificate section.

Create /etc/kubernetes/manifests.

Create /opt/cni/bin; for CNI installation, see the cni installation steps.

Install kubelet; see the kubelet service deployment section.

systemctl enable kubelet && systemctl restart kubelet && journalctl -fu kubelet

This article is reposted from CSDN: "Installing a Highly Available k8s Master Cluster".
