Installing a Kubernetes 1.17.3 Highly Available Cluster with Multiple Master Nodes (Part 2)

Summary: Installing a Kubernetes 1.17.3 highly available cluster with multiple master nodes.

3.4 Initialize the k8s cluster on the master1 node. Run the following on master1:

echo """
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.3
controlPlaneEndpoint:"192.168.124.199:6443"
apiServer:
 certSANs:
  -192.168.124.16
  -192.168.124.26
  -192.168.124.36
  -192.168.124.56
  - 192.168.124.199
networking:
 podSubnet: 10.244.0.0/16
---
apiVersion:kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
""" > kubeadm-config.yaml

kubeadm init --config kubeadm-config.yaml

 

Output like the following indicates that initialization succeeded:

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.124.199:6443 --token y6ryw4.69atvhqbxins51vs \
    --discovery-token-ca-cert-hash sha256:797acd093254f02e48e6528862de131cea18801d5b4f28a651592d2ad854c2b6

Note: remember this kubeadm join ... command. Below we will add master2, master3, and node1 to the cluster by running it on those nodes. The token and hash are different every time you initialize, so record the output of your own run; it will be used below.
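If you did not save the output, there is no need to re-initialize: kubeadm can print a fresh join command (with a new token) at any time. Run this on master1:

kubeadm token create --print-join-command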

 

3.5 Run the following on the master1 node so that you have permission to operate k8s resources:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run kubectl get nodes on the master1 node.

The output looks like this:

NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   2m13s   v1.17.3

     

kubectl get pods -n kube-system

You can see that coredns is also in the Pending state:

coredns-6955765f44-cj8cv   0/1   Pending   0   3m16s
coredns-6955765f44-lxt6f   0/1   Pending   0   3m16s

The STATUS above is NotReady and coredns is Pending because no network plugin has been installed yet; we need to install Calico or Flannel. Next we install the Calico network plugin on the master1 node.

Run the following on master1:

kubectl apply -f calico.yaml

         

cat calico.yaml

# Calico Version v3.5.3
# https://docs.projectcalico.org/v3.5/releases#v3.5.3
# This manifest includes the following component versions:
#   calico/node:v3.5.3
#   calico/cni:v3.5.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the Calico backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
            "type": "host-local",
            "subnet": "usePodCidr"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      initContainers:
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.5.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      containers:
        # Runs the calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v3.5.3
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            - name: IP_AUTODETECTION_METHOD
              value: "can-reach=192.168.124.56"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
---
# Create all the CustomResourceDefinitions needed for
# Calico policy and networking mode.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
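Before checking node status, you can optionally wait for the calico-node DaemonSet defined above to finish rolling out:

kubectl rollout status daemonset/calico-node -n kube-system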

         

After installing calico, run kubectl get nodes on the master1 node.

The output below shows STATUS is Ready, and kubectl get pods -n kube-system shows coredns in the Running state, which means the Calico installation on the master1 node is complete.

NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   2m13s   v1.17.3

           

3.6 Copy the certificates from the master1 node to master2 and master3

(1) Create the certificate directories on master2 and master3:

cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/

(2) Copy the certificates from master1 to master2 and master3. Run the following on master1:

scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
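The sixteen scp commands above can also be written as a loop; a minimal sketch, assuming master1 can ssh to master2 and master3 by hostname:

for host in master2 master3; do
  # Shared cluster CA, service-account keys, and front-proxy CA
  for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
    scp /etc/kubernetes/pki/$f $host:/etc/kubernetes/pki/
  done
  # etcd CA
  scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/
done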

After the certificates are copied, run the following on master2 and master3 to join them to the cluster:

kubeadm join 192.168.124.199:6443 --token y6ryw4.69atvhqbxins51vs \
    --discovery-token-ca-cert-hash sha256:797acd093254f02e48e6528862de131cea18801d5b4f28a651592d2ad854c2b6 \
    --control-plane

--control-plane: this flag indicates that the joining node is a master node.
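As an alternative to copying certificates by hand with scp, kubeadm can distribute them through the cluster itself: upload them from master1 and pass the resulting key to join. A sketch (the printed certificate key will differ in your environment):

# On master1: upload the control-plane certificates and print a certificate key
kubeadm init phase upload-certs --upload-certs
# On master2/master3: join with that key instead of pre-copied certificates
kubeadm join 192.168.124.199:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key-printed-above>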

             

Then run the following on master2 and master3:

             

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes now shows:





NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   39m     v1.17.3
master2   Ready    master   5m9s    v1.17.3
master3   Ready    master   2m33s   v1.17.3

               

3.7 Join the node1 node to the k8s cluster. Run the following on the node1 node:


kubeadm join 192.168.124.199:6443 --token y6ryw4.69atvhqbxins51vs \
    --discovery-token-ca-cert-hash sha256:797acd093254f02e48e6528862de131cea18801d5b4f28a651592d2ad854c2b6

                 

Note: the join command above is the one generated during initialization in section 3.4.


3.8 Check the cluster node status from the master1 node

kubectl get nodes shows the following:

NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   3m48s   v1.17.3
master2   Ready    master   3m48s   v1.17.3
master3   Ready    master   3m48s   v1.17.3
node1     Ready    <none>   3m48s   v1.17.3

This shows that node1 has also joined the k8s cluster. With that, the multi-master highly available k8s cluster is complete.
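Optionally, you can give node1 a worker role label so kubectl get nodes shows something friendlier than <none> in the ROLES column; this is purely cosmetic:

kubectl label node node1 node-role.kubernetes.io/worker=worker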

                   

4. Install the Kubernetes dashboard (the Kubernetes web UI)

                   

Run the following on the k8s-master node:

kubectl apply -f kubernetes-dashboard.yaml

                     

cat kubernetes-dashboard.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    # Allows editing resource and makes sure it is created first.
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-watcher
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - 'get'
  - 'list'
- nonResourceURLs:
  - '*'
  verbs:
  - 'get'
  - 'list'
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
---
# ------------------- Dashboard Deployment ------------------- #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 50m
            memory: 100Mi
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - name: tmp-volume
          mountPath: /tmp
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard-admin
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.multi.io
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /

Check whether the dashboard was installed successfully:

                     

kubectl get pods -n kube-system

                       

Output like the following means the dashboard was installed successfully:

                       

kubernetes-dashboard-7898456f45-7h2q6   1/1     Running   0          61s

                         

Check the dashboard's front-end service:

                         

kubectl get svc -n kube-system

The output is:


kubernetes-dashboard   NodePort   10.106.68.182   <none>   443:30982/TCP   12m

                           

The service type above is NodePort, so the Kubernetes dashboard can be reached at port 30982 on a k8s master node IP. In my environment the address is:

https://192.168.124.16:30982/

The dashboard login page should now appear.
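Dashboard v1.10.1 asks for a token on its login page. Since the manifest above binds the kubernetes-dashboard-admin ServiceAccount to cluster-admin, one way to fetch its token is the sketch below (it assumes the auto-created token secret name starts with kubernetes-dashboard-admin):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}')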


5. Install the metrics monitoring plugin

Run the following on the k8s-master node:

kubectl apply -f metrics.yaml

cat metrics.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.1
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.1
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.1
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.4
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --cpu=300m
          - --extra-cpu=20m
          - --memory=200Mi
          - --extra-memory=10Mi
          - --threshold=5
          - --deployment=metrics-server
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100

After all of the components above are installed, run kubectl get pods -n kube-system -o wide to check them. If the STATUS column shows Running, the components are working normally.
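Once the metrics-server pod is Running, you can also confirm that the metrics API is actually serving data (it may take a minute or two after startup before numbers appear):

kubectl top nodes
kubectl top pods -n kube-system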

