Kubernetes Hands-On Practice 1 (1)

Summary: Kubernetes Hands-On Practice 1 (1)

Table of Contents

1. Start a Single-Node Kubernetes Cluster

1.1 Create the Cluster

1.2 Launch a Container

1.3 Enable the Dashboard

2. Getting Started with Kubeadm

2.1 Initialize the Master

2.2 Deploy the Container Network Interface (CNI)

2.3 Join the Cluster

2.4 View the Nodes

2.5 Deploy a Pod

2.6 Deploy the Dashboard

3. Launch Containers with Kubectl

3.1 Start the Cluster

3.2 Kubectl Run

3.3 Kubectl Expose

3.4 Kubectl Run and Expose

3.5 Scale Containers

4. Deploy Containers with YAML

4.1 Create a Deployment

4.2 Create a Service

4.3 Scale Up


1. Start a Single-Node Kubernetes Cluster

Minikube is already installed and configured in this environment. Check that it is installed correctly by running the minikube version command, then start the cluster by running minikube start.

1.1 Create the Cluster

$ minikube version
minikube version: v1.8.1
commit: cbda04cf6bbe65e987ae52bb393c10099ab62014
$ minikube start --wait=false
* minikube v1.8.1 on Ubuntu 18.04
* Using the none driver based on user configuration
* Running on localhost (CPUs=2, Memory=2460MB, Disk=145651MB) ...
* OS release is Ubuntu 18.04.4 LTS
* Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Launching Kubernetes ... 
* Enabling addons: default-storageclass, storage-provisioner
* Configuring local host environment ...
* Done! kubectl is now configured to use "minikube"
$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.37:8443
KubeDNS is running at https://172.17.0.37:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
#If a node is marked NotReady, it is still starting its components.
#This command lists all the nodes that can host our applications. We only have one node at the moment; once it finishes starting up, its status will change to Ready.
$ kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
minikube   NotReady   master   12s   v1.17.3
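
While the node is still NotReady, you can watch it until the status flips to Ready (press Ctrl+C to stop). This is a small optional check, not part of the original scenario:

$ kubectl get nodes --watch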

1.2 Launch a Container

$ kubectl create deployment first-deployment --image=katacoda/docker-http-server
deployment.apps/first-deployment created
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
first-deployment-666c48b44-vqsvl   1/1     Running   0          20s
#Once the container is running, it can be exposed through different networking options, depending on requirements. One possible solution is NodePort, which provides the container with a dynamic port.
$ kubectl expose deployment first-deployment --port=80 --type=NodePort
service/first-deployment exposed
#The following command finds the allocated port and performs an HTTP request.
$ export PORT=$(kubectl get svc first-deployment -o go-template='{{range.spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}')
$ echo "Accessing host01:$PORT"
Accessing host01:31400
$ curl host01:$PORT
<h1>This request was processed by host: first-deployment-666c48b44-vqsvl</h1>
#The response identifies the container that processed the request.
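
For reference, the kubectl expose command above is roughly equivalent to the following declarative Service definition (a sketch, not from the original scenario; kubectl create deployment labels the Pods with app=first-deployment, and omitting nodePort lets Kubernetes pick one from the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: first-deployment
spec:
  type: NodePort
  selector:
    app: first-deployment
  ports:
  - port: 80
    targetPort: 80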

1.3 Enable the Dashboard

Enable the dashboard with the Minikube addons command:

$ minikube addons enable dashboard
* The 'dashboard' addon is enabled
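
To confirm the addon was switched on, you can list all addons and their states (an optional check, not part of the original steps):

$ minikube addons list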

Make the Kubernetes dashboard available by deploying the following YAML definition. This should only be used on Katacoda:

$ cat /opt/kubernetes-dashboard.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/minikube-addons: dashboard
  name: kubernetes-dashboard
  selfLink: /api/v1/namespaces/kubernetes-dashboard
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard-katacoda
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9090
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

The Kubernetes dashboard lets you view your applications in a UI. In this deployment, the dashboard is made available on port 30000, but it may take a while to start. To watch the startup progress, view the Pods in the kubernetes-dashboard namespace with the following command:

$ kubectl get pods -n kubernetes-dashboard -w
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b64584c5c-479th   1/1     Running   0          110s
kubernetes-dashboard-79d9cd965-lhlxl         1/1     Running   0          110s
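
Once both Pods are Running, the dashboard should answer on the NodePort defined above. A quick check, assuming the host is reachable as host01 as in section 1.2:

$ curl -s host01:30000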

2. Getting Started with Kubeadm

2.1 Initialize the Master

Kubeadm is already installed on the nodes. The packages are available for Ubuntu 16.04+, CentOS 7, and HypriotOS v1.0.1+.


The first stage of initializing the cluster is to launch the master node. The master is responsible for running the control plane components, etcd, and the API server. Clients communicate with the API to schedule workloads and manage the state of the cluster.

controlplane $ kubeadm init --token=102952.1a7dd4cc8d1f4cc5 --kubernetes-version $(kubeadm version -o short)
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [172.17.0.18 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [172.17.0.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.18]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.505046 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node controlplane as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 102952.1a7dd4cc8d1f4cc5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.17.0.18:6443 --token 102952.1a7dd4cc8d1f4cc5 \
    --discovery-token-ca-cert-hash sha256:1672bc53b2f7bb2e545ae130dc43fc0cc05f3ddf3e3be037a75317290bd66fcb 

In production, it is recommended to omit the token, causing kubeadm to generate one on your behalf.
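
If you need a token later, a fresh one, together with the complete join command, can be generated on the control plane at any time (a standard kubeadm subcommand; the output differs per cluster):

controlplane $ kubeadm token create --print-join-command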

Managing a Kubernetes cluster requires the client configuration and certificates. This configuration is created when kubeadm initializes the cluster. The commands below copy the configuration to the user's home directory and set the environment variable used by the CLI.

controlplane $ sudo cp /etc/kubernetes/admin.conf $HOME/
controlplane $ sudo chown $(id -u):$(id -g) $HOME/admin.conf
controlplane $ export KUBECONFIG=$HOME/admin.conf
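
Any read-only command can be used to verify that the client configuration works, for example:

controlplane $ kubectl cluster-info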

2.2 Deploy the Container Network Interface (CNI)

The Container Network Interface (CNI) defines how the different nodes and their workloads should communicate. There are multiple network providers available.

In this scenario we will use WeaveWorks.

controlplane $ cat /opt/weave-kube.yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
    rules:
      - apiGroups:
          - ''
        resources:
          - pods
          - namespaces
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - networking.k8s.io
        resources:
          - networkpolicies
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - nodes/status
        verbs:
          - patch
          - update
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
    roleRef:
      kind: ClusterRole
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    rules:
      - apiGroups:
          - ''
        resourceNames:
          - weave-net
        resources:
          - configmaps
        verbs:
          - get
          - update
      - apiGroups:
          - ''
        resources:
          - configmaps
        verbs:
          - create
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    roleRef:
      kind: Role
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    spec:
      minReadySeconds: 5
      selector:
        matchLabels:
          name: weave-net
      template:
        metadata:
          labels:
            name: weave-net
        spec:
          containers:
            - name: weave
              command:
                - /home/weave/launch.sh
              env:
                - name: IPALLOC_RANGE
                  value: 10.32.0.0/24
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              image: 'docker.io/weaveworks/weave-kube:2.6.0'
              readinessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /status
                  port: 6784
              resources:
                requests:
                  cpu: 10m
              securityContext:
                privileged: true
              volumeMounts:
                - name: weavedb
                  mountPath: /weavedb
                - name: cni-bin
                  mountPath: /host/opt
                - name: cni-bin2
                  mountPath: /host/home
                - name: cni-conf
                  mountPath: /host/etc
                - name: dbus
                  mountPath: /host/var/lib/dbus
                - name: lib-modules
                  mountPath: /lib/modules
                - name: xtables-lock
                  mountPath: /run/xtables.lock
            - name: weave-npc
              env:
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              image: 'docker.io/weaveworks/weave-npc:2.6.0'
              resources:
                requests:
                  cpu: 10m
              securityContext:
                privileged: true
              volumeMounts:
                - name: xtables-lock
                  mountPath: /run/xtables.lock
          hostNetwork: true
          hostPID: true
          restartPolicy: Always
          securityContext:
            seLinuxOptions: {}
          serviceAccountName: weave-net
          tolerations:
            - effect: NoSchedule
              operator: Exists
          volumes:
            - name: weavedb
              hostPath:
                path: /var/lib/weave
            - name: cni-bin
              hostPath:
                path: /opt
            - name: cni-bin2
              hostPath:
                path: /home
            - name: cni-conf
              hostPath:
                path: /etc
            - name: dbus
              hostPath:
                path: /var/lib/dbus
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: xtables-lock
              hostPath:
                path: /run/xtables.lock
                type: FileOrCreate

Apply the definition:

controlplane $ kubectl apply -f /opt/weave-kube.yaml
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
#Check the status
controlplane $ kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-22wq9                1/1     Running   0          21m
coredns-fb8b8dccf-k6tx5                1/1     Running   0          21m
etcd-controlplane                      1/1     Running   0          20m
kube-apiserver-controlplane            1/1     Running   0          20m
kube-controller-manager-controlplane   1/1     Running   0          19m
kube-proxy-scxbp                       1/1     Running   0          21m
kube-scheduler-controlplane            1/1     Running   1          20m
weave-net-9dmqb                        2/2     Running   0          37s

When installing Weave on your own cluster, see https://www.weave.works/docs/net/latest/kube-addon/ for details.
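
Deploying the CNI is also what moves the node out of the NotReady state; once the weave-net Pod is Running, the node should report Ready (an optional check):

controlplane $ kubectl get nodes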

2.3 Join the Cluster

Once the master and CNI have been initialized, additional nodes can join the cluster as long as they have the correct token. Tokens can be managed via kubeadm token, for example:

controlplane $ kubeadm token list
TOKEN                     TTL       EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
102952.1a7dd4cc8d1f4cc5   23h       2021-11-08T02:56:07Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

This is the same join command that was provided after the master was initialized.

node01 $ kubeadm join --discovery-token-unsafe-skip-ca-verification --token=102952.1a7dd4cc8d1f4cc5 172.17.0.18:6443
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The --discovery-token-unsafe-skip-ca-verification flag is used to bypass discovery token verification. Because this token is generated dynamically, it could not be included in the scripted steps. In production, use the token provided by kubeadm init.
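
In production, instead of skipping verification, compute the CA certificate hash on the control plane and pass it to kubeadm join via --discovery-token-ca-cert-hash sha256:<hash>. The standard command from the kubeadm documentation:

controlplane $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    sha256sum | cut -d' ' -f1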

2.4 View the Nodes

controlplane $ kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
controlplane   Ready    master   35m    v1.14.0
node01         Ready    <none>   105s   v1.14.0

2.5 Deploy a Pod

controlplane $ kubectl create deployment http --image=katacoda/docker-http-server:latest
deployment.apps/http created
controlplane $ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
http-7f8cbdf584-5p7dd   1/1     Running   0          18s
node01 $ docker ps | grep docker-http-server
9a6f2a14b008        katacoda/docker-http-server   "/app"                   32 seconds ago      Up 32 seconds                           k8s_docker-http-server_http-7f8cbdf584-5p7dd_default_4e464e6d-3f7b-11ec-9bfb-0242ac110012_0
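
From the control plane, the node hosting the Pod can also be confirmed without SSHing into it (an optional check):

controlplane $ kubectl get pods -o wide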

2.6 Deploy the Dashboard

controlplane $ cat dashboard.yaml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  externalIPs:
  - 172.17.0.28
  ports:
    - port: 8443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Create the dashboard:

controlplane $ kubectl apply -f dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

Check the pods:

controlplane $ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-22wq9                 1/1     Running   0          61m
coredns-fb8b8dccf-k6tx5                 1/1     Running   0          61m
etcd-controlplane                       1/1     Running   0          60m
kube-apiserver-controlplane             1/1     Running   0          60m
kube-controller-manager-controlplane    1/1     Running   0          60m
kube-proxy-c66jf                        1/1     Running   0          28m
kube-proxy-scxbp                        1/1     Running   0          61m
kube-scheduler-controlplane             1/1     Running   1          60m
kubernetes-dashboard-5f57845f9d-mrhrv   1/1     Running   0          48s
weave-net-78dch                         2/2     Running   1          28m
weave-net-9dmqb                         2/2     Running   0          41m

A ServiceAccount is required to log in. A ClusterRoleBinding is used to assign the new ServiceAccount (admin-user) the cluster-admin role on the cluster.

cat <<EOF | kubectl create -f - 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

This means the account can control every aspect of Kubernetes. With ClusterRoleBinding and RBAC, different levels of permission can be defined based on security requirements. More information on creating a user for the dashboard can be found in the Dashboard documentation.
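
As a sketch of a more restrictive alternative (the names here are illustrative, not part of the original scenario), a ClusterRole might grant read-only access instead of cluster-admin:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-viewer      # hypothetical name
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer      # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-viewer
subjects:
- kind: ServiceAccount
  name: read-only-user        # a hypothetical ServiceAccount
  namespace: kube-system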


Once the ServiceAccount has been created, the login token can be retrieved with:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

When the dashboard was deployed, it used externalIPs to bind the service to port 8443. This makes the dashboard available outside the cluster; it can be viewed at https://2886795292-8443-ollie02.environments.katacoda.com/


Access the dashboard with the admin-user token.


For production, it is recommended to use kubectl proxy to access the dashboard instead of an externalIP. See https://github.com/kubernetes/dashboard for more details.
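
A minimal sketch of that approach (the proxy output line and URL pattern follow the upstream dashboard documentation; adjust the namespace and service name if yours differ):

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

The dashboard is then reachable on the local machine at:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/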

