Deploying a Kubernetes 1.18.6 Cluster with Kubeadm

Summary: Deploying a Kubernetes 1.18.6 cluster with kubeadm

Environment Setup

IP            hostname     OS
10.11.66.44   k8s-master   CentOS 7.6
10.11.66.27   k8s-node1    CentOS 7.7
10.11.66.28   k8s-node2    CentOS 7.7
# Official recommendation: at least 2 CPUs and 2 GB of RAM per machine; MAC addresses and product_uuid must be unique across nodes
[root@localhost ~]# hostnamectl --static set-hostname k8s-master    # on 10.11.66.44
[root@localhost ~]# hostnamectl --static set-hostname k8s-node1     # on 10.11.66.27
[root@localhost ~]# hostnamectl --static set-hostname k8s-node2     # on 10.11.66.28
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@k8s-master ~]# sestatus
SELinux status:                 disabled
[root@k8s-master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@k8s-master ~]# cat >> /etc/hosts << EOF    # run on all three nodes
> 10.11.66.44  k8s-master
> 10.11.66.27  k8s-node1
> 10.11.66.28  k8s-node2
> EOF
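As a quick offline sanity check (a sketch using a temp copy, not part of the original procedure), every appended line should have the "<IPv4> <hostname>" shape; a malformed line silently breaks name resolution:

```shell
# Hypothetical format check: write the same entries to a temp file and
# count the lines matching "<IPv4> <hostname>".
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
10.11.66.44  k8s-master
10.11.66.27  k8s-node1
10.11.66.28  k8s-node2
EOF
grep -Ec '^([0-9]{1,3}\.){3}[0-9]{1,3}[[:space:]]+[A-Za-z0-9.-]+$' "$tmp"   # prints 3
rm -f "$tmp"
```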
# ping each host to verify the /etc/hosts entries
[root@k8s-master ~]# ping k8s-master
PING k8s-master (10.11.66.44) 56(84) bytes of data.
64 bytes from k8s-master (10.11.66.44): icmp_seq=1 ttl=64 time=0.012 ms
64 bytes from k8s-master (10.11.66.44): icmp_seq=2 ttl=64 time=0.016 ms
^C
--- k8s-master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.012/0.014/0.016/0.002 ms
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (10.11.66.27) 56(84) bytes of data.
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=1 ttl=64 time=0.924 ms
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=2 ttl=64 time=1.36 ms
^C
--- k8s-node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.924/1.146/1.369/0.225 ms
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (10.11.66.28) 56(84) bytes of data.
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=1 ttl=64 time=1.18 ms
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=2 ttl=64 time=1.30 ms
^C
--- k8s-node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 1.180/1.240/1.300/0.060 ms
[root@k8s-master ~]# ip link   # MAC addresses must differ across the three machines
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:26:38:13 brd ff:ff:ff:ff:ff:ff
[root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid    # product_uuid must differ across the three machines
07B64D56-0D8B-6047-8E55-9ADE9F263813
# Switch to the Aliyun yum mirror (run on all three nodes)
[root@k8s-master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# rm -rf /var/cache/yum && yum makecache && yum -y update && yum -y autoremove
# Note: yum update can be skipped if the network is poor
# Install dependency packages (run on all three nodes)
[root@k8s-master ~]# yum -y install epel-release.noarch conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
# Reset iptables and default the FORWARD chain to ACCEPT (run on all three nodes)
[root@k8s-master ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Turn off swap (run on all three nodes)
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
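The fstab edit can be rehearsed offline; the sed address / swap / selects swap lines and the substitution comments them out (the device path below is only an illustrative sample):

```shell
# Offline rehearsal of the fstab edit above: any line containing " swap "
# gets a '#' prefix, so swap is not re-enabled at boot.
line='/dev/mapper/centos-swap swap swap defaults 0 0'
echo "$line" | sed '/ swap / s/^\(.*\)$/#\1/g'
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```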
# Load kernel modules (run on all three nodes)
[root@k8s-node2 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs               # LVS layer-4 load balancing
modprobe -- ip_vs_rr            # round-robin scheduler
modprobe -- ip_vs_wrr           # weighted round-robin scheduler
modprobe -- ip_vs_sh            # source-hashing scheduler
modprobe -- nf_conntrack_ipv4   # connection-tracking module
modprobe -- br_netfilter        # lets iptables process packets traversing the bridge
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Set kernel parameters (run on all three nodes)
[root@k8s-master ~]# cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1     # pass bridged IPv4 traffic to iptables chains
net.bridge.bridge-nf-call-ip6tables = 1    # pass bridged IPv6 traffic to ip6tables chains
net.ipv4.ip_forward = 1                    # enable kernel IP forwarding
net.ipv4.tcp_tw_recycle = 0                # disable TIME-WAIT socket recycling
vm.swappiness = 0                          # avoid swapping
vm.overcommit_memory = 1                   # kernel memory-overcommit policy (see below)
vm.panic_on_oom = 0                        # do not panic on out-of-memory
fs.inotify.max_user_watches = 89100        # max inotify watches per user
fs.file-max = 52706963                     # system-wide max open files
fs.nr_open = 52706963                      # max file descriptors per process
net.ipv6.conf.all.disable_ipv6 = 1         # disable IPv6
net.netfilter.nf_conntrack_max = 2310720   # max connection-tracking entries
# overcommit_memory selects the kernel's memory-overcommit policy; it takes one of three values: 0, 1, or 2
- overcommit_memory=0   the kernel checks whether enough free memory is available; if so, the allocation succeeds, otherwise it fails and an error is returned to the process
- overcommit_memory=1   the kernel allows any allocation, regardless of the current memory state
- overcommit_memory=2   the kernel never overcommits: total committed memory is capped at swap plus overcommit_ratio percent of physical RAM
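The active policy can be read straight from /proc without sysctl(8). A minimal sketch (any of the three values may appear, depending on whether the settings above have been applied):

```shell
# Read the live overcommit policy; /proc/sys is world-readable.
policy=$(cat /proc/sys/vm/overcommit_memory)
case "$policy" in
  0) echo "0: heuristic overcommit (kernel checks available memory)" ;;
  1) echo "1: always overcommit" ;;
  2) echo "2: never overcommit beyond swap + overcommit_ratio% of RAM" ;;
esac
```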

Deploy Docker

# Remove any old Docker versions (run on all three nodes)
[root@k8s-master ~]# yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine
Loaded plugins: fastestmirror
No Match for argument: docker
No Match for argument: docker-client
No Match for argument: docker-client-latest
No Match for argument: docker-common
No Match for argument: docker-latest
No Match for argument: docker-latest-logrotate
No Match for argument: docker-logrotate
No Match for argument: docker-selinux
No Match for argument: docker-engine-selinux
No Match for argument: docker-engine
No Packages marked for removal
# Install Docker prerequisites (run on all three nodes)
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repo (Aliyun mirror) (run on all three nodes)
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
# Enable the edge and test channels (optional)
[root@k8s-master ~]# yum-config-manager --enable docker-ce-edge
[root@k8s-master ~]# yum-config-manager --enable docker-ce-test
# Install Docker (run on all three nodes)
[root@k8s-master ~]# yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
# Start Docker and enable it at boot (run on all three nodes)
[root@k8s-master ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Configure Docker (run on all three nodes)
[root@k8s-master ~]# sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service    # without this, Docker sets the default policy of the iptables FORWARD chain to DROP on startup
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker

Deploy kubeadm and kubelet

# Add the Kubernetes yum repo (run on all three nodes)
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install and enable the services (run on all three nodes)
[root@k8s-master ~]# yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6
[root@k8s-master ~]# systemctl enable kubelet.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Set up command auto-completion (run on all three nodes)
[root@k8s-master ~]# yum -y install bash-completion
# kubectl and kubeadm completion take effect at the next login
[root@k8s-master ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
[root@k8s-master ~]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
# List the images this Kubernetes release depends on (run on all three nodes)
[root@k8s-master ~]# kubeadm config images list --kubernetes-version v1.18.6
W0803 15:10:18.910528   25638 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
# Pull the required images (run on all three nodes)
[root@k8s-master ~]# vim get-k8s-images.sh
#!/bin/bash
# Script For Quick Pull K8S Docker Images
KUBE_VERSION=v1.18.6
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0
# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION  k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
# remove the original tags; the underlying images are kept
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
[root@k8s-master ~]# sh get-k8s-images.sh
# Alternatively, export the images on the master and import them on the nodes:
[root@k8s-master ~]# docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o k8s-images.tar    # export on the master
[root@k8s-node1 ~]# docker image load -i k8s-images.tar     # import on each node
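The docker save pipeline above depends on an awk one-liner that turns docker images output into space-separated repo:tag pairs; it can be checked offline against fake input (the two sample rows below are illustrative):

```shell
# Offline check of the image-list pipeline: drop the header line, then
# join columns 1 and 2 with ':' and separate entries with spaces.
printf 'REPOSITORY TAG IMAGE_ID\nk8s.gcr.io/pause 3.2 abc123\nk8s.gcr.io/etcd 3.4.3-0 def456\n' |
  grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}'
# prints: k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0
```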

Initialize the Cluster

# Initialize the cluster with kubeadm init; the IP is this machine's address (run on k8s-master)
[root@k8s-master ~]# kubeadm init  --kubernetes-version=v1.18.6 --apiserver-advertise-address=10.11.66.44 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
--kubernetes-version=v1.18.6 : pins the release so kubeadm uses the images pulled above
--pod-network-cidr=10.244.0.0/16 : the Pod network CIDR; we will use flannel for Pod-to-Pod traffic, and flannel expects 10.244.0.0/16
--service-cidr=10.1.0.0/16 : the Service network CIDR
# On success, kubeadm prints the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
    --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# Set up kubeconfig for any user who needs kubectl (run on k8s-master)
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Use the command below to watch the Pods; CoreDNS stays Pending until a network add-on is installed, so this can take a while
[root@k8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-cxtrj             0/1     Pending   0          8m14s   <none>        <none>       <none>           <none>
kube-system   coredns-66bff467f8-znlm2             0/1     Pending   0          8m14s   <none>        <none>       <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-proxy-vh964                     1/1     Running   0          8m14s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
[root@k8s-master ~]# kubectl get pods -n kube-system   # an equivalent view, scoped to kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-cxtrj             0/1     Pending   0          3m52s
coredns-66bff467f8-znlm2             0/1     Pending   0          3m52s
etcd-k8s-master                      1/1     Running   0          4m1s
kube-apiserver-k8s-master            1/1     Running   0          4m1s
kube-controller-manager-k8s-master   1/1     Running   0          4m1s
kube-proxy-vh964                     1/1     Running   0          3m52s
kube-scheduler-k8s-master            1/1     Running   0          4m1s

Cluster Network Add-on (choose one)

flannel network

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: check that the pod CIDR matches the kubeadm init setting and that the image can actually be pulled (run on k8s-master)

Pod Network (using the Qiniu mirror)

# (run on k8s-master)
[root@k8s-master ~]# curl -o kube-flannel.yml   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# sed -i "s/quay.io\/coreos\/flannel/quay-mirror.qiniu.com\/coreos\/flannel/g" kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# rm -f kube-flannel.yml
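The registry rewrite above can be checked offline by running the same sed expression over a sample manifest line (the flannel tag shown is only illustrative):

```shell
# Offline check: the sed expression swaps the quay.io registry prefix for
# the Qiniu mirror while leaving the image name and tag untouched.
echo 'image: quay.io/coreos/flannel:v0.12.0-amd64' |
  sed 's/quay.io\/coreos\/flannel/quay-mirror.qiniu.com\/coreos\/flannel/g'
# prints: image: quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
```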

calico network

# (run on k8s-master)
[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.15/manifests/calico.yaml
[root@k8s-master ~]# vim calico.yaml    # set CALICO_IPV4POOL_CIDR to the pod CIDR used at kubeadm init:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
[root@k8s-master ~]# kubectl apply -f calico.yaml
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-rchx6   1/1     Running   0          2m31s
calico-node-slgg9                          1/1     Running   0          2m32s
coredns-66bff467f8-cxtrj                   1/1     Running   0          55m
coredns-66bff467f8-znlm2                   1/1     Running   0          55m
etcd-k8s-master                            1/1     Running   0          55m
kube-apiserver-k8s-master                  1/1     Running   0          55m
kube-controller-manager-k8s-master         1/1     Running   0          55m
kube-proxy-vh964                           1/1     Running   0          55m
kube-scheduler-k8s-master                  1/1     Running   0          55m

Add Worker Nodes to the Cluster

# On k8s-node1 and k8s-node2, run the join command that k8s-master printed earlier
[root@k8s-node1 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
      --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
[root@k8s-node2 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
      --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# If the join command was not saved, regenerate it as follows (run on k8s-master)
[root@k8s-master ~]# kubeadm token create --print-join-command --ttl=0
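If only the hash was lost, --discovery-token-ca-cert-hash can also be recomputed: it is the SHA-256 of the DER-encoded public key of the cluster CA (/etc/kubernetes/pki/ca.crt). The sketch below rehearses the pipeline against a throwaway self-signed cert so it runs anywhere; on the master, point it at the real ca.crt instead:

```shell
# Recompute a discovery-token-ca-cert-hash. A throwaway cert stands in
# for /etc/kubernetes/pki/ca.crt here so the pipeline can be tried offline.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=kubernetes' -days 1 \
  -keyout "$d/ca.key" -out "$d/ca.crt" 2>/dev/null
openssl x509 -pubkey -noout -in "$d/ca.crt" |
  openssl rsa -pubin -outform der 2>/dev/null |
  openssl dgst -sha256 | sed 's/^.* //'
# prints 64 hex characters
rm -rf "$d"
```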
[root@k8s-master ~]# kubectl get nodes    # check node status; it can take a while for nodes to become Ready
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   64m     v1.18.6
k8s-node1    Ready    <none>   3m37s   v1.18.6
k8s-node2    Ready    <none>   3m36s   v1.18.6

Enable IPVS in kube-proxy

# (run on k8s-master)
[root@k8s-master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@k8s-master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl apply -f kube-proxy-configmap.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
# recreate the kube-proxy pods so they pick up the new mode
[root@k8s-master ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-rchx6   1/1     Running   0          14m
calico-node-kfc5p                          1/1     Running   0          7m17s
calico-node-slgg9                          1/1     Running   0          14m
calico-node-xcc92                          1/1     Running   0          7m16s
coredns-66bff467f8-cxtrj                   1/1     Running   0          67m
coredns-66bff467f8-znlm2                   1/1     Running   0          67m
etcd-k8s-master                            1/1     Running   0          67m
kube-apiserver-k8s-master                  1/1     Running   0          67m
kube-controller-manager-k8s-master         1/1     Running   0          67m
kube-proxy-6fnpb                           1/1     Running   0          16s
kube-proxy-tflld                           1/1     Running   0          20s
kube-proxy-x47c8                           1/1     Running   0          26s
kube-scheduler-k8s-master                  1/1     Running   0          67m
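The configmap change above hinges on a single sed substitution; it can be checked offline against a minimal fragment (the fragment layout is illustrative, not the full configmap):

```shell
# Offline check of the configmap edit: the exported yaml contains
# mode: "" and the in-place sed flips it to mode: "ipvs".
f=$(mktemp)
printf '    config.conf: |-\n      mode: ""\n' > "$f"
sed -i 's/mode: ""/mode: "ipvs"/' "$f"
grep -c 'mode: "ipvs"' "$f"   # prints 1
rm -f "$f"
```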

Deploy kubernetes-dashboard

# Dashboard install manifest (run on k8s-master)
cat > recommended.yaml <<-EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
---
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: kubernetes-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: kubernetes-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: kubernetes-metrics-scraper
    spec:
      containers:
        - name: kubernetes-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.0
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
EOF

Create the Certificates

[root@k8s-master ~]# cd /etc/kubernetes/
[root@k8s-master kubernetes]# mkdir dashboard-certs
[root@k8s-master kubernetes]# cd dashboard-certs/
[root@k8s-master dashboard-certs]# kubectl create namespace kubernetes-dashboard    # create the namespace
namespace/kubernetes-dashboard created
[root@k8s-master dashboard-certs]# kubectl get namespace   # list namespaces
NAME                   STATUS   AGE
default                Active   75m
kube-node-lease        Active   75m
kube-public            Active   75m
kube-system            Active   75m
kubernetes-dashboard   Active   9s
[root@k8s-master dashboard-certs]# openssl genrsa -out dashboard.key 2048   # generate the private key
Generating RSA private key, 2048 bit long modulus
........................................+++
..........+++
e is 65537 (0x10001)
[root@k8s-master dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'      # create a certificate signing request
[root@k8s-master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt     # issue a self-signed certificate
Signature ok
subject=/CN=dashboard-cert
Getting Private key
[root@k8s-master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard   # create the kubernetes-dashboard-certs secret
secret/kubernetes-dashboard-certs created
[root@k8s-master dashboard-certs]# kubectl get secret -A
NAMESPACE              NAME                                             TYPE                                  DATA   AGE
default                default-token-j6m5t                              kubernetes.io/service-account-token   3      77m
kube-node-lease        default-token-n5lxf                              kubernetes.io/service-account-token   3      77m
.........
.........
kubernetes-dashboard   default-token-bjp2p                              kubernetes.io/service-account-token   3      2m33s
kubernetes-dashboard   kubernetes-dashboard-certs                       Opaque                                2      90s
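The three openssl steps above can be rehearsed end-to-end in a scratch directory and the result inspected (a sketch; the real files live under /etc/kubernetes/dashboard-certs):

```shell
# Rehearse the key -> CSR -> self-signed cert flow in a temp dir, then
# print the certificate subject to confirm the CN that was requested.
d=$(mktemp -d)
openssl genrsa -out "$d/dashboard.key" 2048 2>/dev/null
openssl req -days 36000 -new -key "$d/dashboard.key" \
  -out "$d/dashboard.csr" -subj '/CN=dashboard-cert'
openssl x509 -req -in "$d/dashboard.csr" -signkey "$d/dashboard.key" \
  -out "$d/dashboard.crt" 2>/dev/null
openssl x509 -noout -subject -in "$d/dashboard.crt"
# the subject line contains CN = dashboard-cert
rm -rf "$d"
```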

Create a Dashboard Admin

[root@k8s-master dashboard-certs]# cat > dashboard-admin.yaml <<-EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin.yaml
serviceaccount/dashboard-admin created

Grant the User Permissions

[root@k8s-master dashboard-certs]# cat > dashboard-admin-bind-cluster-role.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin-bind-cluster-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created

Install the Dashboard

[root@k8s-master dashboard-certs]# kubectl apply -f /root/recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/kubernetes-metrics-scraper created
[root@k8s-master dashboard-certs]# kubectl get pods -A
NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-578894d4cd-rchx6      1/1     Running   0          29m
kube-system            calico-node-kfc5p                             1/1     Running   0          22m
kube-system            calico-node-slgg9                             1/1     Running   0          29m
kube-system            calico-node-xcc92                             1/1     Running   0          22m
kube-system            coredns-66bff467f8-cxtrj                      1/1     Running   0          82m
kube-system            coredns-66bff467f8-znlm2                      1/1     Running   0          82m
kube-system            etcd-k8s-master                               1/1     Running   0          82m
kube-system            kube-apiserver-k8s-master                     1/1     Running   0          82m
kube-system            kube-controller-manager-k8s-master            1/1     Running   0          82m
kube-system            kube-proxy-6fnpb                              1/1     Running   0          15m
kube-system            kube-proxy-tflld                              1/1     Running   0          15m
kube-system            kube-proxy-x47c8                              1/1     Running   0          15m
kube-system            kube-scheduler-k8s-master                     1/1     Running   0          82m
kubernetes-dashboard   kubernetes-dashboard-84b6b4578b-8t9bp         1/1     Running   0          75s
kubernetes-dashboard   kubernetes-metrics-scraper-86f6785867-bqvpg   1/1     Running   0          75s
[root@k8s-master dashboard-certs]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE    SELECTOR
dashboard-metrics-scraper   ClusterIP   10.1.16.181   <none>        8000/TCP        2m6s   k8s-app=kubernetes-metrics-scraper
kubernetes-dashboard        NodePort    10.1.99.111   <none>        443:30000/TCP   2m6s   k8s-app=kubernetes-dashboard

View and Copy the User Token

[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
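The token above is a JWT: three base64url-encoded segments (`header.payload.signature`), where the payload carries the namespace and ServiceAccount name. The helper below (a sketch, not from the original article; it assumes GNU `base64`) decodes the payload locally, so you can check which ServiceAccount a token belongs to without kubectl:

```shell
# decode_jwt_payload TOKEN: print the JSON claims of a JWT's payload segment.
decode_jwt_payload() {
  # Take the middle segment and convert base64url to standard base64.
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # JWTs strip the trailing '=' padding; restore it so base64 -d accepts it.
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
  printf '%s' "$p" | base64 -d
}

# Toy token whose payload is {"iss":"kubernetes/serviceaccount"}:
decode_jwt_payload 'eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50In0.sig'
```

Run against the real `dashboard-admin` token, this prints the full claim set, including `kubernetes.io/serviceaccount/service-account.name`.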

Access test

1. Browse to https://10.11.66.44:30000/
2. Choose the Token option and paste the token printed above

Log in with a kubeconfig file

Export the credentials
[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de
Type:  kubernetes.io/service-account-token
Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
ca.crt:     1025 bytes
[root@k8s-master ~]# vim .kube/config    # append the token field under the kubernetes-admin user entry
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJZmk1aXZZNkxXb0F3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1ETXdOelF5TlRoYUZ3MHlNVEE0TURNd056UXpNREJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpLazFvSnVPenQ3R3kzWnIKYjY5UkFqOXpzZ0hsNDdBOVVGOGIvQm1oYjVZalAwNTZuSG5FUVg4Qi85eDRaQmI0U2VLOTZkVVhIaTlFcEZuUQpDUlNKTFUwNnFRcW1GeUdXc1JJcEJPVDlUQmtrSW1XM25aRFZvKzI2dWFnVEp0V1BsOWtaWHZ5Z1hGUkJxeDNYCkxvTHIwZ2FrWE56dWd6TzBhMnFwQ1hQK0xmTE1Pa2gzUlJRZmQ4NUtaWWFXcWhNSStjNkZEVGtnTi84Z3BNKzYKWkE0a0UzT0x3OWFORkpvakl2amNIY1h5N0RNdGxCaFVRZVU4bEk2NHVRVk9zcDllTDR2WjBFRmo1djZFejNnbwp4ZFYrbzd6NWd3N3pzUENrdlJjc3RRcVhSRnV6emlpTVVQQTRDbzFhZkt3R1VZcmtBbmNzZnQxbVhGb2V3WDFPCjkwQ2xod0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDd0lKc0JreEV4UXBpeW8zTkNmQmkrL3hOQ0U3YnpNLzhmRAp4Q0VwQlZ0MWR1NkU1ZFdJQy82a3B0OVZzNHhHc1gvVVA4aUNaejVHZmtxT1JmTklDM0dZUFZJWlhNTUN2RHp0CnFubkk0Z1p2YXhyMnNoSDNpVkw2Rzd0Y2hCZmNJV0J4K1lnTEt3ZW9iTDUvaUorbXJmT2xsNXV4eit6cGUveHIKTjArWWVsTXJBaS9PeWpJR1N0WjVOblRzcnVILzZVRXRFZUwwRE9WQ0FrR3JQYnlkQVdNQUxaeWlQMTU4bCticQpNRkFkMHc2ZG82R3R2NlRCMGVaaXdzT1RHVzN6Ti85YlZWS2NFcGIzaE1MVVk0YVhvNC9laXl6TnF6MzlDdEpBCklPb3djOEFuakdGRDYraUdKbWU0VVdXcUxzMDI5US82eXF6WWFsUmFqWkwyL2FkNHRuaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBektrMW9KdU96dDdHeTNacmI2OVJBajl6c2dIbDQ3QTlVRjhiL0JtaGI1WWpQMDU2Cm5IbkVRWDhCLzl4NFpCYjRTZUs5NmRVWEhpOUVwRm5RQ1JTSkxVMDZxUXFtRnlHV3NSSXBCT1Q5VEJra0ltVzMKblpEVm8rMjZ1YWdUSnRXUGw5a1pYdnlnWEZSQnF4M1hMb0xyMGdha1hOenVnek8wYTJxcENYUCtMZkxNT2toMwpSUlFmZDg1S1pZYVdxaE1JK2M2RkRUa2dOLzhncE0rNlpBNGtFM09MdzlhTkZKb2pJdmpjSGNYeTdETXRsQmhVClFlVThsSTY0dVFWT3NwOWVMNHZaMEVGajV2NkV6M2dveGRWK283ejVndzd6c1BDa3ZSY3N0UXFYUkZ1enppaU0KVVBBNENvMWFmS3dHVVlya0FuY3NmdDFtWEZvZXdYMU85MENsaHdJREFRQUJBb0lCQVFDMHlLZXhkcGZnanhObAp1UFpRVXJvcFZTbDZ6WWhuNTA5U0JxR3V3R2xGSzRkNUxYYkxjQmgzanB5U2lncml4eE9PR0xlUHJZYmRSLzNICmUvcHpldXR0MC9HRVR2N0dJZ3A5NGIvUUxnSzl6TnVKY3ZhT1Bka3FGQjVFVDM2VGFFU09hdHlwZGxpbEZseG4KcmxWZEpaTHdGS1B0ejg3MG9LQzMzaUR4VTcvc2p4MWUwc3FFQ1NMdW5aY2FiaWJtYUpjT2RXYk0yM3JBdEdYQQp0YlFIYVZneHJldEZFREx0Ym9IMFB3Qit3eFNHdFh4WUFwSXR0RkowNWM3QWc1OVhWSFc2akdiYWd2VVlPcDFQCmdGVndSbjdwT1daNlNHTDBqdXgvbTl2UzZoakZ1aVVhVXhkM2ZOSVNKbUljRjZ2MTlmVTQwV3kyYXBCK1B0bHIKOU5zM2RpSGhBb0dCQU01ZW9QcFNGNmp0U1V0NTlERktZdUJUUG9wQWxiZFlxM0QvWnVBQlpkaFdJWXNoS1JvRwpUSGhjaTFlKzBPbmZlZ2pvMzhGM0syaHVJRVdrNEFhQ25QaWVyRWc3Yk1mVjNkMjYyNHBFeGRBN3J5Y1JvaWJuClJlTVA5K1BvVy9IaXJVQW4wUFdyRFUydEpLekxwNlhCcnozeE02VmFiWGxFcnNnZ0pybHN1cEwzQW9HQkFQM2gKWW5QLzVWWHBWeUtvMkhuZEEwWkwwK0pscFhNeFY4NDA4ZE1QMXE1WkVQbkZ2aVNXVjlLdFJVa3lCR2ZDUW1WeApEWkp3KzBRcmZUbXV5elZ6aUFZTFJJbHJKZ285QmN0NmRGUmpFaUo4NkVIeGdlV1J5UkhmaUZqalhqSXlCVGYyCmFxOGM2UlBTZmEyTEh1SVBlZEZVY2lrN0Z5WDg4dzJabkpBcjJFM3hBb0dBWnBOVWtuZkJlTjdRNHFvd2ZWdUwKQUJPQWIzbWdzU3hxc3RUUURxSERQSis3Tm90NkFZeUY4QUdYNVRwY1h4TU1kbWRCNk1qU0U2dEJjVHg5ZWQ3cwpKUXZCZUhuSkhSOHBrMit3ZGU2dklFeTZSOElWQmg5SWRvOVdXTHNERUp6cUhveHI2ZUJtMFdneFpZNG91MVFsClJiV2hSUnhJYzlGMnl0Um9TeHhITklzQ2dZQmRxSFQ2bUMrUmx3aG5KK1RjYUJWYUxJVVpJeWg3SzN2Wi9ad3MKb2M0ditYbVN1MGxmRS91SUpCWElYK1JTSnM3NXYxQWpjdnl1OUdBNUZHdXc1MU1KNzhRejhjeFJ3SnRQcW5nWgozWWFHSkpCR0s0TWhIcndQbE9nbTZwSUljSDJPWEtDVXcxU1UxSFU2dlhVQ0xuVmhMUWNFZ09FVVNaR2N0Y3VWClFDZUc4UUtCZ0UrMkFrZTR3QlRnZDh
uZFhlTHRPcHBRZ21IUVViZUN1elZyRzFEVEJxam0rcVpnSzhKR2RUdXIKUDhybjY3TGNFSFpyRlJVODEwQXJUNU92QXRGOTlnU0dnKzd1Q2x5bzJtVGtxZWRIUTZ6RVZld0JUQlFQUEx1VAp6UGRYbjl5cTZSaVZPajU1QUROdmFuNXdQNUE3clRSTGZjNXZqQWRmV3hmYUZqYVIxNE85Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
[root@k8s-master ~]# cp .kube/config /usr/local/k8s-dashboard.kubeconfig
[root@k8s-master ~]# cd /usr/local/
[root@k8s-master local]# ll
total 8
drwxr-xr-x. 2 root root    6 Apr 11  2018 bin
drwxr-xr-x. 2 root root    6 Apr 11  2018 etc
drwxr-xr-x. 2 root root    6 Apr 11  2018 games
drwxr-xr-x. 2 root root    6 Apr 11  2018 include
-rw-------  1 root root 6425 Aug  3 17:48 k8s-dashboard.kubeconfig
drwxr-xr-x. 2 root root    6 Apr 11  2018 lib
drwxr-xr-x. 2 root root    6 Apr 11  2018 lib64
drwxr-xr-x. 2 root root    6 Apr 11  2018 libexec
drwxr-xr-x. 2 root root    6 Apr 11  2018 sbin
drwxr-xr-x. 5 root root   49 Mar 30  2019 share
drwxr-xr-x. 2 root root    6 Apr 11  2018 src
[root@k8s-master local]# sz k8s-dashboard.kubeconfig 
# When logging in, choose the Kubeconfig option and select this file
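Copying the full admin config hands out the embedded client certificate and key as well. A smaller kubeconfig that carries only the dashboard-admin token is also enough for dashboard login. A minimal sketch, with the token as a placeholder and the API server address taken from this cluster:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://10.11.66.44:6443
    insecure-skip-tls-verify: true   # or reuse certificate-authority-data from the admin config
contexts:
- name: dashboard-admin@kubernetes
  context:
    cluster: kubernetes
    user: dashboard-admin
current-context: dashboard-admin@kubernetes
users:
- name: dashboard-admin
  user:
    token: <paste the dashboard-admin token here>
```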

Install and deploy the metrics-server add-on

Link: https://pan.baidu.com/s/1QRndSG88L5w-_DHfMxrd_g
Extraction code: 62dj
[root@k8s-master ~]# unzip metrics-server-master.zip
[root@k8s-master ~]# cd metrics-server-master/deploy/1.8+/
[root@k8s-master 1.8+]# ll
total 28
-rw-r--r-- 1 root root  397 Nov 12  2019 aggregated-metrics-reader.yaml
-rw-r--r-- 1 root root  303 Nov 12  2019 auth-delegator.yaml
-rw-r--r-- 1 root root  324 Nov 12  2019 auth-reader.yaml
-rw-r--r-- 1 root root  298 Nov 12  2019 metrics-apiservice.yaml
-rw-r--r-- 1 root root 1091 Nov 12  2019 metrics-server-deployment.yaml
-rw-r--r-- 1 root root  297 Nov 12  2019 metrics-server-service.yaml
-rw-r--r-- 1 root root  517 Nov 12  2019 resource-reader.yaml

Modify the deployment manifest

[root@k8s-master 1.8+]# vim metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6   # switch to a mirrored image
        args:   # add the following arguments
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
# Apply the manifests
[root@k8s-master 1.8+]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@k8s-master 1.8+]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   887m         22%    1701Mi          59%
k8s-node1    158m         7%     954Mi           35%
k8s-node2    137m         6%     894Mi           32%
# Output like the following means metrics-server is not ready yet; wait 1-3 minutes and retry
[root@k8s-master 1.8+]# kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
[root@k8s-master 1.8+]#
[root@k8s-master 1.8+]# kubectl top nodes
error: metrics not available yet
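That 1-3 minute wait can be scripted instead of retried by hand. Below is a small generic retry helper (a sketch, not from the original article); `retry 18 10 kubectl top nodes` would poll every 10 seconds for up to 3 minutes:

```shell
# retry ATTEMPTS DELAY CMD...: run CMD until it succeeds or ATTEMPTS run out.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0          # command succeeded: stop retrying
    i=$((i + 1))
    sleep "$delay"
  done
  return 1                    # still failing after all attempts
}

# Example against the cluster (assumes kubectl on PATH):
# retry 18 10 kubectl top nodes
```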