Deploying a Kubernetes 1.18.6 Cluster with kubeadm


Environment Setup

IP            hostname     OS
10.11.66.44   k8s-master   CentOS 7.6
10.11.66.27   k8s-node1    CentOS 7.7
10.11.66.28   k8s-node2    CentOS 7.7
# The official recommendation is at least 2 CPU cores and 2 GB of RAM per machine; also make sure the MAC address and product_uuid are unique on every node
[root@localhost ~]# hostnamectl --static set-hostname k8s-master   # on the master
[root@localhost ~]# hostnamectl --static set-hostname k8s-node1    # on node1
[root@localhost ~]# hostnamectl --static set-hostname k8s-node2    # on node2
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@k8s-master ~]# sestatus
SELinux status:                 disabled
[root@k8s-master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@k8s-master ~]# cat >> /etc/hosts << EOF    # run on all three nodes
> 10.11.66.44  k8s-master
> 10.11.66.27  k8s-node1
> 10.11.66.28  k8s-node2
> EOF
# ping each host to verify that /etc/hosts is configured correctly
[root@k8s-master ~]# ping k8s-master
PING k8s-master (10.11.66.44) 56(84) bytes of data.
64 bytes from k8s-master (10.11.66.44): icmp_seq=1 ttl=64 time=0.012 ms
64 bytes from k8s-master (10.11.66.44): icmp_seq=2 ttl=64 time=0.016 ms
^C
--- k8s-master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.012/0.014/0.016/0.002 ms
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (10.11.66.27) 56(84) bytes of data.
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=1 ttl=64 time=0.924 ms
64 bytes from k8s-node1 (10.11.66.27): icmp_seq=2 ttl=64 time=1.36 ms
^C
--- k8s-node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.924/1.146/1.369/0.225 ms
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (10.11.66.28) 56(84) bytes of data.
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=1 ttl=64 time=1.18 ms
64 bytes from k8s-node2 (10.11.66.28): icmp_seq=2 ttl=64 time=1.30 ms
^C
--- k8s-node2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 1.180/1.240/1.300/0.060 ms
[root@k8s-master ~]# ip link   # the MAC address must be unique across the three machines
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:26:38:13 brd ff:ff:ff:ff:ff:ff
[root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid    # the product_uuid must be unique across the three machines
07B64D56-0D8B-6047-8E55-9ADE9F263813
# Switch to the Aliyun yum mirror (run on all three nodes)
[root@k8s-master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master ~]# rm -rf /var/cache/yum && yum makecache && yum -y update && yum -y autoremove
# Note: if your network connection is poor, the update step can be skipped
# Install dependency packages (run on all three nodes)
[root@k8s-master ~]# yum -y install epel-release.noarch conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
# Flush iptables and set the default FORWARD policy to ACCEPT (run on all three nodes)
[root@k8s-master ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable the swap partition (run on all three nodes)
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
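# Quick sanity check (not part of the original steps above): once swap is off, the Swap line of free should show all zeroes
[root@k8s-master ~]# free -m | grep -i swap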
# Load the required kernel modules (run on all three nodes)
[root@k8s-node2 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs               # LVS layer-4 load balancing
modprobe -- ip_vs_rr            # round-robin scheduling
modprobe -- ip_vs_wrr           # weighted round-robin scheduling
modprobe -- ip_vs_sh            # source-hashing scheduling
modprobe -- nf_conntrack_ipv4   # connection tracking module
modprobe -- br_netfilter        # let iptables process packets traversing the bridge
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
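# Optional check: confirm the modules actually loaded by filtering lsmod (run on each node); every module from the script above should appear
[root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4 -e br_netfilter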
# Set kernel parameters (run on all three nodes)
[root@k8s-master ~]# cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1     # pass bridged IPv4 traffic to iptables chains
net.bridge.bridge-nf-call-ip6tables = 1    # pass bridged IPv6 traffic to ip6tables chains
net.ipv4.ip_forward = 1                    # enable IP forwarding
net.ipv4.tcp_tw_recycle = 0                # disable fast recycling of TIME_WAIT sockets
vm.swappiness = 0                          # avoid swapping as much as possible
vm.overcommit_memory = 1                   # kernel memory overcommit policy
vm.panic_on_oom = 0                        # do not panic the system on OOM
fs.inotify.max_user_watches = 89100        # max number of inotify watches per user
fs.file-max = 52706963                     # system-wide limit on open files
fs.nr_open = 52706963                      # per-process limit on open files
net.ipv6.conf.all.disable_ipv6 = 1         # disable IPv6
net.netfilter.nf_conntrack_max = 2310720   # max number of tracked connections
# overcommit_memory controls the kernel's memory-allocation (overcommit) policy and takes one of three values: 0, 1 or 2
- overcommit_memory=0   the kernel heuristically checks whether enough memory is available; if so the allocation succeeds, otherwise it fails and an error is returned to the process.
- overcommit_memory=1   the kernel always allows the allocation, regardless of the current memory state.
- overcommit_memory=2   the kernel never overcommits: total committed memory is limited to swap space plus a configurable percentage (vm.overcommit_ratio) of physical RAM.
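# Quick sanity check: the active policy can be read back from procfs or via sysctl; after applying the k8s.conf above it should report 1
[root@k8s-master ~]# cat /proc/sys/vm/overcommit_memory
[root@k8s-master ~]# sysctl vm.overcommit_memory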

Deploying Docker

# Remove old Docker versions (run on all three nodes)
[root@k8s-master ~]# yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine
Loaded plugins: fastestmirror
No Match for argument: docker
No Match for argument: docker-client
No Match for argument: docker-client-latest
No Match for argument: docker-common
No Match for argument: docker-latest
No Match for argument: docker-latest-logrotate
No Match for argument: docker-logrotate
No Match for argument: docker-selinux
No Match for argument: docker-engine-selinux
No Match for argument: docker-engine
No Packages marked for removal
# Install Docker dependencies (run on all three nodes)
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Docker yum repository (Aliyun mirror) (run on all three nodes)
[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
# Enable the edge/test repositories (optional)
[root@k8s-master ~]# yum-config-manager --enable docker-ce-edge
[root@k8s-master ~]# yum-config-manager --enable docker-ce-test
# Install Docker (run on all three nodes)
[root@k8s-master ~]# yum makecache fast
[root@k8s-master ~]# yum -y install docker-ce
# Start Docker and enable it at boot (run on all three nodes)
[root@k8s-master ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Configure Docker (run on all three nodes)
[root@k8s-master ~]# sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service    # add this ExecStartPost after installation; otherwise Docker sets the default policy of the iptables FORWARD chain to DROP
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
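# Optional check: confirm Docker really switched to the systemd cgroup driver; a mismatch with the kubelet's cgroup driver leads to confusing failures later
[root@k8s-master ~]# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd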

Deploying kubeadm and kubelet

# Configure the Kubernetes yum repository (run on all three nodes)
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install and enable the services (run on all three nodes)
[root@k8s-master ~]# yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6
[root@k8s-master ~]# systemctl enable kubelet.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Configure command auto-completion (run on all three nodes)
[root@k8s-master ~]# yum -y install bash-completion
# Set up kubectl and kubeadm completion; it takes effect at the next login
[root@k8s-master ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
[root@k8s-master ~]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
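# To use the completions in the current shell without logging out again, the generated files can simply be sourced
[root@k8s-master ~]# source /etc/bash_completion.d/kubectl
[root@k8s-master ~]# source /etc/bash_completion.d/kubeadm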
# List the images Kubernetes depends on (run on all three nodes)
[root@k8s-master ~]# kubeadm config images list --kubernetes-version v1.18.6
W0803 15:10:18.910528   25638 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
# Pull the required images (run on all three nodes)
[root@k8s-master ~]# vim get-k8s-images.sh
#!/bin/bash
# Script For Quick Pull K8S Docker Images
KUBE_VERSION=v1.18.6
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0
# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION  k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
# remove the original tags; the underlying images are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION
[root@k8s-master ~]# sh get-k8s-images.sh
# Alternatively, export the images on the master and import them on the nodes:
[root@k8s-master ~]# docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o k8s-images.tar    # export on the master node
[root@k8s-node1 ~]# docker image load -i k8s-images.tar     # import on each node
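# If the worker nodes cannot reach the registries at all, copy the exported tarball to them first, e.g. with scp (target path assumed; hostnames as configured above)
[root@k8s-master ~]# scp k8s-images.tar root@k8s-node1:/root/
[root@k8s-master ~]# scp k8s-images.tar root@k8s-node2:/root/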

Initializing the Cluster

# Initialize the cluster with kubeadm init; the address is this machine's IP (run on k8s-master)
[root@k8s-master ~]# kubeadm init  --kubernetes-version=v1.18.6 --apiserver-advertise-address=10.11.66.44 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
--kubernetes-version=v1.18.6 : pins the control-plane version so kubeadm uses the images pulled above
--pod-network-cidr=10.244.0.0/16 : the Pod network range; flannel expects 10.244.0.0/16 by default, and this is the range Pod IPs are assigned from
--service-cidr=10.1.0.0/16 : the Service (ClusterIP) network range
# On success, the output ends with the following
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
    --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# Set up kubeconfig for every user that needs kubectl (run on k8s-master)
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Use the command below to confirm that the Pods reach the Running state; this can take quite a while (coredns stays Pending until a network add-on is installed)
[root@k8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-cxtrj             0/1     Pending   0          8m14s   <none>        <none>       <none>           <none>
kube-system   coredns-66bff467f8-znlm2             0/1     Pending   0          8m14s   <none>        <none>       <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-proxy-vh964                     1/1     Running   0          8m14s   10.11.66.44   k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          8m23s   10.11.66.44   k8s-master   <none>           <none>
[root@k8s-master ~]# kubectl get pods -n kube-system   # this works as well
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-cxtrj             0/1     Pending   0          3m52s
coredns-66bff467f8-znlm2             0/1     Pending   0          3m52s
etcd-k8s-master                      1/1     Running   0          4m1s
kube-apiserver-k8s-master            1/1     Running   0          4m1s
kube-controller-manager-k8s-master   1/1     Running   0          4m1s
kube-proxy-vh964                     1/1     Running   0          3m52s
kube-scheduler-k8s-master            1/1     Running   0          4m1s

Cluster Network Add-on (choose one of the following)

flannel network

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: make sure this matches the Pod CIDR used during cluster initialization, and check whether the flannel image can actually be pulled (run on k8s-master)

Pod Network (using the Qiniu mirror)

# (run on k8s-master)
[root@k8s-master ~]# curl -o kube-flannel.yml   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# sed -i "s/quay.io\/coreos\/flannel/quay-mirror.qiniu.com\/coreos\/flannel/g" kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# rm -f kube-flannel.yml
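# Optional check: watch the flannel DaemonSet Pods come up (the app=flannel label matches the upstream manifest; adjust it if your copy labels the Pods differently)
[root@k8s-master ~]# kubectl get pods -n kube-system -l app=flannel -o wide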

calico network

# (run on k8s-master)
[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.15/manifests/calico.yaml
[root@k8s-master ~]# vim calico.yaml    # set CALICO_IPV4POOL_CIDR to match --pod-network-cidr
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
[root@k8s-master ~]# kubectl apply -f calico.yaml
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-rchx6   1/1     Running   0          2m31s
calico-node-slgg9                          1/1     Running   0          2m32s
coredns-66bff467f8-cxtrj                   1/1     Running   0          55m
coredns-66bff467f8-znlm2                   1/1     Running   0          55m
etcd-k8s-master                            1/1     Running   0          55m
kube-apiserver-k8s-master                  1/1     Running   0          55m
kube-controller-manager-k8s-master         1/1     Running   0          55m
kube-proxy-vh964                           1/1     Running   0          55m
kube-scheduler-k8s-master                  1/1     Running   0          55m

Adding Worker Nodes to the Cluster

# On k8s-node1 and k8s-node2, run the join command printed earlier on k8s-master
[root@k8s-node1 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
      --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
[root@k8s-node2 ~]# kubeadm join 10.11.66.44:6443 --token ecqlbq.1k41wwa3gn57oonq \
      --discovery-token-ca-cert-hash sha256:daeec6df945f3f4a646d074d9f9144f414373106ff8849450c1d10b5a663e87e
# If you did not save the join command, it can be regenerated as follows (run on k8s-master; --ttl=0 creates a token that never expires)
[root@k8s-master ~]# kubeadm token create --print-join-command --ttl=0
[root@k8s-master ~]# kubectl get nodes    # check node status; it may take a while before the nodes become Ready
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   64m     v1.18.6
k8s-node1    Ready    <none>   3m37s   v1.18.6
k8s-node2    Ready    <none>   3m36s   v1.18.6
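# Optional: label the worker nodes so the ROLES column shows something more useful than <none>; this is purely cosmetic
[root@k8s-master ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=
[root@k8s-master ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=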

Enabling IPVS Mode for kube-proxy

# (run on k8s-master)
[root@k8s-master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@k8s-master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl apply -f kube-proxy-configmap.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-proxy configured
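# kube-proxy only reads the ConfigMap at startup, so the running Pods have to be recreated before they switch to IPVS; a minimal sketch (k8s-app=kube-proxy is the label kubeadm puts on the DaemonSet)
[root@k8s-master ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# Once the new Pods are up, the Service entries should appear as IPVS virtual servers
[root@k8s-master ~]# ipvsadm -Ln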
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-rchx6   1/1     Running   0          14m
calico-node-kfc5p                          1/1     Running   0          7m17s
calico-node-slgg9                          1/1     Running   0          14m
calico-node-xcc92                          1/1     Running   0          7m16s
coredns-66bff467f8-cxtrj                   1/1     Running   0          67m
coredns-66bff467f8-znlm2                   1/1     Running   0          67m
etcd-k8s-master                            1/1     Running   0          67m
kube-apiserver-k8s-master                  1/1     Running   0          67m
kube-controller-manager-k8s-master         1/1     Running   0          67m
kube-proxy-6fnpb                           1/1     Running   0          16s
kube-proxy-tflld                           1/1     Running   0          20s
kube-proxy-x47c8                           1/1     Running   0          26s
kube-scheduler-k8s-master                  1/1     Running   0          67m

Deploying kubernetes-dashboard

# Dashboard installation manifest (run on k8s-master)
cat > recommended.yaml <<-EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
---
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: kubernetes-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: kubernetes-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: kubernetes-metrics-scraper
    spec:
      containers:
        - name: kubernetes-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.0
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
EOF

Creating Certificates

[root@k8s-master ~]# cd /etc/kubernetes/
[root@k8s-master kubernetes]# mkdir dashboard-certs
[root@k8s-master kubernetes]# cd dashboard-certs/
[root@k8s-master dashboard-certs]# kubectl create namespace kubernetes-dashboard    # create the namespace
namespace/kubernetes-dashboard created
[root@k8s-master dashboard-certs]# kubectl get namespace   # list namespaces
NAME                   STATUS   AGE
default                Active   75m
kube-node-lease        Active   75m
kube-public            Active   75m
kube-system            Active   75m
kubernetes-dashboard   Active   9s
[root@k8s-master dashboard-certs]# openssl genrsa -out dashboard.key 2048   # generate the private key
Generating RSA private key, 2048 bit long modulus
........................................+++
..........+++
e is 65537 (0x10001)
[root@k8s-master dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'      # create the certificate signing request
[root@k8s-master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt     # self-sign the certificate
Signature ok
subject=/CN=dashboard-cert
Getting Private key
[root@k8s-master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard   # create the kubernetes-dashboard-certs secret
secret/kubernetes-dashboard-certs created
[root@k8s-master dashboard-certs]# kubectl get secret -A
NAMESPACE              NAME                                             TYPE                                  DATA   AGE
default                default-token-j6m5t                              kubernetes.io/service-account-token   3      77m
kube-node-lease        default-token-n5lxf                              kubernetes.io/service-account-token   3      77m
.........
.........
kubernetes-dashboard   default-token-bjp2p                              kubernetes.io/service-account-token   3      2m33s
kubernetes-dashboard   kubernetes-dashboard-certs                       Opaque                                2      90s

Creating the Dashboard Admin Account

[root@k8s-master dashboard-certs]# cat > dashboard-admin.yaml <<-EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin.yaml
serviceaccount/dashboard-admin created

Granting Permissions to the User

[root@k8s-master dashboard-certs]# cat > dashboard-admin-bind-cluster-role.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
EOF
[root@k8s-master dashboard-certs]# kubectl apply -f dashboard-admin-bind-cluster-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created

Installing the Dashboard

[root@k8s-master dashboard-certs]# kubectl apply -f /root/recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/kubernetes-metrics-scraper created
[root@k8s-master dashboard-certs]# kubectl get pods -A
NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE
kube-system            calico-kube-controllers-578894d4cd-rchx6      1/1     Running   0          29m
kube-system            calico-node-kfc5p                             1/1     Running   0          22m
kube-system            calico-node-slgg9                             1/1     Running   0          29m
kube-system            calico-node-xcc92                             1/1     Running   0          22m
kube-system            coredns-66bff467f8-cxtrj                      1/1     Running   0          82m
kube-system            coredns-66bff467f8-znlm2                      1/1     Running   0          82m
kube-system            etcd-k8s-master                               1/1     Running   0          82m
kube-system            kube-apiserver-k8s-master                     1/1     Running   0          82m
kube-system            kube-controller-manager-k8s-master            1/1     Running   0          82m
kube-system            kube-proxy-6fnpb                              1/1     Running   0          15m
kube-system            kube-proxy-tflld                              1/1     Running   0          15m
kube-system            kube-proxy-x47c8                              1/1     Running   0          15m
kube-system            kube-scheduler-k8s-master                     1/1     Running   0          82m
kubernetes-dashboard   kubernetes-dashboard-84b6b4578b-8t9bp         1/1     Running   0          75s
kubernetes-dashboard   kubernetes-metrics-scraper-86f6785867-bqvpg   1/1     Running   0          75s
[root@k8s-master dashboard-certs]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE    SELECTOR
dashboard-metrics-scraper   ClusterIP   10.1.16.181   <none>        8000/TCP        2m6s   k8s-app=kubernetes-metrics-scraper
kubernetes-dashboard        NodePort    10.1.99.111   <none>        443:30000/TCP   2m6s   k8s-app=kubernetes-dashboard

Viewing and Copying the User Token

[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw

Access Test

1. Open https://10.11.66.44:30000/ in a browser.
2. Choose the Token option and paste the token printed above.

Logging in with a kubeconfig File

Export the credentials
[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-528w2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7c3955d3-2c0c-4b99-b69b-8a3f330661de
Type:  kubernetes.io/service-account-token
Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
ca.crt:     1025 bytes
[root@k8s-master ~]# vim .kube/config    # append the dashboard-admin token to the kubernetes-admin user entry, as shown below
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJZmk1aXZZNkxXb0F3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1ETXdOelF5TlRoYUZ3MHlNVEE0TURNd056UXpNREJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXpLazFvSnVPenQ3R3kzWnIKYjY5UkFqOXpzZ0hsNDdBOVVGOGIvQm1oYjVZalAwNTZuSG5FUVg4Qi85eDRaQmI0U2VLOTZkVVhIaTlFcEZuUQpDUlNKTFUwNnFRcW1GeUdXc1JJcEJPVDlUQmtrSW1XM25aRFZvKzI2dWFnVEp0V1BsOWtaWHZ5Z1hGUkJxeDNYCkxvTHIwZ2FrWE56dWd6TzBhMnFwQ1hQK0xmTE1Pa2gzUlJRZmQ4NUtaWWFXcWhNSStjNkZEVGtnTi84Z3BNKzYKWkE0a0UzT0x3OWFORkpvakl2amNIY1h5N0RNdGxCaFVRZVU4bEk2NHVRVk9zcDllTDR2WjBFRmo1djZFejNnbwp4ZFYrbzd6NWd3N3pzUENrdlJjc3RRcVhSRnV6emlpTVVQQTRDbzFhZkt3R1VZcmtBbmNzZnQxbVhGb2V3WDFPCjkwQ2xod0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDd0lKc0JreEV4UXBpeW8zTkNmQmkrL3hOQ0U3YnpNLzhmRAp4Q0VwQlZ0MWR1NkU1ZFdJQy82a3B0OVZzNHhHc1gvVVA4aUNaejVHZmtxT1JmTklDM0dZUFZJWlhNTUN2RHp0CnFubkk0Z1p2YXhyMnNoSDNpVkw2Rzd0Y2hCZmNJV0J4K1lnTEt3ZW9iTDUvaUorbXJmT2xsNXV4eit6cGUveHIKTjArWWVsTXJBaS9PeWpJR1N0WjVOblRzcnVILzZVRXRFZUwwRE9WQ0FrR3JQYnlkQVdNQUxaeWlQMTU4bCticQpNRkFkMHc2ZG82R3R2NlRCMGVaaXdzT1RHVzN6Ti85YlZWS2NFcGIzaE1MVVk0YVhvNC9laXl6TnF6MzlDdEpBCklPb3djOEFuakdGRDYraUdKbWU0VVdXcUxzMDI5US82eXF6WWFsUmFqWkwyL2FkNHRuaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBektrMW9KdU96dDdHeTNacmI2OVJBajl6c2dIbDQ3QTlVRjhiL0JtaGI1WWpQMDU2Cm5IbkVRWDhCLzl4NFpCYjRTZUs5NmRVWEhpOUVwRm5RQ1JTSkxVMDZxUXFtRnlHV3NSSXBCT1Q5VEJra0ltVzMKblpEVm8rMjZ1YWdUSnRXUGw5a1pYdnlnWEZSQnF4M1hMb0xyMGdha1hOenVnek8wYTJxcENYUCtMZkxNT2toMwpSUlFmZDg1S1pZYVdxaE1JK2M2RkRUa2dOLzhncE0rNlpBNGtFM09MdzlhTkZKb2pJdmpjSGNYeTdETXRsQmhVClFlVThsSTY0dVFWT3NwOWVMNHZaMEVGajV2NkV6M2dveGRWK283ejVndzd6c1BDa3ZSY3N0UXFYUkZ1enppaU0KVVBBNENvMWFmS3dHVVlya0FuY3NmdDFtWEZvZXdYMU85MENsaHdJREFRQUJBb0lCQVFDMHlLZXhkcGZnanhObAp1UFpRVXJvcFZTbDZ6WWhuNTA5U0JxR3V3R2xGSzRkNUxYYkxjQmgzanB5U2lncml4eE9PR0xlUHJZYmRSLzNICmUvcHpldXR0MC9HRVR2N0dJZ3A5NGIvUUxnSzl6TnVKY3ZhT1Bka3FGQjVFVDM2VGFFU09hdHlwZGxpbEZseG4KcmxWZEpaTHdGS1B0ejg3MG9LQzMzaUR4VTcvc2p4MWUwc3FFQ1NMdW5aY2FiaWJtYUpjT2RXYk0yM3JBdEdYQQp0YlFIYVZneHJldEZFREx0Ym9IMFB3Qit3eFNHdFh4WUFwSXR0RkowNWM3QWc1OVhWSFc2akdiYWd2VVlPcDFQCmdGVndSbjdwT1daNlNHTDBqdXgvbTl2UzZoakZ1aVVhVXhkM2ZOSVNKbUljRjZ2MTlmVTQwV3kyYXBCK1B0bHIKOU5zM2RpSGhBb0dCQU01ZW9QcFNGNmp0U1V0NTlERktZdUJUUG9wQWxiZFlxM0QvWnVBQlpkaFdJWXNoS1JvRwpUSGhjaTFlKzBPbmZlZ2pvMzhGM0syaHVJRVdrNEFhQ25QaWVyRWc3Yk1mVjNkMjYyNHBFeGRBN3J5Y1JvaWJuClJlTVA5K1BvVy9IaXJVQW4wUFdyRFUydEpLekxwNlhCcnozeE02VmFiWGxFcnNnZ0pybHN1cEwzQW9HQkFQM2gKWW5QLzVWWHBWeUtvMkhuZEEwWkwwK0pscFhNeFY4NDA4ZE1QMXE1WkVQbkZ2aVNXVjlLdFJVa3lCR2ZDUW1WeApEWkp3KzBRcmZUbXV5elZ6aUFZTFJJbHJKZ285QmN0NmRGUmpFaUo4NkVIeGdlV1J5UkhmaUZqalhqSXlCVGYyCmFxOGM2UlBTZmEyTEh1SVBlZEZVY2lrN0Z5WDg4dzJabkpBcjJFM3hBb0dBWnBOVWtuZkJlTjdRNHFvd2ZWdUwKQUJPQWIzbWdzU3hxc3RUUURxSERQSis3Tm90NkFZeUY4QUdYNVRwY1h4TU1kbWRCNk1qU0U2dEJjVHg5ZWQ3cwpKUXZCZUhuSkhSOHBrMit3ZGU2dklFeTZSOElWQmg5SWRvOVdXTHNERUp6cUhveHI2ZUJtMFdneFpZNG91MVFsClJiV2hSUnhJYzlGMnl0Um9TeHhITklzQ2dZQmRxSFQ2bUMrUmx3aG5KK1RjYUJWYUxJVVpJeWg3SzN2Wi9ad3MKb2M0ditYbVN1MGxmRS91SUpCWElYK1JTSnM3NXYxQWpjdnl1OUdBNUZHdXc1MU1KNzhRejhjeFJ3SnRQcW5nWgozWWFHSkpCR0s0TWhIcndQbE9nbTZwSUljSDJPWEtDVXcxU1UxSFU2dlhVQ0xuVmhMUWNFZ09FVVNaR2N0Y3VWClFDZUc4UUtCZ0UrMkFrZTR3QlRnZDhuZFhlTHRPcHBRZ21IUVViZUN1elZyRzFEVEJxam0rcVpnSzhKR2RUdXIKUDhybjY3TGNFSFpyRlJVODEwQXJUNU92QXRGOTlnU0dnKzd1Q2x5bzJtVGtxZWRIUTZ6RVZld0JUQlFQUEx1VAp6UGRYbjl5cTZSaVZPajU1QUROdmFuNXdQNUE3clRSTGZjNXZqQWRmV3hmYUZqYVIxNE85Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1oVnpzUlUzRU4zbXJRV2F5VUZMc3JmYWFBTWMyWU1IenY1d1NET1U0bDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNTI4dzIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2MzOTU1ZDMtMmMwYy00Yjk5LWI2OWItOGEzZjMzMDY2MWRlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.nVS3WCiIU90o5WIYG9iHYE90Gfox_Q5eHNzz3UsGLDDBIfgDt7veX-4pl7GLV8FFsAap0fTLo_pU7sbehd5mOYcgh_QRlZ3ELR4mVZYNW6fmPBFZn7Tbjv7LLieGDPzELrefQJwS4sZus2WsH1OdQbMIry6AYKpl5AAKw4rhh_679QnEBjCsJiEebg0hzlKyXoXGqmaGwfetsCB5DOmoNss2WbIKfGJ7pasTTKa29F3T19NIh9VbDmavyvYZp9VPgfcKiuBKlxrakzwH9fosS8V3faMgH64CMIWwrEqv1cybd85gQkA1u0SGZ5mOQJ3tYWGHGJBFlO8J-RKSo8gJOw
[root@k8s-master ~]# cp .kube/config /usr/local/k8s-dashboard.kubeconfig
[root@k8s-master ~]# cd /usr/local/
[root@k8s-master local]# ll
total 8
drwxr-xr-x. 2 root root    6 Apr 11  2018 bin
drwxr-xr-x. 2 root root    6 Apr 11  2018 etc
drwxr-xr-x. 2 root root    6 Apr 11  2018 games
drwxr-xr-x. 2 root root    6 Apr 11  2018 include
-rw-------  1 root root 6425 Aug  3 17:48 k8s-dashboard.kubeconfig
drwxr-xr-x. 2 root root    6 Apr 11  2018 lib
drwxr-xr-x. 2 root root    6 Apr 11  2018 lib64
drwxr-xr-x. 2 root root    6 Apr 11  2018 libexec
drwxr-xr-x. 2 root root    6 Apr 11  2018 sbin
drwxr-xr-x. 5 root root   49 Mar 30  2019 share
drwxr-xr-x. 2 root root    6 Apr 11  2018 src
[root@k8s-master local]# sz k8s-dashboard.kubeconfig    # sz comes from the lrzsz package; scp works just as well
# When logging in, choose the kubeconfig-file authentication option and select this file

Installing the metrics-server Add-on

Link: https://pan.baidu.com/s/1QRndSG88L5w-_DHfMxrd_g
Extraction code: 62dj
[root@k8s-master ~]# unzip metrics-server-master.zip
[root@k8s-master ~]# cd metrics-server-master/deploy/1.8+/
[root@k8s-master 1.8+]# ll
total 28
-rw-r--r-- 1 root root  397 Nov 12  2019 aggregated-metrics-reader.yaml
-rw-r--r-- 1 root root  303 Nov 12  2019 auth-delegator.yaml
-rw-r--r-- 1 root root  324 Nov 12  2019 auth-reader.yaml
-rw-r--r-- 1 root root  298 Nov 12  2019 metrics-apiservice.yaml
-rw-r--r-- 1 root root 1091 Nov 12  2019 metrics-server-deployment.yaml
-rw-r--r-- 1 root root  297 Nov 12  2019 metrics-server-service.yaml
-rw-r--r-- 1 root root  517 Nov 12  2019 resource-reader.yaml

Modifying the Installation Manifest

[root@k8s-master 1.8+]# vim metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6   # image changed to a mirror repository
        args:   # add the following arguments
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
# Apply the manifests
[root@k8s-master 1.8+]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@k8s-master 1.8+]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master   887m         22%    1701Mi          59%
k8s-node1    158m         7%     954Mi           35%
k8s-node2    137m         6%     894Mi           32%
# Output like the following means metrics-server is not ready yet; wait 1-3 minutes and try again
[root@k8s-master 1.8+]# kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
[root@k8s-master 1.8+]#
[root@k8s-master 1.8+]# kubectl top nodes
error: metrics not available yet
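# Once node metrics are available, Pod-level metrics can be queried the same way
[root@k8s-master 1.8+]# kubectl top pods -n kube-system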