Kubernetes Binary Installation (Part 4)


14. Verify cluster functionality

# Check node status
[root@master01 work]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
[root@master01 work]# kubectl cluster-info 
Kubernetes master is running at https://192.168.100.204:16443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master01 work]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    <none>   20m   v1.18.3
master02   Ready    <none>   19m   v1.18.3
worker01   Ready    <none>   19m   v1.18.3
worker02   Ready    <none>   19m   v1.18.3
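# Optional: the same readiness check can be done non-interactively (useful in scripts); kubectl wait blocks until every node reports Ready, and the 120s timeout below is an arbitrary choice:
[root@master01 work]# kubectl wait --for=condition=Ready node --all --timeout=120s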
# Create a test manifest
————————————————————————————————————————————————————————————————————————
If the host has internet access, the script under config can download the images for you:
[root@master01 work]# bash config/baseimage.sh    # pull the images in advance
————————————————————————————————————————————————————————————————————————
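# The contents of config/baseimage.sh are not shown in this series; judging from the archives loaded below it presumably boils down to docker pull commands roughly like the following (an assumption, check your own script; pulling the pause image this way requires direct access to k8s.gcr.io):
[root@master01 work]# docker pull nginx:1.19.0
[root@master01 work]# docker pull k8s.gcr.io/pause-amd64:3.2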
[root@master01 work]# source /root/environment.sh
# Upload the image archives; all four nodes need this step
[root@master01 work]# ll | grep nginx
-rw-r--r--  1 root    root        624 Aug  6 11:19 kube-nginx.service
drwxr-xr-x 10    1001  1001       206 Aug  6 11:16 nginx-1.19.0
-rw-r--r--  1 root    root    1043748 Jul  1 2020 nginx-1.19.0.tar.gz
-rw-r--r--  1 root    root        586 Aug  6 13:08 nginx-ds.yml
-rw-r--r--  1 root    root  136325120 Jul  1 2020 nginx.tar.gz   # the uploaded nginx 1.19.0 image archive
[root@master01 work]# ll | grep pau
-rw-r--r--  1 root    root     692736 Jul  1 2020 pause-amd64_3.2.tar  # the uploaded pause-amd64_3.2 image archive
[root@master01 work]# docker load -i nginx.tar.gz   # load the image into docker
13cb14c2acd3: Loading layer [==================================================>]  72.49MB/72.49MB
d4cf327d8ef5: Loading layer [==================================================>]   63.8MB/63.8MB
7c7d7f446182: Loading layer [==================================================>]  3.072kB/3.072kB
9040af41bb66: Loading layer [==================================================>]  4.096kB/4.096kB
f978b9ed3f26: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: nginx:1.19.0
[root@master01 work]# docker load -i pause-amd64_3.2.tar 
ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause-amd64:3.2
[root@master01 work]# docker images        # list the loaded images
REPOSITORY            TAG                   IMAGE ID            CREATED             SIZE
nginx                 1.19.0                2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64   3.2                  80d28bedfe5d        17 months ago       683kB
[root@master01 work]# scp nginx.tar.gz root@192.168.100.203:/root
[root@master01 work]# scp nginx.tar.gz root@192.168.100.205:/root
[root@master01 work]# scp nginx.tar.gz root@192.168.100.206:/root  
[root@master01 work]# scp pause-amd64_3.2.tar root@192.168.100.203:/root
[root@master01 work]# scp pause-amd64_3.2.tar root@192.168.100.205:/root
[root@master01 work]# scp pause-amd64_3.2.tar root@192.168.100.206:/root
# On the other three nodes, import the archives with docker load
[root@master02 ~]# docker load -i nginx.tar.gz
[root@master02 ~]# docker load -i pause-amd64_3.2.tar
[root@worker01 ~]# docker load -i nginx.tar.gz
[root@worker01 ~]# docker load -i pause-amd64_3.2.tar
[root@worker02 ~]# docker load -i nginx.tar.gz
[root@worker02 ~]# docker load -i pause-amd64_3.2.tar
# Write the test manifest on master01
[root@master01 work]# cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 8888
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.19.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF
[root@master01 work]# kubectl create -f nginx-ds.yml
service/nginx-svc created
daemonset.apps/nginx-ds created
# Check Pod IP connectivity from every node
[root@master01 work]# kubectl get pods  -o wide | grep nginx-ds
nginx-ds-8znb9   1/1     Running   0          17s   10.10.136.2   master01   <none>           <none>
nginx-ds-h2ssb   1/1     Running   0          17s   10.10.152.2   worker02   <none>           <none>
nginx-ds-pnjbf   1/1     Running   0          17s   10.10.192.2   master02   <none>           <none>
nginx-ds-wjx2z   1/1     Running   0          17s   10.10.168.2   worker01   <none>           <none>
[root@master01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    ssh ${all_ip} "ping -c 1 10.10.136.2"
    ssh ${all_ip} "ping -c 1 10.10.152.2"
    ssh ${all_ip} "ping -c 1 10.10.192.2"
    ssh ${all_ip} "ping -c 1 10.10.168.2"
  done
# As long as every ping succeeds, Pod networking is working
# Check reachability of the Service IP and port
[root@master01 work]# kubectl get svc |grep nginx-svc
nginx-svc    NodePort    10.20.0.12   <none>        80:8888/TCP   88s
[root@master01 work]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s 10.20.0.12"
  done
# Note: the IP in this loop must match the Service IP queried above; the loop prints the nginx welcome page from every node, for example:
>>> 192.168.100.206
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>   # seeing this page means success
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# Explanation:
Service ClusterIP: 10.20.0.12
Service port: 80
NodePort: 8888
# Check reachability of the Service NodePort
[root@master01 work]# for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "curl -s ${node_ip}:8888"
  done
# Same as above: seeing the nginx page content from every node means the NodePort works
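# Note: the nginx-ds Pods and Service are deliberately left running here, since they appear again in the listings of the following sections; once you no longer need them they can be cleaned up with:
[root@master01 work]# kubectl delete -f nginx-ds.yml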


15. Deploy the CoreDNS cluster add-on


# Download and extract
[root@master01 work]# cd /opt/k8s/work/kubernetes/
[root@master01 kubernetes]# tar -xzvf kubernetes-src.tar.gz
# Modify the configuration
[root@master01 ~]# cd /opt/k8s/work/kubernetes/cluster/addons/dns/coredns
[root@master01 coredns]# cp coredns.yaml.base coredns.yaml
[root@master01 coredns]# source /root/environment.sh
[root@master01 coredns]# sed -i -e "s/__PILLAR__DNS__DOMAIN__/${CLUSTER_DNS_DOMAIN}/" -e "s/__PILLAR__DNS__SERVER__/${CLUSTER_DNS_SVC_IP}/" -e "s/__PILLAR__DNS__MEMORY__LIMIT__/200Mi/" coredns.yaml
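# A quick grep confirms that the placeholders were actually substituted (it should print nothing if every __PILLAR__ placeholder in this version of the base file was replaced):
[root@master01 coredns]# grep -n '__PILLAR__' coredns.yaml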
# Load the images on both masters
[root@master01 work]# docker load -i coredns_1.6.5.tar 
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
7c9b0f448297: Loading layer [==================================================>]  41.37MB/41.37MB
Loaded image: k8s.gcr.io/coredns:1.6.5
[root@master01 work]# docker load -i tutum-dnsutils.tar.gz 
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
3c9ca2b4b72a: Loading layer [==================================================>]  197.2MB/197.2MB
b83a6cb01503: Loading layer [==================================================>]  208.9kB/208.9kB
f5c259e37fdd: Loading layer [==================================================>]  4.608kB/4.608kB
47995420132c: Loading layer [==================================================>]  11.86MB/11.86MB
Loaded image: tutum/dnsutils:latest
[root@master01 work]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
nginx                    1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64   3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns       1.6.5               70f311871ae1        21 months ago       41.6MB
tutum/dnsutils           latest              6cd78a6d3256        6 years ago         200MB
[root@master02 ~]# docker load -i coredns_1.6.5.tar
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
7c9b0f448297: Loading layer [==================================================>]  41.37MB/41.37MB
Loaded image: k8s.gcr.io/coredns:1.6.5
[root@master02 ~]# docker load -i tutum-dnsutils.tar.gz 
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
3c9ca2b4b72a: Loading layer [==================================================>]  197.2MB/197.2MB
b83a6cb01503: Loading layer [==================================================>]  208.9kB/208.9kB
f5c259e37fdd: Loading layer [==================================================>]  4.608kB/4.608kB
47995420132c: Loading layer [==================================================>]  11.86MB/11.86MB
Loaded image: tutum/dnsutils:latest
[root@master02 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
nginx                    1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64   3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns       1.6.5               70f311871ae1        21 months ago       41.6MB
# Create CoreDNS
Set the scheduling policy
Tip: non-business (i.e. cluster-internal) applications such as CoreDNS and the dashboard are best deployed only on the master nodes.
[root@master01 coredns]# kubectl label nodes master01 node-role.kubernetes.io/master=true
node/master01 labeled
[root@master01 coredns]# kubectl label nodes master02 node-role.kubernetes.io/master=true
node/master02 labeled
[root@master01 coredns]# vi coredns.yaml
# Find the following object and edit it:
apiVersion: apps/v1
kind: Deployment
。。。。。。    
   97   # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
   98   replicas: 2
   99   strategy:
。。。。。。
   118       nodeSelector:        # add from this line: paste in lines 119-124 and delete the original line 119
   119         node-role.kubernetes.io/master: "true"
   120       tolerations:
   121         - key: node-role.kubernetes.io/master
   122           operator: "Equal"
   123           value: ""
   124           effect: NoSchedule
# Save and exit
# Create CoreDNS and check it
[root@master01 coredns]# kubectl create -f coredns.yaml # every resource should report created
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
# Check CoreDNS functionality
[root@master01 coredns]# kubectl get all -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
pod/coredns-7966bcdf9-hqx7t   1/1     Running   0          5s
pod/coredns-7966bcdf9-nvjk8   1/1     Running   0          5s
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.20.0.254   <none>        53/UDP,53/TCP,9153/TCP   5s
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           5s
NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-7966bcdf9   2         2         2       5s
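# To confirm the nodeSelector change took effect, list the CoreDNS Pods with -o wide; both should be scheduled on the master nodes (k8s-app=kube-dns is the label used by the addon manifest):
[root@master01 coredns]# kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide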
# Review the existing workloads and Services before creating the test Pods
[root@master01 coredns]# cd /opt/k8s/work/
[root@master01 work]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
nginx-ds-8znb9   1/1     Running   0          106m   10.10.136.2   master01   <none>           <none>
nginx-ds-h2ssb   1/1     Running   0          106m   10.10.152.2   worker02   <none>           <none>
nginx-ds-pnjbf   1/1     Running   0          106m   10.10.192.2   master02   <none>           <none>
nginx-ds-wjx2z   1/1     Running   0          106m   10.10.168.2   worker01   <none>           <none>
[root@master01 work]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)       AGE     SELECTOR
kubernetes   ClusterIP   10.20.0.1    <none>        443/TCP       3h14m   <none>
nginx-svc    NodePort    10.20.0.12   <none>        80:8888/TCP   106m    app=nginx-ds
# Create test Pods
[root@master01 work]# source /root/environment.sh
[root@master01 work]# cat > dnsutils-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dnsutils-ds
  labels:
    app: dnsutils-ds
spec:
  type: NodePort
  selector:
    app: dnsutils-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dnsutils-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: dnsutils-ds
  template:
    metadata:
      labels:
        app: dnsutils-ds
    spec:
      containers:
      - name: my-dnsutils
        image: tutum/dnsutils:latest
        imagePullPolicy: IfNotPresent
        command:
          - sleep
          - "3600"
        ports:
        - containerPort: 80
EOF
[root@master01 work]# kubectl create -f dnsutils-ds.yml
service/dnsutils-ds created
daemonset.apps/dnsutils-ds created
[root@master01 work]# kubectl get pods -lapp=dnsutils-ds
NAME                READY   STATUS    RESTARTS   AGE
dnsutils-ds-hcsw2   1/1     Running   0          6s
dnsutils-ds-msmnd   1/1     Running   0          6s
dnsutils-ds-qvcwp   1/1     Running   0          6s
dnsutils-ds-vrczl   1/1     Running   0          6s
# Check DNS resolution
[root@master01 work]# kubectl -it exec dnsutils-ds-hcsw2 -- /bin/sh  # use one of the Pod names from the output above
# cat /etc/resolv.conf
nameserver 10.20.0.254    # the cluster DNS Service IP, so names can be resolved
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
# exit
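# resolv.conf pointing at 10.20.0.254 only shows that the kubelet injected the cluster DNS settings; to actually exercise CoreDNS, resolve a Service name from one of the dnsutils Pods (substitute your own Pod name from the listing above). Both lookups should return the corresponding ClusterIPs, 10.20.0.1 and 10.20.0.12 in this cluster:
[root@master01 work]# kubectl exec dnsutils-ds-hcsw2 -- nslookup kubernetes.default
[root@master01 work]# kubectl exec dnsutils-ds-hcsw2 -- nslookup nginx-svc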

16. Deploy metrics-server and the dashboard

About Metrics
Early versions of Kubernetes relied on Heapster for performance data collection and monitoring. Starting with version 1.8, Kubernetes exposes performance data through the standardized Metrics API, and from version 1.10 onward Heapster was replaced by Metrics Server. In the current monitoring architecture, Metrics Server provides the core metrics, i.e. CPU and memory usage of Nodes and Pods.
Monitoring of other custom metrics is handled by components such as Prometheus.
# Fetch the deployment manifest
[root@master01 work]#  mkdir metrics
[root@master01 work]# cd metrics/
[root@master01 metrics]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
[root@master01 metrics]# ll
total 4
-rw-r--r-- 1 root root 3509 Jan  6 2021 components.yaml
[root@master01 metrics]# vi components.yaml
# Find this object and modify the configuration inside it:
     62 apiVersion: apps/v1
     63 kind: Deployment
。。。。。。
     69 spec:
     70   replicas: 2   # add this line
     71   selector:
。。。。。。
     79     spec:
     80       serviceAccountName: metrics-server
     81       hostNetwork: true    # add this line
。。。。。。
     87       - name: metrics-server     # add the lines below through line 94
     88         image: k8s.gcr.io/metrics-server-amd64:v0.3.6
     89         imagePullPolicy: IfNotPresent
     90         args:
     91           - --cert-dir=/tmp
     92           - --secure-port=4443
     93           - --kubelet-insecure-tls
     94           - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
     95         ports:
# Save and exit
# Load the image on both masters
[root@master01 metrics]# docker load -i metrics-server-amd64_v0.3.6.tar   # import this archive
932da5156413: Loading layer [==================================================>]  3.062MB/3.062MB
7bf3709d22bb: Loading layer [==================================================>]  38.13MB/38.13MB
Loaded image: k8s.gcr.io/metrics-server-amd64:v0.3.6
[root@master01 metrics]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB
[root@master02 ~]# docker load -i metrics-server-amd64_v0.3.6.tar 
932da5156413: Loading layer [==================================================>]  3.062MB/3.062MB
7bf3709d22bb: Loading layer [==================================================>]  38.13MB/38.13MB
Loaded image: k8s.gcr.io/metrics-server-amd64:v0.3.6
[root@master02 ~]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB
# Deploy
[root@master01 metrics]# kubectl apply -f components.yaml  
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master01 metrics]# kubectl -n kube-system get pods -l k8s-app=metrics-server  # both Pods should be Running
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-7b97647899-5tnn7   1/1     Running   0          2m44s
metrics-server-7b97647899-tr6qt   1/1     Running   0          2m45s
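# metrics-server registers itself as an aggregated API; before relying on kubectl top, check that the APIService it serves reports Available=True:
[root@master01 metrics]# kubectl get apiservice v1beta1.metrics.k8s.io   # the AVAILABLE column should show True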
# View resource metrics
[root@master02 ~]# kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master01   111m         5%     2038Mi          53%       
master02   101m         5%     1725Mi          44%       
worker01   12m          1%     441Mi           23%       
worker02   14m          1%     439Mi           23%       
[root@master02 ~]# kubectl top pods --all-namespaces
NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)   
default       dnsutils-ds-hcsw2                 0m           0Mi             
default       dnsutils-ds-msmnd                 0m           1Mi             
default       dnsutils-ds-qvcwp                 0m           1Mi             
default       dnsutils-ds-vrczl                 0m           1Mi             
default       nginx-ds-8znb9                    0m           3Mi             
default       nginx-ds-h2ssb                    0m           2Mi             
default       nginx-ds-pnjbf                    0m           3Mi             
default       nginx-ds-wjx2z                    0m           2Mi             
kube-system   coredns-7966bcdf9-hqx7t           3m           9Mi             
kube-system   coredns-7966bcdf9-nvjk8           3m           14Mi            
kube-system   metrics-server-7b97647899-5tnn7   1m           13Mi            
kube-system   metrics-server-7b97647899-tr6qt   1m           11Mi  
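# kubectl top is only a client of the Metrics API; the same data can be fetched raw (JSON output), which is handy when debugging metrics-server itself:
[root@master02 ~]# kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"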
# Deploy the dashboard
# Set labels
[root@master01 ~]# kubectl label nodes master01 dashboard=yes
node/master01 labeled
[root@master01 ~]# kubectl label nodes master02 dashboard=yes
node/master02 labeled
# Create the certificate
[root@master01 metrics]#  mkdir -p /opt/k8s/work/dashboard/certs
[root@master01 metrics]#  cd /opt/k8s/work/dashboard/certs
[root@master01 certs]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=ZheJiang/L=HangZhou/O=Xianghy/OU=Xianghy/CN=k8s.odocker.com"
[root@master01 certs]# ls
tls.crt  tls.key
[root@master01 certs]# pwd
/opt/k8s/work/dashboard/certs
[root@master01 certs]# scp tls.* root@192.168.100.203:$PWD  # remember to create the directory on master02 first
tls.crt                                                                                                100% 1346     1.2MB/s   00:00    
tls.key                                                                                                100% 1704     1.5MB/s   00:00   
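# Optionally inspect the self-signed certificate before turning it into a secret; the subject should match what was passed to openssl req above and the validity should be 365 days:
[root@master01 certs]# openssl x509 -in tls.crt -noout -subject -dates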


# Manually create the secret
# Dashboard v2 runs in its own namespace
[root@master01 certs]# kubectl create ns kubernetes-dashboard
namespace/kubernetes-dashboard created
[root@master01 certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=/opt/k8s/work/dashboard/certs -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created
# View the new certificate secret
[root@master01 certs]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard -o yaml
# Download the yaml manifest
[root@master01 certs]#  cd /opt/k8s/work/dashboard/
[root@master01 dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/recommended.yaml
[root@master01 dashboard]# ll
total 8
drwxr-xr-x 2 root root   36 Aug  6 18:29 certs
-rw-r--r-- 1 root root 7767 Jul  2 2020 recommended.yaml
# Modify the yaml manifest
[root@master01 dashboard]# vi recommended.yaml
 。。。。。。
     30 ---
     31 
     32 kind: Service
     33 apiVersion: v1
     34 metadata:
     35   labels:
     36     k8s-app: kubernetes-dashboard
     37   name: kubernetes-dashboard
     38   namespace: kubernetes-dashboard
     39 spec:
     40   type: NodePort   # add
     41   ports:
     42     - port: 443
     43       targetPort: 8443
     44       nodePort: 30001  # add
     45   selector:
     46     k8s-app: kubernetes-dashboard
     47 
     48 ---
 。。。。。。
 # Many browsers reject the auto-generated certificate, so we use the one created above; comment out the kubernetes-dashboard-certs Secret declaration
     48 ---
     49 
     50 #apiVersion: v1            # comment out
     51 #kind: Secret
     52 #metadata:
     53 #  labels:
     54 #    k8s-app: kubernetes-dashboard
     55 #  name: kubernetes-dashboard-certs
     56 #  namespace: kubernetes-dashboard
     57 #type: Opaque
     58 
     59 ---
 。。。。。。
    189     spec:                  # starting at line 189
    190       containers:
    191         - name: kubernetes-dashboard
    192           image: kubernetesui/dashboard:v2.0.0-beta8
    193           imagePullPolicy: IfNotPresent
    194           ports:
    195             - containerPort: 8443
    196               protocol: TCP
    197           args:
    198             - --auto-generate-certificates
    199             - --namespace=kubernetes-dashboard
    200             - --tls-key-file=tls.key
    201             - --tls-cert-file=tls.crt
    202             - --token-ttl=3600
# Save and exit
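# The dashboard=yes labels applied to the masters earlier are presumably meant for scheduling; the edits shown above do not reference them, so if you want to pin the dashboard Pods to the master nodes, also add a nodeSelector to the same Pod spec (an optional addition on top of the upstream manifest):
      nodeSelector:
        dashboard: "yes"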
# Deploy
[root@master01 dashboard]#  kubectl apply -f recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
# Pull the images on both master01 and master02
[root@master02 ~]# docker pull kubernetesui/metrics-scraper:v1.0.1
[root@master02 ~]# docker pull kubernetesui/dashboard:v2.0.0-beta8
[root@master02 ~]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
kubernetesui/dashboard            v2.0.0-beta8        eb51a3597525        20 months ago       90.8MB  # this one
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
kubernetesui/metrics-scraper      v1.0.1              709901356c11        2 years ago         40.1MB  # this one
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB
[root@master01 ~]# docker pull kubernetesui/metrics-scraper:v1.0.1
[root@master01 ~]# docker pull kubernetesui/dashboard:v2.0.0-beta8
[root@master01 ~]# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
nginx                             1.19.0              2622e6cca7eb        14 months ago       132MB
k8s.gcr.io/pause-amd64            3.2                 80d28bedfe5d        17 months ago       683kB
kubernetesui/dashboard            v2.0.0-beta8        eb51a3597525        20 months ago       90.8MB  # this one
k8s.gcr.io/coredns                1.6.5               70f311871ae1        21 months ago       41.6MB
k8s.gcr.io/metrics-server-amd64   v0.3.6              9dd718864ce6        22 months ago       39.9MB
kubernetesui/metrics-scraper      v1.0.1              709901356c11        2 years ago         40.1MB  # this one
tutum/dnsutils                    latest              6cd78a6d3256        6 years ago         200MB
# Check the status
[root@master01 dashboard]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           18m
[root@master01 dashboard]#  kubectl get services -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.20.55.23    <none>        8000/TCP        18m
kubernetes-dashboard        NodePort    10.20.46.179   <none>        443:30001/TCP   18m
[root@master01 dashboard]# kubectl get pods -o wide -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-694557449d-l2cpn   1/1     Running   0          19m   10.10.192.5   master02   <none>           <none>
kubernetes-dashboard-df75cc4c7-xz8nt         1/1     Running   0          19m   10.10.136.5   master01   <none>           <none>
# Note: NodePort 30001/TCP on master01 maps to port 8443 of the dashboard Pod (the Service port is 443).
# Create an administrator account
Tip: dashboard v2 does not create an account with admin privileges by default; it can be created as follows.
[root@master01 dashboard]# vi dashboard-admin.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# Save and exit
[root@master01 dashboard]#  kubectl apply -f dashboard-admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Access the dashboard
[root@master01 dashboard]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')  # view the token
Name:         admin-user-token-87x8n
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: e7006035-fdda-4c70-b5bd-c5342cf9a9e8
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1367 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjBMT1lOUFgycEpIOWpQajFoQUpNNHlWMkRxZDNmdUttcVBNVHpyajdTN2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTg3eDhuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlNzAwNjAzNS1mZGRhLTRjNzAtYjViZC1jNTM0MmNmOWE5ZTgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Rrhwup0uhoIYQAzXhz7VylRYk-cJR1oyAyi4HZc4mJGE6rFcpA6CsfcHtJJvdGticX9hS7TErPlN9nP-yK0dA2T-oxB5mG2RA2H6mMEqa9wU_X4JcQv0Aw7JwEPXzO62I3ue6iThnT8PpsxAN6PQM3fSG1qJqKL8hneyenNzS8J-S09isdZSCChYSk_DsJLf1ICuUMIJvcTAbIELKcmhsf2ixY6k1FAvttmzfB8-EV6I6Brua63pY-5wc3i6Ptrg1FofH5tyOAV5mYDGBZzl9Y1B9N5QM9oJDAffgHP5oAKLnflVeuMeU8miba5phrLh7DTdXT8nKSo34dhUA6CEKw
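# An equivalent one-liner that prints only the token (convenient for copy/paste; it relies on the ServiceAccount's auto-created secret, which still exists in v1.18):
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d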

Open https://192.168.100.204:30001/ in a browser.




Access succeeds. At this point, Kubernetes has been successfully installed from binaries!
