Deploying a Highly Available Kubernetes Cluster with kubeadm

Summary: Deploying a highly available Kubernetes cluster with kubeadm.

1. Machine List

Due to resource constraints, only three virtual machines were prepared for this walkthrough: two master nodes and one worker node.

The high-availability components are also deployed on the two master nodes.

If you have the resources, you can expand the environment to five machines with multiple masters and multiple workers, or even seven machines with the HA components on dedicated hosts; the deployment procedure is unaffected.

Host            IP
k8s-master01    192.168.1.3
k8s-master02    192.168.1.4
k8s-node01      192.168.1.5
vip             192.168.1.100

2. Initial Environment Preparation

(Perform the following on all nodes:)

2.1 Hostname Resolution

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.3 k8s-master01
192.168.1.4 k8s-master02
192.168.1.5 k8s-node01

2.2 Disable the Firewall and SELinux

[root@k8s-master01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-master01 ~]# cat /etc/sysconfig/selinux
SELINUX=disabled
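
If SELinux is still enabled on a node, the change can be made persistent in the same file, for example with a one-line sed (a sketch; it takes full effect after a reboot):

[root@k8s-master01 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux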

2.3 Disable NetworkManager

[root@k8s-master01 ~]# systemctl disable --now NetworkManager
[root@k8s-master01 ~]# systemctl restart network
[root@k8s-master01 ~]# ping www.baidu.com
# confirm the network is reachable

2.4 Synchronize Time

[root@k8s-master01 ~]# ntpdate ntp.aliyun.com
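
ntpdate only synchronizes the clock once. To keep the clocks aligned you could add a cron entry on every node, for example (a sketch):

[root@k8s-master01 ~]# crontab -e
*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1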

2.5 Upgrade the System (optional; I skipped it). The commands are:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 -y
yum update -y --exclude=kernel* && reboot

2.6 Upgrade the Kernel

[root@k8s-master01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8s-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-4.el7.elrepo  ################################# [100%]
[root@k8s-master01 ~]# yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y
[root@k8s-master01 ~]# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
[root@k8s-master01 ~]# uname -r
5.8.8-1.el7.elrepo.x86_64

2.7 Configure Package Repositories

2.7.1 Configure the Kubernetes Repository

[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

2.7.2 Configure the Docker Repository

[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-master01 ~]# yum makecache fast

2.8 Install the IPVS Modules

[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf
[root@k8s-master01 ~]# cat /etc/modules-load.d/ipvs.conf 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service
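
To confirm that the modules were actually loaded (the exact list can vary with the kernel version), you can check with lsmod:

[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack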

2.9 Tune Kernel Parameters

[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-iptables = 1
> fs.may_detach_mounts = 1
> vm.overcommit_memory=1
> vm.panic_on_oom=0
> fs.inotify.max_user_watches=89100
> fs.file-max=52706963
> fs.nr_open=52706963
> net.netfilter.nf_conntrack_max=2310720
> 
> net.ipv4.tcp_keepalive_time = 600
> net.ipv4.tcp_keepalive_probes = 3
> net.ipv4.tcp_keepalive_intvl =15
> net.ipv4.tcp_max_tw_buckets = 36000
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_max_orphans = 327680
> net.ipv4.tcp_orphan_retries = 3
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.ip_conntrack_max = 65536
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.tcp_timestamps = 0
> net.core.somaxconn = 16384
> EOF
[root@k8s-master01 ~]# sysctl --system

2.10 Disable Swap

[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# vim /etc/fstab 
[root@k8s-master01 ~]# cat /etc/fstab 
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
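
Instead of editing /etc/fstab by hand, a sed one-liner (a sketch) comments out any uncommented swap entry:

[root@k8s-master01 ~]# sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab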

2.11 Passwordless SSH from master01 to the Other Nodes

[root@k8s-master01 ~]# ssh-keygen -t rsa
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-node01;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

2.12 Configure Limits

[root@k8s-master01 ~]# ulimit -SHn 65535
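
ulimit -SHn only affects the current shell. To make the file-descriptor limit persist across logins, you could also append it to /etc/security/limits.conf (a sketch):

[root@k8s-master01 ~]# cat <<EOF >> /etc/security/limits.conf
> * soft nofile 65535
> * hard nofile 65535
> EOF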

3. Install Base Components

(Install on all nodes:)

3.1 Install containerd

[root@k8s-master01 ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
[root@k8s-master01 ~]# yum -y install containerd.io-1.2.13-3.2.el7.x86_64.rpm

3.2 Install the Kubernetes Components

[root@k8s-master01 ~]# yum -y install kubeadm kubelet kubectl --disableexcludes=kubernetes
Installed:
  kubeadm.x86_64 0:1.19.1-0  kubectl.x86_64 0:1.19.1-0  kubelet.x86_64 0:1.19.1-0 
Dependency Installed:
  cri-tools.x86_64 0:1.13.0-0            kubernetes-cni.x86_64 0:0.8.7-0          
  socat.x86_64 0:1.7.3.2-2.el7          
[root@k8s-master01 ~]# systemctl enable --now kubelet

3.3 Install Docker

[root@k8s-master01 ~]# yum -y install docker-ce
Installed:
  docker-ce.x86_64 3:19.03.12-3.el7
[root@k8s-master01 ~]# systemctl start docker && systemctl enable docker
[root@k8s-master01 ~]# vim /etc/docker/daemon.json
[root@k8s-master01 ~]# cat /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors":["https://655dds7u.mirror.aliyuncs.com"]
}
[root@k8s-master01 ~]# systemctl restart docker

3.4 Configure kubelet to Use the Aliyun pause Image

DOCKER_CGROUPS is not defined by default, so read the current cgroup driver from Docker before writing the file; otherwise --cgroup-driver would be written out empty:

[root@k8s-master01 ~]# DOCKER_CGROUPS=$(docker info -f '{{.CgroupDriver}}')
[root@k8s-master01 ~]# cat >/etc/sysconfig/kubelet<<EOF
> KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
> EOF

4. Install the High-Availability Components

(Install on all master nodes:)

4.1 Install HAProxy and Keepalived

[root@k8s-master01 ~]# yum install keepalived haproxy -y
Installed:
  haproxy.x86_64 0:1.5.18-9.el7          keepalived.x86_64 0:1.3.5-16.el7

4.2 Configure HAProxy

(The configuration is identical on all master nodes:)

[root@k8s-master01 ~]# cd /etc/haproxy/
[root@k8s-master01 haproxy]# cp haproxy.cfg haproxy.cfg.bak
[root@k8s-master01 haproxy]# vim haproxy.cfg
[root@k8s-master01 haproxy]# cat haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s
defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin
frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master01 192.168.1.3:6443  check
  server master02 192.168.1.4:6443  check
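
Before starting the service, the configuration syntax can be validated with HAProxy's check mode:

[root@k8s-master01 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg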

4.3 Configure Keepalived

4.3.1 master01 Configuration

[root@k8s-master01 ~]# cd /etc/keepalived/
[root@k8s-master01 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@k8s-master01 keepalived]# vim keepalived.conf
[root@k8s-master01 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.1.3
    virtual_router_id 51
    priority 150
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
       chk_apiserver
    }
}

4.3.2 master02 Configuration

[root@k8s-master02 ~]# cd /etc/keepalived/
[root@k8s-master02 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@k8s-master02 keepalived]# vim keepalived.conf
[root@k8s-master02 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.1.4
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
       chk_apiserver
    }
}

4.4 Create the Health-Check Script

(On all master nodes:)

[root@k8s-master01 ~]# vim /etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash
err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
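
Make sure the script is executable so keepalived can run it; since key-based SSH from master01 was set up earlier, it can also simply be copied to master02:

[root@k8s-master01 ~]# chmod +x /etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]# scp /etc/keepalived/check_apiserver.sh k8s-master02:/etc/keepalived/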

4.5 Start HAProxy and Keepalived

[root@k8s-master01 ~]# systemctl start haproxy
[root@k8s-master01 ~]# systemctl enable haproxy
[root@k8s-master01 ~]# systemctl start keepalived
[root@k8s-master01 ~]# systemctl enable keepalived
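
A quick check confirms that the VIP is bound on master01 and that the HAProxy frontend is listening on port 16443 (assuming the interface is eth0, as in the keepalived configuration):

[root@k8s-master01 ~]# ip addr show eth0 | grep 192.168.1.100
[root@k8s-master01 ~]# ss -lntp | grep 16443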

5. Deploy the Kubernetes Cluster

5.1 Generate and Edit the Init Configuration

(On all master nodes:)

[root@k8s-master01 ~]# kubeadm config print init-defaults > init.default.yaml
[root@k8s-master01 ~]# vim init.default.yaml 
[root@k8s-master01 ~]# cat init.default.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.3
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.1.100
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.100:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
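
Before pulling anything, you can preview which images the configuration resolves to; this also catches typos in the file:

[root@k8s-master01 ~]# kubeadm config images list --config /root/init.default.yaml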

5.2 Pull the Required Images

(On all master nodes:)

[root@k8s-master01 ~]# kubeadm config images pull --config /root/init.default.yaml

5.3 Initialize the Primary Master Node (master01)

[root@k8s-master01 ~]# kubeadm init --config /root/init.default.yaml --upload-certs

Partial output from the initialization:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb \
    --control-plane --certificate-key d092072bb3f05ad103537a3d371b429edc84f38d55cfe57e6f62d80e79b7455d
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb

Create the directory as instructed:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Configure Environment Variables

(On all master nodes:)

[root@k8s-master01 ~]# cat <<EOF >> /root/.bashrc
> export KUBECONFIG=/etc/kubernetes/admin.conf
> EOF
[root@k8s-master01 ~]# source /root/.bashrc

5.5 Install the Network Plugin

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  182k  100  182k    0     0  37949      0  0:00:04  0:00:04 --:--:-- 42388
[root@k8s-master01 ~]# vim calico.yaml
# set CALICO_IPV4POOL_CIDR to match the podSubnet configured earlier
- name: CALICO_IPV4POOL_CIDR
  value: "172.168.0.0/16"
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
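
It takes a few minutes for the calico and coredns pods to become Ready; they can be watched with:

[root@k8s-master01 ~]# kubectl get pods -n kube-system -w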

5.6 Check the Primary Master and System Component Status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   26m   v1.19.1
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-c9784d67d-52ksp   1/1     Running   0          3m23s   172.168.32.130   k8s-master01   <none>           <none>
calico-node-947h8                         1/1     Running   0          3m23s   192.168.1.3      k8s-master01   <none>           <none>
coredns-6c76c8bb89-qqc5h                  1/1     Running   0          26m     172.168.32.131   k8s-master01   <none>           <none>
coredns-6c76c8bb89-wwtgh                  1/1     Running   0          26m     172.168.32.129   k8s-master01   <none>           <none>
etcd-k8s-master01                         1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-apiserver-k8s-master01               1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01      1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-proxy-f4fgp                          1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-scheduler-k8s-master01               1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>

5.7 Join Nodes to the Cluster

5.7.1 Join a Master Node

[root@k8s-master02 ~]# kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb     --control-plane --certificate-key d092072bb3f05ad103537a3d371b429edc84f38d55cfe57e6f62d80e79b7455d

5.7.2 Join a Worker Node

[root@k8s-node01 ~]# kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb

5.7.3 Check the Cluster Nodes from a Master

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   Ready      master   34m     v1.19.1
k8s-master02   Ready      master   4m42s   v1.19.1
k8s-node01     NotReady   <none>   60s     v1.19.1

Check all pods at this point:

[root@k8s-master01 ~]#  kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-c9784d67d-52ksp   1/1     Running   1          123m   172.168.32.134   k8s-master01   <none>           <none>
calico-node-947h8                         1/1     Running   1          123m   192.168.1.3      k8s-master01   <none>           <none>
calico-node-9hcfm                         1/1     Running   0          117m   192.168.1.4      k8s-master02   <none>           <none>
calico-node-cfzc2                         1/1     Running   0          113m   192.168.1.5      k8s-node01     <none>           <none>
coredns-6c76c8bb89-qqc5h                  1/1     Running   1          147m   172.168.32.133   k8s-master01   <none>           <none>
coredns-6c76c8bb89-wwtgh                  1/1     Running   1          147m   172.168.32.132   k8s-master01   <none>           <none>
etcd-k8s-master01                         1/1     Running   1          147m   192.168.1.3      k8s-master01   <none>           <none>
etcd-k8s-master02                         1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-apiserver-k8s-master01               1/1     Running   2          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-apiserver-k8s-master02               1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-controller-manager-k8s-master01      1/1     Running   3          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-controller-manager-k8s-master02      1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-proxy-f4fgp                          1/1     Running   1          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-proxy-lgdzm                          1/1     Running   0          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-proxy-v8jm5                          1/1     Running   0          113m   192.168.1.5      k8s-node01     <none>           <none>
kube-scheduler-k8s-master01               1/1     Running   3          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-scheduler-k8s-master02               1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
metrics-server-769bd9c6f4-nndxf           1/1     Running   21         83m    172.168.85.193   k8s-node01     <none>           <none>

5.7.4 If You Forget the Token

[root@k8s-master01 ~]# kubeadm token create --print-join-command
Note: the generated command lets worker nodes join the cluster.
[root@k8s-master01 ~]# kubeadm init phase upload-certs  --upload-certs
Note: append the generated certificate key to the join command above to join additional master nodes.

6. Deploy the Metrics Server

(On master01:)

[root@k8s-master01 ~]# git clone https://github.com/dotbalo/k8s-ha-install.git
[root@k8s-master01 ~]# cd k8s-ha-install/metrics-server-3.6.1/
[root@k8s-master01 metrics-server-3.6.1]# kubectl create -f .

7. Deploy the Dashboard

[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

View the Dashboard:

[root@k8s-master01 ~]# kubectl get po -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-4zhb5   1/1     Running   0          92s
kubernetes-dashboard-665f4c5ff-cbtst         1/1     Running   0          92s
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.98.122.54   <none>        8000/TCP   2m16s
kubernetes-dashboard        ClusterIP   10.97.164.10   <none>        443/TCP    2m17s
[root@k8s-master01 ~]#  kubectl edit svc kubernetes-dashboard -n !$
 kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited
  type: NodePort
Note: in the editor, change the type field (third line from the bottom) to NodePort.
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.98.122.54   <none>        8000/TCP        5m23s
kubernetes-dashboard        NodePort    10.97.164.10   <none>        443:30424/TCP   5m24s
Note: remember port 30424; it will be used shortly.
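
If you prefer a non-interactive change instead of kubectl edit, a patch (a sketch) achieves the same result; note that the assigned NodePort may differ from 30424:

[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'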

Create an admin user

[root@k8s-master01 ~]# vim admin.yaml
[root@k8s-master01 ~]# cat admin.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
[root@k8s-master01 ~]# kubectl create -f admin.yaml
serviceaccount/admin-user created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Get the token

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-cdj9z
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 69946572-41f4-4a72-9e84-025f6d9e0d67
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImItbnNRd2NqaEJTUkNHX25WYlQwUVdReDU1ODFZWHgzbkV0bHhrdlB0eGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNkajl6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2OTk0NjU3Mi00MWY0LTRhNzItOWU4NC0wMjVmNmQ5ZTBkNjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.hB8Ij0nHvH_iumG89ejqcZLaLGupSc-SuLyIpEKcdrYJhgXcT0EvNIC7LamQJdqr12b8p2LTYuiqVzz04yOYmXB0_MxmrXVpbySjWoqfKQHMon_ew9EAN-1oHgBHkzAbD9jiLaJyPduHajwaBQz6LtIC7QPn9cwRMYZvUXVqys3q7mU2EEFt3I4TS6CReg7oZ-WezTF-7ggRFjpe5E0pZrcovICIJTiG2e8y5d0Tflyx3oWhsEqDywSfkw3CJICJmKG_-TgxmMJuFcHY_1EzZEmvpsrnBIWTii6-6-3vAmC2irUs4TRBzldIt4gzcF4Dw2iXn1r8Tn_u77Wh9uReMA
Note: this token is needed to log in to the web UI.

Test logging in from a browser

https://192.168.1.100:30424


If you can log in and see the Dashboard at this point, the deployment was successful.
