Deploying a Highly Available Kubernetes Cluster with kubeadm

Summary: How to deploy a highly available Kubernetes cluster with kubeadm, HAProxy, and keepalived.

1. Machine List

Due to limited resources, this lab uses only three virtual machines: two master nodes and one worker node.

The high-availability components are also deployed on the two master nodes.

If you have more resources, you can expand the environment to five machines with more masters and workers, or even seven machines with the HA components on dedicated hosts; the deployment procedure is unaffected.

Host           IP
k8s-master01   192.168.1.3
k8s-master02   192.168.1.4
k8s-node01     192.168.1.5
vip            192.168.1.100

2. Initial Environment Preparation

(Perform on all nodes:)

2.1 Hostname resolution

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.3 k8s-master01
192.168.1.4 k8s-master02
192.168.1.5 k8s-node01

2.2 Disable the firewall and SELinux

[root@k8s-master01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@k8s-master01 ~]# cat /etc/sysconfig/selinux
SELINUX=disabled
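
setenforce 0 only changes the running state; to keep SELinux disabled across reboots the config file must also be edited (the cat above shows the end result). A minimal sketch, assuming the stock CentOS 7 config file at /etc/selinux/config (which /etc/sysconfig/selinux links to):

[root@k8s-master01 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config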

2.3 Disable NetworkManager

[root@k8s-master01 ~]# systemctl disable --now NetworkManager
[root@k8s-master01 ~]# systemctl restart network
[root@k8s-master01 ~]# ping www.baidu.com
# confirm the network is reachable

2.4 Synchronize time

[root@k8s-master01 ~]# ntpdate ntp.aliyun.com
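
ntpdate is a one-shot sync; to keep the clocks aligned over time, a periodic job can be added (an optional sketch, not part of the original steps; the 30-minute interval is arbitrary):

[root@k8s-master01 ~]# (crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com >/dev/null 2>&1') | crontab -
[root@k8s-master01 ~]# crontab -l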

2.5 Upgrade the system (optional; I did not do this). Commands:

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 -y
yum update -y --exclude=kernel* && reboot

2.6 Upgrade the kernel

[root@k8s-master01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@k8s-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-4.el7.elrepo  ################################# [100%]
[root@k8s-master01 ~]# yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y
[root@k8s-master01 ~]# grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
[root@k8s-master01 ~]# uname -r
5.8.8-1.el7.elrepo.x86_64

2.7 Configure package repositories

2.7.1 Kubernetes repository

[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

2.7.2 Docker repository

[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-master01 ~]# yum makecache fast

2.8 Install IPVS modules

[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf
[root@k8s-master01 ~]# cat /etc/modules-load.d/ipvs.conf 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service
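
To confirm the modules were actually loaded, a quick check (an added verification, not in the original notes) can be run:

[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack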

2.9 Tune kernel parameters

[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-iptables = 1
> fs.may_detach_mounts = 1
> vm.overcommit_memory=1
> vm.panic_on_oom=0
> fs.inotify.max_user_watches=89100
> fs.file-max=52706963
> fs.nr_open=52706963
> net.netfilter.nf_conntrack_max=2310720
> 
> net.ipv4.tcp_keepalive_time = 600
> net.ipv4.tcp_keepalive_probes = 3
> net.ipv4.tcp_keepalive_intvl =15
> net.ipv4.tcp_max_tw_buckets = 36000
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_max_orphans = 327680
> net.ipv4.tcp_orphan_retries = 3
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.ip_conntrack_max = 65536
> net.ipv4.tcp_timestamps = 0
> net.core.somaxconn = 16384
> EOF
[root@k8s-master01 ~]# sysctl --system
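
If sysctl --system complains that net.bridge.bridge-nf-call-iptables does not exist, the br_netfilter module is not loaded yet; loading it (and persisting it the same way as the IPVS modules) resolves this — an added note, not in the original steps:

[root@k8s-master01 ~]# modprobe br_netfilter
[root@k8s-master01 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@k8s-master01 ~]# sysctl --system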

2.10 Disable swap

[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# vim /etc/fstab 
[root@k8s-master01 ~]# cat /etc/fstab 
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

2.11 Passwordless SSH from master01 to the other nodes

[root@k8s-master01 ~]# ssh-keygen -t rsa
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-node01;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

2.12 Configure limits

[root@k8s-master01 ~]# ulimit -SHn 65535
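
ulimit -SHn only affects the current shell session; to make the limit persistent for all sessions it can be written to /etc/security/limits.conf (a sketch; the values mirror the ulimit above rather than coming from the original write-up):

[root@k8s-master01 ~]# cat >> /etc/security/limits.conf <<EOF
> * soft nofile 65535
> * hard nofile 65535
> EOF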

3. Install Base Components

(Install on all nodes:)

3.1 Install containerd

[root@k8s-master01 ~]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
[root@k8s-master01 ~]# yum -y install containerd.io-1.2.13-3.2.el7.x86_64.rpm

3.2 Install Kubernetes components

[root@k8s-master01 ~]# yum -y install kubeadm kubelet kubectl --disableexcludes=kubernetes
Installed:
  kubeadm.x86_64 0:1.19.1-0  kubectl.x86_64 0:1.19.1-0  kubelet.x86_64 0:1.19.1-0 
Installed as dependencies:
  cri-tools.x86_64 0:1.13.0-0            kubernetes-cni.x86_64 0:0.8.7-0          
  socat.x86_64 0:1.7.3.2-2.el7          
[root@k8s-master01 ~]# systemctl enable --now kubelet

3.3 Install Docker

[root@k8s-master01 ~]# yum -y install docker-ce
Installed:
  docker-ce.x86_64 3:19.03.12-3.el7
[root@k8s-master01 ~]# systemctl start docker && systemctl enable docker
[root@k8s-master01 ~]# vim /etc/docker/daemon.json
[root@k8s-master01 ~]# cat /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors":["https://655dds7u.mirror.aliyuncs.com"]
}
[root@k8s-master01 ~]# systemctl restart docker

3.4 Configure kubelet to use the Aliyun pause image

The heredoc below expands $DOCKER_CGROUPS when the file is written, so the variable must first be set to Docker's cgroup driver (systemd here, as configured in daemon.json) in the same shell:

[root@k8s-master01 ~]# DOCKER_CGROUPS=$(docker info 2>/dev/null | grep 'Cgroup Driver' | awk '{print $3}')
[root@k8s-master01 ~]# cat >/etc/sysconfig/kubelet<<EOF
> KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
> EOF
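
On the CentOS RPM packaging, the kubelet systemd drop-in reads /etc/sysconfig/kubelet as an environment file, so restarting kubelet picks up the new arguments (before kubeadm init it will simply keep restarting, which is expected):

[root@k8s-master01 ~]# systemctl daemon-reload && systemctl restart kubelet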

4. Install High-Availability Components

(Install on all master nodes:)

4.1 Install HAProxy and keepalived

[root@k8s-master01 ~]# yum install keepalived haproxy -y
Installed:
  haproxy.x86_64 0:1.5.18-9.el7          keepalived.x86_64 0:1.3.5-16.el7

4.2 Edit the HAProxy configuration

(The configuration is identical on all master nodes:)

[root@k8s-master01 ~]# cd /etc/haproxy/
[root@k8s-master01 haproxy]# cp haproxy.cfg haproxy.cfg.bak
[root@k8s-master01 haproxy]# vim haproxy.cfg
[root@k8s-master01 haproxy]# cat haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s
defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
listen stats
  bind    *:8006
  mode    http
  stats   enable
  stats   hide-version
  stats   uri       /stats
  stats   refresh   30s
  stats   realm     Haproxy\ Statistics
  stats   auth      admin:admin
frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master01 192.168.1.3:6443  check
  server master02 192.168.1.4:6443  check
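
Before starting the service, a quick syntax check can catch typos in the configuration (a small extra verification, not part of the original write-up):

[root@k8s-master01 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg
# haproxy reports whether the configuration file is valid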

4.3 Edit the keepalived configuration

4.3.1 master01 configuration

[root@k8s-master01 ~]# cd /etc/keepalived/
[root@k8s-master01 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@k8s-master01 keepalived]# vim keepalived.conf
[root@k8s-master01 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.1.3
    virtual_router_id 51
    priority 150
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
       chk_apiserver
    }
}

4.3.2 master02 configuration

[root@k8s-master02 ~]# cd /etc/keepalived/
[root@k8s-master02 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@k8s-master02 keepalived]# vim keepalived.conf
[root@k8s-master02 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3  
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.1.4
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
       chk_apiserver
    }
}

4.4 Create the health check script

(All master nodes:)

[root@k8s-master01 ~]# vim /etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]# cat /etc/keepalived/check_apiserver.sh 
#!/bin/bash
err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
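
keepalived invokes this script by path, so it must be executable (a step implied but not shown in the original):

[root@k8s-master01 ~]# chmod +x /etc/keepalived/check_apiserver.sh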

4.5 Start HAProxy and keepalived

[root@k8s-master01 ~]# systemctl start haproxy
[root@k8s-master01 ~]# systemctl enable haproxy
[root@k8s-master01 ~]# systemctl start keepalived
[root@k8s-master01 ~]# systemctl enable keepalived
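
After both services are up, the VIP should appear on the MASTER node's interface and the HAProxy monitor page should respond; a quick check (added verification, addresses and ports as configured above). One caveat with this setup: check_apiserver.sh stops keepalived outright when kube-apiserver is not yet running, so before the cluster is initialized the VIP may disappear after a few failed checks; restarting keepalived shortly before running kubeadm init avoids chasing a missing VIP.

[root@k8s-master01 ~]# ip addr show eth0 | grep 192.168.1.100
[root@k8s-master01 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:33305/monitor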

5. Deploy the Kubernetes Cluster

5.1 Generate and edit the init configuration

(All master nodes. Compared with the defaults, the file below changes advertiseAddress, adds certSANs and a controlPlaneEndpoint pointing at the VIP and the HAProxy port 16443, switches imageRepository to the Aliyun mirror, and sets podSubnet:)

[root@k8s-master01 ~]# kubeadm config print init-defaults > init.default.yaml
[root@k8s-master01 ~]# vim init.default.yaml 
[root@k8s-master01 ~]# cat init.default.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.3
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.1.100
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.100:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

5.2 Pull the required images

(All master nodes:)

[root@k8s-master01 ~]# kubeadm config images pull --config /root/init.default.yaml

5.3 Initialize the primary master node (master01)

[root@k8s-master01 ~]# kubeadm init --config /root/init.default.yaml --upload-certs

Partial initialization output:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb \
    --control-plane --certificate-key d092072bb3f05ad103537a3d371b429edc84f38d55cfe57e6f62d80e79b7455d
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb

Create the kubeconfig directory as instructed:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Configure environment variables

(All master nodes:)

[root@k8s-master01 ~]# cat <<EOF >> /root/.bashrc
> export KUBECONFIG=/etc/kubernetes/admin.conf
> EOF
[root@k8s-master01 ~]# source /root/.bashrc

5.5 Install the network plugin

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  182k  100  182k    0     0  37949      0  0:00:04  0:00:04 --:--:-- 42388
[root@k8s-master01 ~]# vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "172.168.0.0/16"
Note: uncomment CALICO_IPV4POOL_CIDR in the manifest and set it to the podSubnet used in the init configuration (172.168.0.0/16).
[root@k8s-master01 ~]# kubectl apply -f calico.yaml

5.6 Check the primary master and system component status

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   26m   v1.19.1
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-c9784d67d-52ksp   1/1     Running   0          3m23s   172.168.32.130   k8s-master01   <none>           <none>
calico-node-947h8                         1/1     Running   0          3m23s   192.168.1.3      k8s-master01   <none>           <none>
coredns-6c76c8bb89-qqc5h                  1/1     Running   0          26m     172.168.32.131   k8s-master01   <none>           <none>
coredns-6c76c8bb89-wwtgh                  1/1     Running   0          26m     172.168.32.129   k8s-master01   <none>           <none>
etcd-k8s-master01                         1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-apiserver-k8s-master01               1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01      1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-proxy-f4fgp                          1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>
kube-scheduler-k8s-master01               1/1     Running   0          26m     192.168.1.3      k8s-master01   <none>           <none>

5.7 Join nodes to the cluster

5.7.1 Join a master node to the cluster

[root@k8s-master02 ~]# kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb     --control-plane --certificate-key d092072bb3f05ad103537a3d371b429edc84f38d55cfe57e6f62d80e79b7455d

5.7.2 Join a worker node to the cluster

[root@k8s-node01 ~]# kubeadm join 192.168.1.100:16443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:f0e18d595009a909d60378598bbf80895cf393b4ebf851fa75c73c88de5644cb

5.7.3 View the cluster nodes from a master

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   Ready      master   34m     v1.19.1
k8s-master02   Ready      master   4m42s   v1.19.1
k8s-node01     NotReady   <none>   60s     v1.19.1

View all pods at this point:

[root@k8s-master01 ~]#  kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-c9784d67d-52ksp   1/1     Running   1          123m   172.168.32.134   k8s-master01   <none>           <none>
calico-node-947h8                         1/1     Running   1          123m   192.168.1.3      k8s-master01   <none>           <none>
calico-node-9hcfm                         1/1     Running   0          117m   192.168.1.4      k8s-master02   <none>           <none>
calico-node-cfzc2                         1/1     Running   0          113m   192.168.1.5      k8s-node01     <none>           <none>
coredns-6c76c8bb89-qqc5h                  1/1     Running   1          147m   172.168.32.133   k8s-master01   <none>           <none>
coredns-6c76c8bb89-wwtgh                  1/1     Running   1          147m   172.168.32.132   k8s-master01   <none>           <none>
etcd-k8s-master01                         1/1     Running   1          147m   192.168.1.3      k8s-master01   <none>           <none>
etcd-k8s-master02                         1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-apiserver-k8s-master01               1/1     Running   2          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-apiserver-k8s-master02               1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-controller-manager-k8s-master01      1/1     Running   3          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-controller-manager-k8s-master02      1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-proxy-f4fgp                          1/1     Running   1          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-proxy-lgdzm                          1/1     Running   0          117m   192.168.1.4      k8s-master02   <none>           <none>
kube-proxy-v8jm5                          1/1     Running   0          113m   192.168.1.5      k8s-node01     <none>           <none>
kube-scheduler-k8s-master01               1/1     Running   3          147m   192.168.1.3      k8s-master01   <none>           <none>
kube-scheduler-k8s-master02               1/1     Running   1          117m   192.168.1.4      k8s-master02   <none>           <none>
metrics-server-769bd9c6f4-nndxf           1/1     Running   21         83m    172.168.85.193   k8s-node01     <none>           <none>

5.7.4 If you forget the token

[root@k8s-master01 ~]# kubeadm token create --print-join-command
Note: the printed join command can be used by worker nodes.
[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs
Note: combine the certificate key printed here with the join command above (plus --control-plane) to join additional master nodes.

6. Deploy Metrics Server

(master01 node:)

[root@k8s-master01 ~]# git clone https://github.com/dotbalo/k8s-ha-install.git
[root@k8s-master01 ~]# cd k8s-ha-install/metrics-server-3.6.1/
[root@k8s-master01 metrics-server-3.6.1]# kubectl create -f .
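
Once the metrics-server pod is Running, resource metrics should be available from the API; a quick check (an added verification, not in the original notes):

[root@k8s-master01 metrics-server-3.6.1]# kubectl top nodes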

7. Deploy the Dashboard

[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

Check the Dashboard:

[root@k8s-master01 ~]# kubectl get po -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-4zhb5   1/1     Running   0          92s
kubernetes-dashboard-665f4c5ff-cbtst         1/1     Running   0          92s
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.98.122.54   <none>        8000/TCP   2m16s
kubernetes-dashboard        ClusterIP   10.97.164.10   <none>        443/TCP    2m17s
[root@k8s-master01 ~]#  kubectl edit svc kubernetes-dashboard -n !$
 kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited
  type: NodePort
Note: change the type field (the third line from the bottom) to NodePort.
[root@k8s-master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.98.122.54   <none>        8000/TCP        5m23s
kubernetes-dashboard        NodePort    10.97.164.10   <none>        443:30424/TCP   5m24s
Note: remember the port 30424; it will be used shortly.

Create an administrator user

[root@k8s-master01 ~]# vim admin.yaml
[root@k8s-master01 ~]# cat admin.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
[root@k8s-master01 ~]# kubectl create -f admin.yaml
serviceaccount/admin-user created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Retrieve the token

[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-cdj9z
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 69946572-41f4-4a72-9e84-025f6d9e0d67
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImItbnNRd2NqaEJTUkNHX25WYlQwUVdReDU1ODFZWHgzbkV0bHhrdlB0eGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNkajl6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2OTk0NjU3Mi00MWY0LTRhNzItOWU4NC0wMjVmNmQ5ZTBkNjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.hB8Ij0nHvH_iumG89ejqcZLaLGupSc-SuLyIpEKcdrYJhgXcT0EvNIC7LamQJdqr12b8p2LTYuiqVzz04yOYmXB0_MxmrXVpbySjWoqfKQHMon_ew9EAN-1oHgBHkzAbD9jiLaJyPduHajwaBQz6LtIC7QPn9cwRMYZvUXVqys3q7mU2EEFt3I4TS6CReg7oZ-WezTF-7ggRFjpe5E0pZrcovICIJTiG2e8y5d0Tflyx3oWhsEqDywSfkw3CJICJmKG_-TgxmMJuFcHY_1EzZEmvpsrnBIWTii6-6-3vAmC2irUs4TRBzldIt4gzcF4Dw2iXn1r8Tn_u77Wh9uReMA
Note: this token is used to log in to the web UI.

Log in from a browser to test:

https://192.168.1.100:30424 (the VIP; since this is a NodePort service, any node IP also works)

(Screenshots: the Dashboard login page and the post-login view.)

If you can see this page, the deployment has been successful.
