# Kubernetes Learning: Cluster Setup


@[TOC]

## 1. Environment Preparation

This setup uses three servers, all running CentOS Linux release 7.6.1810 (Core).

| Hostname   | IP              |
| ---------- | --------------- |
| k8s-master | 192.168.204.128 |
| k8s-node1  | 192.168.204.130 |
| k8s-node2  | 192.168.204.131 |

### 1.1 Notes

  • The master node runs the etcd service, the cluster's primary database, which stores the state of every resource
  • All nodes run the Kubernetes services, with the master and worker components configured separately
  • All nodes run the flannel service for cross-host container networking

### 2.1 Procedure

#### 2.1.1 Initialization (required on all three machines)

#### 2.1.2 System initialization

```bash
# Disable firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable the swap partition
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Bind the hostnames
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.204.128 k8s-master
192.168.204.130 k8s-node1
192.168.204.131 k8s-node2
```
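
The listing above only shows the finished /etc/hosts; a minimal sketch for appending the three entries on every node (using the IPs from the table in section 1) is a heredoc:

```bash
# Append the cluster host entries on each of the three machines
cat >> /etc/hosts << EOF
192.168.204.128 k8s-master
192.168.204.130 k8s-node1
192.168.204.131 k8s-node2
EOF
```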

#### 2.1.3 Verify network connectivity

```bash
# Run on all three servers
ping -c1 k8s-master
ping -c1 k8s-node1
ping -c1 k8s-node2
```

#### 2.1.4 Pass bridged IPv4 traffic to iptables chains

```bash
# Append the settings
echo "net.ipv4.ip_forward = 1">> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1">> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1">> /etc/sysctl.conf
# Apply immediately
sysctl -p
```
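
One caveat not mentioned above: the `net.bridge.bridge-nf-call-*` keys only exist once the `br_netfilter` kernel module is loaded, so `sysctl -p` can fail with "No such file or directory" on a fresh host. A minimal sketch to load it now and at boot (the file name under /etc/modules-load.d is my choice, not from the original):

```bash
# Load the bridge netfilter module and make it persistent across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/k8s.conf

# Re-apply the settings once the module is present
sysctl -p
```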

#### 2.1.5 Install Docker

```bash
# Remove old versions
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine docker-ce* -y

# Install base dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the Docker repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Install and start Docker
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker.service
systemctl start docker.service

# Check the version
[root@k8s-master ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a

# Check that Docker is running
[root@localhost ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-31 18:11:11 CST; 30s ago
     Docs: https://docs.docker.com
 Main PID: 7280 (dockerd)
   CGroup: /system.slice/docker.service
           └─7280 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.354939836+08:00" level=info msg="Loading containers: start."
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.375579559+08:00" level=error msg="e56e04d95de9b030e77365f0539208360bef8b35cd...ontainer"
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.795605539+08:00" level=info msg="Removing stale sandbox 6ce64a8c27d18ab6f2cd...3c9e57f)"
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.799750708+08:00" level=warning msg="Error (Unable to complete atomic operati...ying...."
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.830765677+08:00" level=info msg="Default bridge (docker0) is assigned with a... address"
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.864931106+08:00" level=info msg="Loading containers: done."
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.881542895+08:00" level=info msg="Docker daemon" commit=afacb8b graphdriver(s...n=19.03.8
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.881657826+08:00" level=info msg="Daemon has completed initialization"
Mar 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.921209150+08:00" level=info msg="API listen on /var/run/docker.sock"
Mar 31 18:11:11 k8s-master systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

# Configure a registry mirror
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}

# Restart docker
systemctl restart docker.service
```
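
Worth flagging here, since the kubeadm preflight checks later in this post warn that Docker is using the `cgroupfs` cgroup driver while `systemd` is recommended: a hedged variant of the same `daemon.json` that also switches the driver (best done at this point, before `kubeadm init`, since kubelet and Docker must agree on the driver) might look like this:

```bash
# Write daemon.json with the mirror from above plus the systemd cgroup driver
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker.service
```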

#### 2.1.6 Install NFS shared-storage support

```bash
# nfs-utils must be installed before NFS network storage can be mounted
yum install -y nfs-utils
```
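
nfs-utils only installs the client tooling; the post never sets up an actual export. As a sketch under assumed values (the server IP and paths below are hypothetical, not from the original):

```bash
# List what a hypothetical NFS server exports
showmount -e 192.168.204.1

# Mount a hypothetical export and confirm it is visible
mkdir -p /mnt/nfs-data
mount -t nfs 192.168.204.1:/data/share /mnt/nfs-data
df -h /mnt/nfs-data
```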

#### 2.1.7 Configure the Kubernetes yum repo

```bash
# Remove any old packages
[root@localhost ~]# yum remove -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
No match for argument: kubelet
No match for argument: kubeadm
No match for argument: kubectl
No packages marked for removal

# Configure the Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

#### 2.1.8 Install kubeadm, kubelet, and kubectl

Kubernetes releases move quickly, so pin version 1.15.0 for this exercise:

```bash
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

# Enable kubelet at boot
systemctl enable kubelet
```
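
A quick sanity check, not in the original transcript, to confirm the pinned versions landed:

```bash
# All three should report v1.15.0
kubeadm version -o short
kubectl version --client --short
rpm -q kubelet
```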

### 2.2 Install the Kubernetes master

Run the following on ==192.168.204.128==:

```bash
kubeadm init \
  --apiserver-advertise-address=192.168.204.128 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.224.0.0/16
```

This takes a while (how long depends mostly on your network speed); the normal output is the wall of text below. One caveat worth noting up front: flannel's stock manifest assumes a pod network of 10.244.0.0/16, so the 10.224.0.0/16 passed here does not match it, which likely contributes to the networking detours later in this post.

```bash
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.204.128]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.204.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.204.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.502967 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nnbkhp.4vawwyezmuio78hh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.204.128:6443 --token nnbkhp.4vawwyezmuio78hh \
    --discovery-token-ca-cert-hash sha256:54662ac697a50b60882ab12060e88dddb2cb20efcd94a99dcd6c1a81ca663006
```

Following the instructions, create the directory and copy the config file:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master .kube]# ls /root/.kube/config
/root/.kube/config
```
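
With the kubeconfig in place, a quick check (my addition, not part of the original transcript) confirms the apiserver answers; the master will report NotReady until a CNI plugin is installed in the next step:

```bash
# Verify the control plane responds and list the (still NotReady) master
kubectl cluster-info
kubectl get nodes
```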

### 2.3 Install a pod network add-on (CNI)

```bash
[root@k8s-master .kube]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```

Pull the image (note that the tag pulled here is the arm64 build; on these x86_64 hosts the flannel DaemonSet actually runs the v0.11.0-amd64 image):

```bash
[root@k8s-master .kube]# docker pull quay.io/coreos/flannel:v0.11.0-arm64
v0.11.0-arm64: Pulling from coreos/flannel

e3c488b39803: Pull complete 
05a63128803b: Pull complete 
3efb28f38731: Pull complete 
71ef15dbcc6f: Pull complete 
88a6c618f324: Pull complete 
75dbef1b80d7: Pull complete 
00e6d6cf2173: Pull complete 
Digest: sha256:2e56d44d0edaff330ac83f4ab82f0099f17ad3f81ad3564c6b0e1a85d05c9dfc
Status: Downloaded newer image for quay.io/coreos/flannel:v0.11.0-arm64
```

Check the running state:

```bash
[root@k8s-master .kube]# kubectl get pods -n kube-system

NAME                                 READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-v6dcr              1/1     Running   0          12m
coredns-bccdc95cf-wbml6              1/1     Running   0          12m
etcd-k8s-master                      1/1     Running   0          11m
kube-apiserver-k8s-master            1/1     Running   0          11m
kube-controller-manager-k8s-master   1/1     Running   1          11m
kube-flannel-ds-amd64-ncprb          1/1     Running   0          4m9s
kube-proxy-mbrm7                     1/1     Running   0          12m
kube-scheduler-k8s-master            1/1     Running   1          11m
```

### 2.4 Join the nodes to the cluster (with no shortage of detours)

On ==192.168.204.130== and ==192.168.204.131==, run the `kubeadm join` command printed in the master's init output:

```bash
[root@k8s-node1 app]# kubeadm join 192.168.204.128:6443 --token nnbkhp.4vawwyezmuio78hh  --discovery-token-ca-cert-hash sha256:54662ac697a50b60882ab12060e88dddb2cb20efcd94a99dcd6c1a81ca663006

[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

If you have lost the command, `kubeadm token list` shows the existing tokens, and `kubeadm token create` issues a new one if the old token has expired.
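
A convenient one-liner (a standard kubeadm flag, though not shown in the original) regenerates the complete join command, fresh token and CA-cert hash included:

```bash
# Print a ready-to-paste join command
kubeadm token create --print-join-command
```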

Check whether the join succeeded:

```bash
[root@k8s-master .kube]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   77m   v1.15.0
k8s-node1    NotReady   <none>   15m   v1.15.0
k8s-node2    Ready      <none>   43m   v1.15.0
```

k8s-node1 is stuck in NotReady; inspect its events:

```bash
[root@k8s-master .kube]# kubectl describe nodes k8s-node1
# The following error appears
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
```

On the node, /etc/cni/net.d/ contains no files, and `docker images` shows no flannel image.

The fix:

```bash
# Pull the image
docker pull quay.io/coreos/flannel:v0.11.0-arm64

# Write the CNI config
cat << EOF > /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOF
```

(Aside: a config with a top-level `plugins` array is the `.conflist` format; as this post discovers further down, kubelet actually looks for /etc/cni/net.d/10-flannel.conflist, which the flannel DaemonSet normally writes itself.)

```bash
# Create the config directories
mkdir -p /usr/share/oci-umount/oci-umount.d
mkdir -p /run/flannel/

# Write the subnet config
cat << EOF > /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.224.2.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
```

(Note the CIDR mismatch carried over from `kubeadm init`: FLANNEL_NETWORK here is 10.244.0.0/16, while the pod CIDR given to kubeadm, and the FLANNEL_SUBNET above, use 10.224.x.x.)

```bash
# Re-apply the flannel manifest (an older v0.9.1 copy this time)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
```

Then re-register. The node kept timing out against the master and simply would not register (this one took a while to untangle).

```bash
# Check the system log
cat /var/log/messages
# The error reads
Apr  1 15:33:21 k8s-node1 kubelet: E0401 15:33:21.730325   13112 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "k8s-node1" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
# Research suggests the anonymous user lacks permissions
# Run on the master
kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous
# The node now registers, but the status is still wrong
[root@k8s-master log]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   178m   v1.15.0
k8s-node1    NotReady   <none>   3m5s   v1.15.0
k8s-node2    Ready      <none>   143m   v1.15.0
```

(A word of caution not in the original: binding cluster-admin to system:anonymous effectively turns off API-server authentication; treat it as a temporary workaround and remove it once the real cause is found.)

Nothing to do but keep reading the logs, where this error turned up:

```bash
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
```

Hard to parse, but it smells memory-related; `free -h` confirms there is indeed little memory left.

```bash
# Drop caches to free memory
echo 3 > /proc/sys/vm/drop_caches
# On the node, wipe the join state
kubeadm reset
# On the master, delete the NotReady node
kubectl delete node k8s-node1
```

Another `kubeadm join`, and the node is still NotReady. Back in the system log:

```bash
Unable to update cni config: No networks found in /etc/cni/net.d
```

That error is unambiguous: a CNI network-config problem.

```bash
ll /etc/cni/net.d/10-flannel.conflist
# The file is missing; copy it over from the master
scp -P 5522 -r /etc/cni/ 192.168.204.130:/etc/
```

Register once more, and this time it works:

```bash
Apr 01 15:47:31 k8s-node1 kubelet[14000]: I0401 15:47:31.379357   14000 kubelet_node_status.go:72] Attempting to register node k8s-nod
Apr 01 15:47:31 k8s-node1 kubelet[14000]: I0401 15:47:31.413690   14000 kubelet_node_status.go:75] Successfully registered node k8s-no
Apr 01 15:47:31 k8s-node1 kubelet[14000]: I0401 15:47:31.473948   14000 reconciler.go:150] Reconciler: start to sync state
```

Check the status on the master:

```bash
[root@k8s-master log]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h18m   v1.15.0
k8s-node1    Ready    <none>   10m     v1.15.0
k8s-node2    Ready    <none>   163m    v1.15.0
```

At last, on to the next step!

### 2.5 Test the cluster

Create a pod in the cluster and check that it runs normally:

```bash
# Create an nginx deployment
[root@k8s-master log]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

# Expose the port
[root@k8s-master log]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# Check pod and service status
[root@k8s-master log]# kubectl get pod,svc

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        3h21m
service/nginx        NodePort    10.1.44.94   <none>        80:32450/TCP   27s

# Check the nginx pod
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-554b9c67f9-hpv7k   1/1     Running             0          5m53s
```
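
To confirm the service actually answers (my own check, using the NodePort 32450 assigned above), hit any node's IP on that port:

```bash
# Any node IP works for a NodePort service
curl -I http://192.168.204.128:32450
```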

### 2.6 Deploy the Dashboard

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
```

The default image cannot be pulled from inside China, so edit the image address in kubernetes-dashboard.yaml:

```bash
# Download it first
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Change the image address
vim kubernetes-dashboard.yaml
:%s/k8s.gcr.io\/kubernetes-dashboard-amd64:v1.10.1/roeslys\/kubernetes-dashboard-amd64:v1.10.1
:wq!
```

By default the Dashboard is only reachable from inside the cluster. Change the Service to type NodePort so it can be accessed externally:

```yaml
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 31000
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
```

Create the pod:

```bash
# Create the pods
kubectl apply -f kubernetes-dashboard.yaml
# Check the port
[root@k8s-master kubernetes]# ss -nlpt|grep 31000
LISTEN     0      128         :::31000                   :::*                   users:(("kube-proxy",pid=16248,fd=13))

# Check the details
kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS              RESTARTS   AGE
pod/coredns-bccdc95cf-v6dcr                 1/1     Running             1          3h58m
pod/coredns-bccdc95cf-wbml6                 1/1     Running             1          3h58m
pod/etcd-k8s-master                         1/1     Running             1          3h58m
pod/kube-apiserver-k8s-master               1/1     Running             1          3h57m
pod/kube-controller-manager-k8s-master      1/1     Running             39         3h58m
pod/kube-flannel-ds-amd64-9p2dc             1/1     Running             0          20m
pod/kube-flannel-ds-amd64-bmqwf             1/1     Running             2          3h24m
pod/kube-flannel-ds-amd64-ncprb             1/1     Running             2          3h50m
pod/kube-proxy-c2dlk                        1/1     Running             1          3h24m
pod/kube-proxy-mbrm7                        1/1     Running             1          3h58m
pod/kube-proxy-zwmgq                        1/1     Running             0          20m
pod/kube-scheduler-k8s-master               1/1     Running             36         3h57m
pod/kubernetes-dashboard-7d75c474bb-wp7p2   0/1     ImagePullBackOff    0          13m
pod/kubernetes-dashboard-974d89547-t442f    0/1     ContainerCreating   0          30s

NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns               ClusterIP   10.1.0.10     <none>        53/UDP,53/TCP,9153/TCP   3h59m
service/kubernetes-dashboard   NodePort    10.1.142.55   <none>        443:32483/TCP            13m
```

Create a service account and bind it to the built-in cluster-admin cluster role:

```bash
[root@k8s-master kubernetes]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created

[root@k8s-master kubernetes]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

[root@k8s-master kubernetes]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-w22dx
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 1f04e31c-bae2-42a9-aa37-a1007a50af57

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdzIyZHgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMWYwNGUzMWMtYmFlMi00MmE5LWFhMzctYTEwMDdhNTBhZjU3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.rW6OPJuaragcM3tAiabfVdXm6UWsWi7Ki7XhHseJO8Z5UcYhIZzcO3siaKHzz2pSYy5TQhpRZ87XPE5gVqJOS1tOoA0iDOuvTPqP8RBsR7Qm3nC1INKPP3gird8ZNmGa6hHnkA_dWldVPPzC-_8TkEx2z2I_t96BRN_NgiR_hJHpMyhL0Q5zt_QN_C9EMMNmbnMuTbUZ0RUyQq9Y_4X6bxbT8DQrGJZdxfN6PoMkxgCHYUIsDV0cL951XgmuicwIFzY8tsrLHXHd1MNgi36z6_-MNuqeMZWtNy1nQOhmj9gaTeL9h4YzaDh55TrGdQn4b0o-v-yNHEfCkaWywdJXGg
ca.crt:     1025 bytes
namespace:  11 bytes
```

Record the token. A browser test against the Dashboard, however, fails to connect.

```bash
vim /usr/lib/systemd/system/docker.service
# Add the ExecStartPost line below to the [Service] section
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStart=/usr/bin/dockerd

# Restart docker
systemctl restart docker.service
```

Still no luck. Check which node the pod landed on:

```bash
[root@k8s-master kubernetes]# kubectl get pods --all-namespaces -o wide
kube-system   kubernetes-dashboard-974d89547-t442f   0/1     CrashLoopBackOff   10         36m     10.224.2.5        k8s-node2    <none>           <none>
```

It is running on node2, so log in to node2:

```bash
kubectl get pods
# If kubectl does not work on the node, add "export KUBECONFIG=/etc/kubernetes/admin.conf"
# at the bottom of /etc/profile; copy the file from the master if it is missing

# Delete the pod
kubectl delete pod kubernetes-dashboard-974d89547-ztqs6 -n kube-system

# Check again
kubectl get pods --all-namespaces -o wide
kube-system   kubernetes-dashboard-974d89547-hp5nv   1/1     Running   0          14s     10.224.2.7        k8s-node2    <none>           <none>

# Check the port
[root@k8s-node2 kubernetes]# ss -nltp|grep 30001
LISTEN     0      128         :::30001                   :::*                   users:(("kube-proxy",pid=27237,fd=13))
```

The pod keeps falling back into CrashLoopBackOff; check its logs:

```bash
kubectl logs kubernetes-dashboard-974d89547-vzb2w -n kube-system
```
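
The post ends here without a resolution. As a hedged next step for a CrashLoopBackOff (my addition, using standard kubectl commands), these usually surface the cause:

```bash
# Events often name a failing probe, image, or permission problem
kubectl describe pod kubernetes-dashboard-974d89547-vzb2w -n kube-system

# Logs of the previous (crashed) container instance
kubectl logs kubernetes-dashboard-974d89547-vzb2w -n kube-system --previous
```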