# Kubernetes Learning: Cluster Setup


## 1. Environment Preparation

This setup uses three servers, all running CentOS Linux release 7.6.1810 (Core):

| Hostname   | IP              |
| ---------- | --------------- |
| k8s-master | 192.168.204.128 |
| k8s-node1  | 192.168.204.130 |
| k8s-node2  | 192.168.204.131 |

### 1.1 Notes

  • The master node runs etcd, the cluster's primary datastore, which holds the state of every resource
  • All nodes run the Kubernetes services, with the master and node roles configured separately
  • All nodes run flannel for cross-host container networking

### 2.1 Procedure

#### 2.1.1 Initialization (required on all three machines)

#### 2.1.2 System initialization

```bash
# Disable firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable the swap partition
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Bind hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.204.128 k8s-master
192.168.204.130 k8s-node1
192.168.204.131 k8s-node2
```

#### 2.1.3 Check network connectivity

```bash
# Run on all three servers
ping -c1 k8s-master
ping -c1 k8s-node1
ping -c1 k8s-node2
```

#### 2.1.4 Pass bridged IPv4 traffic to iptables chains

```bash
# Append the settings
echo "net.ipv4.ip_forward = 1">> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1">> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1">> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1">> /etc/sysctl.conf
# Apply immediately
sysctl -p
```
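
If `sysctl -p` reports that the `net.bridge.bridge-nf-call-*` keys do not exist, the `br_netfilter` kernel module is likely not loaded yet; a minimal sketch of loading and persisting it:

```bash
# Load the bridge netfilter module now, and persist it across reboots
# via the standard systemd modules-load.d mechanism on CentOS 7.
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p
```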

#### 2.1.5 Install Docker

```bash
# Remove old versions
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine docker-ce* -y

# Install base dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the Docker repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Install and start Docker
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker.service
systemctl start docker.service

# Check the version
[root@k8s-master ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a

# Check that Docker is running
[root@localhost ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2021-03-31 18:11:11 CST; 30s ago
     Docs: https://docs.docker.com
 Main PID: 7280 (dockerd)
   CGroup: /system.slice/docker.service
           └─7280 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.354939836+08:00" level=info msg="Loading containers: start."
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.375579559+08:00" level=error msg="e56e04d95de9b030e77365f0539208360bef8b35cd...ontainer"
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.795605539+08:00" level=info msg="Removing stale sandbox 6ce64a8c27d18ab6f2cd...3c9e57f)"
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.799750708+08:00" level=warning msg="Error (Unable to complete atomic operati...ying...."
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.830765677+08:00" level=info msg="Default bridge (docker0) is assigned with a... address"
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.864931106+08:00" level=info msg="Loading containers: done."
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.881542895+08:00" level=info msg="Docker daemon" commit=afacb8b graphdriver(s...n=19.03.8
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.881657826+08:00" level=info msg="Daemon has completed initialization"
3月 31 18:11:11 k8s-master dockerd[7280]: time="2021-03-31T18:11:11.921209150+08:00" level=info msg="API listen on /var/run/docker.sock"
3月 31 18:11:11 k8s-master systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
```

```bash
# Configure a registry mirror
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"]
}

# Restart Docker
systemctl restart docker.service
```
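
As seen in the `kubeadm init` output later, kubeadm's preflight checks warn that Docker is using the `cgroupfs` cgroup driver and recommend `systemd`. Optionally, the same `daemon.json` can keep the mirror and switch the driver; a sketch of that variant (not what was done in this walkthrough):

```bash
# Optional: align Docker's cgroup driver with kubeadm's recommendation.
cat > /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker.service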

#### 2.1.6 Install NFS shared storage

```bash
# nfs-utils must be installed before NFS network storage can be mounted
yum install -y nfs-utils
```
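
To verify the client works end to end, a hypothetical manual mount can be used; the export `192.168.204.1:/data/nfs` below is an assumed example, not part of this cluster:

```bash
# Hypothetical NFS export, used purely for illustration.
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.204.1:/data/nfs /mnt/nfs-test
df -h /mnt/nfs-test   # confirm the mount is live
umount /mnt/nfs-test
```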

#### 2.1.7 Configure the Kubernetes repository

```bash
# Remove old packages
[root@localhost ~]# yum remove -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
No Match for argument: kubelet
No Match for argument: kubeadm
No Match for argument: kubectl
No Packages marked for removal

# Configure the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

#### 2.1.8 Install kubeadm, kubelet, and kubectl

Since Kubernetes releases change frequently, version 1.15.0 is pinned for this experiment.

```bash
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

# Enable kubelet at boot
systemctl enable kubelet
```
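
A quick sanity check that the pinned versions actually landed:

```bash
# Both commands report the installed version.
kubelet --version
kubeadm version -o short
```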

### 2.2 Install the Kubernetes master

Run on **192.168.204.128**:

```bash
kubeadm init --apiserver-advertise-address=192.168.204.128 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.224.0.0/16
```

This takes a while, depending mainly on your network speed; the normal output is the wall of text below. (A note in hindsight: flannel's stock manifest assumes a pod CIDR of 10.244.0.0/16, so the `--pod-network-cidr=10.224.0.0/16` used here does not match it, which likely explains some of the networking detours in section 2.4.)

```
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.204.128]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.204.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.204.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.502967 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nnbkhp.4vawwyezmuio78hh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.204.128:6443 --token nnbkhp.4vawwyezmuio78hh \
    --discovery-token-ca-cert-hash sha256:54662ac697a50b60882ab12060e88dddb2cb20efcd94a99dcd6c1a81ca663006
```

Following the printed instructions, create the directory and copy the config file:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master .kube]# ls /root/.kube/config
/root/.kube/config
```

### 2.3 Install the Pod network add-on (CNI)

```bash
[root@k8s-master .kube]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```

Pull the image (note that the `arm64` tag is pulled here, even though on x86_64 hosts the DaemonSet actually runs the `amd64` image):

```bash
[root@k8s-master .kube]# docker pull quay.io/coreos/flannel:v0.11.0-arm64
v0.11.0-arm64: Pulling from coreos/flannel
e3c488b39803: Pull complete
05a63128803b: Pull complete
3efb28f38731: Pull complete
71ef15dbcc6f: Pull complete
88a6c618f324: Pull complete
75dbef1b80d7: Pull complete
00e6d6cf2173: Pull complete
Digest: sha256:2e56d44d0edaff330ac83f4ab82f0099f17ad3f81ad3564c6b0e1a85d05c9dfc
Status: Downloaded newer image for quay.io/coreos/flannel:v0.11.0-arm64
```

Check that everything is running:

```bash
[root@k8s-master .kube]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-v6dcr              1/1     Running   0          12m
coredns-bccdc95cf-wbml6              1/1     Running   0          12m
etcd-k8s-master                      1/1     Running   0          11m
kube-apiserver-k8s-master            1/1     Running   0          11m
kube-controller-manager-k8s-master   1/1     Running   1          11m
kube-flannel-ds-amd64-ncprb          1/1     Running   0          4m9s
kube-proxy-mbrm7                     1/1     Running   0          12m
kube-scheduler-k8s-master            1/1     Running   1          11m
```

### 2.4 Join the nodes to the cluster (with plenty of detours)

Run the `kubeadm join` command printed in the master's init output on **192.168.204.130** and **192.168.204.131**:

```bash
[root@k8s-node1 app]# kubeadm join 192.168.204.128:6443 --token nnbkhp.4vawwyezmuio78hh  --discovery-token-ca-cert-hash sha256:54662ac697a50b60882ab12060e88dddb2cb20efcd94a99dcd6c1a81ca663006

[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

If you lose the join command, `kubeadm token list` shows existing tokens; if the token has expired, create a new one with `kubeadm token create`, as in the sketch below.
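
For reference, kubeadm can also regenerate the complete join command in one step:

```bash
# List existing bootstrap tokens (the default TTL is 24 hours)
kubeadm token list
# Create a fresh token and print the full `kubeadm join` command
kubeadm token create --print-join-command
```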

Check whether the join succeeded:

```bash
[root@k8s-master .kube]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   77m   v1.15.0
k8s-node1    NotReady   <none>   15m   v1.15.0
k8s-node2    Ready      <none>   43m   v1.15.0
```

Notice that k8s-node1 is stuck in NotReady; inspect it:

```bash
[root@k8s-master .kube]# kubectl describe nodes k8s-node1
# The following error shows up
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
```

On node1, nothing has been generated under /etc/cni/net.d/, and `docker images` shows no flannel image (a quick way to check is sketched below).
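
A minimal check on the affected node:

```bash
# Both commands come back empty on the NotReady node, confirming
# the CNI config and the flannel image are missing.
ls -l /etc/cni/net.d/
docker images | grep flannel
```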

The fix:

```bash
# Pull the image
docker pull quay.io/coreos/flannel:v0.11.0-arm64

# Write the CNI config
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOF

# Create the config directories
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/

# Write the subnet config (note the FLANNEL_NETWORK/FLANNEL_SUBNET
# mismatch carried over from the 10.224 vs 10.244 CIDR confusion)
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.224.2.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Re-apply the flannel network manifest (this pins the older v0.9.1)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
```

Then tried to register again, only to find the node timing out against the master and refusing to register at all (this problem lingered for quite a while).

```bash
# Check the system log
cat /var/log/messages
# It contains this error
Apr  1 15:33:21 k8s-node1 kubelet: E0401 15:33:21.730325   13112 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "k8s-node1" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
# Some research pointed to the anonymous user lacking permissions.
# Run on the master (warning: this grants cluster-admin to anonymous
# users, acceptable in a throwaway lab but never in production):
kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous
# The node now registers, but its status is still wrong
[root@k8s-master log]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   178m   v1.15.0
k8s-node1    NotReady   <none>   3m5s   v1.15.0
k8s-node2    Ready      <none>   143m   v1.15.0
```

Nothing to do but keep reading logs, where this error turned up:

```
Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
```

Hard to parse, but it sounded memory-related; `free -h` confirmed that little free memory remained.

```bash
# Drop the page cache
echo 3 > /proc/sys/vm/drop_caches
# On the node, wipe the previous join state
kubeadm reset
# On the master, delete the NotReady node
kubectl delete node k8s-node1
```

Ran `kubeadm join` once more; the node was still NotReady. Back in the system log:

```
Unable to update cni config: No networks found in /etc/cni/net.d
```

This error is unambiguous: a CNI network configuration problem.

```bash
ll /etc/cni/net.d/10-flannel.conflist
# The file is missing, so copy it over from the master
scp -P 5522 -r /etc/cni/ 192.168.204.130:/etc/
```

Registered yet again, and this time it succeeded:

```
Apr 01 15:47:31 k8s-node1 kubelet[14000]: I0401 15:47:31.379357   14000 kubelet_node_status.go:72] Attempting to register node k8s-nod
Apr 01 15:47:31 k8s-node1 kubelet[14000]: I0401 15:47:31.413690   14000 kubelet_node_status.go:75] Successfully registered node k8s-no
Apr 01 15:47:31 k8s-node1 kubelet[14000]: I0401 15:47:31.473948   14000 reconciler.go:150] Reconciler: start to sync state
```

Check the status on the master:

```bash
[root@k8s-master log]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h18m   v1.15.0
k8s-node1    Ready    <none>   10m     v1.15.0
k8s-node2    Ready    <none>   163m    v1.15.0
```

Finally, on to the next step!

### 2.5 Test the cluster

Create a pod in the cluster and verify that it runs normally.

```bash
# Create an nginx deployment
[root@k8s-master log]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

# Expose the port
[root@k8s-master log]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# Check pod and service status
[root@k8s-master log]# kubectl get pod,svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        3h21m
service/nginx        NodePort    10.1.44.94   <none>        80:32450/TCP   27s

# Check the nginx pod
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-554b9c67f9-hpv7k   1/1     Running             0          5m53s
```
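
With the NodePort shown above (32450), nginx should answer on any node's IP; a quick smoke test:

```bash
# 32450 is the NodePort assigned in the `kubectl get svc` output above;
# kube-proxy listens on every node, so any node IP works.
curl -I http://192.168.204.128:32450
```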

### 2.6 Deploy the Dashboard

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
```

The default image cannot be pulled from inside China, so edit the image address in kubernetes-dashboard.yaml:

```bash
# Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Edit the image address
vim kubernetes-dashboard.yaml
:%s/k8s.gcr.io\/kubernetes-dashboard-amd64:v1.10.1/roeslys\/kubernetes-dashboard-amd64:v1.10.1
:wq!
```

By default the Dashboard Service is only reachable from inside the cluster; change it to type NodePort so it can be reached externally:

```yaml
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 31000
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
```

Apply the manifest:

```bash
# Create the pods
kubectl apply -f kubernetes-dashboard.yaml
# Check the port
[root@k8s-master kubernetes]# ss -nlpt|grep 31000
LISTEN     0      128         :::31000                   :::*                   users:(("kube-proxy",pid=16248,fd=13))

# Check the details
kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS              RESTARTS   AGE
pod/coredns-bccdc95cf-v6dcr                 1/1     Running             1          3h58m
pod/coredns-bccdc95cf-wbml6                 1/1     Running             1          3h58m
pod/etcd-k8s-master                         1/1     Running             1          3h58m
pod/kube-apiserver-k8s-master               1/1     Running             1          3h57m
pod/kube-controller-manager-k8s-master      1/1     Running             39         3h58m
pod/kube-flannel-ds-amd64-9p2dc             1/1     Running             0          20m
pod/kube-flannel-ds-amd64-bmqwf             1/1     Running             2          3h24m
pod/kube-flannel-ds-amd64-ncprb             1/1     Running             2          3h50m
pod/kube-proxy-c2dlk                        1/1     Running             1          3h24m
pod/kube-proxy-mbrm7                        1/1     Running             1          3h58m
pod/kube-proxy-zwmgq                        1/1     Running             0          20m
pod/kube-scheduler-k8s-master               1/1     Running             36         3h57m
pod/kubernetes-dashboard-7d75c474bb-wp7p2   0/1     ImagePullBackOff    0          13m
pod/kubernetes-dashboard-974d89547-t442f    0/1     ContainerCreating   0          30s

NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns               ClusterIP   10.1.0.10     <none>        53/UDP,53/TCP,9153/TCP   3h59m
service/kubernetes-dashboard   NodePort    10.1.142.55   <none>        443:32483/TCP            13m
```

Create a service account and bind it to the built-in cluster-admin cluster role:

```bash
[root@k8s-master kubernetes]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created

[root@k8s-master kubernetes]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

[root@k8s-master kubernetes]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-w22dx
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 1f04e31c-bae2-42a9-aa37-a1007a50af57

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdzIyZHgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMWYwNGUzMWMtYmFlMi00MmE5LWFhMzctYTEwMDdhNTBhZjU3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.rW6OPJuaragcM3tAiabfVdXm6UWsWi7Ki7XhHseJO8Z5UcYhIZzcO3siaKHzz2pSYy5TQhpRZ87XPE5gVqJOS1tOoA0iDOuvTPqP8RBsR7Qm3nC1INKPP3gird8ZNmGa6hHnkA_dWldVPPzC-_8TkEx2z2I_t96BRN_NgiR_hJHpMyhL0Q5zt_QN_C9EMMNmbnMuTbUZ0RUyQq9Y_4X6bxbT8DQrGJZdxfN6PoMkxgCHYUIsDV0cL951XgmuicwIFzY8tsrLHXHd1MNgi36z6_-MNuqeMZWtNy1nQOhmj9gaTeL9h4YzaDh55TrGdQn4b0o-v-yNHEfCkaWywdJXGg
ca.crt:     1025 bytes
namespace:  11 bytes
```

Save the token. Testing access from a browser failed: the page would not load.

```bash
vim /usr/lib/systemd/system/docker.service
# Add this line: ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStart=/usr/bin/dockerd

# Restart Docker
systemctl restart docker.service
```

Still no luck; check which node the pod landed on:

```bash
[root@k8s-master kubernetes]# kubectl get pods --all-namespaces -o wide
kube-system   kubernetes-dashboard-974d89547-t442f   0/1     CrashLoopBackOff   10         36m     10.224.2.5        k8s-node2    <none>           <none>
```

It is on node2, so log in to node2.

```bash
kubectl get pods
# If kubectl does not work on the node, append
#   export KUBECONFIG=/etc/kubernetes/admin.conf
# to /etc/profile; if admin.conf is missing, copy it from the master.

# Delete the stuck pod
kubectl delete pod kubernetes-dashboard-974d89547-ztqs6 -n kube-system

# Check again
kubectl get pods --all-namespaces -o wide
kube-system   kubernetes-dashboard-974d89547-hp5nv   1/1     Running   0          14s     10.224.2.7        k8s-node2    <none>           <none>

# Check the port
[root@k8s-node2 kubernetes]# ss -nltp|grep 30001
LISTEN     0      128         :::30001                   :::*                   users:(("kube-proxy",pid=27237,fd=13))
```

The pod keeps dropping back into CrashLoopBackOff; check its logs:

```bash
kubectl logs kubernetes-dashboard-974d89547-vzb2w -n kube-system
```