Viewing K8s Cluster Status, Event Descriptions, and Pod Logs

Summary: How to inspect a Kubernetes cluster's status and events, and how to view Pod and kubelet logs, using kubectl get events, kubectl describe, kubectl logs, and journalctl.
First, list recent cluster events. The Warning entries point straight at the problems after the node reboot: FailedCreatePodSandBox (the CNI plugin cannot find /run/flannel/subnet.env yet), FailedMount (the node was not yet registered), and failed startup/liveness probes.

[root@master kubernetes]# kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT                                               MESSAGE
9m48s       Normal    SandboxChanged            pod/busybox                                          Pod sandbox changed, it will be killed and re-created.
9m47s       Normal    Pulled                    pod/busybox                                          Container image "busybox:1.28" already present on machine
9m47s       Normal    Created                   pod/busybox                                          Created container busybox
9m46s       Normal    Started                   pod/busybox                                          Started container busybox
3m5s        Normal    FailedBinding             persistentvolumeclaim/data-redis-redis-ha-server-0   no persistent volumes available for this claim and no storage class is set
9m53s       Normal    Starting                  node/master                                          Starting kubelet.
9m53s       Normal    NodeHasSufficientMemory   node/master                                          Node master status is now: NodeHasSufficientMemory
9m53s       Normal    NodeHasNoDiskPressure     node/master                                          Node master status is now: NodeHasNoDiskPressure
9m53s       Normal    NodeHasSufficientPID      node/master                                          Node master status is now: NodeHasSufficientPID
9m53s       Normal    NodeAllocatableEnforced   node/master                                          Updated Node Allocatable limit across pods
9m51s       Warning   Rebooted                  node/master                                          Node master has been rebooted, boot id: a6e93e98-4513-4419-a12d-af366d494e71
9m45s       Normal    Starting                  node/master                                          Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/master                                          Node master event: Registered Node master in Controller
9m45s       Warning   FailedMount               pod/mysqldb-0                                        MountVolume.NodeAffinity check failed for volume "pvc-d138f573-608c-44a0-9f54-9c1e391241b5" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/mysqldb-0                                        Pod sandbox changed, it will be killed and re-created.
9m40s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5c07d18933e7b594f03ddd2a7e2cca0f64841041a04cef600ed5cc45f570416f" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m39s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ebadbe1af8a2deb84a2636fa8bb4f06cf35fc056d8e52ebc370add1ec6b6d3a" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "724288a2ec12f5474cdbf45e872fb0459056e2489f6598e9ea7102b615e69031" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fa77d4dfd3e111150ce1da3dc26b780cea9810d7d6f7ac1e1935a6437c1affdc" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m33s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3bc41abe18e0d3f65e9997f31d5934270910e6f844711e62b46c44cfa9e18ce2" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bff35dd9a356549127bdb600f665d67445d61b335af34a5f3afca01cbc9d02c0" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b68e5e248808057f74627fab0a363d1f92d744c117e9b6759508b16283204401" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "55f144ef0c9a8478d8635e29d130f3c0da22be567a083310646129d3c20ab317" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/mysqldb-0                                        Container image "docker.io/bitnami/mysql:8.0.32-debian-11-r21" already present on machine
9m27s       Normal    Created                   pod/mysqldb-0                                        Created container mysql
9m26s       Normal    Started                   pod/mysqldb-0                                        Started container mysql
9m4s        Warning   Unhealthy                 pod/mysqldb-0                                        Startup probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)'
Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
10m         Normal    Starting                  node/node1                                           Starting kubelet.
10m         Normal    NodeHasSufficientMemory   node/node1                                           Node node1 status is now: NodeHasSufficientMemory
10m         Normal    NodeHasNoDiskPressure     node/node1                                           Node node1 status is now: NodeHasNoDiskPressure
10m         Normal    NodeHasSufficientPID      node/node1                                           Node node1 status is now: NodeHasSufficientPID
10m         Normal    NodeAllocatableEnforced   node/node1                                           Updated Node Allocatable limit across pods
9m45s       Normal    Starting                  node/node1                                           Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/node1                                           Node node1 event: Registered Node node1 in Controller
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-master-0      no persistent volumes available for this claim and no storage class is set
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-slave-0       no persistent volumes available for this claim and no storage class is set
9m44s       Warning   FailedMount               pod/redis-db-master-0                                MountVolume.NodeAffinity check failed for volume "pvc-02087f98-25cf-41c4-9d74-810a922d4c65" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/redis-db-master-0                                Pod sandbox changed, it will be killed and re-created.
9m39s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ec7648600829436aefb107c31297e4ce6ffbf96de17e3bac7324e20119e5a531" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m37s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "99b5ba01b6c37779f58b926f7c5a5a276353ef02168f4b7691f9d5fb8d0ec37f" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9dda09d4c5cdab632377aaeecc30f3eca275f9950be8cd808fe5c492e6440013" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a20e5769519867b8d9c8aa0f134342ae786c7498e23800baba164d077b6afe22" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "07d68951f8bd72e1bb48ad9d1746d55fe2d71347ef6fcca6d8ff6f544c370682" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "98a8399a1492c1a46d408fff63382fd0d73c5fc660a58d35629c3107d08e1a2e" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2d33f6ed814d838c6001d6f43c86b096a4e1fbd6f52f2e7d2ebe55c4e638ee4d" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/redis-db-master-0                                Container image "docker.io/bitnami/redis:7.0.10-debian-11-r4" already present on machine
9m28s       Normal    Created                   pod/redis-db-master-0                                Created container redis
9m26s       Normal    Started                   pod/redis-db-master-0                                Started container redis
8m56s       Warning   Unhealthy                 pod/redis-db-master-0                                Liveness probe failed: Timed out
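With this much noise, it helps to aggregate Warning events by reason before digging in. A minimal awk sketch (the here-doc is sample data standing in for live kubectl get events --no-headers output; the field positions assume the default column layout LAST-SEEN TYPE REASON OBJECT MESSAGE):

```shell
# Count Warning events by REASON; on a live cluster, replace the here-doc with:
#   kubectl get events --no-headers | awk ...
awk '$2 == "Warning" {n[$3]++} END {for (r in n) print n[r], r}' <<'EOF' | sort -rn
9m48s Normal SandboxChanged pod/busybox sandbox-changed
9m40s Warning FailedCreatePodSandBox pod/mysqldb-0 cni-failed
9m39s Warning FailedCreatePodSandBox pod/mysqldb-0 cni-failed
9m4s Warning Unhealthy pod/mysqldb-0 probe-failed
EOF
# → 2 FailedCreatePodSandBox
#   1 Unhealthy
```

The same pipeline works for any event type; change the $2 comparison to aggregate Normal events instead.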
Next, describe the worker node to check its conditions (Ready and the pressure flags), capacity, and the resources its Pods have requested:

[root@master kubernetes]# kubectl describe node node1
Name:               node1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1
                    kubernetes.io/os=linux
                    nodeenv=pro
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"86:6c:1f:00:91:5b"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.31.138
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 18 Sep 2022 14:37:35 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node1
  AcquireTime:     <unset>
  RenewTime:       Tue, 19 Sep 2023 22:27:37 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 19 Sep 2023 22:17:39 +0800   Tue, 19 Sep 2023 22:17:39 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 19 Sep 2023 22:27:27 +0800   Thu, 14 Sep 2023 22:19:07 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.31.138
  Hostname:    node1
Capacity:
  cpu:                2
  ephemeral-storage:  51175Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3861256Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  48294789041
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3758856Ki
  pods:               110
System Info:
  Machine ID:                 2da3702dd1ea45a59bee481c361484dc
  System UUID:                D4E94D56-3E90-005C-59F8-AE44B0623D1C
  Boot ID:                    3addb0d6-f928-470d-a65a-70b5c2790adf
  Kernel Version:             3.10.0-1160.76.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.18
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (13 in total)
  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
  default                     busybox                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20d
  default                     mysqldb-0                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  default                     redis-db-master-0                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  dev                         content-web-deloy-58f6465676-pd5pq                         400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      163d
  dev                         mall-wx-deploy-768c46897-bhq4l                             400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      153d
  kube-system                 coredns-7ff77c879f-ppgqc                                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     153d
  kube-system                 grafana-core-768b6bf79c-gwkdc                              100m (5%)     100m (5%)   100Mi (2%)       100Mi (2%)     196d
  kube-system                 kube-flannel-ds-amd64-xzgr9                                100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      152d
  kube-system                 kube-proxy-d89lh                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  kube-system                 metrics-server-57bc7f4584-gc2s6                            200m (10%)    300m (15%)  100Mi (2%)       200Mi (5%)     215d
  kube-system                 node-exporter-5kk5l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         197d
  kube-system                 prometheus-7486bf7f4b-hgb55                                100m (5%)     500m (25%)  100Mi (2%)       2500Mi (68%)   197d
  nfs-pro                     nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         161d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1400m (70%)  3 (150%)
  memory             932Mi (25%)  5068Mi (138%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From               Message
  ----    ------                   ----               ----               -------
  Normal  Starting                 11m                kubelet, node1     Starting kubelet.
  Normal  NodeAllocatableEnforced  11m                kubelet, node1     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x7 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 10m                kube-proxy, node1  Starting kube-proxy.
Once the network and kubelet have settled, list Pods across all namespaces to see which have recovered and which are still failing:

[root@node1 kubernetes]# kubectl get pod -A
NAMESPACE     NAME                                                      READY   STATUS             RESTARTS   AGE
default       busybox                                                   1/1     Running            2          20d
default       mysqldb-0                                                 1/1     Running            8          152d
default       redis-db-master-0                                         1/1     Running            8          152d
dev           content-web-deloy-58f6465676-pd5pq                        1/1     Running            15         163d
dev           mall-wx-deploy-768c46897-bhq4l                            0/1     CrashLoopBackOff   65         153d
kube-system   coredns-7ff77c879f-8blg8                                  1/1     Running            9          153d
kube-system   coredns-7ff77c879f-ppgqc                                  1/1     Running            9          153d
kube-system   etcd-master                                               1/1     Running            38         366d
kube-system   grafana-core-768b6bf79c-gwkdc                             1/1     Running            20         196d
kube-system   kube-apiserver-master                                     1/1     Running            191        366d
kube-system   kube-controller-manager-master                            1/1     Running            50         366d
kube-system   kube-flannel-ds-amd64-pgjgh                               1/1     Running            12         152d
kube-system   kube-flannel-ds-amd64-xzgr9                               1/1     Running            12         152d
kube-system   kube-proxy-d89lh                                          1/1     Running            9          152d
kube-system   kube-proxy-r5dk6                                          1/1     Running            9          152d
kube-system   kube-scheduler-master                                     1/1     Running            50         366d
kube-system   metrics-server-57bc7f4584-gc2s6                           1/1     Running            29         215d
kube-system   node-exporter-5kk5l                                       1/1     Running            20         197d
kube-system   prometheus-7486bf7f4b-hgb55                               1/1     Running            20         197d
nfs-pro       nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7   1/1     Running            13         161d
Describe a single Pod to see its container state, the reason for its last termination, its restart count, and recent events:

[root@node1 kubernetes]# kubectl describe pod busybox -n default
Name:         busybox
Namespace:    default
Priority:     0
Node:         node1/192.168.31.138
Start Time:   Tue, 29 Aug 2023 22:31:47 +0800
Labels:       <none>
Annotations:  Status:  Running
IP:           192.168.31.138
IPs:
  IP:  192.168.31.138
Containers:
  busybox:
    Container ID:  docker://082ee78c1807e47967d8ec25fc0047808f10645b4574b9ef9d30a05dd1984176
    Image:         busybox:1.28
    Image ID:      docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Tue, 19 Sep 2023 22:17:22 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 14 Sep 2023 22:19:14 +0800
      Finished:     Thu, 14 Sep 2023 23:06:39 +0800
    Ready:          True
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n775f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-n775f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n775f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason          Age   From            Message
  ----    ------          ----  ----            -------
  Normal  SandboxChanged  11m   kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          11m   kubelet, node1  Container image "busybox:1.28" already present on machine
  Normal  Created         11m   kubelet, node1  Created container busybox
  Normal  Started         11m   kubelet, node1  Started container busybox
Follow the Pod's log stream with kubectl logs -f (Ctrl-C to stop; busybox only runs sleep 3600, so it prints nothing):

[root@node1 kubernetes]# kubectl logs -f  busybox -n default
^C
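When a container is crash-looping (like mall-wx-deploy above), a bare -f is rarely enough. These are standard kubectl logs flags; the Pod names are taken from the listing above:

```shell
# Logs from the previous (crashed) container instance of the crash-looping Pod
kubectl logs mall-wx-deploy-768c46897-bhq4l -n dev --previous

# Only the last 100 lines, with timestamps
kubectl logs mysqldb-0 -n default --tail=100 --timestamps

# Only logs from the last hour (i.e., since the reboot)
kubectl logs redis-db-master-0 -n default --since=1h
```

--previous is the key one for CrashLoopBackOff: the current container may have no output yet, while the previous instance logged the actual error before exiting.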
Finally, on the node itself, the kubelet's own log is available through systemd:

[root@node1 kubernetes]# journalctl -u kubelet
-- Logs begin at 二 2023-09-19 20:57:42 CST, end at 二 2023-09-19 22:29:45 CST. --
9月 19 20:57:48 node1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.300712     777 server.go:417] Version: v1.18.0
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301084     777 plugins.go:100] No cloud provider specified.
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301234     777 server.go:837] Client rotation is on, will bootstrap in background
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.319352     777 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.923862     777 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924160     777 container_manager_linux.go:266] container manager verified user specified cgroup-root exists:
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924171     777 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {Runti
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924443     777 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924455     777 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none poli
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924458     777 container_manager_linux.go:306] Creating device plugin manager: true
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924852     777 client.go:75] Connecting to docker on unix:///var/run/docker.sock
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.925049     777 client.go:92] Start docker client with request timeout=2m0s
9月 19 20:57:55 node1 kubelet[777]: W0919 20:57:55.934715     777 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, fa
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.934734     777 docker_service.go:238] Hairpin mode set to "hairpin-veth"
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.973207     777 docker_service.go:253] Docker cri networking managed by cni
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004070     777 docker_service.go:258] Docker Info: &{ID:NSTL:BPGE:HFEB:2GVN:754N:AOXJ:LYVN:LN6C:6XIE:Z4UG:YWS
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004140     777 docker_service.go:271] Setting cgroupDriver to systemd
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043595     777 remote_runtime.go:59] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043629     777 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043912     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043936     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044064     777 remote_image.go:50] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044071     777 remote_image.go:50] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044080     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044084     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044416     777 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044733     777 kubelet.go:317] Watching apiserver
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.072921     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073155     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073227     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073273     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.074707     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.076218     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.314303     777 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers
9月 19 20:57:56 node1 kubelet[777]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.338432     777 kuberuntime_manager.go:211] Container runtime docker initialized, version: 20.10.18, apiVersio
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349572     777 server.go:1125] Started kubelet
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349946     777 server.go:145] Starting to listen on 0.0.0.0:10250
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.351740     777 server.go:393] Adding debug handlers to kubelet server.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.353691     777 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have compl
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.354559     777 event.go:269] Unable to write event: 'Post https://192.168.31.119:6443/api/v1/namespaces/defau
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.359570     777 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.371002     777 volume_manager.go:265] Starting Kubelet Volume Manager
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.379815     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.388230     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.391736     777 desired_state_of_world_populator.go:139] Desired state populator starts to run
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.392193     777 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460585     777 clientconn.go:106] parsed scheme: "unix"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460599     777 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460733     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containe
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460743     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.475715     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.475756     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496580     777 status_manager.go:158] Starting to sync pod status with apiserver
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496617     777 kubelet.go:1821] Starting kubelet main sync loop.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.496652     777 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have c
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.503657     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.548176     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.561743     777 kubelet_node_status.go:70] Attempting to register node node1
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.577020     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.579803     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.581134     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.590337     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.592697     777 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.596794     777 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have co
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.597702     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603749     777 cpu_manager.go:184] [cpumanager] starting with none policy
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603761     777 cpu_manager.go:185] [cpumanager] reconciling every 10s
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603776     777 state_mem.go:36] [cpumanager] initializing new in-memory state store
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604701     777 state_mem.go:88] [cpumanager] updated default cpuset: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604710     777 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604721     777 policy_none.go:43] [cpumanager] none policy: Start
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.612408     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.629396     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.636733     777 plugin_manager.go:114] Starting Kubelet Plugin Manager
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.637477     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.637516     777 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node inf
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.647459     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.655311     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.657280     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
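The journal excerpt above can be reproduced and narrowed down with a few standard commands. A minimal sketch follows; the unit name `kubelet`, the pod name `busybox`, and the `filter_errors` helper are illustrative assumptions, not taken from the original output:

```shell
#!/usr/bin/env bash
# Node-side: dump the last entries of a systemd-managed kubelet
# (run on the node itself; requires journalctl):
#   journalctl -u kubelet --no-pager -n 100

# klog lines mark severity with a leading letter after the PID:
# "kubelet[777]: E0919 ..." is an error, W a warning, I info.
# This helper keeps only error entries from a piped journal stream.
filter_errors() {
  grep -E 'kubelet\[[0-9]+\]: E[0-9]{4}'
}

# Cluster-side equivalents for events and pod logs:
#   kubectl get events --sort-by=.metadata.creationTimestamp
#   kubectl describe pod busybox
#   kubectl logs busybox

# Demo on one captured line from the journal above:
echo '9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.475715 777 kubelet.go:2267] node "node1" not found' | filter_errors
```

Piping `journalctl -u kubelet -f` through `filter_errors` gives a live error-only view, which is usually the fastest way to spot the root cause (here, the apiserver being unreachable) among the noisy I/W entries.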