Viewing K8s Cluster Status, Event Descriptions, and Pod Log Information

Summary: this walkthrough inspects cluster state with kubectl get events, kubectl describe node/pod, kubectl get pod -A, kubectl logs, and journalctl -u kubelet on the node.
[root@master kubernetes]# kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT                                               MESSAGE
9m48s       Normal    SandboxChanged            pod/busybox                                          Pod sandbox changed, it will be killed and re-created.
9m47s       Normal    Pulled                    pod/busybox                                          Container image "busybox:1.28" already present on machine
9m47s       Normal    Created                   pod/busybox                                          Created container busybox
9m46s       Normal    Started                   pod/busybox                                          Started container busybox
3m5s        Normal    FailedBinding             persistentvolumeclaim/data-redis-redis-ha-server-0   no persistent volumes available for this claim and no storage class is set
9m53s       Normal    Starting                  node/master                                          Starting kubelet.
9m53s       Normal    NodeHasSufficientMemory   node/master                                          Node master status is now: NodeHasSufficientMemory
9m53s       Normal    NodeHasNoDiskPressure     node/master                                          Node master status is now: NodeHasNoDiskPressure
9m53s       Normal    NodeHasSufficientPID      node/master                                          Node master status is now: NodeHasSufficientPID
9m53s       Normal    NodeAllocatableEnforced   node/master                                          Updated Node Allocatable limit across pods
9m51s       Warning   Rebooted                  node/master                                          Node master has been rebooted, boot id: a6e93e98-4513-4419-a12d-af366d494e71
9m45s       Normal    Starting                  node/master                                          Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/master                                          Node master event: Registered Node master in Controller
9m45s       Warning   FailedMount               pod/mysqldb-0                                        MountVolume.NodeAffinity check failed for volume "pvc-d138f573-608c-44a0-9f54-9c1e391241b5" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/mysqldb-0                                        Pod sandbox changed, it will be killed and re-created.
9m40s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5c07d18933e7b594f03ddd2a7e2cca0f64841041a04cef600ed5cc45f570416f" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m39s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ebadbe1af8a2deb84a2636fa8bb4f06cf35fc056d8e52ebc370add1ec6b6d3a" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "724288a2ec12f5474cdbf45e872fb0459056e2489f6598e9ea7102b615e69031" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fa77d4dfd3e111150ce1da3dc26b780cea9810d7d6f7ac1e1935a6437c1affdc" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m33s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3bc41abe18e0d3f65e9997f31d5934270910e6f844711e62b46c44cfa9e18ce2" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bff35dd9a356549127bdb600f665d67445d61b335af34a5f3afca01cbc9d02c0" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b68e5e248808057f74627fab0a363d1f92d744c117e9b6759508b16283204401" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "55f144ef0c9a8478d8635e29d130f3c0da22be567a083310646129d3c20ab317" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/mysqldb-0                                        Container image "docker.io/bitnami/mysql:8.0.32-debian-11-r21" already present on machine
9m27s       Normal    Created                   pod/mysqldb-0                                        Created container mysql
9m26s       Normal    Started                   pod/mysqldb-0                                        Started container mysql
9m4s        Warning   Unhealthy                 pod/mysqldb-0                                        Startup probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)'
Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
10m         Normal    Starting                  node/node1                                           Starting kubelet.
10m         Normal    NodeHasSufficientMemory   node/node1                                           Node node1 status is now: NodeHasSufficientMemory
10m         Normal    NodeHasNoDiskPressure     node/node1                                           Node node1 status is now: NodeHasNoDiskPressure
10m         Normal    NodeHasSufficientPID      node/node1                                           Node node1 status is now: NodeHasSufficientPID
10m         Normal    NodeAllocatableEnforced   node/node1                                           Updated Node Allocatable limit across pods
9m45s       Normal    Starting                  node/node1                                           Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/node1                                           Node node1 event: Registered Node node1 in Controller
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-master-0      no persistent volumes available for this claim and no storage class is set
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-slave-0       no persistent volumes available for this claim and no storage class is set
9m44s       Warning   FailedMount               pod/redis-db-master-0                                MountVolume.NodeAffinity check failed for volume "pvc-02087f98-25cf-41c4-9d74-810a922d4c65" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/redis-db-master-0                                Pod sandbox changed, it will be killed and re-created.
9m39s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ec7648600829436aefb107c31297e4ce6ffbf96de17e3bac7324e20119e5a531" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m37s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "99b5ba01b6c37779f58b926f7c5a5a276353ef02168f4b7691f9d5fb8d0ec37f" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9dda09d4c5cdab632377aaeecc30f3eca275f9950be8cd808fe5c492e6440013" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a20e5769519867b8d9c8aa0f134342ae786c7498e23800baba164d077b6afe22" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "07d68951f8bd72e1bb48ad9d1746d55fe2d71347ef6fcca6d8ff6f544c370682" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "98a8399a1492c1a46d408fff63382fd0d73c5fc660a58d35629c3107d08e1a2e" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2d33f6ed814d838c6001d6f43c86b096a4e1fbd6f52f2e7d2ebe55c4e638ee4d" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/redis-db-master-0                                Container image "docker.io/bitnami/redis:7.0.10-debian-11-r4" already present on machine
9m28s       Normal    Created                   pod/redis-db-master-0                                Created container redis
9m26s       Normal    Started                   pod/redis-db-master-0                                Started container redis
8m56s       Warning   Unhealthy                 pod/redis-db-master-0                                Liveness probe failed: Timed out
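The event stream above mixes Normal and Warning rows; the Warnings (FailedMount, FailedCreatePodSandBox, Unhealthy) are the interesting ones after a node reboot. kubectl can filter and sort server-side with standard flags; below is a sketch, with the same type filter also applied offline to an abridged copy of the output so the pipeline can be checked without a cluster:

```shell
# Server-side filtering and sorting (standard kubectl flags; needs a cluster):
#   kubectl get events --field-selector type=Warning
#   kubectl get events --sort-by=.metadata.creationTimestamp
#
# Offline: the same TYPE filter on a saved, abridged copy of the output above.
events='9m48s Normal SandboxChanged pod/busybox Pod sandbox changed
9m45s Warning FailedMount pod/mysqldb-0 MountVolume.NodeAffinity check failed
8m56s Warning Unhealthy pod/redis-db-master-0 Liveness probe failed: Timed out'
printf '%s\n' "$events" | awk '$2 == "Warning"'
```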
[root@master kubernetes]# kubectl describe node node1
Name:               node1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1
                    kubernetes.io/os=linux
                    nodeenv=pro
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"86:6c:1f:00:91:5b"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.31.138
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 18 Sep 2022 14:37:35 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node1
  AcquireTime:     <unset>
  RenewTime:       Tue, 19 Sep 2023 22:27:37 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 19 Sep 2023 22:17:39 +0800   Tue, 19 Sep 2023 22:17:39 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 19 Sep 2023 22:27:27 +0800   Thu, 14 Sep 2023 22:19:07 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.31.138
  Hostname:    node1
Capacity:
  cpu:                2
  ephemeral-storage:  51175Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3861256Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  48294789041
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3758856Ki
  pods:               110
System Info:
  Machine ID:                 2da3702dd1ea45a59bee481c361484dc
  System UUID:                D4E94D56-3E90-005C-59F8-AE44B0623D1C
  Boot ID:                    3addb0d6-f928-470d-a65a-70b5c2790adf
  Kernel Version:             3.10.0-1160.76.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.18
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (13 in total)
  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
  default                     busybox                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20d
  default                     mysqldb-0                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  default                     redis-db-master-0                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  dev                         content-web-deloy-58f6465676-pd5pq                         400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      163d
  dev                         mall-wx-deploy-768c46897-bhq4l                             400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      153d
  kube-system                 coredns-7ff77c879f-ppgqc                                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     153d
  kube-system                 grafana-core-768b6bf79c-gwkdc                              100m (5%)     100m (5%)   100Mi (2%)       100Mi (2%)     196d
  kube-system                 kube-flannel-ds-amd64-xzgr9                                100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      152d
  kube-system                 kube-proxy-d89lh                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  kube-system                 metrics-server-57bc7f4584-gc2s6                            200m (10%)    300m (15%)  100Mi (2%)       200Mi (5%)     215d
  kube-system                 node-exporter-5kk5l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         197d
  kube-system                 prometheus-7486bf7f4b-hgb55                                100m (5%)     500m (25%)  100Mi (2%)       2500Mi (68%)   197d
  nfs-pro                     nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         161d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1400m (70%)  3 (150%)
  memory             932Mi (25%)  5068Mi (138%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From               Message
  ----    ------                   ----               ----               -------
  Normal  Starting                 11m                kubelet, node1     Starting kubelet.
  Normal  NodeAllocatableEnforced  11m                kubelet, node1     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x7 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 10m                kube-proxy, node1  Starting kube-proxy.
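In the describe output, the Conditions table is what matters day to day: Ready=True and all pressure conditions False means the node is healthy. A single condition can also be read directly with kubectl's standard jsonpath output; the offline line below extracts the same Ready row from a saved copy of the table:

```shell
# One-field check (standard kubectl jsonpath; needs a cluster):
#   kubectl get node node1 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
#
# Offline: pull the Ready row out of a saved Conditions table.
conditions='NetworkUnavailable False FlannelIsUp
MemoryPressure False KubeletHasSufficientMemory
DiskPressure False KubeletHasNoDiskPressure
PIDPressure False KubeletHasSufficientPID
Ready True KubeletReady'
printf '%s\n' "$conditions" | awk '$1 == "Ready" { print $2 }'
```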
[root@node1 kubernetes]# kubectl get pod -A
NAMESPACE     NAME                                                      READY   STATUS             RESTARTS   AGE
default       busybox                                                   1/1     Running            2          20d
default       mysqldb-0                                                 1/1     Running            8          152d
default       redis-db-master-0                                         1/1     Running            8          152d
dev           content-web-deloy-58f6465676-pd5pq                        1/1     Running            15         163d
dev           mall-wx-deploy-768c46897-bhq4l                            0/1     CrashLoopBackOff   65         153d
kube-system   coredns-7ff77c879f-8blg8                                  1/1     Running            9          153d
kube-system   coredns-7ff77c879f-ppgqc                                  1/1     Running            9          153d
kube-system   etcd-master                                               1/1     Running            38         366d
kube-system   grafana-core-768b6bf79c-gwkdc                             1/1     Running            20         196d
kube-system   kube-apiserver-master                                     1/1     Running            191        366d
kube-system   kube-controller-manager-master                            1/1     Running            50         366d
kube-system   kube-flannel-ds-amd64-pgjgh                               1/1     Running            12         152d
kube-system   kube-flannel-ds-amd64-xzgr9                               1/1     Running            12         152d
kube-system   kube-proxy-d89lh                                          1/1     Running            9          152d
kube-system   kube-proxy-r5dk6                                          1/1     Running            9          152d
kube-system   kube-scheduler-master                                     1/1     Running            50         366d
kube-system   metrics-server-57bc7f4584-gc2s6                           1/1     Running            29         215d
kube-system   node-exporter-5kk5l                                       1/1     Running            20         197d
kube-system   prometheus-7486bf7f4b-hgb55                               1/1     Running            20         197d
nfs-pro       nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7   1/1     Running            13         161d
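In the listing above only mall-wx-deploy is unhealthy (CrashLoopBackOff). A sketch for surfacing such pods, using standard kubectl flags; note that a crash-looping pod usually still reports phase Running, so filtering on the printed STATUS column (demonstrated offline on an abridged copy) catches cases the phase filter misses:

```shell
# Server-side (standard flag; needs a cluster):
#   kubectl get pods -A --field-selector=status.phase!=Running
#
# Offline: filter the STATUS column of a saved, abridged listing.
pods='default busybox 1/1 Running 2 20d
dev mall-wx-deploy-768c46897-bhq4l 0/1 CrashLoopBackOff 65 153d'
printf '%s\n' "$pods" | awk '$4 != "Running"'
```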
[root@node1 kubernetes]# kubectl describe pod busybox -n default
Name:         busybox
Namespace:    default
Priority:     0
Node:         node1/192.168.31.138
Start Time:   Tue, 29 Aug 2023 22:31:47 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           192.168.31.138
IPs:
  IP:  192.168.31.138
Containers:
  busybox:
    Container ID:  docker://082ee78c1807e47967d8ec25fc0047808f10645b4574b9ef9d30a05dd1984176
    Image:         busybox:1.28
    Image ID:      docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Tue, 19 Sep 2023 22:17:22 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 14 Sep 2023 22:19:14 +0800
      Finished:     Thu, 14 Sep 2023 23:06:39 +0800
    Ready:          True
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n775f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-n775f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n775f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason          Age   From            Message
  ----    ------          ----  ----            -------
  Normal  SandboxChanged  11m   kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          11m   kubelet, node1  Container image "busybox:1.28" already present on machine
  Normal  Created         11m   kubelet, node1  Created container busybox
  Normal  Started         11m   kubelet, node1  Started container busybox
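The describe output records the previous busybox instance as Terminated with Exit Code 137. Exit codes above 128 encode a fatal signal (128 + signal number), so 137 means the container was killed with SIGKILL rather than failing on its own. The same fields can be read directly with kubectl's standard jsonpath output:

```shell
# Single fields from the pod status (standard kubectl jsonpath; needs a cluster):
#   kubectl get pod busybox -o jsonpath='{.status.containerStatuses[0].restartCount}'
#   kubectl get pod busybox -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
#
# Exit code 137 = 128 + signal number; which signal was it?
echo $((137 - 128))   # 9 = SIGKILL
```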
[root@node1 kubernetes]# kubectl logs -f  busybox -n default
^C
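`kubectl logs -f busybox` prints nothing before the ^C because the busybox container only runs `sleep 3600`, so there is no stdout to stream. For pods that do log, these standard flags narrow the output; the offline line shows that `--tail=N` behaves like `tail -n N` on a saved log file:

```shell
# Standard kubectl logs flags (need a cluster):
#   kubectl logs mysqldb-0 -n default --tail=50      # last 50 lines only
#   kubectl logs mysqldb-0 -n default --previous     # previous (crashed) instance
#   kubectl logs mysqldb-0 -n default --since=10m --timestamps
#
# Offline: --tail=N is the same idea as tail -n N.
printf 'line1\nline2\nline3\n' | tail -n 2
```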
[root@node1 kubernetes]# journalctl -u kubelet
-- Logs begin at 二 2023-09-19 20:57:42 CST, end at 二 2023-09-19 22:29:45 CST. --
9月 19 20:57:48 node1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.300712     777 server.go:417] Version: v1.18.0
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301084     777 plugins.go:100] No cloud provider specified.
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301234     777 server.go:837] Client rotation is on, will bootstrap in background
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.319352     777 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.923862     777 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924160     777 container_manager_linux.go:266] container manager verified user specified cgroup-root exists:
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924171     777 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {Runti
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924443     777 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924455     777 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none poli
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924458     777 container_manager_linux.go:306] Creating device plugin manager: true
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924852     777 client.go:75] Connecting to docker on unix:///var/run/docker.sock
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.925049     777 client.go:92] Start docker client with request timeout=2m0s
9月 19 20:57:55 node1 kubelet[777]: W0919 20:57:55.934715     777 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, fa
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.934734     777 docker_service.go:238] Hairpin mode set to "hairpin-veth"
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.973207     777 docker_service.go:253] Docker cri networking managed by cni
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004070     777 docker_service.go:258] Docker Info: &{ID:NSTL:BPGE:HFEB:2GVN:754N:AOXJ:LYVN:LN6C:6XIE:Z4UG:YWS
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004140     777 docker_service.go:271] Setting cgroupDriver to systemd
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043595     777 remote_runtime.go:59] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043629     777 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043912     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043936     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044064     777 remote_image.go:50] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044071     777 remote_image.go:50] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044080     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044084     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044416     777 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044733     777 kubelet.go:317] Watching apiserver
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.072921     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073155     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073227     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073273     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.074707     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.076218     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.314303     777 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers
9月 19 20:57:56 node1 kubelet[777]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.338432     777 kuberuntime_manager.go:211] Container runtime docker initialized, version: 20.10.18, apiVersio
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349572     777 server.go:1125] Started kubelet
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349946     777 server.go:145] Starting to listen on 0.0.0.0:10250
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.351740     777 server.go:393] Adding debug handlers to kubelet server.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.353691     777 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have compl
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.354559     777 event.go:269] Unable to write event: 'Post https://192.168.31.119:6443/api/v1/namespaces/defau
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.359570     777 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.371002     777 volume_manager.go:265] Starting Kubelet Volume Manager
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.379815     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.388230     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.391736     777 desired_state_of_world_populator.go:139] Desired state populator starts to run
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.392193     777 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460585     777 clientconn.go:106] parsed scheme: "unix"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460599     777 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460733     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containe
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460743     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.475715     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.475756     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496580     777 status_manager.go:158] Starting to sync pod status with apiserver
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496617     777 kubelet.go:1821] Starting kubelet main sync loop.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.496652     777 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have c
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.503657     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.548176     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.561743     777 kubelet_node_status.go:70] Attempting to register node node1
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.577020     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.579803     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.581134     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.590337     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.592697     777 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.596794     777 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have co
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.597702     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603749     777 cpu_manager.go:184] [cpumanager] starting with none policy
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603761     777 cpu_manager.go:185] [cpumanager] reconciling every 10s
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603776     777 state_mem.go:36] [cpumanager] initializing new in-memory state store
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604701     777 state_mem.go:88] [cpumanager] updated default cpuset: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604710     777 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604721     777 policy_none.go:43] [cpumanager] none policy: Start
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.612408     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.629396     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.636733     777 plugin_manager.go:114] Starting Kubelet Plugin Manager
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.637477     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.637516     777 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node inf
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.647459     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.655311     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.657280     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
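The journal excerpt above mixes informational (`I…`), warning (`W…`) and error (`E…`) lines in klog format. When triaging a node like this, a common first pass is to pull the kubelet unit's journal and filter by severity. A minimal sketch — the sample file and its lines below are a hypothetical stand-in for the real journal output shown above:

```shell
# On the affected node, kubelet logs normally live in the systemd journal:
#   journalctl -u kubelet --since "10 min ago"   # recent entries only
#   journalctl -u kubelet -f                     # follow live output
#
# klog prefixes every line with a severity letter: I=info, W=warning, E=error.
# Filtering for error lines first narrows the noise quickly.
# Demonstrated here against a small hypothetical sample file:
cat > /tmp/kubelet-sample.log <<'EOF'
E0919 20:57:56.475715 777 kubelet.go:2267] node "node1" not found
W0919 20:57:56.548176 777 docker_sandbox.go:400] networkPlugin cni failed
I0919 20:57:56.636733 777 plugin_manager.go:114] Starting Kubelet Plugin Manager
EOF

# Keep only the E-level lines -- in the journal above, these point at the
# root causes: the API server at 192.168.31.119:6443 is unreachable and the
# node object "node1" is not yet registered.
grep '^E' /tmp/kubelet-sample.log
```

On a live node you would apply the same `grep '^E'` filter to `journalctl -u kubelet --no-pager` output; the repeated `node "node1" not found` and failed `Post https://…:6443` lines above indicate the kubelet cannot reach the API server, which in turn explains the CNI warnings.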