Kubernetes: Viewing Cluster Status, Event Descriptions, and Pod Logs

Overview: This article walks through inspecting a Kubernetes cluster's status via events, node and Pod descriptions, Pod logs, and the kubelet's own journal. Start by listing recent cluster events:
[root@master kubernetes]# kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT                                               MESSAGE
9m48s       Normal    SandboxChanged            pod/busybox                                          Pod sandbox changed, it will be killed and re-created.
9m47s       Normal    Pulled                    pod/busybox                                          Container image "busybox:1.28" already present on machine
9m47s       Normal    Created                   pod/busybox                                          Created container busybox
9m46s       Normal    Started                   pod/busybox                                          Started container busybox
3m5s        Normal    FailedBinding             persistentvolumeclaim/data-redis-redis-ha-server-0   no persistent volumes available for this claim and no storage class is set
9m53s       Normal    Starting                  node/master                                          Starting kubelet.
9m53s       Normal    NodeHasSufficientMemory   node/master                                          Node master status is now: NodeHasSufficientMemory
9m53s       Normal    NodeHasNoDiskPressure     node/master                                          Node master status is now: NodeHasNoDiskPressure
9m53s       Normal    NodeHasSufficientPID      node/master                                          Node master status is now: NodeHasSufficientPID
9m53s       Normal    NodeAllocatableEnforced   node/master                                          Updated Node Allocatable limit across pods
9m51s       Warning   Rebooted                  node/master                                          Node master has been rebooted, boot id: a6e93e98-4513-4419-a12d-af366d494e71
9m45s       Normal    Starting                  node/master                                          Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/master                                          Node master event: Registered Node master in Controller
9m45s       Warning   FailedMount               pod/mysqldb-0                                        MountVolume.NodeAffinity check failed for volume "pvc-d138f573-608c-44a0-9f54-9c1e391241b5" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/mysqldb-0                                        Pod sandbox changed, it will be killed and re-created.
9m40s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5c07d18933e7b594f03ddd2a7e2cca0f64841041a04cef600ed5cc45f570416f" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m39s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ebadbe1af8a2deb84a2636fa8bb4f06cf35fc056d8e52ebc370add1ec6b6d3a" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "724288a2ec12f5474cdbf45e872fb0459056e2489f6598e9ea7102b615e69031" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fa77d4dfd3e111150ce1da3dc26b780cea9810d7d6f7ac1e1935a6437c1affdc" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m33s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3bc41abe18e0d3f65e9997f31d5934270910e6f844711e62b46c44cfa9e18ce2" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bff35dd9a356549127bdb600f665d67445d61b335af34a5f3afca01cbc9d02c0" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b68e5e248808057f74627fab0a363d1f92d744c117e9b6759508b16283204401" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "55f144ef0c9a8478d8635e29d130f3c0da22be567a083310646129d3c20ab317" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/mysqldb-0                                        Container image "docker.io/bitnami/mysql:8.0.32-debian-11-r21" already present on machine
9m27s       Normal    Created                   pod/mysqldb-0                                        Created container mysql
9m26s       Normal    Started                   pod/mysqldb-0                                        Started container mysql
9m4s        Warning   Unhealthy                 pod/mysqldb-0                                        Startup probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)'
Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
10m         Normal    Starting                  node/node1                                           Starting kubelet.
10m         Normal    NodeHasSufficientMemory   node/node1                                           Node node1 status is now: NodeHasSufficientMemory
10m         Normal    NodeHasNoDiskPressure     node/node1                                           Node node1 status is now: NodeHasNoDiskPressure
10m         Normal    NodeHasSufficientPID      node/node1                                           Node node1 status is now: NodeHasSufficientPID
10m         Normal    NodeAllocatableEnforced   node/node1                                           Updated Node Allocatable limit across pods
9m45s       Normal    Starting                  node/node1                                           Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/node1                                           Node node1 event: Registered Node node1 in Controller
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-master-0      no persistent volumes available for this claim and no storage class is set
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-slave-0       no persistent volumes available for this claim and no storage class is set
9m44s       Warning   FailedMount               pod/redis-db-master-0                                MountVolume.NodeAffinity check failed for volume "pvc-02087f98-25cf-41c4-9d74-810a922d4c65" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/redis-db-master-0                                Pod sandbox changed, it will be killed and re-created.
9m39s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ec7648600829436aefb107c31297e4ce6ffbf96de17e3bac7324e20119e5a531" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m37s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "99b5ba01b6c37779f58b926f7c5a5a276353ef02168f4b7691f9d5fb8d0ec37f" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9dda09d4c5cdab632377aaeecc30f3eca275f9950be8cd808fe5c492e6440013" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a20e5769519867b8d9c8aa0f134342ae786c7498e23800baba164d077b6afe22" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "07d68951f8bd72e1bb48ad9d1746d55fe2d71347ef6fcca6d8ff6f544c370682" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "98a8399a1492c1a46d408fff63382fd0d73c5fc660a58d35629c3107d08e1a2e" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2d33f6ed814d838c6001d6f43c86b096a4e1fbd6f52f2e7d2ebe55c4e638ee4d" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/redis-db-master-0                                Container image "docker.io/bitnami/redis:7.0.10-debian-11-r4" already present on machine
9m28s       Normal    Created                   pod/redis-db-master-0                                Created container redis
9m26s       Normal    Started                   pod/redis-db-master-0                                Started container redis
8m56s       Warning   Unhealthy                 pod/redis-db-master-0                                Liveness probe failed: Timed out
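The event list above mixes Normal and Warning entries and is not sorted chronologically by default. When triaging, the same kubectl get events command can be narrowed down (a sketch using standard kubectl flags; mysqldb-0 is just the pod from the output above as an example):

```shell
# Show only Warning events -- usually the fastest path to the failure
kubectl get events --field-selector type=Warning

# Sort all events chronologically (kubectl does not sort by default)
kubectl get events --sort-by=.metadata.creationTimestamp

# Restrict to events about a single object, e.g. the mysqldb-0 pod
kubectl get events --field-selector involvedObject.name=mysqldb-0
```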
[root@master kubernetes]# kubectl describe node node1
Name:               node1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1
                    kubernetes.io/os=linux
                    nodeenv=pro
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"86:6c:1f:00:91:5b"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.31.138
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 18 Sep 2022 14:37:35 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node1
  AcquireTime:     <unset>
  RenewTime:       Tue, 19 Sep 2023 22:27:37 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 19 Sep 2023 22:17:39 +0800   Tue, 19 Sep 2023 22:17:39 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 19 Sep 2023 22:27:27 +0800   Thu, 14 Sep 2023 22:19:07 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.31.138
  Hostname:    node1
Capacity:
  cpu:                2
  ephemeral-storage:  51175Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3861256Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  48294789041
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3758856Ki
  pods:               110
System Info:
  Machine ID:                 2da3702dd1ea45a59bee481c361484dc
  System UUID:                D4E94D56-3E90-005C-59F8-AE44B0623D1C
  Boot ID:                    3addb0d6-f928-470d-a65a-70b5c2790adf
  Kernel Version:             3.10.0-1160.76.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.18
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (13 in total)
  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
  default                     busybox                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20d
  default                     mysqldb-0                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  default                     redis-db-master-0                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  dev                         content-web-deloy-58f6465676-pd5pq                         400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      163d
  dev                         mall-wx-deploy-768c46897-bhq4l                             400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      153d
  kube-system                 coredns-7ff77c879f-ppgqc                                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     153d
  kube-system                 grafana-core-768b6bf79c-gwkdc                              100m (5%)     100m (5%)   100Mi (2%)       100Mi (2%)     196d
  kube-system                 kube-flannel-ds-amd64-xzgr9                                100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      152d
  kube-system                 kube-proxy-d89lh                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  kube-system                 metrics-server-57bc7f4584-gc2s6                            200m (10%)    300m (15%)  100Mi (2%)       200Mi (5%)     215d
  kube-system                 node-exporter-5kk5l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         197d
  kube-system                 prometheus-7486bf7f4b-hgb55                                100m (5%)     500m (25%)  100Mi (2%)       2500Mi (68%)   197d
  nfs-pro                     nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         161d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1400m (70%)  3 (150%)
  memory             932Mi (25%)  5068Mi (138%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From               Message
  ----    ------                   ----               ----               -------
  Normal  Starting                 11m                kubelet, node1     Starting kubelet.
  Normal  NodeAllocatableEnforced  11m                kubelet, node1     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x7 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 10m                kube-proxy, node1  Starting kube-proxy.
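Rather than scanning the full describe output, the node's condition table can be extracted directly with a jsonpath query (a sketch; node1 is the node from this cluster):

```shell
# Print each condition type and status for node1, one per line
kubectl get node node1 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# One-line summary including Ready status, version, and internal IP
kubectl get node node1 -o wide
```

A healthy node reports False for the pressure conditions and True only for Ready, matching the Conditions table above.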
[root@node1 kubernetes]# kubectl get pod -A
NAMESPACE     NAME                                                      READY   STATUS             RESTARTS   AGE
default       busybox                                                   1/1     Running            2          20d
default       mysqldb-0                                                 1/1     Running            8          152d
default       redis-db-master-0                                         1/1     Running            8          152d
dev           content-web-deloy-58f6465676-pd5pq                        1/1     Running            15         163d
dev           mall-wx-deploy-768c46897-bhq4l                            0/1     CrashLoopBackOff   65         153d
kube-system   coredns-7ff77c879f-8blg8                                  1/1     Running            9          153d
kube-system   coredns-7ff77c879f-ppgqc                                  1/1     Running            9          153d
kube-system   etcd-master                                               1/1     Running            38         366d
kube-system   grafana-core-768b6bf79c-gwkdc                             1/1     Running            20         196d
kube-system   kube-apiserver-master                                     1/1     Running            191        366d
kube-system   kube-controller-manager-master                            1/1     Running            50         366d
kube-system   kube-flannel-ds-amd64-pgjgh                               1/1     Running            12         152d
kube-system   kube-flannel-ds-amd64-xzgr9                               1/1     Running            12         152d
kube-system   kube-proxy-d89lh                                          1/1     Running            9          152d
kube-system   kube-proxy-r5dk6                                          1/1     Running            9          152d
kube-system   kube-scheduler-master                                     1/1     Running            50         366d
kube-system   metrics-server-57bc7f4584-gc2s6                           1/1     Running            29         215d
kube-system   node-exporter-5kk5l                                       1/1     Running            20         197d
kube-system   prometheus-7486bf7f4b-hgb55                               1/1     Running            20         197d
nfs-pro       nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7   1/1     Running            13         161d
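With many pods, hiding the healthy ones makes problems like the CrashLoopBackOff above stand out. Note that a pod in CrashLoopBackOff still has phase Running in the API, so filtering on the printed STATUS column works better than a field selector on status.phase:

```shell
# Quick triage: hide healthy pods; CrashLoopBackOff, Pending, Error etc. remain
kubectl get pods -A | grep -vE 'Running|Completed'

# Show restart counts together with node placement and pod IPs
kubectl get pods -A -o wide
```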
[root@node1 kubernetes]# kubectl describe pod busybox -n default
Name:         busybox
Namespace:    default
Priority:     0
Node:         node1/192.168.31.138
Start Time:   Tue, 29 Aug 2023 22:31:47 +0800
Labels:       <none>
Annotations:  Status:  Running
IP:           192.168.31.138
IPs:
  IP:  192.168.31.138
Containers:
  busybox:
    Container ID:  docker://082ee78c1807e47967d8ec25fc0047808f10645b4574b9ef9d30a05dd1984176
    Image:         busybox:1.28
    Image ID:      docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Tue, 19 Sep 2023 22:17:22 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 14 Sep 2023 22:19:14 +0800
      Finished:     Thu, 14 Sep 2023 23:06:39 +0800
    Ready:          True
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n775f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-n775f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n775f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason          Age   From            Message
  ----    ------          ----  ----            -------
  Normal  SandboxChanged  11m   kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          11m   kubelet, node1  Container image "busybox:1.28" already present on machine
  Normal  Created         11m   kubelet, node1  Created container busybox
  Normal  Started         11m   kubelet, node1  Started container busybox
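The Last State block above shows exit code 137, i.e. 128 + signal 9 (SIGKILL): the container was killed externally, typically by the OOM killer or a node shutdown, which matches the reboot seen in the node events. The same termination details can be pulled without the full describe output (a jsonpath sketch):

```shell
# Reason and exit code of the previous container instance of busybox
kubectl get pod busybox -n default \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason} {.status.containerStatuses[0].lastState.terminated.exitCode}{"\n"}'
```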
[root@node1 kubernetes]# kubectl logs -f  busybox -n default
^C
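kubectl logs -f streams the current container's output until interrupted (the ^C above). For a pod that has restarted, like busybox with its two restarts, a few additional flags are often more useful (all standard kubectl logs options):

```shell
# Logs of the previous (terminated) container instance -- essential
# for diagnosing why a container crashed before its restart
kubectl logs busybox -n default --previous

# Only the last 100 lines, or only recent entries
kubectl logs busybox -n default --tail=100
kubectl logs busybox -n default --since=10m

# For multi-container pods, select the container with -c
kubectl logs <pod-name> -n <namespace> -c <container-name>
```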
[root@node1 kubernetes]# journalctl -u kubelet
-- Logs begin at 二 2023-09-19 20:57:42 CST, end at 二 2023-09-19 22:29:45 CST. --
9月 19 20:57:48 node1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.300712     777 server.go:417] Version: v1.18.0
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301084     777 plugins.go:100] No cloud provider specified.
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301234     777 server.go:837] Client rotation is on, will bootstrap in background
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.319352     777 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.923862     777 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924160     777 container_manager_linux.go:266] container manager verified user specified cgroup-root exists:
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924171     777 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {Runti
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924443     777 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924455     777 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none poli
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924458     777 container_manager_linux.go:306] Creating device plugin manager: true
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924852     777 client.go:75] Connecting to docker on unix:///var/run/docker.sock
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.925049     777 client.go:92] Start docker client with request timeout=2m0s
9月 19 20:57:55 node1 kubelet[777]: W0919 20:57:55.934715     777 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, fa
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.934734     777 docker_service.go:238] Hairpin mode set to "hairpin-veth"
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.973207     777 docker_service.go:253] Docker cri networking managed by cni
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004070     777 docker_service.go:258] Docker Info: &{ID:NSTL:BPGE:HFEB:2GVN:754N:AOXJ:LYVN:LN6C:6XIE:Z4UG:YWS
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004140     777 docker_service.go:271] Setting cgroupDriver to systemd
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043595     777 remote_runtime.go:59] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043629     777 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043912     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043936     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044064     777 remote_image.go:50] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044071     777 remote_image.go:50] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044080     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044084     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044416     777 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044733     777 kubelet.go:317] Watching apiserver
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.072921     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073155     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073227     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073273     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.074707     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.076218     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.314303     777 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers
9月 19 20:57:56 node1 kubelet[777]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.338432     777 kuberuntime_manager.go:211] Container runtime docker initialized, version: 20.10.18, apiVersio
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349572     777 server.go:1125] Started kubelet
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349946     777 server.go:145] Starting to listen on 0.0.0.0:10250
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.351740     777 server.go:393] Adding debug handlers to kubelet server.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.353691     777 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have compl
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.354559     777 event.go:269] Unable to write event: 'Post https://192.168.31.119:6443/api/v1/namespaces/defau
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.359570     777 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.371002     777 volume_manager.go:265] Starting Kubelet Volume Manager
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.379815     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.388230     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.391736     777 desired_state_of_world_populator.go:139] Desired state populator starts to run
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.392193     777 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460585     777 clientconn.go:106] parsed scheme: "unix"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460599     777 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460733     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containe
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460743     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.475715     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.475756     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496580     777 status_manager.go:158] Starting to sync pod status with apiserver
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496617     777 kubelet.go:1821] Starting kubelet main sync loop.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.496652     777 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have c
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.503657     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.548176     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.561743     777 kubelet_node_status.go:70] Attempting to register node node1
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.577020     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.579803     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.581134     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.590337     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.592697     777 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.596794     777 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have co
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.597702     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603749     777 cpu_manager.go:184] [cpumanager] starting with none policy
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603761     777 cpu_manager.go:185] [cpumanager] reconciling every 10s
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603776     777 state_mem.go:36] [cpumanager] initializing new in-memory state store
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604701     777 state_mem.go:88] [cpumanager] updated default cpuset: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604710     777 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604721     777 policy_none.go:43] [cpumanager] none policy: Start
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.612408     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.629396     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.636733     777 plugin_manager.go:114] Starting Kubelet Plugin Manager
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.637477     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.637516     777 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node inf
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.647459     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.655311     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.657280     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
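The repeated warnings above (`failed to read pod IP from plugin/docker: networkPlugin cni failed`) and errors (`node "node1" not found`, `failed to ensure node lease exists`) are typical of a kubelet that has just restarted and cannot yet reach the API server or the CNI plugin; they normally clear once the control plane and network plugin finish coming up. When scanning this kind of output (e.g. from `journalctl -u kubelet`), the klog prefix letter indicates severity: `I` = info, `W` = warning, `E` = error. A minimal sketch of filtering for error-level lines — the heredoc sample below stands in for the real journal, which you would pipe in with `journalctl -u kubelet --no-pager`:

```shell
# Count error-level (E...) klog lines in kubelet log output.
# klog stamps every line with a severity letter + MMDD, e.g. "E0919".
# On a real node, replace the heredoc sample with:
#   journalctl -u kubelet --no-pager | grep -E ': E[0-9]{4}'
cat <<'EOF' | grep -cE ': E[0-9]{4}'
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.371002   777 volume_manager.go:265] Starting Kubelet Volume Manager
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.392193   777 controller.go:136] failed to ensure node lease exists
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.475715   777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.548176   777 docker_sandbox.go:400] failed to read pod IP from plugin/docker
EOF
# prints: 2
```

Dropping the `-c` flag prints the matching lines themselves instead of the count, which is usually what you want when diagnosing a node.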