K8s: Viewing Cluster Status, Event Descriptions, and Pod Logs

Summary: This article walks through inspecting a Kubernetes cluster: listing cluster events with kubectl get events, describing nodes and Pods with kubectl describe, listing Pods across namespaces with kubectl get pod -A, and reading Pod and kubelet logs with kubectl logs and journalctl -u kubelet.
[root@master kubernetes]# kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT                                               MESSAGE
9m48s       Normal    SandboxChanged            pod/busybox                                          Pod sandbox changed, it will be killed and re-created.
9m47s       Normal    Pulled                    pod/busybox                                          Container image "busybox:1.28" already present on machine
9m47s       Normal    Created                   pod/busybox                                          Created container busybox
9m46s       Normal    Started                   pod/busybox                                          Started container busybox
3m5s        Normal    FailedBinding             persistentvolumeclaim/data-redis-redis-ha-server-0   no persistent volumes available for this claim and no storage class is set
9m53s       Normal    Starting                  node/master                                          Starting kubelet.
9m53s       Normal    NodeHasSufficientMemory   node/master                                          Node master status is now: NodeHasSufficientMemory
9m53s       Normal    NodeHasNoDiskPressure     node/master                                          Node master status is now: NodeHasNoDiskPressure
9m53s       Normal    NodeHasSufficientPID      node/master                                          Node master status is now: NodeHasSufficientPID
9m53s       Normal    NodeAllocatableEnforced   node/master                                          Updated Node Allocatable limit across pods
9m51s       Warning   Rebooted                  node/master                                          Node master has been rebooted, boot id: a6e93e98-4513-4419-a12d-af366d494e71
9m45s       Normal    Starting                  node/master                                          Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/master                                          Node master event: Registered Node master in Controller
9m45s       Warning   FailedMount               pod/mysqldb-0                                        MountVolume.NodeAffinity check failed for volume "pvc-d138f573-608c-44a0-9f54-9c1e391241b5" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/mysqldb-0                                        Pod sandbox changed, it will be killed and re-created.
9m40s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5c07d18933e7b594f03ddd2a7e2cca0f64841041a04cef600ed5cc45f570416f" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m39s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ebadbe1af8a2deb84a2636fa8bb4f06cf35fc056d8e52ebc370add1ec6b6d3a" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "724288a2ec12f5474cdbf45e872fb0459056e2489f6598e9ea7102b615e69031" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fa77d4dfd3e111150ce1da3dc26b780cea9810d7d6f7ac1e1935a6437c1affdc" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m33s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3bc41abe18e0d3f65e9997f31d5934270910e6f844711e62b46c44cfa9e18ce2" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bff35dd9a356549127bdb600f665d67445d61b335af34a5f3afca01cbc9d02c0" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b68e5e248808057f74627fab0a363d1f92d744c117e9b6759508b16283204401" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/mysqldb-0                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "55f144ef0c9a8478d8635e29d130f3c0da22be567a083310646129d3c20ab317" network for pod "mysqldb-0": networkPlugin cni failed to set up pod "mysqldb-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/mysqldb-0                                        Container image "docker.io/bitnami/mysql:8.0.32-debian-11-r21" already present on machine
9m27s       Normal    Created                   pod/mysqldb-0                                        Created container mysql
9m26s       Normal    Started                   pod/mysqldb-0                                        Started container mysql
9m4s        Warning   Unhealthy                 pod/mysqldb-0                                        Startup probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)'
Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
10m         Normal    Starting                  node/node1                                           Starting kubelet.
10m         Normal    NodeHasSufficientMemory   node/node1                                           Node node1 status is now: NodeHasSufficientMemory
10m         Normal    NodeHasNoDiskPressure     node/node1                                           Node node1 status is now: NodeHasNoDiskPressure
10m         Normal    NodeHasSufficientPID      node/node1                                           Node node1 status is now: NodeHasSufficientPID
10m         Normal    NodeAllocatableEnforced   node/node1                                           Updated Node Allocatable limit across pods
9m45s       Normal    Starting                  node/node1                                           Starting kube-proxy.
9m20s       Normal    RegisteredNode            node/node1                                           Node node1 event: Registered Node node1 in Controller
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-master-0      no persistent volumes available for this claim and no storage class is set
3m5s        Normal    FailedBinding             persistentvolumeclaim/redis-data-redis-slave-0       no persistent volumes available for this claim and no storage class is set
9m44s       Warning   FailedMount               pod/redis-db-master-0                                MountVolume.NodeAffinity check failed for volume "pvc-02087f98-25cf-41c4-9d74-810a922d4c65" : error retrieving node: node "node1" not found
9m29s       Normal    SandboxChanged            pod/redis-db-master-0                                Pod sandbox changed, it will be killed and re-created.
9m39s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ec7648600829436aefb107c31297e4ce6ffbf96de17e3bac7324e20119e5a531" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m37s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "99b5ba01b6c37779f58b926f7c5a5a276353ef02168f4b7691f9d5fb8d0ec37f" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m36s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9dda09d4c5cdab632377aaeecc30f3eca275f9950be8cd808fe5c492e6440013" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m35s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a20e5769519867b8d9c8aa0f134342ae786c7498e23800baba164d077b6afe22" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m32s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "07d68951f8bd72e1bb48ad9d1746d55fe2d71347ef6fcca6d8ff6f544c370682" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m31s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "98a8399a1492c1a46d408fff63382fd0d73c5fc660a58d35629c3107d08e1a2e" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m29s       Warning   FailedCreatePodSandBox    pod/redis-db-master-0                                Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2d33f6ed814d838c6001d6f43c86b096a4e1fbd6f52f2e7d2ebe55c4e638ee4d" network for pod "redis-db-master-0": networkPlugin cni failed to set up pod "redis-db-master-0_default" network: open /run/flannel/subnet.env: no such file or directory
9m28s       Normal    Pulled                    pod/redis-db-master-0                                Container image "docker.io/bitnami/redis:7.0.10-debian-11-r4" already present on machine
9m28s       Normal    Created                   pod/redis-db-master-0                                Created container redis
9m26s       Normal    Started                   pod/redis-db-master-0                                Started container redis
8m56s       Warning   Unhealthy                 pod/redis-db-master-0                                Liveness probe failed: Timed out
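In the events above, the Warning entries tell the story: after the node rebooted (see the Rebooted event on node/master), the flannel CNI had not yet recreated /run/flannel/subnet.env, so every pod sandbox failed to come up until the flannel DaemonSet pod restarted; separately, several PVCs report FailedBinding because no PersistentVolume matches them and no StorageClass is set. To cut through the noise, the event list can be filtered and sorted; the following is a sketch using standard kubectl flags:

kubectl get events --field-selector type=Warning --sort-by='.metadata.creationTimestamp'   # only Warning events, oldest first
kubectl get events --field-selector involvedObject.name=mysqldb-0 -n default               # events for a single object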
[root@master kubernetes]# kubectl describe node node1
Name:               node1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node1
                    kubernetes.io/os=linux
                    nodeenv=pro
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"86:6c:1f:00:91:5b"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.31.138
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 18 Sep 2022 14:37:35 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node1
  AcquireTime:     <unset>
  RenewTime:       Tue, 19 Sep 2023 22:27:37 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 19 Sep 2023 22:17:39 +0800   Tue, 19 Sep 2023 22:17:39 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 19 Sep 2023 22:27:27 +0800   Sat, 26 Aug 2023 10:20:40 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 19 Sep 2023 22:27:27 +0800   Thu, 14 Sep 2023 22:19:07 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.31.138
  Hostname:    node1
Capacity:
  cpu:                2
  ephemeral-storage:  51175Mi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3861256Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  48294789041
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3758856Ki
  pods:               110
System Info:
  Machine ID:                 2da3702dd1ea45a59bee481c361484dc
  System UUID:                D4E94D56-3E90-005C-59F8-AE44B0623D1C
  Boot ID:                    3addb0d6-f928-470d-a65a-70b5c2790adf
  Kernel Version:             3.10.0-1160.76.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.18
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (13 in total)
  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
  default                     busybox                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20d
  default                     mysqldb-0                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  default                     redis-db-master-0                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  dev                         content-web-deloy-58f6465676-pd5pq                         400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      163d
  dev                         mall-wx-deploy-768c46897-bhq4l                             400m (20%)    1 (50%)     256Mi (6%)       1Gi (27%)      153d
  kube-system                 coredns-7ff77c879f-ppgqc                                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     153d
  kube-system                 grafana-core-768b6bf79c-gwkdc                              100m (5%)     100m (5%)   100Mi (2%)       100Mi (2%)     196d
  kube-system                 kube-flannel-ds-amd64-xzgr9                                100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      152d
  kube-system                 kube-proxy-d89lh                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         152d
  kube-system                 metrics-server-57bc7f4584-gc2s6                            200m (10%)    300m (15%)  100Mi (2%)       200Mi (5%)     215d
  kube-system                 node-exporter-5kk5l                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         197d
  kube-system                 prometheus-7486bf7f4b-hgb55                                100m (5%)     500m (25%)  100Mi (2%)       2500Mi (68%)   197d
  nfs-pro                     nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         161d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1400m (70%)  3 (150%)
  memory             932Mi (25%)  5068Mi (138%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                From               Message
  ----    ------                   ----               ----               -------
  Normal  Starting                 11m                kubelet, node1     Starting kubelet.
  Normal  NodeAllocatableEnforced  11m                kubelet, node1     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x7 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x8 over 11m)  kubelet, node1     Node node1 status is now: NodeHasNoDiskPressure
  Normal  Starting                 10m                kube-proxy, node1  Starting kube-proxy.
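kubectl describe node shows that node1 is Ready and lists its capacity, allocatable resources, and per-pod requests and limits, which here add up to 70% of CPU requests and an overcommitted 150% of CPU limits. Since metrics-server is running in this cluster, actual usage can be compared against those figures; a quick check might look like:

kubectl top node node1           # live CPU/memory usage (requires metrics-server)
kubectl get node node1 -o yaml   # full node object, including conditions and taints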
[root@node1 kubernetes]# kubectl get pod -A
NAMESPACE     NAME                                                      READY   STATUS             RESTARTS   AGE
default       busybox                                                   1/1     Running            2          20d
default       mysqldb-0                                                 1/1     Running            8          152d
default       redis-db-master-0                                         1/1     Running            8          152d
dev           content-web-deloy-58f6465676-pd5pq                        1/1     Running            15         163d
dev           mall-wx-deploy-768c46897-bhq4l                            0/1     CrashLoopBackOff   65         153d
kube-system   coredns-7ff77c879f-8blg8                                  1/1     Running            9          153d
kube-system   coredns-7ff77c879f-ppgqc                                  1/1     Running            9          153d
kube-system   etcd-master                                               1/1     Running            38         366d
kube-system   grafana-core-768b6bf79c-gwkdc                             1/1     Running            20         196d
kube-system   kube-apiserver-master                                     1/1     Running            191        366d
kube-system   kube-controller-manager-master                            1/1     Running            50         366d
kube-system   kube-flannel-ds-amd64-pgjgh                               1/1     Running            12         152d
kube-system   kube-flannel-ds-amd64-xzgr9                               1/1     Running            12         152d
kube-system   kube-proxy-d89lh                                          1/1     Running            9          152d
kube-system   kube-proxy-r5dk6                                          1/1     Running            9          152d
kube-system   kube-scheduler-master                                     1/1     Running            50         366d
kube-system   metrics-server-57bc7f4584-gc2s6                           1/1     Running            29         215d
kube-system   node-exporter-5kk5l                                       1/1     Running            20         197d
kube-system   prometheus-7486bf7f4b-hgb55                               1/1     Running            20         197d
nfs-pro       nfs-provisioner-nfs-client-provisioner-59db6984d9-wbzb7   1/1     Running            13         161d
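Most pods are Running; the exception is mall-wx-deploy-768c46897-bhq4l in the dev namespace, stuck in CrashLoopBackOff after 65 restarts. A quick way to surface only the problem pods and then drill into one might be:

kubectl get pod -A | grep -v Running                              # crude filter for non-Running pods
kubectl describe pod mall-wx-deploy-768c46897-bhq4l -n dev        # events and container state for the crashing pod
kubectl logs mall-wx-deploy-768c46897-bhq4l -n dev --previous     # logs from the previous (crashed) container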
[root@node1 kubernetes]# kubectl describe pod busybox -n default
Name:         busybox
Namespace:    default
Priority:     0
Node:         node1/192.168.31.138
Start Time:   Tue, 29 Aug 2023 22:31:47 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           192.168.31.138
IPs:
  IP:  192.168.31.138
Containers:
  busybox:
    Container ID:  docker://082ee78c1807e47967d8ec25fc0047808f10645b4574b9ef9d30a05dd1984176
    Image:         busybox:1.28
    Image ID:      docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Tue, 19 Sep 2023 22:17:22 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 14 Sep 2023 22:19:14 +0800
      Finished:     Thu, 14 Sep 2023 23:06:39 +0800
    Ready:          True
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n775f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-n775f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n775f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason          Age   From            Message
  ----    ------          ----  ----            -------
  Normal  SandboxChanged  11m   kubelet, node1  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          11m   kubelet, node1  Container image "busybox:1.28" already present on machine
  Normal  Created         11m   kubelet, node1  Created container busybox
  Normal  Started         11m   kubelet, node1  Started container busybox
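Note the Last State block: the previous busybox container terminated with exit code 137, i.e. it was killed with SIGKILL (128 + 9), which is consistent with the node reboot seen in the cluster events rather than an application crash. The same information can be pulled out non-interactively with a jsonpath query, for example:

kubectl get pod busybox -n default -o jsonpath='{.status.containerStatuses[0].lastState}'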
[root@node1 kubernetes]# kubectl logs -f  busybox -n default
^C
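kubectl logs produces nothing here because the busybox container only runs sleep 3600 and writes nothing to stdout (-f follows the stream until interrupted with Ctrl-C). For pods that do log, a few standard flags keep the output manageable:

kubectl logs mysqldb-0 -n default --tail=100            # last 100 lines only
kubectl logs redis-db-master-0 -n default --since=10m   # only the last 10 minutes
kubectl logs busybox -n default --previous              # logs from the previous container instance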
[root@node1 kubernetes]# journalctl -u kubelet
-- Logs begin at 二 2023-09-19 20:57:42 CST, end at 二 2023-09-19 22:29:45 CST. --
9月 19 20:57:48 node1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.300712     777 server.go:417] Version: v1.18.0
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301084     777 plugins.go:100] No cloud provider specified.
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.301234     777 server.go:837] Client rotation is on, will bootstrap in background
9月 19 20:57:52 node1 kubelet[777]: I0919 20:57:52.319352     777 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.923862     777 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924160     777 container_manager_linux.go:266] container manager verified user specified cgroup-root exists:
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924171     777 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {Runti
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924443     777 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924455     777 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none poli
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924458     777 container_manager_linux.go:306] Creating device plugin manager: true
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.924852     777 client.go:75] Connecting to docker on unix:///var/run/docker.sock
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.925049     777 client.go:92] Start docker client with request timeout=2m0s
9月 19 20:57:55 node1 kubelet[777]: W0919 20:57:55.934715     777 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, fa
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.934734     777 docker_service.go:238] Hairpin mode set to "hairpin-veth"
9月 19 20:57:55 node1 kubelet[777]: I0919 20:57:55.973207     777 docker_service.go:253] Docker cri networking managed by cni
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004070     777 docker_service.go:258] Docker Info: &{ID:NSTL:BPGE:HFEB:2GVN:754N:AOXJ:LYVN:LN6C:6XIE:Z4UG:YWS
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.004140     777 docker_service.go:271] Setting cgroupDriver to systemd
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043595     777 remote_runtime.go:59] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043629     777 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043912     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.043936     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044064     777 remote_image.go:50] parsed scheme: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044071     777 remote_image.go:50] scheme "" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044080     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil>
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044084     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044416     777 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.044733     777 kubelet.go:317] Watching apiserver
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.072921     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073155     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073227     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.073273     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get h
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.074707     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Ge
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.076218     777 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.314303     777 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers
9月 19 20:57:56 node1 kubelet[777]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.338432     777 kuberuntime_manager.go:211] Container runtime docker initialized, version: 20.10.18, apiVersio
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349572     777 server.go:1125] Started kubelet
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.349946     777 server.go:145] Starting to listen on 0.0.0.0:10250
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.351740     777 server.go:393] Adding debug handlers to kubelet server.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.353691     777 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have compl
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.354559     777 event.go:269] Unable to write event: 'Post https://192.168.31.119:6443/api/v1/namespaces/defau
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.359570     777 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.371002     777 volume_manager.go:265] Starting Kubelet Volume Manager
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.379815     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.388230     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.391736     777 desired_state_of_world_populator.go:139] Desired state populator starts to run
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.392193     777 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460585     777 clientconn.go:106] parsed scheme: "unix"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460599     777 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460733     777 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containe
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.460743     777 clientconn.go:933] ClientConn switching balancer to "pick_first"
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.475715     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.475756     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496580     777 status_manager.go:158] Starting to sync pod status with apiserver
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.496617     777 kubelet.go:1821] Starting kubelet main sync loop.
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.496652     777 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have c
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.503657     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.548176     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.561743     777 kubelet_node_status.go:70] Attempting to register node node1
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.577020     777 kubelet.go:2267] node "node1" not found
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.579803     777 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.581134     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.590337     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.592697     777 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get https:/
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.596794     777 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have co
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.597702     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603749     777 cpu_manager.go:184] [cpumanager] starting with none policy
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603761     777 cpu_manager.go:185] [cpumanager] reconciling every 10s
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.603776     777 state_mem.go:36] [cpumanager] initializing new in-memory state store
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604701     777 state_mem.go:88] [cpumanager] updated default cpuset: ""
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604710     777 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.604721     777 policy_none.go:43] [cpumanager] none policy: Start
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.612408     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.629396     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: I0919 20:57:56.636733     777 plugin_manager.go:114] Starting Kubelet Plugin Manager
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.637477     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.637516     777 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get node inf
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.647459     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
9月 19 20:57:56 node1 kubelet[777]: E0919 20:57:56.655311     777 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeCl
9月 19 20:57:56 node1 kubelet[777]: W0919 20:57:56.657280     777 docker_sandbox.go:400] failed to read pod IP from plugin/docker: networkPlugin cni failed on t
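When kubectl logs is not enough (for example when a pod never starts, as with the CNI sandbox failures above), the kubelet's own journal is the next stop. The errors here ("node \"node1\" not found", failed API list calls, cni failures) all fall in the window right after boot, before the API server and flannel were back. A few journalctl variations help narrow such a window down:

journalctl -u kubelet -f                                    # follow new kubelet logs live
journalctl -u kubelet -b                                    # only entries since the last boot
journalctl -u kubelet --since "2023-09-19 20:57" --no-pager # entries from a specific time onward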