Preface:
Kubernetes networking is far more complete than native Docker networking, and that also means it is far more complex. More capability inevitably means more features, but also more ways for things to go wrong.
Below, using a binary-installed Kubernetes cluster, we will go over some basic concepts, show how to install the two mainstream network plugins, calico and flannel, and, for those who want both, how to switch from flannel to calico. (The configuration is largely the same regardless of how the cluster was deployed, for example with kubeadm; once you have learned one approach you can adapt it to the others, so other deployment methods are not covered separately. All roads lead to the same place.)
Some basic concepts
1.
cluster-ip and cluster-cidr
A. cluster-cidr
CIDR stands for Classless Inter-Domain Routing, a method of allocating IP addresses and efficiently routing IP packets on the internet. In plain terms: inside a Kubernetes cluster, the cluster CIDR is the address range that pod IPs are allocated from. In the extended pod listing below, 10.244.1.29 is one such address:
[root@master cfg]# k get po -A -owide
NAMESPACE       NAME                                   READY   STATUS      RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
default         hello-server-85d885f474-8ddcz          1/1     Running     0          3h32m   10.244.1.29       k8s-node1    <none>           <none>
default         hello-server-85d885f474-jbklt          1/1     Running     0          3h32m   10.244.0.27       k8s-master   <none>           <none>
default         nginx-demo-76c8bff45f-6nfnl            1/1     Running     0          3h32m   10.244.1.30       k8s-node1    <none>           <none>
default         nginx-demo-76c8bff45f-qv4w6            1/1     Running     0          3h32m   10.244.2.7        k8s-node2    <none>           <none>
default         web-5dcb957ccc-xd9hl                   1/1     Running     2          25h     10.244.0.26       k8s-master   <none>           <none>
ingress-nginx   ingress-nginx-admission-create-xc2z4   0/1     Completed   0          26h     192.168.169.133   k8s-node2    <none>           <none>
ingress-nginx   ingress-nginx-admission-patch-7xgst    0/1     Completed   3          26h     192.168.235.197   k8s-master   <none>           <none>
Many readers will now wonder: why do pods on node1 get 10.244.1.x addresses while pods on node2 get 10.244.2.x? In short, this is the work of the network plugin (flannel or calico): with --allocate-node-cidrs=true the controller-manager hands each node its own slice of the cluster CIDR (its podCIDR), and the plugin allocates pod IPs from that slice. The deeper mechanics are a story for another day.
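To see the per-node allocation directly, a minimal check (assuming --allocate-node-cidrs=true is set on kube-controller-manager, as in the config shown further down):

# Show the pod subnet assigned to each node by the controller-manager
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR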
OK, with a binary installation this CIDR is normally defined in the configuration files of two core services, kube-proxy and kube-controller-manager.
[root@master cfg]# grep -r -i "10.244" ./ ./kube-controller-manager.conf:--cluster-cidr=10.244.0.0/16 \ ./kube-proxy-config.yml:clusterCIDR: 10.244.0.0/24
vim kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.244.0.0/24    # this is the cluster CIDR
mode: "ipvs"
ipvs:
  minSyncPeriod: 0s
  scheduler: "rr"
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \          # this is the cluster CIDR
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
The CIDR defined in these two configuration files must be kept identical. Pay very close attention to this!
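A quick way to double-check that the two values agree (a minimal sketch; the paths are this cluster's /opt/kubernetes/cfg layout, adjust to your own):

# Both commands should print the same network, e.g. 10.244.0.0/16
grep -h -e '--cluster-cidr' /opt/kubernetes/cfg/kube-controller-manager.conf
grep -h 'clusterCIDR' /opt/kubernetes/cfg/kube-proxy-config.yml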
With flannel, a mismatch between the two may go largely unnoticed; with calico and kube-proxy in ipvs mode, as configured above, you will see a flood of errors and pod scheduling breaks (concretely, pods can neither be created nor deleted, and the logs turn into a wall of red; I may demonstrate this some other time).
B. cluster-ip
The cluster IP addresses. OK, let's look at the Services' IPs:
These addresses are much more uniform, all in 10.0.0.*; note that even the NodePort Service gets a 10.0.0.x cluster IP.
[root@master cfg]# k get svc -o wide
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE     SELECTOR
hello-server   ClusterIP   10.0.0.78    <none>        8000/TCP       3h36m   app=hello-server
kubernetes     ClusterIP   10.0.0.1     <none>        443/TCP        33d     <none>
nginx-demo     ClusterIP   10.0.0.127   <none>        8000/TCP       3h36m   app=nginx-demo
web            NodePort    10.0.0.100   <none>        80:31296/TCP   25h     app=web
In the configuration files this looks like:
[root@master cfg]# grep -r -i "10.0.0" ./ ./kube-apiserver.conf:--service-cluster-ip-range=10.0.0.0/24 \ ./kube-controller-manager.conf:--service-cluster-ip-range=10.0.0.0/24 \ ./kubelet-config.yml: - 10.0.0.2 ./kubelet-config.yml:maxOpenFiles: 1000000
That is, it lives in the kube-apiserver and kube-controller-manager configuration files. Here I defined 10.0.0.0/24. Is that a problem?
The answer is yes, and a fairly serious one: what I have here is wrong (my cluster is only for testing, so it hardly matters, and a test cluster rarely has many Services; production is another story). Anyone who works with networks regularly will know that 10.0.0.0/24 offers only 254 usable IP addresses, so once you have more than 254 Services, creating another one fails with: Internal error occurred: failed to allocate a serviceIP: range is full. The correct setting would be 10.0.0.0/16, which gives Services 65,534 usable IP addresses (more than 60,000 Services should be rather hard to reach!).
With the problem clear, the fix is simple: change the /24 to /16 (anyone can do that) and restart the relevant services; this effectively expands the Service network. One friendly warning, though: going the other way, from /16 to /24, will affect existing Services. In production, plan this capacity at design time, otherwise what gets "solved" may not be the problem but the person who caused it.
The service restart command is:
systemctl restart kube-apiserver kube-controller-manager
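For completeness, a minimal sketch of the edit itself (assuming the binary-install layout under /opt/kubernetes/cfg used throughout this post; back the files up first):

# Widen the Service CIDR from /24 to /16 in both config files, then restart
sed -i 's#--service-cluster-ip-range=10.0.0.0/24#--service-cluster-ip-range=10.0.0.0/16#' \
    /opt/kubernetes/cfg/kube-apiserver.conf \
    /opt/kubernetes/cfg/kube-controller-manager.conf
systemctl restart kube-apiserver kube-controller-manager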
C. clusterDNS and the CoreDNS Service IP
OK, I suspect many readers did not look too closely at the config files above: what is this 10.0.0.2 address about?
This one also comes as a pair: the CoreDNS Service and the kubelet must both use the same address, taken from the cluster-ip range. You can change it to 10.0.0.3, 4, 5, 6, whatever you like, as long as both sides use the same address and it falls inside the Service CIDR. If you set the Service CIDR to 10.90.0.0/16, then CoreDNS would simply use 10.90.0.2; you get the idea. Now, what happens if the two differ? Nothing dramatic, the cluster just throws all sorts of errors (in practice, in-cluster DNS resolution stops working).
The kubelet configuration file (note how the relevant IPs are defined):
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
The CoreDNS Service manifest (note how the relevant IPs are defined):
[root@master cfg]# cat ~/coredns/coredns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
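A quick way to confirm the pair really does match (a small sketch; the kubelet config path is this cluster's, adjust as needed):

# The kubelet clusterDNS entry and the CoreDNS Service IP should be identical
grep -A1 'clusterDNS' /opt/kubernetes/cfg/kubelet-config.yml
kubectl -n kube-system get svc coredns -o jsonpath='{.spec.clusterIP}{"\n"}'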
2.
Installing the flannel network plugin
There is not much to it: just apply the flannel manifest (listed in full under d below). Only a few places deserve a closer look:
a.
The Network value must match kube-proxy's clusterCIDR; if they differ you will, of course, get errors. The Type can be left alone, no change needed.
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
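To double-check before applying (a sketch; it assumes the manifest was saved as kube-flannel.yml, adjust the filename to whatever you downloaded):

# Compare flannel's Network with kube-proxy's clusterCIDR
grep '"Network"' kube-flannel.yml
grep 'clusterCIDR' /opt/kubernetes/cfg/kube-proxy-config.yml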
b.
The host directories mounted into the pods; these will need to be deleted later when uninstalling flannel.
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
c.
The virtual NICs you should see once the deployment is done:
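On each node you would typically expect a flannel.1 VXLAN device and, once a pod has been scheduled there, a cni0 bridge. A quick way to check (a sketch; the exact output differs per node):

# flannel.1 (VXLAN tunnel device) and cni0 (pod bridge) should be present
ip -brief link | grep -E 'flannel\.1|cni0'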
d. The flannel deployment manifest
After confirming the points above (the CIDR and the hostPath directories), apply this file; then check that the virtual NICs mentioned above have appeared on the nodes. If they have, flannel is deployed successfully.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoExecute
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
I have three nodes, so seeing three flannel pods in Running state is enough; there will be one flannel pod per node:
[root@master cfg]# k get po -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-76648cbfc9-zwjqz   1/1     Running   0          6h51m
kube-flannel-ds-4mx69      1/1     Running   1          7h9m
kube-flannel-ds-gmdph      1/1     Running   3          7h9m
kube-flannel-ds-m8hzz      1/1     Running   1          7h9m
If this is a freshly built cluster, the nodes will now show as Ready, proof that the installation really worked:
[root@master cfg]# k get no
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   33d   v1.18.3
k8s-node1    Ready    <none>   33d   v1.18.3
k8s-node2    Ready    <none>   33d   v1.18.3
And of course there is also the CoreDNS Service:
[root@master cfg]# k get svc -n kube-system
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
coredns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   33d
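With the plugin and CoreDNS both up, a quick end-to-end sanity check is to resolve a Service name from a throwaway pod (a sketch; the busybox image and pod name are arbitrary choices):

# Should resolve kubernetes.default via the 10.0.0.2 cluster DNS
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default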
3.
Deploying the calico network plugin
Calico can be installed in several ways:
- from the calico.yaml manifest file (recommended, and the method used here)
- binary installation (rarely used, not covered here)
- as a manually installed CNI plugin (also rarely used these days, not covered here)
- with the Tigera Calico Operator (the latest official guidance)
The Tigera Calico Operator is a management tool for installing and upgrading Calico; it manages the whole lifecycle of a Calico installation. Calico has shipped this tool officially since v3.15.
Calico installation requirements:
- x86-64, arm64, ppc64le, or s390x processor
- 2 CPUs
- 2 GB of RAM
- 10 GB of free disk space
- RedHat Enterprise Linux 7.x+, CentOS 7.x+, Ubuntu 16.04+, or Debian 9.x+
- Make sure Calico can manage the cali and tunl interfaces on the host.
This post installs calico from the manifest file.
Version compatibility between calico and kubernetes:
Kubernetes version   Calico version   Calico docs
1.18, 1.19, 1.20     3.18             https://projectcalico.docs.tigera.io/archive/v3.18/getting-started/kubernetes/requirements
                                      https://projectcalico.docs.tigera.io/archive/v3.18/manifests/calico.yaml
1.19, 1.20, 1.21     3.19             https://projectcalico.docs.tigera.io/archive/v3.19/getting-started/kubernetes/requirements
                                      https://projectcalico.docs.tigera.io/archive/v3.19/manifests/calico.yaml
1.19, 1.20, 1.21     3.20             https://projectcalico.docs.tigera.io/archive/v3.20/getting-started/kubernetes/requirements
                                      https://projectcalico.docs.tigera.io/archive/v3.20/manifests/calico.yaml
1.20, 1.21, 1.22     3.21             https://projectcalico.docs.tigera.io/archive/v3.21/getting-started/kubernetes/requirements
                                      https://projectcalico.docs.tigera.io/archive/v3.21/manifests/calico.yaml
1.21, 1.22, 1.23     3.22             https://projectcalico.docs.tigera.io/archive/v3.22/getting-started/kubernetes/requirements
                                      https://projectcalico.docs.tigera.io/archive/v3.22/manifests/calico.yaml
1.21, 1.22, 1.23     3.23             https://projectcalico.docs.tigera.io/archive/v3.23/getting-started/kubernetes/requirements
                                      https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml
1.22, 1.23, 1.24     3.24             https://projectcalico.docs.tigera.io/archive/v3.24/getting-started/kubernetes/requirements
                                      https://projectcalico.docs.tigera.io/archive/v3.24/manifests/calico.yaml
The download command is as follows (fetch the file first; a few places need editing before applying it):
wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
Some notes on the manifest file:
The manifest installs the following Kubernetes resources:
- a DaemonSet that runs the calico/node container on every host;
- a DaemonSet that installs the Calico CNI binaries and network configuration on every host;
- a Deployment that runs calico/kube-controllers;
- the Secret calico-etcd-secrets, which optionally carries the TLS credentials Calico uses to connect to etcd;
- the ConfigMap calico-config, which holds the configuration parameters used when installing Calico.
(1)
The "CALICO_IPV4POOL_CIDR" section of the manifest
Set it to the same CIDR as in kube-proxy-config.yml, 10.244.0.0/16 in this example.
To repeat: this option defines the default IPv4 pool created when Calico is installed, and pod IPs are picked from this range.
Changing the value after Calico has been installed has no effect.
By default "CALICO_IPV4POOL_CIDR" is commented out in calico.yaml; if kube-controller-manager's "--cluster-cidr" carries no value, the pool typically falls back to a default such as 192.168.0.0/16 or one of the 172.16.0.0/16 through 172.31.0.0/16 ranges.
When installing with kubeadm, the pod IP range should match the "podSubnet" field of the kubeadm init manifest, or the value passed via "--pod-network-cidr".
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
  valueFrom:
    configMapKeyRef:
      name: calico-config
      key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
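If you prefer to patch the downloaded file from the shell instead of editing it by hand, something along these lines works (a sketch; it assumes the two lines are still in their default commented-out form with the stock 192.168.0.0/16 value):

# Uncomment CALICO_IPV4POOL_CIDR and point it at this cluster's pod CIDR
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml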
(2)
calico_backend: "bird"
This sets the backend mechanism Calico uses. Supported values:
bird: enables BIRD; whether a host then uses BGP routing or an IPIP/VXLAN overlay is decided by the calico-node configuration. This is the default.
vxlan: pure VXLAN mode; only a VXLAN-based overlay network can be used.
# Configure the backend to use.
calico_backend: "bird"
Nothing else needs to be changed; the defaults are fine and there is not much else worth tuning.
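For reference, if you did want the pure VXLAN mode, the usual combination (a sketch, not what this post's cluster uses) is to flip the backend in the calico-config ConfigMap and swap the encapsulation env vars on the calico-node DaemonSet:

# calico-config ConfigMap
calico_backend: "vxlan"

# calico-node container env
- name: CALICO_IPV4POOL_IPIP
  value: "Never"
- name: CALICO_IPV4POOL_VXLAN
  value: "Always"

You may also need to adjust the BIRD-based liveness/readiness probes when BIRD is disabled; check the comments in your version of calico.yaml and the Calico docs.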
4.
Switching from flannel to calico
Run rm -rf /etc/cni/net.d/10-flannel.conflist on every node to remove flannel's CNI configuration, then apply the calico manifest and reboot the nodes. (Alternatively you can restart the relevant services and delete flannel's NICs and routes by hand, but that is far more fiddly.)
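Put together, the switch looks roughly like this (a sketch; it assumes the flannel manifest was applied from kube-flannel.yml and that rebooting the nodes is acceptable; the kubectl delete step is an optional extra that simply removes the now-unused flannel DaemonSet):

# 1. Remove the flannel deployment (optional)
kubectl delete -f kube-flannel.yml
# 2. On every node: drop flannel's CNI config so calico's takes over
rm -rf /etc/cni/net.d/10-flannel.conflist
# 3. Apply the calico manifest prepared above
kubectl apply -f calico.yaml
# 4. Reboot each node (or clean up flannel.1/cni0 and stale routes by hand)
reboot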
Wait for the relevant pods to come up and run normally:
[root@master ~]# k get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-57546b46d6-hcfg5   1/1     Running   1          32m
calico-node-7x7ln                          1/1     Running   2          32m
calico-node-dbsmv                          1/1     Running   1          32m
calico-node-vqbqn                          1/1     Running   3          32m
coredns-76648cbfc9-zwjqz                   1/1     Running   11         17h
Check the NICs:
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:55:91:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.16/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe55:9106/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:51:da:97:25 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 4e:2f:8c:a7:d3:12 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether 2a:8d:65:11:8f:7a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.12/32 brd 10.0.0.12 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.2/32 brd 10.0.0.2 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.78/32 brd 10.0.0.78 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.102/32 brd 10.0.0.102 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.1/32 brd 10.0.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.127/32 brd 10.0.0.127 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 brd 10.0.0.100 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6: cali21d67233fc3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
7: calibbdaeb2fa53@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
8: cali29233485d0f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.235.192/32 brd 10.244.235.192 scope global tunl0
       valid_lft forever preferred_lft forever
Create a few test Services and pods; if everything runs normally, the network plugin switch has succeeded:
[root@master ~]# k get po -A
NAMESPACE   NAME                            READY   STATUS    RESTARTS   AGE
default     hello-server-85d885f474-jbggc   1/1     Running   0          65s
default     hello-server-85d885f474-sx562   1/1     Running   0          65s
default     nginx-demo-76c8bff45f-pln6h     1/1     Running   0          65s
default     nginx-demo-76c8bff45f-tflnz     1/1     Running   0          65s
To wrap up:
A quick way to inspect the cluster's Calico network configuration:
You can see that IPIP mode is in use and VXLAN is not enabled.
[root@master ~]# kubectl get ippools -o yaml
apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    annotations:
      projectcalico.org/metadata: '{"uid":"85bfeb95-da98-4710-aed1-1f3f2ae16159","creationTimestamp":"2022-09-30T03:17:58Z"}'
    creationTimestamp: "2022-09-30T03:17:58Z"
    generation: 1
    managedFields:
    - apiVersion: crd.projectcalico.org/v1
      fieldsType: FieldsV1
      manager: Go-http-client
      operation: Update
      time: "2022-09-30T03:17:58Z"
    name: default-ipv4-ippool
    resourceVersion: "863275"
    selfLink: /apis/crd.projectcalico.org/v1/ippools/default-ipv4-ippool
    uid: 1886cacb-700f-4440-893a-a24ae9b5d2d3
  spec:
    blockSize: 26
    cidr: 10.244.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
    vxlanMode: Never
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
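If you only want the encapsulation mode and the pool CIDR, a custom-columns one-liner trims the output down (it reads the same IPPool fields shown above):

# Print CIDR, IPIP mode and VXLAN mode for every IP pool
kubectl get ippools -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidr,IPIP:.spec.ipipMode,VXLAN:.spec.vxlanMode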