This cluster has three nodes, so seeing three flannel pods in the Running state is enough; there is one flannel pod per node:
[root@master cfg]# k get po -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-76648cbfc9-zwjqz   1/1     Running   0          6h51m
kube-flannel-ds-4mx69      1/1     Running   1          7h9m
kube-flannel-ds-gmdph      1/1     Running   3          7h9m
kube-flannel-ds-m8hzz      1/1     Running   1          7h9m
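Since flannel runs as a DaemonSet, the number of kube-flannel-ds pods should always equal the number of nodes. A small sketch of that check, fed with the pod names captured above rather than a live kubectl call (the node count of 3 is this article's cluster):

```shell
# Pod names from the output above, standing in for a live `k get po` query:
pods='kube-flannel-ds-4mx69
kube-flannel-ds-gmdph
kube-flannel-ds-m8hzz
coredns-76648cbfc9-zwjqz'
# Count the flannel DaemonSet pods and compare with the node count (3 in this cluster):
flannel_count=$(printf '%s\n' "$pods" | grep -c '^kube-flannel-ds-')
node_count=3
echo "flannel pods: $flannel_count, nodes: $node_count"
[ "$flannel_count" -eq "$node_count" ] && echo "one flannel pod per node: OK"
```

On a live cluster the same comparison can of course be made directly between `k get po -n kube-system` and `k get no`.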
On a freshly built cluster, the nodes will show as Ready at this point, which confirms the installation succeeded:
[root@master cfg]# k get no
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   33d   v1.18.3
k8s-node1    Ready    <none>   33d   v1.18.3
k8s-node2    Ready    <none>   33d   v1.18.3
And of course there is also a Service:
[root@master cfg]# k get svc -n kube-system
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
coredns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   33d
3. Deploying the Calico network plugin
Calico can be installed in four ways:
- Using the calico.yaml manifest file (recommended)
- Binary installation (rarely used; not covered here)
- Plugin-based installation (also rarely used; not covered here)
- Using the Tigera Calico Operator (the latest official guidance). The Tigera Calico Operator is a management tool for installing and upgrading Calico; it manages the entire installation lifecycle. Official adoption of this tool began with Calico v3.15.
Calico installation requirements:
- x86-64, arm64, ppc64le, or s390x processor
- 2 CPUs
- 2 GB of RAM
- 10 GB of disk space
- RedHat Enterprise Linux 7.x+, CentOS 7.x+, Ubuntu 16.04+, or Debian 9.x+
- Make sure Calico can manage the cali and tunl interfaces on the host.
This article installs Calico using the manifest-file method.
Download the manifest first (some parts of it will need to be modified shortly):
wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate
Details of some settings in the manifest file:
The manifest installs the following Kubernetes resources:
- A DaemonSet that runs the calico/node container on every host;
- A DaemonSet that installs the Calico CNI binaries and network configuration on every host;
- A Deployment that runs calico/kube-controllers;
- Secret/calico-etcd-secrets, which optionally provides the TLS credentials for Calico's connection to etcd;
- ConfigMap/calico-config, which provides configuration parameters for the Calico installation.
(1)
The "CALICO_IPV4POOL_CIDR" entry in the manifest
is set to the same CIDR as in the kube-proxy-config.yaml file, 10.244.0.0/16 in this example.
As a reminder, this option sets the default IPv4 pool created when Calico is installed; Pod IPs are chosen from this range.
Changing this value after Calico has been installed has no effect.
By default, "CALICO_IPV4POOL_CIDR" is commented out in calico.yaml; if kube-controller-manager's "--cluster-cidr" is not set, the pool usually falls back to the default "192.168.0.0/16,172.16.0.0/16,..,172.31.0.0/16".
When using kubeadm, the Pod IP range should match the "podSubnet" field of the kubeadm init manifest or the value passed with "--pod-network-cidr".
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
  valueFrom:
    configMapKeyRef:
      name: calico-config
      key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
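To arrive at the values above, the commented-out lines in the downloaded calico.yaml are uncommented and pointed at the cluster's pod CIDR. A sketch using sed against a small stand-in excerpt (run the same substitutions on the real calico.yaml; the commented default there is assumed to be 192.168.0.0/16):

```shell
# Stand-in excerpt mimicking the commented-out block in calico.yaml:
cat > calico-excerpt.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the variable and set it to this cluster's pod CIDR:
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
  calico-excerpt.yaml
cat calico-excerpt.yaml
```

The two substitutions deliberately keep the YAML indentation aligned, so the edited lines drop back into the calico-node container's env list unchanged.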
(2)
calico_backend: "bird"
Sets the backend mechanism Calico uses. Supported values:
bird: enables BIRD; depending on the Calico-Node configuration, host networking is implemented either in BGP routing mode or in IPIP/VXLAN overlay mode. This is the default.
vxlan: pure VXLAN mode; only the VXLAN-based overlay network is available.
# Configure the backend to use.
calico_backend: "bird"
Nothing else needs to be changed; the defaults are fine, and there is little else worth configuring.
4. Switching from flannel to Calico
Run rm -rf /etc/cni/net.d/10-flannel.conflist on every node to remove flannel's CNI configuration file, then apply the Calico manifest and reboot the nodes. Alternatively, you could restart the relevant services and delete flannel's interface and routes by hand, but that is far more tedious.
Wait until the relevant pods are running normally:
[root@master ~]# k get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-57546b46d6-hcfg5   1/1     Running   1          32m
calico-node-7x7ln                          1/1     Running   2          32m
calico-node-dbsmv                          1/1     Running   1          32m
calico-node-vqbqn                          1/1     Running   3          32m
coredns-76648cbfc9-zwjqz                   1/1     Running   11         17h
Check the network interfaces:
[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:55:91:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.16/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe55:9106/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:51:da:97:25 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 4e:2f:8c:a7:d3:12 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether 2a:8d:65:11:8f:7a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.12/32 brd 10.0.0.12 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.2/32 brd 10.0.0.2 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.78/32 brd 10.0.0.78 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.102/32 brd 10.0.0.102 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.1/32 brd 10.0.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.127/32 brd 10.0.0.127 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 brd 10.0.0.100 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6: cali21d67233fc3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
7: calibbdaeb2fa53@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
8: cali29233485d0f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.235.192/32 brd 10.244.235.192 scope global tunl0
       valid_lft forever preferred_lft forever
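The tunl0 address 10.244.235.192 shown above is drawn from the pod pool configured earlier (CALICO_IPV4POOL_CIDR = 10.244.0.0/16). A minimal prefix check, valid here only because a /16 boundary falls exactly on the second octet:

```shell
tunl0_ip="10.244.235.192"   # from the `ip a` output above
pool_prefix="10.244."       # CALICO_IPV4POOL_CIDR is 10.244.0.0/16
case "$tunl0_ip" in
  "$pool_prefix"*) in_pool=yes ;;
  *)               in_pool=no ;;
esac
echo "tunl0 $tunl0_ip inside pod pool: $in_pool"
```

Each node gets its own tunl0 address from the pool, so the same check applies on every node.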
Create some test Services and Pods; they all run normally, which shows the network-plugin switch succeeded:
[root@master ~]# k get po -A
NAMESPACE   NAME                            READY   STATUS    RESTARTS   AGE
default     hello-server-85d885f474-jbggc   1/1     Running   0          65s
default     hello-server-85d885f474-sx562   1/1     Running   0          65s
default     nginx-demo-76c8bff45f-pln6h     1/1     Running   0          65s
default     nginx-demo-76c8bff45f-tflnz     1/1     Running   0          65s
A quick summary:
Quickly checking the Kubernetes network configuration:
You can see that IPIP mode is in use and VXLAN is not enabled:
[root@master ~]# kubectl get ippools -o yaml
apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    annotations:
      projectcalico.org/metadata: '{"uid":"85bfeb95-da98-4710-aed1-1f3f2ae16159","creationTimestamp":"2022-09-30T03:17:58Z"}'
    creationTimestamp: "2022-09-30T03:17:58Z"
    generation: 1
    managedFields:
    - apiVersion: crd.projectcalico.org/v1
      fieldsType: FieldsV1
      manager: Go-http-client
      operation: Update
      time: "2022-09-30T03:17:58Z"
    name: default-ipv4-ippool
    resourceVersion: "863275"
    selfLink: /apis/crd.projectcalico.org/v1/ippools/default-ipv4-ippool
    uid: 1886cacb-700f-4440-893a-a24ae9b5d2d3
  spec:
    blockSize: 26
    cidr: 10.244.0.0/16
    ipipMode: Always
    natOutgoing: true
    nodeSelector: all()
    vxlanMode: Never
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
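The blockSize: 26 in the pool above means Calico hands out /26 blocks of Pod IPs to nodes on demand. A little shell arithmetic shows how many addresses each block holds and how many blocks the /16 pool can supply:

```shell
pool_prefix_len=16   # cidr: 10.244.0.0/16
block_size=26        # blockSize: 26
# Addresses per node block: 2^(32-26) = 64
ips_per_block=$(( 1 << (32 - block_size) ))
# Blocks available in the pool: 2^(26-16) = 1024
num_blocks=$(( 1 << (block_size - pool_prefix_len) ))
echo "$ips_per_block pod IPs per node block, $num_blocks blocks available in the pool"
```

So with these defaults the pool comfortably covers a three-node cluster, and a node can claim additional blocks if it exhausts its first /26.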