4. Modifying the kube-proxy configuration file
For a cluster deployed from binaries:
Append `mode: "ipvs"` at the end of the file. To change the LVS algorithm, edit the `scheduler` field: it names the LVS scheduling algorithm, which defaults to rr (round-robin) and can be changed to wrr, sh, or any other supported algorithm as needed.
[root@slave1 cfg]# cat kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.0.0.0/24
mode: "ipvs"
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: "wrr"
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
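As a side note on the `scheduler` field: the difference between rr and wrr can be illustrated with a tiny self-contained shell sketch. The backend IPs below are made up, and this only mimics the proportions of each algorithm, not the kernel's actual implementation:

```shell
# Simulate how the two schedulers spread connections over three
# hypothetical backends (IPs are made up for illustration).
backends="10.244.0.10 10.244.0.11 10.244.0.12"

echo "rr (round-robin): every backend gets every 3rd connection"
i=0
while [ $i -lt 6 ]; do
  n=$(( i % 3 + 1 ))
  echo "  conn $i -> $(echo $backends | cut -d' ' -f$n)"
  i=$(( i + 1 ))
done

echo "wrr (weighted round-robin, weights 3:2:1)"
# One simplified wrr cycle: backend 1 three times, backend 2 twice,
# backend 3 once (real wrr interleaves the picks; proportions match).
cycle="1 1 1 2 2 3"
i=0
for n in $cycle; do
  echo "  conn $i -> $(echo $backends | cut -d' ' -f$n)"
  i=$(( i + 1 ))
done
```

With wrr, backend weights are set per real server (e.g. via `ipvsadm -e ... -w N`), so a beefier node can absorb proportionally more connections.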
Then restart the service:
systemctl restart kube-proxy
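After the restart you can confirm the active mode without digging through logs: kube-proxy serves a `/proxyMode` endpoint on its metrics port (10249 here, per `metricsBindAddress` in the config above). A small sketch with a fallback so it fails soft when the endpoint is unreachable:

```shell
# Ask kube-proxy which proxy mode it is actually running in.
# 10249 is the metricsBindAddress port from the config above;
# prints "ipvs" once the switch has taken effect.
mode=$(curl -s --max-time 2 http://127.0.0.1:10249/proxyMode 2>/dev/null || true)
echo "${mode:-kube-proxy metrics endpoint not reachable}"
```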
The network state at this point:
[root@master cfg]# k get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP         30d
kube-system   coredns      ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   29d
A new kube-ipvs0 interface has appeared, and every existing Service's ClusterIP is bound to it:
[root@slave1 cfg]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:e9:9e:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.17/24 brd 192.168.217.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee9:9e89/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:fa:3e:c9:3f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether e2:7b:0c:50:67:28 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether 4a:05:45:0b:b0:bc brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/32 brd 10.0.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.0.0.2/32 brd 10.0.0.2 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
Take a look at the logs:
The log tells us that no IPVS scheduling mode (i.e. algorithm) was specified, so round-robin (rr) is used as the default. This is already enough to serve a large cluster well.
[root@slave1 cfg]# cat ../logs/kube-proxy.slave1.root.log.WARNING.20220926-102105.4110
Log file created at: 2022/09/26 10:21:05
Running on machine: slave1
Binary: Built with gc go1.13.9 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
W0926 10:21:05.426300    4110 proxier.go:429] IPVS scheduler not specified, use rr by default
IPVS-related entries in kube-proxy.INFO:

cat kube-proxy.INFO
I0926 13:06:32.992903   14326 server_others.go:259] Using ipvs Proxier.
I0926 13:06:32.993480   14326 proxier.go:426] nodeIP: 192.168.217.17, isIPv6: false
I0926 13:06:32.993907   14326 server.go:583] Version: v1.18.3
I0926 13:06:32.994533   14326 conntrack.go:52] Setting nf_conntrack_max to 262144
For a cluster deployed with kubeadm:
Here too, the `scheduler: ""` field selects the LVS scheduling algorithm; it defaults to rr and can be changed to wrr, sh, or another supported algorithm as needed.
kubectl edit configmap kube-proxy -n kube-system

    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"        # change this line
    nodePortAddresses: null
Then delete the kube-proxy pods so they are recreated with the new configuration:
kubectl get pod -n kube-system
kubectl delete pod kube-proxy-5ntj4 kube-proxy-82dk4 kube-proxy-s9jrw -n kube-system
Check the pod logs; if they contain "Using ipvs Proxier", the switch took effect.
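A quick way to check every kube-proxy pod at once. This assumes the standard `k8s-app=kube-proxy` label that kubeadm applies (an assumption; verify with `kubectl get pod -n kube-system --show-labels`):

```shell
# Print the proxier line from each kube-proxy pod's log.
# Assumption: kubeadm's default label k8s-app=kube-proxy.
for p in $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o name 2>/dev/null); do
  echo "== $p =="
  kubectl logs -n kube-system "$p" 2>/dev/null | grep -m1 "Proxier" || echo "  (no proxier line found)"
done
```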
To confirm the change, run ipvsadm: if the Service IPs are listed (along with the scheduling algorithm in the Scheduler column), IPVS is active:
[root@slave1 cfg]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr
  -> 192.168.217.16:6443          Masq    1      0          0
TCP  10.0.0.2:53 rr
  -> 10.244.0.22:53               Masq    1      0          0
UDP  10.0.0.2:53 rr
  -> 10.244.0.22:53               Masq    1      0          0
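Beyond the virtual-server table, ipvsadm can also show per-backend traffic counters, which is handy for verifying that a scheduler or weight change actually shifts load. Guarded here so the command degrades gracefully on a host without ipvsadm:

```shell
# Show per-real-server connection/packet/byte counters to confirm
# traffic is distributed the way the scheduler dictates.
if command -v ipvsadm >/dev/null 2>&1; then
  ipvsadm -Ln --stats
else
  echo "ipvsadm not installed on this host"
fi
```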
OK, IPVS (LVS) is now enabled on the Kubernetes cluster. Don't forget: if the cluster was installed from binaries, every node's kube-proxy must be modified.
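Since every node running kube-proxy needs the same edit on a binary install, a loop helps. A dry-run sketch with placeholder node names and config path (both are assumptions; adjust for your cluster), which prints the per-node commands instead of executing them:

```shell
# Placeholder node list and config path -- adjust for your cluster (assumption).
NODES="slave1 slave2"
CFG=/opt/kubernetes/cfg/kube-proxy-config.yml

for n in $NODES; do
  # Dry run: print the command instead of executing it; drop the echo
  # once the generated command line looks right for your environment.
  echo "ssh $n \"sed -i 's/^mode: .*/mode: \\\"ipvs\\\"/' $CFG && systemctl restart kube-proxy\""
done
```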
For one error that can appear after enabling IPVS, see: 云原生|kubernetes|解决kube-proxy报错:Not using `--random-fully` in the MASQUERADE rule for iptables (zsk_john, CSDN blog).
In short, upgrade iptables as well (the kernel was upgraded, so the iptables tooling that the IPVS setup relies on should be brought up to date too).
Finally, in older Kubernetes releases, using IPVS required the SupportIPVSProxyMode feature gate. In Kubernetes v1.10 this gate became enabled by default, and in v1.11 it was removed entirely. If you run a version earlier than 1.10, you must pass --feature-gates=SupportIPVSProxyMode=true for IPVS to work.