Container Orchestration: Kubernetes Network Isolation with NetworkPolicy

An important feature of Kubernetes is that it connects pods (containers) across different nodes, regardless of which physical machine they run on. In some environments, however, such as a public cloud, pods belonging to different tenants should not be able to reach each other, and network isolation is required. Fortunately, Kubernetes provides NetworkPolicy, which supports network isolation at the Namespace level. This article walks through how to use it.

Note that NetworkPolicy only takes effect with a network solution that enforces it; without such a plugin, configuring NetworkPolicy objects has no effect at all. Here we use Calico to enforce the isolation.

Connectivity Test

Before using NetworkPolicy, let's first verify that pods can reach each other when no policy is in place. Our test environment looks like this:

Namespace: ns-calico1, ns-calico2

Deployment: ns-calico1/calico1-nginx

Pod: ns-calico2/calico2-busybox

Service: ns-calico1/calico1-nginx

First, create the Namespaces:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2

# kubectl create -f namespace.yaml
namespace "ns-calico1" created
namespace "ns-calico2" created
# kubectl get ns
NAME          STATUS    AGE
default       Active    9d
kube-public   Active    9d
kube-system   Active    9d
ns-calico1    Active    12s
ns-calico2    Active    8s

Next, create ns-calico1/calico1-nginx:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico1-nginx
  namespace: ns-calico1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        user: calico1
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: calico1-nginx
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  selector:
    app: nginx
  ports:
  - port: 80

# kubectl create -f calico1-nginx.yaml
deployment "calico1-nginx" created
service "calico1-nginx" created
# kubectl get svc -n ns-calico1
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
calico1-nginx   192.168.3.141   <none>        80/TCP    26s
# kubectl get deploy -n ns-calico1
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico1-nginx   1         1         1            1           34s

Finally, create ns-calico2/calico2-busybox:

apiVersion: v1
kind: Pod
metadata:
  name: calico2-busybox
  namespace: ns-calico2
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"

# kubectl create -f calico2-busybox.yaml
pod "calico2-busybox" created
# kubectl get pod -n ns-calico2
NAME              READY     STATUS    RESTARTS   AGE
calico2-busybox   1/1       Running   0          40s

The test workloads are now in place. Let's exec into calico2-busybox and see whether it can reach calico1-nginx:

# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.141:80)

This shows that, with no network isolation configured, pods in two different Namespaces can communicate freely. Next we use Calico to isolate them.
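
As an extra sanity check, you can bypass the Service and probe the nginx pod by its pod IP directly. This is only a sketch; <calico1-nginx-pod-ip> is a placeholder for whatever IP your cluster actually assigned:

# Find the pod IP of calico1-nginx (value varies per cluster)
kubectl get pod -n ns-calico1 -o wide
# Probe that IP directly from calico2-busybox
kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 <calico1-nginx-pod-ip>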

Network Isolation

Prerequisites

To use Calico for network isolation in a Kubernetes cluster, the following conditions must be met (a sketch of the corresponding startup flags follows the list):

  1. kube-apiserver must enable the extensions/v1beta1/networkpolicies runtime API, i.e. start it with --runtime-config=extensions/v1beta1/networkpolicies=true
  2. kubelet must use the CNI network plugin, i.e. start it with --network-plugin=cni
  3. kube-proxy must use the iptables proxy mode; this is the default, so nothing extra needs to be set
  4. kube-proxy must not be started with --masquerade-all, which conflicts with Calico
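
For reference, a minimal sketch of what these flags might look like; the trailing dots stand for whatever other flags your deployment already uses, and the exact invocation (systemd unit, container args, etc.) will differ per cluster:

# kube-apiserver: enable the extensions/v1beta1 NetworkPolicy API
kube-apiserver --runtime-config=extensions/v1beta1/networkpolicies=true ...

# kubelet: delegate pod networking to the CNI plugin
kubelet --network-plugin=cni ...

# kube-proxy: iptables mode is the default; just make sure --masquerade-all is absent
kube-proxy --proxy-mode=iptables ...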

Note: after Calico is configured, any pods that were already running in the cluster must be restarted, so that their networking is re-plumbed through the Calico CNI plugin.

Installing Calico

First we need to install the Calico network plugin. We deploy it inside the Kubernetes cluster itself, which makes it easier to manage:

# Calico Version v2.1.4
# http://docs.projectcalico.org/v2.1/releases#v2.1.4
# This manifest includes the following component versions:
#   calico/node:v1.1.3
#   calico/cni:v1.7.0
#   calico/kube-policy-controller:v0.5.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.1.2.154:2379,https://10.1.2.147:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names. The values
  # should be base64 encoded strings of the entire contents of each file.
  etcd-key: <base64-encoded key.pem>
  etcd-cert: <base64-encoded cert.pem>
  etcd-ca: <base64-encoded ca.pem>

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          # resources:
          #   requests:
          #     cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.7.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
# kubectl create -f calico.yaml
configmap "calico-config" created
secret "calico-etcd-secrets" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
calico-node   1         1         1         1            1           <none>          52s
# kubectl get deploy -n kube-system
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-policy-controller   1         1         1            1           6m

With the Calico network in place, we can now configure NetworkPolicy.
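
Before moving on, it may be worth confirming that the Calico components are healthy. A quick sanity check, using the k8s-app labels from the manifest above (output omitted; all pods should be Running):

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n kube-system -l k8s-app=calico-policy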

Configuring NetworkPolicy

First, modify the configuration of ns-calico1:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }

# kubectl apply -f ns-calico1.yaml
namespace "ns-calico1" configured
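
To confirm the annotation was applied, you can dump the Namespace object and look under metadata.annotations; a quick check:

kubectl get namespace ns-calico1 -o yaml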

If we repeat the connectivity test between the two pods now, it is sure to fail:

# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
wget: download timed out

This is exactly the effect we want: pods in different Namespaces can no longer communicate. It is only the simplest case, though. If a pod in ns-calico1 connects to a pod in ns-calico2, the traffic still gets through, because ns-calico2 carries no isolation annotation on its Namespace.
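
If ns-calico2 should be isolated in the same way, the same annotation can be added to it. A sketch, not part of the original walkthrough:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }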

Moreover, at this point ns-calico1 rejects connection requests from every pod: the Namespace annotation only says "deny all ingress by default"; it does not yet say when to accept traffic from other pods. So next we specify that only pods carrying the user=calico1 label may connect:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      user: calico1
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: calico1
    - podSelector:
        matchLabels:
          user: calico1
---
apiVersion: v1
kind: Pod
metadata:
  name: calico1-busybox
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"

# kubectl create -f calico1-network-policy.yaml
networkpolicy "calico1-network-policy" created
# kubectl create -f calico1-busybox.yaml
pod "calico1-busybox" created

Now, if we connect from calico1-busybox to calico1-nginx, the connection succeeds:

# kubectl exec -it calico1-busybox -n ns-calico1 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)

With that, we have achieved network isolation in Kubernetes. Building on NetworkPolicy, you can implement public-cloud-style security group policies.
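
To illustrate that last point, a security-group-style policy would typically restrict ports as well as peers. The sketch below is an assumption for illustration (the policy name, the sg label, and the port choice are invented, in the same extensions/v1beta1 form used above); it admits only TCP/80 traffic from pods labeled sg=web-clients:

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: web-security-group
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          sg: web-clients
    ports:
    - protocol: TCP
      port: 80

Each such policy then plays the role of one security group rule: it defines which peers may reach the selected pods, and on which ports.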

This article is reposted from the Chinese Kubernetes community: "容器编排之Kubernetes网络隔离NetworkPolicy".
