To prevent a few compromised services from affecting the majority of other services on the platform, certain business scenarios require restricting the interactions between services.
Security has always been a high-profile topic, whether in the traditional monolithic service model or in today's popular cloud-native microservice model. Security is admittedly a broad concept, but this article focuses on the microservice architecture and drills into one critical aspect of it: the security of interactions between services.
In a sense, much like the response to the novel coronavirus (exceptionally cunning, fast-mutating, and highly contagious), the most effective way to manage access restrictions in a microservice system is also "isolation". After all, isolation is perhaps the most basic and universally applicable measure.
Kubernetes introduced the Network Policy mechanism starting with version 1.3. It supports network access control at the Namespace and Pod level: Namespaces and Pods are selected by label, and the rules are enforced by the underlying network plugin, typically on top of iptables. Network Policy provides application-centric, policy-based network control that isolates applications and reduces the attack surface.
With network policies in place, whether traffic may reach or leave a Pod is determined by a combination of the following three identifiers:
1. Other Pods that are allowed (exception: a Pod cannot block access to itself);
2. Namespaces that are allowed;
3. IP blocks (CIDR ranges; exception: traffic to and from the node a Pod is running on is always allowed).
When defining a Pod- or Namespace-based NetworkPolicy, you use label selectors to specify which traffic is allowed to enter or leave the Pods; when creating an IP-based NetworkPolicy, the rules are defined in terms of IP CIDR blocks.
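As a small illustration of selector-based rules (the Namespace and label names here are purely illustrative, not from the original text), the following policy would admit ingress traffic to Pods labelled app=backend only from Pods running in Namespaces labelled team=frontend:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend-ns
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: frontend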
By default, microservice container platforms such as Kubernetes allow unrestricted communication between services. However, to prevent a few compromised services from affecting every service on the platform, the platform needs to restrict the interactions between services. In Kubernetes, this constraint is enforced by creating network policies. A network policy specifies which services may talk to one another and which may not; for example, a policy can state that a service may only communicate with other services in the same namespace.
As noted above, the Pods hosted on the container platform are non-isolated by default, meaning they accept traffic from any source at any time. A Pod becomes isolated only once it is selected by a NetworkPolicy. From then on, the Pod rejects any connection that the NetworkPolicy does not allow. (Other Pods in the Namespace that are not selected by any NetworkPolicy continue to accept all traffic.)
In addition, network policies do not conflict with one another; they are additive. If one or more policies select a given Pod, that Pod is restricted to the union of the Ingress and Egress rules of those policies.
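As a minimal sketch of this isolation behavior (the Namespace name is a placeholder), the following policy selects every Pod in the Namespace via an empty podSelector and defines no ingress or egress rules, which amounts to a "default deny" baseline that more permissive policies can then be layered on top of:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo
spec:
  # An empty podSelector selects every Pod in the Namespace.
  podSelector: {}
  # With no ingress/egress rules listed, all inbound and outbound traffic is denied.
  policyTypes:
  - Ingress
  - Egress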
To use Network Policy, the network plugin you configure must support it; Calico, Romana, Weave Net, and Trireme are examples. In policy terminology, Egress refers to outbound traffic and Ingress to inbound traffic.
Typically, a Kubernetes cluster needs a network controller to enforce network policies. The network controller is a special Pod (often referred to as a daemon) that runs on every node of the cluster. It watches the network traffic between services and enforces the network policies, so that interactions between service instances run safely under the rules defined on the container platform. The Network Policy API is defined by the structs in staging/src/k8s.io/api/networking/v1/types.go; the source is shown below:
/*
Copyright 2017 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1

import (
    "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// NetworkPolicy describes what network traffic is allowed for a set of Pods
type NetworkPolicy struct {
    metav1.TypeMeta `json:",inline"`
    // Standard object's metadata.
    // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`

    // Specification of the desired behavior for this NetworkPolicy.
    // +optional
    Spec NetworkPolicySpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}

// Policy Type string describes the NetworkPolicy type
// This type is beta-level in 1.8
type PolicyType string

const (
    // PolicyTypeIngress is a NetworkPolicy that affects ingress traffic on selected pods
    PolicyTypeIngress PolicyType = "Ingress"
    // PolicyTypeEgress is a NetworkPolicy that affects egress traffic on selected pods
    PolicyTypeEgress PolicyType = "Egress"
)

// NetworkPolicySpec provides the specification of a NetworkPolicy
type NetworkPolicySpec struct {
    // Selects the pods to which this NetworkPolicy object applies. The array of
    // ingress rules is applied to any pods selected by this field. Multiple network
    // policies can select the same set of pods. In this case, the ingress rules for
    // each are combined additively. This field is NOT optional and follows standard
    // label selector semantics. An empty podSelector matches all pods in this
    // namespace.
    PodSelector metav1.LabelSelector `json:"podSelector" protobuf:"bytes,1,opt,name=podSelector"`

    // List of ingress rules to be applied to the selected pods. Traffic is allowed to
    // a pod if there are no NetworkPolicies selecting the pod
    // (and cluster policy otherwise allows the traffic), OR if the traffic source is
    // the pod's local node, OR if the traffic matches at least one ingress rule
    // across all of the NetworkPolicy objects whose podSelector matches the pod. If
    // this field is empty then this NetworkPolicy does not allow any traffic (and serves
    // solely to ensure that the pods it selects are isolated by default)
    // +optional
    Ingress []NetworkPolicyIngressRule `json:"ingress,omitempty" protobuf:"bytes,2,rep,name=ingress"`

    // List of egress rules to be applied to the selected pods. Outgoing traffic is
    // allowed if there are no NetworkPolicies selecting the pod (and cluster policy
    // otherwise allows the traffic), OR if the traffic matches at least one egress rule
    // across all of the NetworkPolicy objects whose podSelector matches the pod. If
    // this field is empty then this NetworkPolicy limits all outgoing traffic (and serves
    // solely to ensure that the pods it selects are isolated by default).
    // This field is beta-level in 1.8
    // +optional
    Egress []NetworkPolicyEgressRule `json:"egress,omitempty" protobuf:"bytes,3,rep,name=egress"`

    // List of rule types that the NetworkPolicy relates to.
    // Valid options are "Ingress", "Egress", or "Ingress,Egress".
    // If this field is not specified, it will default based on the existence of Ingress or Egress rules;
    // policies that contain an Egress section are assumed to affect Egress, and all policies
    // (whether or not they contain an Ingress section) are assumed to affect Ingress.
    // If you want to write an egress-only policy, you must explicitly specify policyTypes [ "Egress" ].
    // Likewise, if you want to write a policy that specifies that no egress is allowed,
    // you must specify a policyTypes value that include "Egress" (since such a policy would not include
    // an Egress section and would otherwise default to just [ "Ingress" ]).
    // This field is beta-level in 1.8
    // +optional
    PolicyTypes []PolicyType `json:"policyTypes,omitempty" protobuf:"bytes,4,rep,name=policyTypes,casttype=PolicyType"`
}

// NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods
// matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from.
type NetworkPolicyIngressRule struct {
    // List of ports which should be made accessible on the pods selected for this
    // rule. Each item in this list is combined using a logical OR. If this field is
    // empty or missing, this rule matches all ports (traffic not restricted by port).
    // If this field is present and contains at least one item, then this rule allows
    // traffic only if the traffic matches at least one port in the list.
    // +optional
    Ports []NetworkPolicyPort `json:"ports,omitempty" protobuf:"bytes,1,rep,name=ports"`

    // List of sources which should be able to access the pods selected for this rule.
    // Items in this list are combined using a logical OR operation. If this field is
    // empty or missing, this rule matches all sources (traffic not restricted by
    // source). If this field is present and contains at least one item, this rule
    // allows traffic only if the traffic matches at least one item in the from list.
    // +optional
    From []NetworkPolicyPeer `json:"from,omitempty" protobuf:"bytes,2,rep,name=from"`
}

// NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods
// matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to.
// This type is beta-level in 1.8
type NetworkPolicyEgressRule struct {
    // List of destination ports for outgoing traffic.
    // Each item in this list is combined using a logical OR. If this field is
    // empty or missing, this rule matches all ports (traffic not restricted by port).
    // If this field is present and contains at least one item, then this rule allows
    // traffic only if the traffic matches at least one port in the list.
    // +optional
    Ports []NetworkPolicyPort `json:"ports,omitempty" protobuf:"bytes,1,rep,name=ports"`

    // List of destinations for outgoing traffic of pods selected for this rule.
    // Items in this list are combined using a logical OR operation. If this field is
    // empty or missing, this rule matches all destinations (traffic not restricted by
    // destination). If this field is present and contains at least one item, this rule
    // allows traffic only if the traffic matches at least one item in the to list.
    // +optional
    To []NetworkPolicyPeer `json:"to,omitempty" protobuf:"bytes,2,rep,name=to"`
}

// NetworkPolicyPort describes a port to allow traffic on
type NetworkPolicyPort struct {
    // The protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this
    // field defaults to TCP.
    // +optional
    Protocol *v1.Protocol `json:"protocol,omitempty" protobuf:"bytes,1,opt,name=protocol,casttype=k8s.io/api/core/v1.Protocol"`

    // The port on the given protocol. This can either be a numerical or named port on
    // a pod. If this field is not provided, this matches all port names and numbers.
    // +optional
    Port *intstr.IntOrString `json:"port,omitempty" protobuf:"bytes,2,opt,name=port"`
}

// IPBlock describes a particular CIDR (Ex. "192.168.1.1/24","2001:db9::/64") that is allowed
// to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs
// that should not be included within this rule.
type IPBlock struct {
    // CIDR is a string representing the IP Block
    // Valid examples are "192.168.1.1/24" or "2001:db9::/64"
    CIDR string `json:"cidr" protobuf:"bytes,1,name=cidr"`
    // Except is a slice of CIDRs that should not be included within an IP Block
    // Valid examples are "192.168.1.1/24" or "2001:db9::/64"
    // Except values will be rejected if they are outside the CIDR range
    // +optional
    Except []string `json:"except,omitempty" protobuf:"bytes,2,rep,name=except"`
}

// NetworkPolicyPeer describes a peer to allow traffic from. Only certain combinations of
// fields are allowed
type NetworkPolicyPeer struct {
    // This is a label selector which selects Pods. This field follows standard label
    // selector semantics; if present but empty, it selects all pods.
    //
    // If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects
    // the Pods matching PodSelector in the Namespaces selected by NamespaceSelector.
    // Otherwise it selects the Pods matching PodSelector in the policy's own Namespace.
    // +optional
    PodSelector *metav1.LabelSelector `json:"podSelector,omitempty" protobuf:"bytes,1,opt,name=podSelector"`

    // Selects Namespaces using cluster-scoped labels. This field follows standard label
    // selector semantics; if present but empty, it selects all namespaces.
    //
    // If PodSelector is also set, then the NetworkPolicyPeer as a whole selects
    // the Pods matching PodSelector in the Namespaces selected by NamespaceSelector.
    // Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector.
    // +optional
    NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty" protobuf:"bytes,2,opt,name=namespaceSelector"`

    // IPBlock defines policy on a particular IPBlock. If this field is set then
    // neither of the other fields can be.
    // +optional
    IPBlock *IPBlock `json:"ipBlock,omitempty" protobuf:"bytes,3,rep,name=ipBlock"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// NetworkPolicyList is a list of NetworkPolicy objects.
type NetworkPolicyList struct {
    metav1.TypeMeta `json:",inline"`
    // Standard list metadata.
    // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
    // +optional
    metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`

    // Items is a list of schema objects.
    Items []NetworkPolicy `json:"items" protobuf:"bytes,2,rep,name=items"`
}
Next, let's look at a NetworkPolicy demo. For a complete, field-by-field explanation of this file, refer to the structure definition documentation on the official Kubernetes website. The manifest is as follows:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy-sample
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: storage
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: demo-pro
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 7777
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5555
A brief walkthrough of the manifest above:
Core fields: like every other Kubernetes object, a NetworkPolicy requires the apiVersion, kind, and metadata fields.
spec: the NetworkPolicy spec contains all the information needed to define a particular network policy within the given Namespace.
podSelector: every NetworkPolicy includes a podSelector, which selects the group of Pods the policy applies to. The policy in the demo selects Pods carrying the label role=storage. An empty podSelector selects all Pods in the Namespace.
policyTypes: every NetworkPolicy includes a policyTypes list containing Ingress, Egress, or both. The policyTypes field indicates whether the policy applies to ingress traffic to the selected Pods, egress traffic from them, or both. If policyTypes is omitted, Ingress is always set by default, and Egress is set only when the policy contains egress rules; an egress-only policy therefore has to set policyTypes explicitly (see the sketch after this breakdown).
ingress: each NetworkPolicy may include a whitelist of ingress rules. Each rule allows traffic that matches both its from and ports sections. The demo policy contains a single rule: it matches traffic on one port coming from any of three sources, the first specified via an ipBlock, the second via a namespaceSelector, and the third via a podSelector.
egress: each NetworkPolicy may include a whitelist of egress rules. Each rule allows traffic that matches both its to and ports sections. The demo policy contains one rule that matches traffic on a single port to any destination within 10.0.0.0/24.
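To illustrate the policyTypes defaulting rule mentioned above, here is a sketch of an egress-only policy (the labels and CIDR are illustrative, reusing the demo's values). Because it contains only an egress section, policyTypes must list Egress explicitly; if policyTypes were omitted, the policy would also be treated as an ingress policy with no allowed sources and would block all inbound traffic to the selected Pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-only-sample
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: storage
  # Explicit Egress type: without it, Ingress would be assumed as well.
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24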
In summary, the network policy above does the following:
1. Isolates the Pods labelled role=storage in the default Namespace.
2. Ingress rules: allows connections to TCP port 7777 of all Pods labelled role=storage in the default Namespace, provided the source matches one of the following:
(1) any Pod in the default Namespace carrying the label role=frontend;
(2) any Pod in a Namespace carrying the label project=demo-pro;
(3) any IP address in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (that is, all of 172.17.0.0/16 except 172.17.1.0/24).
3. Egress rule: allows connections from any Pod labelled role=storage in the default Namespace to TCP port 5555 of any destination within CIDR 10.0.0.0/24.
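Assuming the sample manifest above is saved to a local file (the file name below is arbitrary), it can be applied and inspected with standard kubectl commands:

kubectl apply -f network-policy-sample.yaml
kubectl get networkpolicy -n default
kubectl describe networkpolicy network-policy-sample -n default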
In this article we take the Azure cloud platform as an example and briefly walk through the network policies involved. AKS supports two types of network controllers (referred to as network plugins): Azure CNI and kubenet. A network controller cannot be added to an existing Azure Kubernetes Service (AKS) cluster. Furthermore, network policies have no effect in a Kubernetes cluster without a network controller: the policies will not raise any errors, but neither will they restrict traffic between services.
You can read more about the network plugins supported on AKS in the Azure Kubernetes Service documentation: Azure CNI (which supports both Calico and Azure network policies) and kubenet (which supports Calico policies only). We will use the Azure network plugin to create an Azure CNI network controller for our network policies.
Because Docker Desktop currently does not support network controllers, we need to create an AKS cluster for this tutorial. Next, let's see how to restrict communication between microservices on the Azure platform.
Run the following Azure CLI commands to create a new AKS cluster named policy-demo with the Azure network plugin and Azure network policy enabled:
az group create --name demo-rg --location australiaeast
az aks create -n policy-demo --node-count 1 --node-vm-size Standard_B2s --load-balancer-sku basic --node-osdisk-size 32 --resource-group demo-rg --generate-ssh-keys --network-plugin azure --network-policy azure
az aks get-credentials --resource-group demo-rg --name policy-demo
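After the az aks get-credentials command has merged the cluster credentials into the local kubeconfig, a quick sanity check confirms that kubectl now points at the new cluster:

kubectl get nodes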
To verify the network policy, we create a simple API that returns a product's price when the product ID is passed as a parameter:
curl -X GET http://localhost:8080/price/{product_id}
Let's create a Kubernetes Deployment and a ClusterIP Service (a service that is not reachable from outside the cluster) for the API using the following manifest:
kind: Namespace
apiVersion: v1
metadata:
  name: pricing-ns
  labels:
    name: pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prices-api-deployment
  namespace: pricing-ns
  labels:
    app: prices-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prices-api
  template:
    metadata:
      labels:
        app: prices-api
    spec:
      containers:
      - name: prices-api
        image: rahulrai/prices-api:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: prices-api-service
  namespace: pricing-ns
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: prices-api
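Assuming the manifest above is saved as prices-api.yaml (a hypothetical file name), it can be applied and checked as follows:

kubectl apply -f prices-api.yaml
kubectl get pods,svc -n pricing-ns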
Without a network policy in place, any service in the cluster can reach the API. To test this, we can spin up a temporary Pod in the cluster with the following command:
kubectl run curl-po --image=radial/busyboxplus:curl -i --tty --rm
The previous command creates the Pod and attaches us to its shell. We then run the following command in that shell to call the API:
curl -X GET prices-api-service.pricing-ns.svc.cluster.local/price/1
Let's create another manifest that introduces a network policy: it accepts traffic only from Pods running in a Namespace labelled project=critical-project, or from Pods in the policy's own Namespace labelled app=prices-api-consumer, as shown below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prices-api-network-policy
  namespace: pricing-ns
spec:
  podSelector:
    matchLabels:
      app: prices-api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: critical-project
    - podSelector:
        matchLabels:
          app: prices-api-consumer
The network policy first defines the Pods it applies to, namely the prices-api Pods. A policy can restrict either the incoming traffic to the Pods (ingress) or the outgoing traffic from them (egress); in this case we want to restrict the Pods' incoming traffic. The policy then defines where that traffic may come from, namely the API consumer Pods. The complete definition of the NetworkPolicy resource is covered in the Kubernetes documentation.
Next, let's create the Namespace required by the network policy spec; in it we will run a temporary Pod that is allowed to access the API:
kind: Namespace
apiVersion: v1
metadata:
  name: critical-project
  labels:
    project: critical-project
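Assuming the network policy and Namespace manifests above are saved as prices-api-network-policy.yaml and critical-project-ns.yaml (hypothetical file names), apply them before running the test Pods:

kubectl apply -f prices-api-network-policy.yaml
kubectl apply -f critical-project-ns.yaml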
Let's create and run two temporary Pods: one that satisfies the network policy, curl-po-allow, and one that does not, curl-po-deny:
kubectl run curl-po-allow --image=radial/busyboxplus:curl --labels="app=prices-api-consumer" -i --tty --rm -n critical-project
kubectl run curl-po-deny --image=radial/busyboxplus:curl -i --tty --rm
Then we run a curl command in each Pod's shell and compare the results returned by the two Pods; only one of them succeeds.
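The original text does not reproduce the output, but the check would look roughly like this in each Pod's shell (the -m flag sets a timeout so the denied request fails quickly instead of hanging): curl-po-allow should receive the price response, while the request from curl-po-deny should time out.

curl -m 10 -X GET prices-api-service.pricing-ns.svc.cluster.local/price/1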
To sum up, using network policies to improve the security of communication between microservices on Kubernetes matters a great deal in real-world business scenarios. The principle of defense in depth requires us to consider what level of trust between microservices is acceptable. Microservices trusting one another may be acceptable in some systems but not in others, and most of the time that trade-off is driven by convenience. It is therefore worth analyzing what a high level of trust implies and what vulnerabilities it may introduce into the architecture we adopt.