Life of a Packet in Kubernetes - Calico Networking Deep Dive (Annotated Edition)


As we discussed in Part 1, CNI plugins play an essential role in Kubernetes networking. There are many third-party CNI plugins available today; Calico is one of them. Many engineers prefer Calico; one of the main reasons is its ease of use and how it shapes the network fabric.


Calico supports a broad range of platforms, including Kubernetes, OpenShift, Docker EE, OpenStack, and bare metal services. The Calico node runs in a Docker container on the Kubernetes master node and on each Kubernetes worker node in the cluster. The calico-cni plugin integrates directly with the Kubernetes kubelet process on each node to discover which Pods are created and add them to Calico networking.


We will talk about installation, Calico modules (Felix, BIRD, and Confd), and routing modes.


What is not covered? Network policy — it deserves a separate article, so we will skip it for now.

Topics — Part 2

  1. Requirements
  2. Modules and their functions
  3. Routing modes
  4. Installation (calico and calicoctl)

CNI Requirements

  1. Create a veth pair and move one end inside the container
  2. Identify the right Pod CIDR
  3. Create a CNI configuration file
  4. Assign and manage IP addresses
  5. Add default routes inside the container
  6. Advertise the routes to all the peer nodes (not applicable for VXLAN)
  7. Add routes on the host server
  8. Enforce network policy


There are many other requirements too, but the ones above are the basics. Let’s take a look at the routing tables on the master and worker nodes. Each node has a container with an IP address and a default container route.

[Image: Basic Kubernetes network requirement]


Looking at the routing table, it is evident that the Pods can talk to each other over the L3 network, as the routes are in place. Which module is responsible for adding these routes, and how does it learn the routes to remote nodes? Also, why is there a default route with gateway 169.254.1.1? We will talk about that in a moment.


  • Which module is responsible for adding these routes?

  • How does it learn the routes to remote nodes?

  • Why is the Pod’s default gateway 169.254.1.1?


These are exactly the problems Calico needs to solve.


The core components of Calico are BIRD, Felix, confd, etcd, and the Kubernetes API server. The datastore is used to store configuration information (IP pools, endpoint info, network policies, etc.). In our example, we will use Kubernetes as the Calico datastore.

BIRD (BGP)

BIRD is a per-node BGP daemon that exchanges route information with the BGP daemons running on the other nodes. The common topology is a node-to-node mesh, where each node’s BGP daemon peers with every other.


For large-scale deployments, this can get messy. To reduce the number of BGP-to-BGP connections, certain BGP nodes can be configured as route reflectors, which take care of route propagation. Rather than each BGP system having to peer with every other BGP system within the AS, each BGP speaker instead peers with a route reflector. Routing advertisements sent to the route reflector are then reflected out to all of the other BGP speakers. For more information, please refer to RFC 4456.


The BIRD instance is responsible for propagating routes to the other BIRD instances. The default configuration is ‘BGP mesh,’ which can be used for small deployments. In large-scale deployments, it is recommended to use a route reflector to avoid scaling issues. There can be more than one RR for high availability, and external rack RRs can be used instead of BIRD (a configuration sketch follows).
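As an illustration only (not part of the original demo), here is roughly how one node could be turned into an in-cluster route reflector and peered with the other nodes; the node name, cluster ID, and label are assumptions:

# Mark node01 as a route reflector (cluster ID and label are example values)
calicoctl patch node node01 -p '{"spec": {"bgp": {"routeReflectorClusterID": "224.0.0.1"}}}'
kubectl label node node01 route-reflector=true

# Peer every node with the route reflector instead of keeping a full mesh
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: peer-to-route-reflectors
spec:
  nodeSelector: all()
  peerSelector: route-reflector == 'true'
EOF

In a real setup you would also disable the node-to-node mesh through a BGPConfiguration resource once all nodes peer with the reflector.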

ConfD

ConfD is a simple configuration management tool that runs in the calico-node container. It reads values (the BIRD configuration for Calico) from etcd and writes them to files on disk. It loops through the pools (networks and subnetworks) to apply the configuration data (CIDR keys) and assembles them in a way BIRD can use. So whenever there is a change in the network, BIRD can detect it and propagate routes to the other nodes.

Felix

The Calico Felix daemon runs in the calico-node container and brings the solution together by taking several actions (a quick way to inspect its work is shown after this list):

  • Reads information from the Kubernetes etcd
  • Builds the routing table
  • Configures iptables (kube-proxy mode IPTables)
  • Configures IPVS (kube-proxy mode IPVS)
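A few illustrative commands (not from the original article) to look at Felix’s work on a node; the exact chain names vary between Calico versions:

# Calico programs its rules into iptables chains prefixed with 'cali-'
iptables-save | grep -c 'cali-'            # rough count of Calico-managed rules
iptables -t filter -L cali-FORWARD -n      # inspect one of the Calico chains
ip route show proto bird                   # routes tagged 'proto bird', as seen in the outputs later in this article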


Let’s look at the cluster with all the Calico modules in place.

[Image: Deployment with ‘NoSchedule’ Toleration]


Something looks different? Yes, one end of the veth is dangling: it is not connected to anything, and it sits in kernel space.


How does the packet get routed to the peer node?

  1. A Pod on the master node tries to ping the IP address 10.0.2.11.
  2. The Pod sends an ARP request to the gateway.
  3. It gets an ARP response with a MAC address.
  4. Wait, who sent the ARP response?


What’s going on? How can a container route to an IP that doesn’t exist?


Let’s walk through what’s happening. Some of you reading this might have noticed that 169.254.1.1 is an IPv4 link-local address.


The container has a default route pointing at a link-local address. The container expects this IP address to be reachable on its directly connected interface, in this case the container’s eth0. The container will ARP for that IP address when it wants to send traffic out through the default route.


If we capture the ARP response, it will show the MAC address of the other end of the veth, cali123 (note that this MAC is all e’s: ee:ee:ee:ee:ee:ee). So you might be wondering how on earth the host is replying to an ARP request for an address it has no IP interface on.


The answer is proxy ARP. If we check the host-side veth interface, we’ll see that proxy_arp is enabled.


master $ cat /proc/sys/net/ipv4/conf/cali123/proxy_arp

1



Let’s take a closer look at the worker node. Once the packet reaches the kernel, the kernel routes it based on the routing table entries.

Incoming traffic

  1. The packet reaches the worker node kernel.
  2. The kernel delivers the packet to cali123 (the destination Pod’s veth) based on the routing table.

Routing Modes

Calico supports 3 routing modes; in this section, we will see the pros and cons of each method and where we can use them.

  • IP-in-IP: default; encapsulated
  • Direct/NoEncapMode: unencapsulated (Preferred)
  • VXLAN: encapsulated (No BGP)

IP-in-IP (Default)

IP-in-IP is a simple form of encapsulation achieved by putting an IP packet inside another. A transmitted packet contains an outer header with host source and destination IPs and an inner header with pod source and destination IPs.


Inner header: source Pod IP -> destination Pod IP


Outer header: IP of the node hosting the source Pod -> IP of the node hosting the destination Pod


Azure doesn’t support IP-in-IP (as far as I know); therefore, we can’t use IP-in-IP in that environment. It’s better to disable IP-in-IP to get better performance (an IPPool sketch follows).
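The encapsulation mode is controlled per IP pool. A minimal sketch of an IPPool resource (the CIDR shown is Calico’s default and an assumption here; ipipMode also accepts CrossSubnet to encapsulate only when crossing subnet boundaries):

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: Always        # Always | CrossSubnet | Never
  vxlanMode: Never
  natOutgoing: true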

NoEncapMode

In this mode, packets are sent as if they came directly from the Pod. Since there is no encapsulation and de-capsulation overhead, this mode is highly performant.


The source IP check must be disabled in AWS to use this mode (an example is shown below).
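As an illustration (not from the original article), the source/destination check can be turned off per EC2 instance with the AWS CLI; the instance ID is a placeholder:

# Disable the source/destination check so the instance can forward Pod traffic
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check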

VXLAN

VXLAN routing is supported in Calico 3.7+.


VXLAN stands for Virtual Extensible LAN. VXLAN is an encapsulation technique in which layer 2 Ethernet frames are encapsulated in UDP packets (a layer-2-over-UDP scheme, similar to Flannel’s VXLAN backend). VXLAN is a network virtualization technology. When devices communicate within a software-defined data center, a VXLAN tunnel is set up between those devices. Those tunnels can be set up on both physical and virtual switches. The switch ports are known as VXLAN Tunnel Endpoints (VTEPs) and are responsible for the encapsulation and de-encapsulation of VXLAN packets. Devices without VXLAN support are connected to a switch with VTEP functionality; the switch provides the conversion to and from VXLAN.


VXLAN is great for networks that do not support IP-in-IP, such as Azure, or any other data center that doesn’t support BGP.

Demo — IPIP and NoEncapMode

Check the cluster state before the Calico installation.

Check the CNI bin and conf directories. There won’t be any configuration file or Calico binary yet, as the Calico installation populates these via volume mounts (the commands below show where to look).
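A quick way to check both locations (paths assume the default kubelet layout):

ls /etc/cni/net.d/    # CNI configuration directory - no Calico conflist yet
ls /opt/cni/bin/      # CNI plugin binaries - no calico / calico-ipam yet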


Check the IP routes on the master and worker nodes, then download and apply calico.yaml based on your environment.
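A hedged sketch of fetching and applying the manifest; the URL and version are assumptions, so use the manifest that matches your environment:

curl -O https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml

Let’s take a look at some useful configuration parameters from the manifest: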

cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico", >>> Calico's CNI plugin
          "log_level": "info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam" >>> Calico's IPAM instaed of default IPAM
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
    value: "Always" >> Set this to 'Never' to disable IP-IP
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
    value: "Never"

Check POD and Node status after the calico installation.

master $ kubectl get pods --all-namespaces

NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE

kube-system   calico-kube-controllers-799fb94867-6qj77   0/1     ContainerCreating   0          21s

kube-system   calico-node-bzttq                          0/1     PodInitializing     0          21s

kube-system   calico-node-r6bwj                          0/1     PodInitializing     0          21s

kube-system   coredns-66bff467f8-52tkd                   0/1     Pending             0          7m5s

kube-system   coredns-66bff467f8-g5gjb                   0/1     ContainerCreating   0          7m5s

kube-system   etcd-controlplane                          1/1     Running             0          7m7s

kube-system   kube-apiserver-controlplane                1/1     Running             0          7m7s

kube-system   kube-controller-manager-controlplane       1/1     Running             0          7m7s

kube-system   kube-proxy-b2j4x                           1/1     Running             0          6m46s

kube-system   kube-proxy-s46lv                           1/1     Running             0          7m5s

kube-system   kube-scheduler-controlplane                1/1     Running             0          7m6s

master $ kubectl get nodes

NAME           STATUS   ROLES    AGE     VERSION

controlplane   Ready    master   7m30s   v1.18.0

node01         Ready    <none>   6m59s   v1.18.0

Explore the CNI configuration as that’s what Kubelet needs to set up the network.


master $ cd /etc/cni/net.d/

master $ ls

10-calico.conflist  calico-kubeconfig

master $

master $

master $ cat 10-calico.conflist

{

 "name": "k8s-pod-network",

 "cniVersion": "0.3.1",

 "plugins": [

   {

     "type": "calico",

     "log_level": "info",

     "log_file_path": "/var/log/calico/cni/cni.log",

     "datastore_type": "kubernetes",

     "nodename": "controlplane",

     "mtu": 1440,

     "ipam": {

         "type": "calico-ipam"

     },

     "policy": {

         "type": "k8s"

     },

     "kubernetes": {

         "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"

     }

   },

   {

     "type": "portmap",

     "snat": true,

     "capabilities": {"portMappings": true}

   },

   {

     "type": "bandwidth",

     "capabilities": {"bandwidth": true}

   }

 ]

}


Check the CNI binary files,


master $ ls

bandwidth  bridge  calico  calico-ipam dhcp  flannel  host-device  host-local  install  ipvlan  loopback  macvlan  portmap  ptp  sample  tuning  vlan

master $

Let’s install calicoctl, which gives us useful information about Calico and lets us modify the Calico configuration.


master $ cd /usr/local/bin/

master $ curl -O -L  https://github.com/projectcalico/calicoctl/releases/download/v3.16.3/calicoctl

 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                Dload  Upload   Total   Spent    Left  Speed

100   633  100   633    0     0   3087      0 --:--:-- --:--:-- --:--:--  3087

100 38.4M  100 38.4M    0     0  5072k      0  0:00:07  0:00:07 --:--:-- 4325k

master $ chmod +x calicoctl

master $ export DATASTORE_TYPE=kubernetes

master $ export KUBECONFIG=~/.kube/config

# Check endpoints - it will be empty as we haven't deployed any Pods yet

master $ calicoctl get workloadendpoints

WORKLOAD   NODE   NETWORKS   INTERFACE

master $

Check BGP peer status. This will show the ‘worker’ node as a peer.


master $ calicoctl node status

Calico process is running.

IPv4 BGP status

+--------------+-------------------+-------+----------+-------------+

| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |

+--------------+-------------------+-------+----------+-------------+

| 172.17.0.40  | node-to-node mesh | up    | 00:24:04 | Established |

+--------------+-------------------+-------+----------+-------------+

Create a busybox POD with two replicas and master node toleration.


cat > busybox.yaml <<"EOF"

apiVersion: apps/v1

kind: Deployment

metadata:

 name: busybox-deployment

spec:

 selector:

   matchLabels:

     app: busybox

 replicas: 2

 template:

   metadata:

     labels:

       app: busybox

   spec:

     tolerations:

     - key: "node-role.kubernetes.io/master"

       operator: "Exists"

       effect: "NoSchedule"

     containers:

     - name: busybox

       image: busybox

       command: ["sleep"]

       args: ["10000"]

EOF

master $ kubectl apply -f busybox.yaml

deployment.apps/busybox-deployment created


Get Pod and endpoint status,


master $ kubectl get pods -o wide

NAME                                 READY   STATUS    RESTARTS   AGE   IP                NODE           NOMINATED NODE   READINESS GATES

busybox-deployment-8c7dc8548-btnkv   1/1     Running   0          6s    192.168.196.131   node01         <none>           <none>

busybox-deployment-8c7dc8548-x6ljh   1/1     Running   0          6s    192.168.49.66     controlplane   <none>           <none>

master $ calicoctl get workloadendpoints

WORKLOAD                             NODE           NETWORKS             INTERFACE

busybox-deployment-8c7dc8548-btnkv   node01         192.168.196.131/32   calib673e730d42

busybox-deployment-8c7dc8548-x6ljh   controlplane   192.168.49.66/32     cali9861acf9f07

Get the details of the host-side veth peer of the master node’s busybox Pod.


master $ ifconfig cali9861acf9f07

cali9861acf9f07: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440

       inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>

       ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)

       RX packets 0  bytes 0 (0.0 B)

       RX errors 0  dropped 0  overruns 0  frame 0

       TX packets 5  bytes 446 (446.0 B)

       TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Get the details of the master Pod’s interface,


master $ kubectl exec busybox-deployment-8c7dc8548-x6ljh -- ifconfig

eth0      Link encap:Ethernet  HWaddr 92:7E:C4:15:B9:82

         inet addr:192.168.49.66  Bcast:192.168.49.66  Mask:255.255.255.255

         UP BROADCAST RUNNING MULTICAST  MTU:1440  Metric:1

         RX packets:5 errors:0 dropped:0 overruns:0 frame:0

         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

         collisions:0 txqueuelen:0

          RX bytes:446 (446.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback

         inet addr:127.0.0.1  Mask:255.0.0.0

         UP LOOPBACK RUNNING  MTU:65536  Metric:1

         RX packets:0 errors:0 dropped:0 overruns:0 frame:0

         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

         collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

master $ kubectl exec busybox-deployment-8c7dc8548-x6ljh -- ip route

default via 169.254.1.1 dev eth0

169.254.1.1 dev eth0 scope link

master $ kubectl exec busybox-deployment-8c7dc8548-x6ljh -- arp

master $


Get the master node routes,


master $ ip route

default via 172.17.0.1 dev ens3

172.17.0.0/16 dev ens3 proto kernel scope link src 172.17.0.32

172.18.0.0/24 dev docker0 proto kernel scope link src 172.18.0.1 linkdown

blackhole 192.168.49.64/26 proto bird

192.168.49.65 dev calic22dbe57533 scope link

192.168.49.66 dev cali9861acf9f07 scope link

192.168.196.128/26 via 172.17.0.40 dev tunl0 proto bird onlink


Let’s try to ping the worker node Pod to trigger ARP.


master $ kubectl exec busybox-deployment-8c7dc8548-x6ljh -- ping 192.168.196.131 -c 1

PING 192.168.196.131 (192.168.196.131): 56 data bytes

64 bytes from 192.168.196.131: seq=0 ttl=62 time=0.823 ms

master $ kubectl exec busybox-deployment-8c7dc8548-x6ljh -- arp

? (169.254.1.1) at ee:ee:ee:ee:ee:ee [ether]  on eth0


The MAC address of the gateway is nothing but that of cali9861acf9f07, the host-side veth. From now on, whenever traffic goes out, it hits the kernel directly, and the kernel knows it has to write the packet into tunl0 based on the IP route.


Proxy ARP configuration,


master $ cat /proc/sys/net/ipv4/conf/cali9861acf9f07/proxy_arp

1


How does the destination node handle the packet?


node01 $ ip route

default via 172.17.0.1 dev ens3

172.17.0.0/16 dev ens3 proto kernel scope link src 172.17.0.40

172.18.0.0/24 dev docker0 proto kernel scope link src 172.18.0.1 linkdown

192.168.49.64/26 via 172.17.0.32 dev tunl0 proto bird onlink

blackhole 192.168.196.128/26 proto bird

192.168.196.129 dev calid4f00d97cb5 scope link

192.168.196.130 dev cali257578b48b6 scope link

192.168.196.131 dev calib673e730d42 scope link


Upon receiving the packet, the kernel sends it to the right veth based on the routing table.


We can see the IP-in-IP protocol on the wire if we capture the packets (a capture sketch is shown below). Azure doesn’t support IP-in-IP (as far as I know); therefore, we can’t use IP-in-IP in that environment. It’s better to disable IP-in-IP to get better performance. Let’s disable it and see the effect.
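A minimal capture sketch, assuming the node uplink is ens3 as in the route outputs above; IP-in-IP shows up as IP protocol 4:

tcpdump -n -i ens3 -c 5 ip proto 4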


Disable IP-IP


Update the IPPool configuration to disable IPIP.


master $ calicoctl get ippool default-ipv4-ippool -o yaml > ippool.yaml

master $ vi ippool.yaml


Open ippool.yaml, set ipipMode to ‘Never’ (see the sketch below), and apply the YAML via calicoctl.
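After the edit, the relevant lines of ippool.yaml should look roughly like this; everything else in the exported resource stays untouched:

spec:
  ipipMode: Never     # changed from "Always"
  vxlanMode: Never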


master $ calicoctl apply -f ippool.yaml

Successfully applied 1 'IPPool' resource(s)


Recheck the IP route,


master $ ip route

default via 172.17.0.1 dev ens3

172.17.0.0/16 dev ens3 proto kernel scope link src 172.17.0.32

172.18.0.0/24 dev docker0 proto kernel scope link src 172.18.0.1 linkdown

blackhole 192.168.49.64/26 proto bird

192.168.49.65 dev calic22dbe57533 scope link

192.168.49.66 dev cali9861acf9f07 scope link

192.168.196.128/26 via 172.17.0.40 dev ens3 proto bird


The device is no longer tunl0; the route now points at the management interface of the master node.


Let’s ping the worker node Pod and make sure everything works fine. From now on, there is no IPIP protocol involved.


master $ kubectl exec busybox-deployment-8c7dc8548-x6ljh -- ping 192.168.196.131 -c 1

PING 192.168.196.131 (192.168.196.131): 56 data bytes

64 bytes from 192.168.196.131: seq=0 ttl=62 time=0.653 ms

--- 192.168.196.131 ping statistics ---

1 packets transmitted, 1 packets received, 0% packet loss

round-trip min/avg/max = 0.653/0.653/0.653 ms


Note: The source IP check should be disabled in an AWS environment to use this mode.

Demo — VXLAN

Re-initiate the cluster and download the calico.yaml file to apply the following changes,


1. Remove bird from the livenessProbe and readinessProbe.

livenessProbe:

           exec:

             command:

             - /bin/calico-node

             - -felix-live

             - -bird-live >> Remove this

           periodSeconds: 10

           initialDelaySeconds: 10

           failureThreshold: 6

         readinessProbe:

           exec:

             command:

             - /bin/calico-node

             - -felix-ready

             - -bird-ready >> Remove this


2. Change the calico_backend to ‘vxlan’ as we don’t need BGP anymore.


kind: ConfigMap

apiVersion: v1

metadata:

 name: calico-config

 namespace: kube-system

data:

 # Typha is disabled.

 typha_service_name: "none"

 # Configure the backend to use.

 calico_backend: "vxlan"


3. Disable IPIP


# Enable IPIP

- name: CALICO_IPV4POOL_IPIP

   value: "Never" >> Set this to 'Never' to disable IP-IP

# Enable or Disable VXLAN on the default IP pool.

- name: CALICO_IPV4POOL_VXLAN

   value: "Never"


Let’s apply this new YAML (a sketch of the commands is below) and check the routes on the master node.
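A minimal sketch of applying the modified manifest and waiting for the calico-node Pods to roll over:

kubectl apply -f calico.yaml
kubectl -n kube-system rollout status daemonset/calico-node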


master $ ip route

default via 172.17.0.1 dev ens3

172.17.0.0/16 dev ens3 proto kernel scope link src 172.17.0.15

172.18.0.0/24 dev docker0 proto kernel scope link src 172.18.0.1 linkdown

192.168.49.65 dev calif5cc38277c7 scope link

192.168.49.66 dev cali840c047460a scope link

192.168.196.128/26 via 192.168.196.128 dev vxlan.calico onlink

master $ ifconfig vxlan.calico

vxlan.calico: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440

       inet 192.168.196.128  netmask 255.255.255.255  broadcast 192.168.196.128

       inet6 fe80::64aa:99ff:fe2f:dc24  prefixlen 64  scopeid 0x20<link>

       ether 66:aa:99:2f:dc:24  txqueuelen 0  (Ethernet)

       RX packets 0  bytes 0 (0.0 B)

       RX errors 0  dropped 0  overruns 0  frame 0

       TX packets 0  bytes 0 (0.0 B)

       TX errors 0  dropped 11 overruns 0  carrier 0  collisions 0


Get the POD status,


master $ kubectl get pods -o wide

NAME                                 READY   STATUS    RESTARTS   AGE   IP                NODE           NOMINATED NODE   READINESS GATES

busybox-deployment-8c7dc8548-8bxnw   1/1     Running   0          11s   192.168.49.67     controlplane   <none>           <none>

busybox-deployment-8c7dc8548-kmxst   1/1     Running   0          11s   192.168.196.130   node01         <none>           <none>


Ping the worker node Pod from the master node’s Pod. First, check the Pod’s routes:


master $ kubectl exec busybox-deployment-8c7dc8548-8bxnw -- ip route

default via 169.254.1.1 dev eth0

169.254.1.1 dev eth0 scope link


Trigger the ARP request,


master $ kubectl exec busybox-deployment-8c7dc8548-8bxnw -- arp

master $ kubectl exec busybox-deployment-8c7dc8548-8bxnw -- ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8): 56 data bytes

64 bytes from 8.8.8.8: seq=0 ttl=116 time=3.786 ms

^C

master $ kubectl exec busybox-deployment-8c7dc8548-8bxnw -- arp

? (169.254.1.1) at ee:ee:ee:ee:ee:ee [ether]  on eth0

master $


The concept is the same as in the previous modes; the only difference is that the packet reaches the vxlan.calico device, which encapsulates it with the node IP and the node’s MAC in the outer header and sends it out. The UDP destination port of the VXLAN traffic is 4789 (a capture sketch is shown below). The etcd datastore helps here by providing the details of the available nodes and their supported IP ranges so that vxlan.calico can build the packet.
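To see the encapsulation on the wire, a minimal capture sketch (the interface name ens3 is taken from the earlier route outputs):

tcpdump -n -i ens3 -c 5 udp port 4789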


Note: VXLAN mode needs more processing power than the previous modes.


Disclaimer

This article does not provide technical advice or recommendations; anything that reads that way is my personal view, not that of the company I work for.

References

Calico Documentation

https://www.openstack.org/videos/summits/vancouver-2018/kubernetes-networking-with-calico-deep-dive

Kubernetes

IBM Documentation

flannel-io/flannel (GitHub): flannel is a network fabric for containers, designed for Kubernetes

