Neutron Networking QoS

Introduction:

Currently, Neutron has a QoS proposal (https://wiki.openstack.org/wiki/Neutron/QoS#Documents), but only the Cisco and NVP plugins have implemented QoS; the other plugins have not yet. So, if you want to do network QoS in Neutron, some extra work is still needed.

I. Implementing network QoS with OVS

The design and interfaces from that proposal are already there, so you can well implement the QoS feature yourself:

1. Create a QoS-Rules table that stores the QoS rules, with qos_id as the primary key.
2. Create a QoS-Port-Binding table that records the binding between port_id and qos_id.
3. When a VM is created, nova calls the API exposed by Quantum to write the binding into the database.
4. The ovs-agent fetches the QoS rule from the ovs-plugin via a remote call with port_id as the parameter (see the sketch after this list).
5. The ovs-agent applies the rule to the interface.
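As a rough illustration of steps 1-4 only (not the actual proposal or plugin code), the two tables and the plugin-side lookup could be sketched with SQLAlchemy like this; the model, column, and function names (QoSRule, QoSPortBinding, get_port_qos) are assumptions of mine:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class QoSRule(Base):
        """Step 1: the QoS-Rules table, keyed by qos_id."""
        __tablename__ = 'qos_rules'
        qos_id = sa.Column(sa.String(36), primary_key=True)
        rate = sa.Column(sa.Integer, nullable=False)   # kbps
        burst = sa.Column(sa.Integer, nullable=False)  # kb

    class QoSPortBinding(Base):
        """Step 2: the QoS-Port-Binding table mapping port_id to qos_id."""
        __tablename__ = 'qos_port_bindings'
        port_id = sa.Column(sa.String(36), primary_key=True)
        qos_id = sa.Column(sa.String(36), sa.ForeignKey('qos_rules.qos_id'),
                           nullable=False)

    def get_port_qos(session, port_id):
        """Step 4: plugin-side lookup that the agent would call over RPC,
        passing the port_id; returns the bound rule or None."""
        return (session.query(QoSRule)
                .join(QoSPortBinding, QoSPortBinding.qos_id == QoSRule.qos_id)
                .filter(QoSPortBinding.port_id == port_id)
                .first())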

For step 5, for example, OVS QoS can be applied with commands like the following:

    def set_interface_qos(self, interface, rate, burst):
        # Rate-limit traffic received from the VM on this interface;
        # ingress_policing_rate is in kbps, ingress_policing_burst in kb.
        ingress_policing_rate = "ingress_policing_rate=%s" % rate
        ingress_policing_burst = "ingress_policing_burst=%s" % burst

        args = ["set", "interface", interface,
                ingress_policing_rate, ingress_policing_burst]
        self.run_vsctl(args)

    def clear_interface_qos(self, interface):
        # Setting both values back to 0 disables ingress policing.
        ingress_policing_rate = "ingress_policing_rate=0"
        ingress_policing_burst = "ingress_policing_burst=0"
        args = ["set", "interface", interface,
                ingress_policing_rate, ingress_policing_burst]
        self.run_vsctl(args)
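For a tap device, the helper above boils down to a single ovs-vsctl call, for example ovs-vsctl set interface tap0 ingress_policing_rate=10000 ingress_policing_burst=1000 (tap0 is just an illustrative name). Note that OVS ingress policing limits the traffic received on the port, i.e. the VM's outbound traffic; setting both values to 0 removes the limit.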

See the following articles for the concrete implementation:

http://blog.csdn.net/spch2008/article/details/9279445

http://blog.csdn.net/spch2008/article/details/9281947

http://blog.csdn.net/spch2008/article/details/9281779

http://blog.csdn.net/spch2008/article/details/9283561

http://blog.csdn.net/spch2008/article/details/9283627

http://blog.csdn.net/spch2008/article/details/9283927

http://blog.csdn.net/spch2008/article/details/9287311

II. Using Instance Resource Quota

This is nova's Instance Resource Quota feature, which lets you set CPU, disk I/O, and bandwidth consumption limits for instances. Using cgroups, libvirt can set the per-instance CPU time consumption percentage as well as the instance's read_iops, read_byteps, write_iops, and write_byteps; libvirt also supports limiting an instance's inbound/outbound bandwidth. (https://wiki.openstack.org/wiki/InstanceResourceQuota)

Bandwidth parameters: vif_inbound_average, vif_inbound_peak, vif_inbound_burst, vif_outbound_average, vif_outbound_peak, vif_outbound_burst

Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most one inbound and at most one outbound child element. Leaving either of these children out results in no QoS being applied in that traffic direction. So, when you want to shape only a network's incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute, average, which specifies the average bit rate on the interface being shaped. There are then two optional attributes: peak, which specifies the maximum rate at which the bridge can send data, and burst, the amount of kilobytes that can be transmitted in a single burst at peak speed. Accepted attribute values are integer numbers; the units for average and peak are kilobytes per second, and for burst just kilobytes. The rate is shared equally among the domains connected to the network.

Configure a bandwidth limit for instance network traffic:

nova-manage flavor set_key --name m1.small  --key quota:vif_inbound_average --value 10240
nova-manage flavor set_key --name m1.small  --key quota:vif_outbound_average --value 10240

or using python-novaclient with admin credentials

nova flavor-key m1.small  set quota:vif_inbound_average=10240
nova flavor-key m1.small  set quota:vif_outbound_average=10240

The network QoS here is implemented directly with the parameters that libvirt provides (http://www.libvirt.org/formatnetwork.html):
...
  <forward mode='nat' dev='eth0'/>
  <bandwidth>
    <inbound average='1000' peak='5000' burst='5120'/>
    <outbound average='128' peak='256' burst='256'/>
  </bandwidth>
...

The <bandwidth> element allows setting quality of service for a particular network (since 0.9.4). Setting bandwidth for a network is supported only for networks with a <forward> mode of route, nat, or no mode at all (i.e. an "isolated" network). Setting bandwidth is not supported for forward modes of bridge, passthrough, private, or hostdev. Attempts to do this will lead to a failure to define the network or to create a transient network.

The <bandwidth> element can only be a subelement of a domain's <interface>, a subelement of a <network>, or a subelement of a <portgroup> in a <network>.

As a subelement of a domain's <interface>, the bandwidth only applies to that one interface of the domain. As a subelement of a <network>, the bandwidth is a total aggregate bandwidth to/from all guest interfaces attached to that network, not to each guest interface individually. If a domain's <interface> has <bandwidth> element values higher than the aggregate for the entire network, then the aggregate bandwidth for the <network> takes precedence. This is because the two choke points are independent of each other where the domain's <interface> bandwidth control is applied on the interface's tap device, while the <network> bandwidth control is applied on the interface part of the bridge device created for that network.

As a subelement of a <portgroup> in a <network>, if a domain's <interface> has a portgroup attribute in its <source> element and if the <interface> itself has no <bandwidth> element, then the <bandwidth> element of the portgroup will be applied individually to each guest interface defined to be a member of that portgroup. Any <bandwidth> element in the domain's <interface> definition will override the setting in the portgroup (since 1.0.1).

Incoming and outgoing traffic can be shaped independently. The bandwidth element can have at most one inbound and at most one outbound child element. Leaving either of these children out results in no QoS being applied for that traffic direction. So, when you want to shape only incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute, average (or floor as described below). The attributes are as follows; accepted values for each attribute are integer numbers.

average
Specifies the desired average bit rate for the interface being shaped (in kilobytes/second).
peak
Optional attribute which specifies the maximum rate at which the bridge can send data (in kilobytes/second). Note the limitation of the implementation: this attribute in the outbound element is ignored (as Linux ingress filters don't know it yet).
burst
Optional attribute which specifies the amount of kilobytes that can be transmitted in a single burst at peak speed.
floor
Optional attribute available only for the inbound element. This attribute guarantees minimal throughput for shaped interfaces. This, however, requires that all traffic goes through one point where QoS decisions can take place, which is why this attribute works only for virtual networks for now (that is, <interface type='network'/> with a forward type of route, nat, or no forward at all). Moreover, the virtual network the interface is connected to is required to have at least inbound QoS set (average at least). When using the floor attribute, users don't need to specify average; however, the peak and burst attributes still require average. Currently, the Linux kernel doesn't allow ingress qdiscs to have any classes, therefore floor can be applied only on inbound and not outbound.

Attributes average, peak, and burst are available since 0.9.4, while the floor attribute is available since 1.0.1.

libvirt's network QoS is in turn implemented with tc; running tc -s -d qdisc makes it easy to see the resulting tc configuration.

This approach requires that the VMs are libvirt-based and that the operating system on the VMs and on the network-related servers supports Linux Advanced Routing & Traffic Control.

III. Implementing network QoS with tc

This approach is really a combination of the two above: Neutron still exposes the QoS-setting interface, but the OVS ingress_policing_rate settings are replaced by tc; a rough sketch of what the agent side might run is given below.
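As a purely illustrative sketch (my own, not code from the original post), the agent-side helper from section I could be reworked to drive tc instead of ovs-vsctl roughly as follows. The HTB/ingress-policing layout, the qdisc handles, and the function names are assumptions, and a real implementation would need idempotency and error handling.

    import subprocess

    def run_tc(args):
        """Run a tc command, e.g. run_tc(["qdisc", "add", ...])."""
        subprocess.check_call(["tc"] + args)

    def set_interface_tc_qos(interface, rate_kbps, burst_kb):
        # Shape traffic transmitted on the device (for a tap device this is
        # traffic going towards the VM): root HTB qdisc with one capped class.
        run_tc(["qdisc", "add", "dev", interface, "root", "handle", "1:",
                "htb", "default", "10"])
        run_tc(["class", "add", "dev", interface, "parent", "1:",
                "classid", "1:10", "htb",
                "rate", "%dkbit" % rate_kbps, "burst", "%dkb" % burst_kb])
        # Police traffic received on the device (for a tap device this is
        # traffic coming from the VM), dropping anything above the rate.
        run_tc(["qdisc", "add", "dev", interface, "handle", "ffff:", "ingress"])
        run_tc(["filter", "add", "dev", interface, "parent", "ffff:",
                "protocol", "ip", "u32", "match", "u32", "0", "0",
                "police", "rate", "%dkbit" % rate_kbps,
                "burst", "%dkb" % burst_kb, "drop", "flowid", ":1"])

    def clear_interface_tc_qos(interface):
        # Deleting the root and ingress qdiscs also removes their classes
        # and filters.
        run_tc(["qdisc", "del", "dev", interface, "root"])
        run_tc(["qdisc", "del", "dev", interface, "ingress"])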


This article is reposted from feisky's cnblogs blog; original link: http://www.cnblogs.com/feisky/p/3858389.html. If you want to republish it, please contact the original author yourself.
