
XenServer netback process analysis && tuning


While running performance tests on XenServer, I noticed the netback processes consuming a lot of CPU (see the figure below). After some googling, it turned out that the netback processes have a considerable impact on XenServer performance tuning. The following is excerpted from the English documentation, with my notes interleaved.

(Figure: netback processes showing high CPU usage in the control domain)

Static allocation of virtual network interfaces (VIFs) and interrupt request (IRQ) queues among available CPUs can lead to non-optimal network throughput.

All guest network traffic is processed by netback processes in the control domain on the guest's host. There is a fixed number of netback processes per host (four by default in XS 5.6 FP1), where each one is pinned to a specific control domain virtual CPU (VCPU).
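To make that pinning visible, here is a minimal sketch (not a XenServer tool) that can be run inside the control domain: it scans /proc for threads whose name contains "netback" and prints the CPUs each one is allowed to run on. It assumes a Linux dom0 that exposes /proc/<pid>/comm and a Python 3.3+ interpreter for os.sched_getaffinity(); the "netback" thread naming is an assumption based on the screenshot above.

```python
# Minimal sketch: list netback threads in dom0 and their CPU affinity.
# Assumes a Linux control domain exposing /proc/<pid>/comm and
# Python 3.3+ (for os.sched_getaffinity); names/paths are assumptions.
import os

def netback_affinities():
    """Yield (pid, name, cpus) for every thread whose name contains 'netback'."""
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open("/proc/%s/comm" % entry) as f:
                name = f.read().strip()
            if "netback" in name:
                yield int(entry), name, os.sched_getaffinity(int(entry))
        except OSError:
            continue  # the process exited while we were scanning

for pid, name, cpus in netback_affinities():
    print("%s (pid %d) runs on CPU(s) %s" % (name, pid, sorted(cpus)))
```

If each netback thread reports exactly one CPU, the pinning described in the excerpt is in effect.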

All traffic for a specific VIF of a guest is processed by a statically-allocated netback process. The static allocation of VIFs to netback processes is done in a round-robin fashion when virtual machines (VMs) are first started after a XenServer restart. The allocation is not dynamic, since that would require further synchronisation mechanisms and would close any active connection on every rebalance.

For example, suppose we have a host with four CPUs, four netback processes (running on four VCPUs in the control domain), and two physical network interfaces (PIFs). Furthermore, suppose the host has four VMs, where each VM has two VIFs that correspond to the PIFs (in the same order for all VMs), and that the VMs are started one after another (which is normally the case). Then the first VIF of all four VMs will be allocated to odd-numbered netback processes, while the second VIF of all four VMs will be allocated to even-numbered netback processes. Therefore, if the VMs send network traffic only on their first VIF, only two of the four available CPUs will be utilised (in this example, only netback processes 1 and 3 are used; 2 and 4 stay idle). See Figure 1. With heavy network traffic on fast network interfaces, CPU can quickly become the bottleneck, so being able to utilise all available CPUs is crucial.
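The round-robin allocation just described is easy to replay. Below is a minimal sketch; the allocate() helper and the VM/VIF names are made up for illustration, assuming VIFs are allocated in strict VM start order as the excerpt says:

```python
# Toy model of the round-robin VIF -> netback allocation described above.
# allocate() and all names are illustrative, not XenServer code.
from itertools import cycle

def allocate(vms, num_netback=4):
    """Map each (vm, vif) pair to a netback id, walking VIFs in VM start order."""
    netbacks = cycle(range(1, num_netback + 1))
    return {(vm, vif): next(netbacks) for vm, vifs in vms for vif in vifs}

# Four VMs, two VIFs each, started in order after a host restart.
vms = [("vm%d" % i, ["vif1", "vif2"]) for i in range(1, 5)]
alloc = allocate(vms)

print([alloc[(vm, "vif1")] for vm, _ in vms])  # [1, 3, 1, 3] -> odd netbacks only
print([alloc[(vm, "vif2")] for vm, _ in vms])  # [2, 4, 2, 4] -> even netbacks only
```

The first VIFs land only on netbacks 1 and 3 and the second VIFs only on 2 and 4, which is exactly the imbalance of Figure 1 when traffic flows solely through the first VIFs.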

(Figure 1: static allocation of VIFs to netback processes in the example above)

There are at least three ways to modify the allocation of VIFs on netback processes: 
modify the order in which VMs are first started; 
modify the order of declared VIFs (on VMs); and, 
add dummy VIFs to VMs.

Continuing with the working example from §3, suppose we find that all guest-level network traffic is going through the second VIF of each VM, and that all VMs require roughly the same network throughput. There are two simple solutions for achieving optimal network throughput here:

  1. Add a VIF to each VM, restart the host, and start the VMs in order: the second VIFs are then allocated to netback processes 2, 1, 4, and 3 (in this order).
  2. Switch the order of the two VIFs on the third and the fourth VMs, restart the host, and start the VMs in order: the relevant VIFs (those that were previously the second VIFs) are then allocated to netback processes 2, 4, 1, and 3 (in this order).
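Both remedies can be sanity-checked with the same toy allocator as in the sketch above; again, this illustrates the arithmetic under the same assumptions (strict start-order round robin, made-up names), and is not XenServer code:

```python
# Verify the two remedies with the toy round-robin allocator.
from itertools import cycle

def allocate(vms, num_netback=4):
    """Toy round-robin VIF -> netback allocation, as in the sketch above."""
    netbacks = cycle(range(1, num_netback + 1))
    return {(vm, vif): next(netbacks) for vm, vifs in vms for vif in vifs}

# Remedy 1: add a dummy third VIF to every VM.
with_dummy = [("vm%d" % i, ["vif1", "vif2", "dummy"]) for i in range(1, 5)]
print([allocate(with_dummy)[("vm%d" % i, "vif2")] for i in range(1, 5)])
# -> [2, 1, 4, 3]

# Remedy 2: swap the two VIFs on the third and fourth VMs.
swapped = [("vm1", ["vif1", "vif2"]), ("vm2", ["vif1", "vif2"]),
           ("vm3", ["vif2", "vif1"]), ("vm4", ["vif2", "vif1"])]
print([allocate(swapped)[("vm%d" % i, "vif2")] for i in range(1, 5)])
# -> [2, 4, 1, 3]
```

In both cases every netback process, and therefore every VCPU, ends up serving exactly one busy VIF, matching the orders 2, 1, 4, 3 and 2, 4, 1, 3 quoted above.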

 


 

That is the end of the excerpt; the two tuning examples at the end are worth a look. I tried the VIF-swapping method myself, and all four CPUs were indeed used, in the order 2, 4, 1, 3. Give it a try if you're interested.





This article was reposted from taojin1240's 51CTO blog. Original link: http://blog.51cto.com/taotao1240/774756. To republish, please contact the original author.
