InfiniBand Performance Testing

Summary: how to test InfiniBand performance with ibping, qperf, and perftest.

ibping

First, start ibping in server mode on one machine. Here is that server's configuration:

[root@storage02 ~]# ibv_devices
    device                 node GUID
    ------              ----------------
    mlx4_0              0002c90300b382a0
    irdma0              ae1f6bfffeec331c
    irdma1              ae1f6bfffeec331d
[root@storage02 ~]# ibstat mlx4_0
CA 'mlx4_0'
        CA type: MT4099
        Number of ports: 2
        Firmware version: 2.42.5000
        Hardware version: 1
        Node GUID: 0x0002c90300b382a0
        System image GUID: 0x0002c90300b382a3
        Port 1:
                State: Down
                Physical state: Polling
                Rate: 10
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x02594868
                Port GUID: 0x0002c90300b382a1
                Link layer: InfiniBand
        Port 2:
                State: Active
                Physical state: LinkUp
                Rate: 56
                Base lid: 5
                LMC: 0
                SM lid: 1
                Capability mask: 0x0259486a
                Port GUID: 0x0002c90300b382a2
                Link layer: InfiniBand
[root@storage02 ~]# ibping -S -C mlx4_0 -P 2
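A note on the flags: -S puts ibping in server mode, -C selects the channel adapter, and -P the port; port 2 is used here because it is the Active port in the ibstat output above. The server blocks in the foreground, so background it if you still need the shell:

[root@storage02 ~]# ibping -S -C mlx4_0 -P 2 &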

Then, on another machine that will act as the client, check its configuration as well:

[root@storage03 ~]# ibv_devices
    device                 node GUID
    ------              ----------------
    mlx4_0              248a070300ea0390
    irdma0              ae1f6bfffeec32f2
    irdma1              ae1f6bfffeec32f3
[root@storage03 ~]# ibstat mlx4_0
CA 'mlx4_0'
        CA type: MT4099
        Number of ports: 1
        Firmware version: 2.42.5000
        Hardware version: 1
        Node GUID: 0x248a070300ea0390
        System image GUID: 0x248a070300ea0393
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 56
                Base lid: 4
                LMC: 0
                SM lid: 1
                Capability mask: 0x0259486a
                Port GUID: 0x248a070300ea0391
                Link layer: InfiniBand

Now the client pings the server by its port GUID:

[root@storage03 ~]# ibping -G 0x0002c90300b382a2 -c 10
Pong from storage02.(none) (Lid 5): time 0.019 ms
Pong from storage02.(none) (Lid 5): time 0.021 ms
Pong from storage02.(none) (Lid 5): time 0.023 ms
Pong from storage02.(none) (Lid 5): time 0.017 ms
Pong from storage02.(none) (Lid 5): time 0.022 ms
Pong from storage02.(none) (Lid 5): time 0.022 ms
Pong from storage02.(none) (Lid 5): time 0.020 ms
Pong from storage02.(none) (Lid 5): time 0.019 ms
Pong from storage02.(none) (Lid 5): time 0.021 ms
Pong from storage02.(none) (Lid 5): time 0.021 ms

--- storage02.(none) (Lid 5) ibping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 10000 ms
rtt min/avg/max = 0.017/0.020/0.023 ms
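By default ibping addresses its target by LID; the -G flag above tells it the argument is a port GUID instead. A quick sketch of the alternatives, using LID 5 from the server's ibstat output, and -f for a flood test:

[root@storage03 ~]# ibping -C mlx4_0 -P 1 -c 10 5                # ping LID 5 directly
[root@storage03 ~]# ibping -f -c 1000 -G 0x0002c90300b382a2      # flood mode, 1000 packets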

qperf

qperf works like ibping in that it needs a server side and a client side. When testing with qperf, it is recommended to stop the firewall first: systemctl stop firewalld.

Start server mode on one of the machines:

[root@storage01 ~]# qperf
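Run with no arguments, qperf just listens for clients (on TCP port 19765 by default, per its man page). If stopping firewalld outright is too drastic, opening the listen port is an alternative; note, though, that qperf negotiates additional data ports for the tests themselves, which is why simply stopping the firewall is the more reliable option:

[root@storage01 ~]# firewall-cmd --add-port=19765/tcp    # control port only; data ports are negotiated per test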

From another machine, check the channel adapters' configuration:

[root@storage02 ~]# qperf -v -i mlx4_0:2 172.16.50.11 conf
conf:
    loc_node   =  storage02
    loc_cpu    =  20 Cores: Mixed CPUs
    loc_os     =  Linux 4.18.0-477.13.1.el8_8.x86_64
    loc_qperf  =  0.4.11
    rem_node   =  storage01
    rem_cpu    =  20 Cores: Mixed CPUs
    rem_os     =  Linux 4.18.0-477.13.1.el8_8.x86_64
    rem_qperf  =  0.4.11

You can also run TCP bandwidth and latency tests:

[root@storage02 ~]# qperf 172.16.50.11 tcp_bw tcp_lat
tcp_bw:
    bw  =  3.96 GB/sec
tcp_lat:
    latency  =  11.1 us

You can also measure TCP latency over a range of message sizes from 1 byte to 64 KiB, doubling at each step:

[root@storage02 ~]# qperf 172.16.50.11 -oo msg_size:1:64K:*2 -vu tcp_lat
tcp_lat:
    latency   =  10.5 us
    msg_size  =     1 bytes
tcp_lat:
    latency   =  11.1 us
    msg_size  =     2 bytes
tcp_lat:
    latency   =  11.1 us
    msg_size  =     4 bytes
tcp_lat:
    latency   =  10.9 us
    msg_size  =     8 bytes
tcp_lat:
    latency   =  10.9 us
    msg_size  =    16 bytes
tcp_lat:
    latency   =  11.1 us
    msg_size  =    32 bytes
tcp_lat:
    latency   =  10.7 us
    msg_size  =    64 bytes
tcp_lat:
    latency   =  11.1 us
    msg_size  =   128 bytes
tcp_lat:
    latency   =  11.4 us
    msg_size  =   256 bytes
tcp_lat:
    latency   =  11.5 us
    msg_size  =   512 bytes
tcp_lat:
    latency   =  12.2 us
    msg_size  =     1 KiB (1,024)
tcp_lat:
    latency   =  13 us
    msg_size  =   2 KiB (2,048)
tcp_lat:
    latency   =  15.8 us
    msg_size  =     4 KiB (4,096)
tcp_lat:
    latency   =  17.2 us
    msg_size  =     8 KiB (8,192)
tcp_lat:
    latency   =  22.6 us
    msg_size  =    16 KiB (16,384)
tcp_lat:
    latency   =  38.8 us
    msg_size  =    32 KiB (32,768)
tcp_lat:
    latency   =  59.3 us
    msg_size  =    64 KiB (65,536)
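All of the numbers above go over TCP on IPoIB, which is why tcp_bw tops out at 3.96 GB/sec (about 32 Gb/s) on a 56 Gb/s FDR link. qperf can also drive the verbs/RDMA path directly; rc_lat, rc_bw, and rc_rdma_read_bw are test names from the qperf man page (a sketch, reusing the same server and the -i device:port selector shown earlier):

[root@storage02 ~]# qperf -v -i mlx4_0:2 172.16.50.11 rc_lat rc_bw rc_rdma_read_bw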

perftest

perftest provides a comprehensive set of RDMA benchmarks; the individual tools are listed below:

  • ib_send_lat    latency test with send transactions
  • ib_send_bw     bandwidth test with send transactions
  • ib_write_lat   latency test with RDMA write transactions
  • ib_write_bw    bandwidth test with RDMA write transactions
  • ib_read_lat    latency test with RDMA read transactions
  • ib_read_bw     bandwidth test with RDMA read transactions
  • ib_atomic_lat  latency test with atomic transactions
  • ib_atomic_bw   bandwidth test with atomic transactions

Here I use ib_read_lat as an example. First, start it on one of the machines:

[root@storage03 ~]# ib_read_lat -d mlx4_0

************************************
* Waiting for client to connect... *
************************************
---------------------------------------------------------------------------------------
                    RDMA_Read Latency Test
 Dual-port       : OFF          Device         : mlx4_0
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : OFF
 Mtu             : 2048[B]
 Link type       : IB
 Outstand reads  : 16
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
 local address: LID 0x04 QPN 0x020f PSN 0x17e736 OUT 0x10 RKey 0x8010100 VAddr 0x0055f4c06e7000
 remote address: LID 0x01 QPN 0x0215 PSN 0x79fcd6 OUT 0x10 RKey 0x28010100 VAddr 0x0055fa855bd000
---------------------------------------------------------------------------------------

Then run the test from the other machine as the client:

[root@storage01 ~]# ib_read_lat -d mlx4_0 172.16.50.13
---------------------------------------------------------------------------------------
                    RDMA_Read Latency Test
 Dual-port       : OFF          Device         : mlx4_0
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : OFF
 TX depth        : 1
 Mtu             : 2048[B]
 Link type       : IB
 Outstand reads  : 16
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
 local address: LID 0x01 QPN 0x0215 PSN 0x79fcd6 OUT 0x10 RKey 0x28010100 VAddr 0x0055fa855bd000
 remote address: LID 0x04 QPN 0x020f PSN 0x17e736 OUT 0x10 RKey 0x8010100 VAddr 0x0055f4c06e7000
---------------------------------------------------------------------------------------
 #bytes #iterations    t_min[usec]    t_max[usec]  t_typical[usec]    t_avg[usec]    t_stdev[usec]   99% percentile[usec]   99.9% percentile[usec]
Conflicting CPU frequency values detected: 2400.000000 != 1873.463000. CPU Frequency is not max.
 2       1000          1.76           6.12         1.79                1.80             0.06            1.88                    6.12
---------------------------------------------------------------------------------------
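The "Conflicting CPU frequency values" warning means the cores were not locked at their maximum frequency, so the cycle-counter-based timings may drift slightly; perftest's -F flag makes the tools proceed without failing this check. The same family also measures bandwidth; a sketch with ib_read_bw, where -a sweeps message sizes from 2 bytes up to 8 MiB:

[root@storage03 ~]# ib_read_bw -d mlx4_0 -a -F                 # server side
[root@storage01 ~]# ib_read_bw -d mlx4_0 -a -F 172.16.50.13    # client side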