
SDP FAQ Frequently Asked Questions

Q1.          What is SDP?
SDP stands for "Sockets Direct Protocol". As the name indicates, it is a wire protocol for direct communication between RDMA hardware and the application sockets layer, enabling applications to benefit directly from the performance of RDMA technology.
 
Q2.          What is the sockets layer?
Sockets are a standard programming interface used by applications to communicate over TCP/IP.
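For illustration, here is a minimal sketch (in Python) of the standard stream-socket calls that SDP accelerates transparently; with SDP underneath, the same unmodified calls would be serviced by RDMA hardware.

```python
import socket

# Ordinary stream-socket usage: this is the sockets interface that SDP
# emulates. With SDP, these same calls run over RDMA with no code changes.
server, client = socket.socketpair()  # a connected pair of stream sockets

client.sendall(b"hello")              # standard send path
data = server.recv(5)                 # standard receive path
assert data == b"hello"

client.close()
server.close()
```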
 
Q3.          What is the purpose of SDP?
SDP enables Internet applications to transparently take advantage of the performance advantages of the RDMA (Remote Direct Memory Access) protocol suite (RDMAP/DDP/MPA).
 
Q4.          What are the advantages of SDP?
SDP enables internet applications to take advantage of the low-latency, high-bandwidth performance benefits of RDMA, including Direct Data Placement and Kernel Bypass.
 
Q5.          What is SDP replacing?
SDP emulates sockets streaming semantics over the RDMA interface.  It does not replace any component, but instead emulates sockets semantics to allow applications to gain the performance benefits of RDMA without changing any application code which relies on sockets today.
 
Q6.          How is WSD related to SDP?
WinSock Direct Protocol (WSD, a.k.a. WSDP) is the predecessor to SDP.  Support for WSD is currently shipping in Microsoft Server Operating Systems.
 
Q7.          Is the RDMA Consortium planning any additional specifications?
No, the SDP specification is the final specification produced by the RDMA Consortium. The delivery of iSER, SDP, the RDMA wire-protocol suite and the Verbs Specifications complete the family of protocols necessary to enable deployment of RDMA based networking, Inter-Process Communication (IPC), and storage infrastructures.
 
Q8.          What is the plan for the RDMAC now that the specs are complete?
The consortium members will continue to address Errata for the RDMA Consortium specifications.  The members of the RDMAC will continue to work with the IETF on approval of the RDMA suite of specifications.
 
Q9.          Will the SDP specification be submitted to the IETF?
Yes, the RDMAC will submit the SDP specification to the IETF as a proposed informational RFC to familiarize the IETF with the SDP protocol.
 
Q10.      When will RNIC hardware become available?
Specific details on availability of RNIC hardware need to come from RNIC vendors. In general, we expect RDMA solutions to be available in 2004.
 
Q11.      When will operating systems support SDP?
Specific details on availability of SDP need to come from OS vendors.
 
Q12.      Who should customers contact for information on RDMA products?
Customers should contact their respective vendors. 

 

iSER and DA Frequently Asked Questions
 
Q1.       What is iSER?
iSER stands for "iSCSI Extensions for RDMA".  As the name indicates, it is an extension of the data transfer model of iSCSI, a storage networking standard for TCP/IP. iSER enables the iSCSI protocol to take advantage of the direct data placement technology of the RDMA (Remote Direct Memory Access) protocol suite (RDMAP/DDP/MPA) publicly released by the RDMA Consortium in October 2002.
 
Q2.       What is the purpose of iSER?
The iSER protocol seeks to increase the connectivity of the iSCSI end nodes so that implementations can take advantage of the RDMA over TCP/IP protocol suite.
 
Q3.       What are the advantages of iSER?
The iSER data transfer protocol allows iSCSI implementations to achieve true zero-copy data transfers - eliminating TCP/IP processing overhead as network speeds approach 10Gb/s - on generic RDMA network interface controllers (RNICs), while preserving compatibility with the iSCSI infrastructure. Also, in an iSCSI/iSER implementation, certain protocol aspects of iSCSI, such as data integrity management and some error recovery features, are simplified.
 
Q3a          Why is true zero copy important?
As networking speeds approach 10Gb/s, much of the overhead of processing networking traffic is related to memory-to-memory copying, particularly at the receiver. True zero copy eliminates this overhead.
 
Q3b          Is an RNIC the only way to achieve true zero copy?
No, but other methods of achieving true zero copy in the NIC require the NIC to become aware of the actual upper layer protocol being utilized (iSCSI, NFS, DAFS etc.). An RNIC is the only way to achieve true zero copy without upper layer protocol specific extensions in the NIC itself.
 
Q4.       Is iSCSI being replaced by iSER?
No, in fact, an iSCSI/iSER implementation requires iSCSI components - such as login negotiation, discovery, boot, security, and authentication, including the PDU formats - defined by the iSCSI specification.  As already noted, iSER is an extension of the data transfer model of the iSCSI protocol, but does not change the other areas of the iSCSI protocol. One should view iSER only as a Datamover for the iSCSI protocol.
 
Q5.       Is iSER a new "iSCSI 2.0"?
No, an iSCSI/iSER implementation must be compliant to the same iSCSI protocol defined by the IETF iSCSI specification.  The iSER protocol utilizes an existing, compatible iSCSI mechanism (login key negotiation) to determine whether to use iSER or standard iSCSI data transfer models. Other aspects of the iSCSI protocol (discovery, boot, security, authentication etc.) are left unchanged. One should view iSER only as a Datamover for the iSCSI protocol.
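As a rough illustration of the login-key negotiation pattern described above: iSCSI login parameters are exchanged as text "Key=Value" pairs, and the iSER specification defines a boolean key (RDMAExtensions) that resolves to Yes only when both sides offer Yes. The helper functions and variable names below are illustrative sketches, not actual iSCSI code.

```python
# Sketch of an AND-type boolean iSCSI login key negotiation. The key
# name RDMAExtensions comes from the iSER specification; the parsing
# and result logic here is a simplified illustration.
def negotiate_boolean_and(initiator_offer: str, target_offer: str) -> str:
    """Result of an AND-type boolean login key: Yes only if both offer Yes."""
    return "Yes" if initiator_offer == "Yes" and target_offer == "Yes" else "No"

def parse_login_keys(text: str) -> dict:
    """Parse a block of Key=Value login pairs, one per line."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

initiator = parse_login_keys("InitiatorName=iqn.example:host1\nRDMAExtensions=Yes")
target_offer = "Yes"
mode = negotiate_boolean_and(initiator.get("RDMAExtensions", "No"), target_offer)
# mode "Yes" -> use the iSER data transfer model; "No" -> standard iSCSI
```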
 
 
Q6.       What is DA (or Datamover Architecture)?
The Datamover Architecture for iSCSI (DA) specification defines an abstract model in which the movement of data between iSCSI end nodes is logically separated from the rest of the iSCSI protocol in order to allow iSCSI to adapt to innovations available in new IP transports such as RDMA over TCP/IP.
 
Q7.       How is DA related to iSER? [ or vice versa ]
The iSER protocol is a "Datamover protocol" as defined in the Datamover Architecture for iSCSI specification. The iSER protocol thus applies the Datamover Architecture for iSCSI in extending the data-movement capabilities of iSCSI to include RDMA. The Datamover Architecture for iSCSI itself is agnostic about the specifics of iSER or other supporting protocols.
 
Q8.       Why did you have to define two specs - DA and iSER?
The architectural intent behind each specification is different.  The DA specification defines a logical separation of data movement from the rest of the iSCSI protocol in a way that is useful for increasing the connectivity of iSCSI in the future (such as running iSCSI on other RDMA protocols or even running on SCTP). The iSER protocol specification applies the Datamover Architecture in defining a specific mapping of iSCSI's data movement features to the RDMA protocol suite released by the RDMA Consortium in October 2002.
 
Q9.       Doesn't the publication of DA and iSER specs confuse the nascent iSCSI industry?
No. This work extends the connectivity of iSCSI and is a clear demonstration of the commitment of RDMA Consortium members to the iSCSI technology and its advancement.  The involvement of several individuals involved in the IETF iSCSI specification effort also reinforces the complementary nature of iSCSI, DA and iSER specifications.
 
Q10.    Is the RDMA Consortium advising iSCSI users to move to iSCSI/iSER-based solutions?
No.  The iSER protocol definition is aimed at making iSCSI more pervasive and is not meant to advise the customers on specific solutions.  We expect each customer to choose a solution that best fits their particular needs.
 
Q11.    What is the roadmap for DA and iSER specs?
The iSER and DA specifications have been released by the RDMA Consortium and are suitable for industry implementation today. The RDMA Consortium members will maintain the specifications for any errata discovered during product implementation. The specifications have also been forwarded to the IETF for their consideration. Further development of DA and iSER specifications and their eventual roadmap will be decided by the IETF.
 
Q12.     Is iSER ready for product development?
Yes, the member companies of the RDMA Consortium regard the iSER specification as suitable for iSCSI/iSER product implementation at the time of the iSER specification release from the RDMA Consortium.
 
Q13.    What about the interoperability problems between iSCSI end nodes and the new iSCSI/iSER end nodes?
We do not expect any interoperability problems between iSCSI end nodes and iSCSI/iSER end nodes.  The iSER functionality extends the iSCSI protocol via standard iSCSI architectural elements (login key negotiation) defined by the iSCSI specification so that each end node is clearly aware of the mode of operation of the other end node.  Furthermore, the design of the iSER protocol requires a node which supports iSCSI/iSER to be capable of providing iSCSI protocol login service compliant with the current iSCSI specification, and to move into the iSER mode as may be determined necessary during the login negotiation.  Therefore, new iSCSI/iSER implementations should be easily integrated with current iSCSI networks.  
 
Q14.    Does iSER need a special RDMA-capable NIC (RNIC) design?
No.  The iSER protocol is designed with an explicit intent to allow iSER to run on any generic off-the-shelf RNIC.  An equally important design goal also was to allow innovations from vendors who seek to more tightly integrate iSCSI/iSER into the RNIC hardware.
 
Q15.    Why would such integration be important to vendors?
Many vendors want to continue to use their same APIs to the NICs whether or not they are operating in traditional iSCSI or iSER mode.  By integrating the iSCSI/iSER functions within the RNIC hardware, the operating mode can be transparent to the End-Node.
 
Q16.    Does iSER work only on RDMA hardware?
The iSER protocol has no such constraints.  The iSER protocol, as defined, should work transparently with either a hardware or software RDMA protocol stack.  Note, however, that an RNIC-based solution is likely to yield better performance, assuming typical software processing power.
 
Q17.    Does iSER require support for the recently released RDMA Verbs specification?
No.  The iSER protocol specification itself is completely independent of the Verbs specification.  However, an iWARP implementation compliant to the RDMA Verbs specification will naturally satisfy all the expectations of the iSER protocol in the most efficient manner.
 
Q18.    Does iSCSI/iSER code running on an RNIC perform better than an offloaded iSCSI NIC?
The relative performance depends on the specific iSCSI/iSER/RNIC design versus the iSCSI NIC design.  The iSER specification allows iSCSI end nodes to take advantage of generic RDMA and data placement technologies, but equally well-performing iSCSI NIC-based designs are feasible.
 
Q19.    How does iSCSI/iSER with an RNIC position itself against an iSCSI NIC?
The iSER Datamover is envisioned to be implemented in an iSCSI software environment that exploits an RNIC, thereby offering improved performance and reduced overhead to the iSCSI software implementation.  iSCSI implementations realized in hardware, known as iSCSI NICs, do not require iSER-type functions: they already perform direct memory placement, offload TCP/IP processing, and have often been tailored and tuned to the needs of iSCSI to produce the best possible performance.  The iSER specification was intended to permit both types of implementations to coexist, and thereby bring the compelling features of iSCSI to all platforms with as low overhead as possible.

 

RDMA Consortium FAQs  April 29, 2003
 
Q1: Who are the members of the RDMA Consortium?
A1: The founding members are Adaptec, Broadcom, Cisco, Dell, EMC, Hewlett-Packard, IBM, Intel, Microsoft and Network Appliance. Additionally there are over 50 member companies of the RDMA Consortium, a list of which is available at RMDACmembersApr29.htm.
 
Q2: What is the RDMA Consortium announcing at this time?
A2: The RDMA Consortium is announcing completion of version 1.0 of the RDMA Verbs specification. The completed Verbs specification accompanies the RDMA wire-protocol suite, which was completed in October of 2002.  The specifications are suitable for first generation industry implementations of RDMA over TCP solutions and comprise the information required for RDMA hardware development.  The consortium continues to work on additional protocol specifications to broaden usage of the RDMA protocol suite which are expected to be completed in 3Q03.
 
Q3: What is the schedule and status of the specifications?
A3: Version 1.0 of the RDMA over TCP wire protocol specifications was completed in October of 2002 and was forwarded to the IETF where it is now an official work item of the RDDP workgroup. The RDMA Verbs specification now is complete and has been forwarded to the IETF for their consideration. Work on additional protocol specifications to broaden usage of the RDMA protocol suite is expected to be completed in 3Q03.
 
Q4: What are the specifications that have been completed to date?
A4: The RDMA Verbs specification and the suite of three specifications that describe the RDMA over TCP wire protocol: RDMA Protocol, DDP protocol  and MPA protocol. All four specifications can be retrieved from rdmaconsortium.org.

Q5: What is RDMA over TCP?
A5: Remote Direct Memory Access is the ability of one computer to directly place information in another computer's memory with minimal demands on memory bus bandwidth and CPU processing overhead, while preserving memory protection semantics. RDMA over TCP/IP defines the interoperable protocols to support RDMA operations over standard TCP/IP networks.
 
Q6: What does it take to become a RDMA Consortium member?
A6: Information on applying for membership is available at rdmaconsortium.org. The RDMA Consortium will accept members according to transparent criteria that are published on the website.
 
Q7: Why is RDMA over TCP important?
A7: Demand for networking bandwidth and increases in network speeds are growing faster than the processing power and memory bandwidth of the compute nodes that ultimately must process the networking traffic. This is especially true as the industry begins migrating to 10Gigabit Ethernet infrastructures. RDMA over TCP addresses these issues in two very important ways: first, much of the overhead of protocol processing can be moved to the Ethernet adapter and second, each incoming network packet has enough information to allow its data payload to be placed directly into the correct destination memory location, even when packets arrive out of order. The direct data placement property of RDMA eliminates intermediate memory buffering and copying and the associated demands on the memory and processor resources of the compute nodes, without requiring the addition of expensive buffer memory on the Ethernet adapter. Additionally, RDMA over TCP/IP uses the existing IP/Ethernet based network infrastructure.
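The direct-data-placement property described above can be sketched in a few lines: each arriving segment carries a buffer identifier and byte offset (standing in for DDP's steering tag and tagged offset), so its payload is written to its final location on arrival, even out of order. Field names and values below are illustrative, not wire-accurate.

```python
# Sketch of direct data placement: segments arrive out of order, yet each
# carries enough placement information to land in the application buffer
# directly, with no intermediate staging buffer or copy.
app_buffer = bytearray(12)
buffers = {0x1234: app_buffer}          # "steering tag" -> registered buffer

segments = [                            # note: out-of-order arrival
    (0x1234, 6, b" place"),
    (0x1234, 0, b"direct"),
]
for stag, offset, payload in segments:
    buf = buffers[stag]                 # look up the destination by tag
    buf[offset:offset + len(payload)] = payload  # placed directly, no copy

assert bytes(app_buffer) == b"direct place"
```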
 
Q8: What is the relationship of RDMA over TCP to InfiniBand and VI Architecture?
A8: All three architectures specify a form of RDMA and have strong similarities. The VI Architecture goal was to specify RDMA capabilities without specifying the underlying transport. The InfiniBand architecture improved upon the RDMA capabilities of VI and specifies an underlying transport and physical layer. RDMA over TCP/IP will specify an RDMA layer that will interoperate over a standard TCP/IP transport layer. RDMA over TCP does not specify a physical layer; it will work over Ethernet, wide area networks (WAN) or any other network where TCP/IP is used.
 
Q9: What is an RNIC?
A9: An RNIC is an RDMA enabled NIC (Network Interface Controller). The RNIC provides support for the RDMA over TCP protocol suite and can include a combination of TCP offload and RDMA functions in the same network adapter.
 
Q10: What is the significance of the RNIC Verbs specification?
A10: The RNIC Verbs specification provides a standard, semantic interface definition for the functions performed by an RNIC. It is expected that network adapter vendors will support the RDMA protocol using the semantics defined by the RNIC Verbs.  It is also expected that software vendors will interface to RNICs using the semantics defined by the RNIC Verbs specification. As a result, a standard, semantic RNIC Verbs definition should accelerate the adoption rate for RNICs.
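As a rough illustration of the verbs-style semantics described above - software posts work requests to queues and polls a completion queue - here is a toy model in Python. The class and method names are illustrative only, not the actual RNIC Verbs interface.

```python
from collections import deque

class QueuePair:
    """Toy model of a verbs-style queue pair plus completion queue."""
    def __init__(self):
        self.send_queue = deque()
        self.completions = deque()

    def post_send(self, wr_id, payload):
        # Verbs-style: post a work request; the call returns immediately.
        self.send_queue.append((wr_id, payload))

    def process(self):
        # Stand-in for the RNIC hardware draining the send queue.
        while self.send_queue:
            wr_id, _payload = self.send_queue.popleft()
            self.completions.append(("SEND_COMPLETE", wr_id))

    def poll_cq(self):
        # Verbs-style: poll the completion queue for finished work.
        return self.completions.popleft() if self.completions else None

qp = QueuePair()
qp.post_send(wr_id=1, payload=b"data")
qp.process()
completion = qp.poll_cq()  # ("SEND_COMPLETE", 1)
```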
 
Q11:  How are the RDMA over TCP/IP Verbs related to the InfiniBand Verbs?
A11: The architectural interfaces to InfiniBand and RDMA over TCP/IP are both defined by a "Verbs" interface specification. The Verbs specification developed by the RDMA Consortium has a large amount of semantic commonality with the InfiniBand Verbs. The Verbs specification developed by the RDMA Consortium also provides performance enhancements for some application environments.
 
Q12: Will RDMA/TCP require changes to applications to deliver customer benefit?
A12: No. It is expected that legacy applications will see significant performance advantages through standard interfaces such as sockets and storage stacks.
 
Q13: Will RDMA/TCP require changes to TCP or other Internet protocols?
A13: No. The RDMA over TCP specification takes as a requirement that it operate over standard TCP with no required changes to Internet infrastructure.
 
Q14: What is the relationship between TCP offload engines (TOE) and RDMA?
A14: A TCP offload engine is a specialized (intelligent) network adapter that moves much of the TCP/IP protocol processing overhead from the host CPU/OS to the network adapter. TCP offload engines relieve the main CPU of much of the TCP/IP protocol processing burden. However, the ability to perform zero copy of incoming data streams on a TOE is very dependent on the TOE design, the operating system's programming interface, and the application's communication model. In many cases, a TOE doesn't directly support zero copy of incoming data streams. RDMA directly supports a zero-copy model of incoming data over a wider range of application environments than a TOE. The combination of TCP offload and RDMA in the same network adapter is expected to provide an optimal architecture for high-speed networking with the lowest demands on both CPU and memory resources.
 
Q15: Will RDMA/TCP work over the Internet?
A15: Absolutely. RDMA is being layered on top of TCP to specifically work reliably over the Internet. RDMA does not change TCP's congestion-avoidance mechanisms or security architecture (IPSEC).
 
Q16: What is the status of RDMAC specifications in the IETF?
A16: Information on RDMA over TCP/IP wire-protocol specifications within the IETF is available at http://www.ietf.org/html.charters/rddp-charter.html.
 
Q17: What are the annual dues?
A17: There are no recurring annual dues. The founding members have committed financial resources sufficient for specification development and industry review.
