InfiniBand
The Linux SCSI Target Wiki
Mellanox InfiniBand SRP fabric module (Mellanox Technologies, Ltd.)

| | |
|---|---|
| Original author(s) | Vu Pham, Bart Van Assche, Nicholas Bellinger |
| Developer(s) | Mellanox Technologies, Ltd. |
| Initial release | March 18, 2012 |
| Stable release | 4.1.0 / June 20, 2012 |
| Preview release | 4.2.0-rc5 / June 28, 2012 |
| Development status | Production |
| Written in | C |
| Operating system | Linux |
| Type | Fabric module |
| License | GNU General Public License |
| Website | mellanox.com |
- See LIO for a complete overview of all fabric modules.
The InfiniBand fabric modules provide target support for various IB Host Channel Adapters (HCAs). LinuxIO supports iSER and SRP target mode operation on Mellanox HCAs.
Overview
InfiniBand is an industry-standard, channel-based, switched-fabric interconnect architecture for servers. It is used predominantly in high-performance computing (HPC) and has recently enjoyed increasing popularity in storage area networks (SANs). Its features include high throughput, low latency, quality of service and failover, and it is designed to be scalable.
The InfiniBand architecture specification defines a connection between processor nodes and high-performance I/O nodes such as storage devices. InfiniBand forms a superset of the Virtual Interface Architecture (VIA).
Hardware support
The following Mellanox InfiniBand HCAs are supported:
- Mellanox ConnectX-2 VPI PCIe Gen2 HCAs (x8 lanes), single/dual-port QDR 40 Gb/s
- Mellanox ConnectX-3 VPI PCIe Gen3 HCAs (x8 lanes), single/dual-port FDR 56 Gb/s
- Mellanox Connect-IB PCIe Gen3 HCAs (x16 lanes), single/dual-port FDR 56 Gb/s
LIO supports iSCSI Extensions for RDMA (iSER) and SCSI RDMA Protocol (SRP) target mode operation on these HCAs.
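As an illustration of how a block device could be exported in SRP target mode, the sketch below uses the rtslib Python library (see the RTSlib Reference Guide under External links). It is a minimal, hedged example: the rtslib_fb module and class names follow one rtslib variant and may differ between versions, and the GID-based target WWN and initiator ID are placeholders that must be replaced with values from the local HCA.

```python
# Minimal sketch: exporting /dev/sdb over SRP with the rtslib Python library.
# Assumptions: the srpt.ko fabric module is loaded, the HCA port is active,
# and the placeholder identifiers below are replaced with real local values.
from rtslib_fb import (FabricModule, Target, TPG, LUN, NodeACL, MappedLUN,
                       BlockStorageObject)

# Backstore: a plain block device managed by the LIO core (device is an example).
disk = BlockStorageObject("srp_disk0", dev="/dev/sdb")

# SRP fabric module (srpt.ko); SRP targets are named after the local IB port GID.
srpt = FabricModule("srpt")
target = Target(srpt, "0xfe800000000000000002c903000e8acd")  # placeholder port GID
tpg = TPG(target, 1)

# Export the backstore as LUN 0 and map it for one initiator port (placeholder ID).
lun = LUN(tpg, 0, disk)
acl = NodeACL(tpg, "0x00000000000000000002c903000e8ace")
MappedLUN(acl, 0, lun)
```

The same steps can also be performed interactively with the targetcli shell described in the LIO Admin Manual.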
Protocols
A brief overview of relevant and related InfiniBand protocols:
- Converged Enhanced Ethernet (CEE): A set of standards that allow enhanced communication over an Ethernet network. CEE is typically called Data Center Bridging (DCB).
- Data Center Bridging (DCB): A set of standards that allow enhanced communication over an Ethernet network. DCB is sometimes called Converged Enhanced Ethernet (CEE), or loosely "lossless" Ethernet.
- Fibre Channel over InfiniBand (FCoIB): The SCSI protocol is embedded into the Fibre Channel interface, which is in turn run as a virtual interface inside of InfiniBand. This does not leverage RDMA.
- InfiniBand over Ethernet (IBoE): A technology that makes high-bandwidth low-latency communication possible over DCB Ethernet networks. Typically called RDMA over Converged Enhanced Ethernet (RoCE).
- Internet Protocol over InfiniBand (IPoIB): This transport is accomplished by encapsulating IP packets in InfiniBand packets.
- Internet Wide Area RDMA Protocol (iWARP): A network protocol that tunnels RDMA packets over IP networks (typically Ethernet) rather than using enhanced network fabrics. iWARP is an Internet Engineering Task Force (IETF) update of the RDMA Consortium's RDMA over TCP standard.
- iSCSI Extensions for RDMA (iSER): A protocol model defined by the IETF that maps the iSCSI protocol directly over RDMA and is part of the "Data Mover" architecture (see the configuration sketch after this list).
  - Mellanox fabric module (under development)
- RDMA over Converged Ethernet (RoCE): A network protocol that allows RDMA over DCB ("lossless") Ethernet networks by running the IB transport protocol using Ethernet frames. RoCE is a link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE packets consist of standard Ethernet frames with an IEEE assigned Ethertype, a GRH, unmodified IB transport headers and payload.[1] RoCE is sometimes also called InfiniBand over Ethernet (IBoE).
- Remote Direct Memory Access (RDMA): Peer-to-peer remote direct memory-to-memory access, very low latency/low overhead, high operation rate, high bandwidth.
- SCSI RDMA Protocol (SRP): SRP defines a SCSI mapping onto the InfiniBand architecture and/or functionally similar cluster protocols, and generally allows higher throughput and lower latency than TCP/IP-based communication. Defined by ANSI T10; the latest draft is rev. 16a (6/3/02), never ratified as a formal standard.
  - Mellanox fabric module (srpt.ko, released)
- Sockets Direct Protocol (SDP): A transaction protocol enabling emulation of sockets semantics over RDMA. This allows applications to gain the performance benefits of RDMA without changing application code that relies on sockets. Version 1.0 of the SDP specification was publicly released by the RDMA Consortium in October 2003.
- Virtual Interface Architecture (VIA): Permits zero-copy transmission over TCP and SCTP.
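For iSER, whose Mellanox fabric module is listed above as under development, target configuration is expected to follow the normal iSCSI path, with RDMA enabled per network portal. The sketch below shows that idea with the same rtslib Python library; the rtslib_fb names and the per-portal iser attribute are assumptions and may not be usable until the iSER module ships.

```python
# Minimal sketch: an iSCSI target whose network portal is switched to iSER (RDMA).
# Assumptions: kernel iSER target support is present, the portal address sits on
# an RDMA-capable interface (IPoIB or RoCE), and rtslib exposes the "iser" flag.
from rtslib_fb import (FabricModule, Target, TPG, LUN, NetworkPortal,
                       BlockStorageObject)

disk = BlockStorageObject("iser_disk0", dev="/dev/sdc")

iscsi = FabricModule("iscsi")
target = Target(iscsi)            # an IQN is generated automatically
tpg = TPG(target, 1)
tpg.enable = True

LUN(tpg, 0, disk)

portal = NetworkPortal(tpg, "192.168.100.1", 3260)
portal.iser = True                # carry the iSCSI session over RDMA (iSER)
```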
Glossary
- Host Channel Adapter (HCA): provides the mechanism to connect InfiniBand devices to processors and memory.
- Target Channel Adapter (TCA): endpoint of an InfiniBand fabric, typically provides additional I/O functionality.
- Virtual Lane (VL): supports multiple logical channels on the same physical link; a VL is the actual logical lane used on a given point-to-point link.
RFCs
- RFC 4297: Remote Direct Memory Access (RDMA) over IP Problem Statement
- RFC 4390: Dynamic Host Configuration Protocol (DHCP) over InfiniBand
- RFC 4391: Transmission of IP over InfiniBand (IPoIB)
- RFC 4392: IP over InfiniBand (IPoIB) Architecture
- RFC 4755: IP over InfiniBand: Connected Mode
- RFC 5040: A Remote Direct Memory Access Protocol Specification
- RFC 5045: Applicability of Remote Direct Memory Access Protocol (RDMA) and Direct Data Placement Protocol (DDP)
- RFC 5046: Internet Small Computer System Interface (iSCSI) Extensions for Remote Direct Memory Access (RDMA)
- RFC 5047: DA: Datamover Architecture for the Internet Small Computer System Interface (iSCSI)
See also
Notes
- ↑ Tom Talpey, et al. (8/26/2009). "Remote Direct Memory Access over the Converged Enhanced Ethernet Fabric: Evaluating the Options". IEEE Hot Interconnects 17.
Wikipedia entries
- InfiniBand (IB)
- Internet Wide Area RDMA Protocol (iWARP)
- iSCSI Extensions for RDMA (iSER)
- Remote direct memory access (RDMA)
- SCSI RDMA Protocol (SRP)
- Virtual Interface Architecture (VIA)
External links
- LIO Admin Manual
- RTSlib Reference Guide [HTML] [PDF]
- Ann Silverthorn (11/1/2006). "InfiniBand edging into storage market". dentistryiq.com.
- Ed Koehler (2/20/2010). "Infiniband and it’s unique potential for Storage and Business Continuity". edkoehler.wordpress.com.
- Odysseas Pentakalos (02/04/2002). "An Introduction to the InfiniBand Architecture". oreillynet.com.
- InfiniBand Wikipedia entry
- The InfiniBand Trade Association homepage
- OpenFabrics Alliance
- Mellanox website
- T10 home page