40 Gigabit Fabrics: The Bottleneck Killer Has Arrived
November 10, 2014 | BY: Steve Edwards
Many system designers find their applications underperforming due to interconnect bottlenecks. Today's leading-edge defense and aerospace systems need the fastest possible interconnects to support the large data sets typical of processing-intensive multiprocessor and HPEC systems. Relief is on the way: the latest commercial FPGAs, GPGPUs, and microprocessors, such as Intel's recently announced 4th Generation Core i7 processors, incorporate support for 40 Gigabit fabrics such as 40 Gigabit Ethernet and Quad Data Rate (QDR) InfiniBand, effectively doubling in-system I/O bandwidth on embedded COTS systems. With signaling rates doubled from 5 Gbaud to 10 Gbaud, 40 Gb fabrics are expected to provide SWaP-constrained embedded systems with 2x-to-2.5x the performance of today's preferred high-speed fabric, Serial RapidIO (SRIO) Gen 2.

The benefits of all this bandwidth are numerous and immediate. For example, in many applications this bandwidth growth spurt can enable system integrators to reduce their card count. Fewer cards mean a lower total cost of ownership, an increasingly critical issue these days.
Leveraging Open Standards
Meanwhile, VITA's standards body, the VSO, is putting the finishing touches on 40 Gb support for the OpenVPX standard. To take full advantage of 40 Gb in HPEC systems, Curtiss-Wright is also leveraging open source Remote Direct Memory Access (RDMA) work from the commercial High Performance Computing (HPC) market. With the OpenFabrics Alliance's OFED (OpenFabrics Enterprise Distribution) open source software, which includes support for 40 Gb hardware, we have brought RDMA capability to the MIL COTS market for the first time.
The RDMA Difference
In commercial HPC systems, RDMA provides memory-to-memory transfers, greatly reducing latency and processor overhead for network protocols such as 40 GbE, InfiniBand (IB), and SRIO. OFED provides a device driver layer that largely abstracts RDMA functions and greatly improves the performance of data transfers made by higher-level middleware such as MPI and uDAPL. Using OFED can make system integration for HPEC systems, which typically involve heterogeneous hardware and software elements, both simpler and more effective.
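To make the "memory-to-memory" idea concrete, the sketch below shows the first steps of any OFED verbs program: opening an RDMA device and registering a buffer so the adapter can DMA into it directly, bypassing the CPU on the data path. It assumes libibverbs (part of OFED) is installed and an RDMA-capable device is present; the buffer size and access flags are illustrative, and a real application would go on to create queue pairs and post work requests.

```c
/* Minimal OFED verbs sketch: open a device, register memory for RDMA.
   Build with: gcc rdma_sketch.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer with the adapter. Once registered, a remote
       peer that has been given the rkey can read or write this memory
       without involving the local CPU -- the core of RDMA's low
       latency and low processor overhead. */
    static char buf[4096];
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof buf,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           sizeof buf, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

These same verbs calls work unchanged over 40 GbE (RoCE) and InfiniBand hardware, which is what lets OFED abstract the fabric away from middleware like MPI.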
40 Gb is Here Now
You can get started today: Curtiss-Wright has designed a complete family of OpenVPX boards and backplanes to speed and ease the job of integrating 40 Gb interconnects into your new system. These boards support 40 GbE, IB, and PCIe Gen 3, and include SBCs, DSP engines, FPGA modules, GPGPUs, network switches, and backplanes. For example, our 4th Generation Core i7-based CHAMP-AV9 multiprocessor and VPX6-1958 Single Board Computer modules, CHAMP-FX4 Xilinx Virtex-7 FPGA board, and our extremely high performance CHAMP-WB-DRFM card all support 40 Gb fabrics and are shipping today.
And to help get you up to speed on HPEC computing, we've also put together several White Papers that you might find useful as you consider what bottleneck-free processing can do for your application:
- Understanding HPEC Computing: the Ten Axioms
- High-performance Element Processing Architecture for Open Standards Radar Systems
- High Performance, Rugged GP-GPUs Enable UAVs to Fly Faster/Higher While Mapping Terrain with Synthetic Aperture RADAR (SAR)