Data Transport for OpenVPX HPEC
The Perry Memo transformed Department of Defense (DoD) procurement of electronic computing equipment in 1994 by establishing the concept of Commercial-Off-The-Shelf (COTS) acquisition, helping to mold the COTS VME/VPX industry. Fast forward to today: OpenVPX builds on that foundation, providing an open-standard ecosystem for the development of next-generation rugged computer systems.
The following white paper introduces the basics of both the hardware and data transfer mechanisms for High-Performance Embedded Computing (HPEC) systems. We will discuss the fabric, middleware, and backplane technologies required for high-speed data transfer in OpenVPX platforms.
Intel has been a leader in data processing, delivering the performance required for HPEC-based systems. Its memory speeds, cache sizes, and vector processing are best-in-class, and Intel's Internet of Things (IoT) group provides the support and services to maintain our success with its guaranteed seven-year part availability.
Producing processors in industrial-grade temperature ranges and packaging them as Ball Grid Array (BGA) devices directly from the foundry also provides significant advantages for the rugged, harsh environments of the embedded space.
Accelerators also play a significant role in the processing performed by HPEC-class systems because they handle the "heavy lifting," allowing the processor to assume the role of data-traffic (I/O) manager. This division of labor between processors and accelerators is proven in HPC: the latest supercomputers deploy more GPUs than CPUs. The rise of deep learning, whose neural networks map naturally onto thousands of GPU cores, is another reason for the surge in the acceptance of GPUs as accelerators over the last few years.
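The processor/accelerator division of labor described above can be sketched conceptually. The Python snippet below is an illustrative analogy only, not vendor code: a thread pool stands in for a GPU or FPGA offload runtime, and the names `heavy_lifting` and `io_manager` are hypothetical, not part of any OpenVPX or accelerator API.

```python
# Conceptual sketch of the "CPU as I/O manager, accelerator does the
# heavy lifting" pattern. A thread pool stands in for an accelerator
# runtime; in a real HPEC system the kernels would run on a GPU or FPGA.
from concurrent.futures import ThreadPoolExecutor

def heavy_lifting(block):
    # Stand-in for an accelerator kernel (e.g. an FFT or matrix multiply).
    return sum(x * x for x in block)

def io_manager(blocks):
    # The host CPU only moves data blocks and collects results;
    # the compute runs on the "accelerator" workers.
    with ThreadPoolExecutor(max_workers=4) as accel:
        return list(accel.map(heavy_lifting, blocks))

results = io_manager([[1, 2], [3, 4]])
print(results)  # [5, 25]
```

The design point is that the host thread never computes; it only partitions, dispatches, and gathers, which is the role the processor assumes when accelerators carry the computational load.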
Log in and download the white paper to learn more about:
- CPUs, GPUs, FPGAs
- Control Plane
- Data Plane
- Expansion Plane
- Mellanox ConnectX-3
- PCI Express Switching
- Remote Direct Memory Access