Rugged GPGPU processing cards featuring the latest NVIDIA embedded GPU technology
For systems that require compute-intensive capability or use deep learning frameworks for AI applications, these highly engineered modules provide a field-proven hardware foundation. These processing powerhouses leverage the latest GPGPU advancements from NVIDIA for machine learning and artificial intelligence applications. Equipped with NVIDIA Tensor Cores for machine learning, our 6U VPX GPGPU boards deliver TFLOPS of processing capability alongside maximum memory bandwidth for the most compute-intensive tasks. These rugged, embedded GPGPU modules are ideal for high-performance embedded computing (HPEC) systems that require supercomputing performance through distributed processing, I/O, and low-latency system fabrics.
| Product Name | GPU | # of GPUs | Memory | Memory Bandwidth | Features | PCIe Configuration |
| --- | --- | --- | --- | --- | --- | --- |
| VPX6-4955 Dual NVIDIA Quadro Turing TU104/RTX5000E GPGPU Module | NVIDIA Quadro Turing TU104/RTX5000E (3072 CUDA cores, 384 Tensor cores) | 2 | 2 x 16 GB GDDR6 | 448 GB/s | 4 video outputs (DP, DVI, or HDMI) | x16 Gen 3 |
| VPX6-4953 GPGPU Processor with Dual NVIDIA GP104/Pascal 5200 GPUs | NVIDIA Pascal Quadro P5200 (2560 CUDA cores) | 2 | 2 x 16 GB GDDR5 | 243 GB/s | 8 independent DisplayPort++ 1.4 video outputs | x16 Gen 3 |
| VPX6-4944 6U VPX GPGPU Processor Card with Dual NVIDIA Tesla Pascal P6 | NVIDIA Pascal Tesla P6 | 2 | 32 GB GDDR5 | 192 GB/s | | x16 Gen 3 |
Reduce cost, risk, and time to market with COTS hardware
Our broad selection of open-architecture, commercial off-the-shelf (COTS) rugged embedded computing solutions processes data in real time to support mission-critical functions. Field-proven, highly engineered, and manufactured to stringent quality standards, Curtiss-Wright’s COTS boards leverage our extensive experience and expertise to reduce your program cost, development time, and overall risk.
The Role of Tensor Cores in Enabling AI and Machine Learning
Tensor cores are indispensable for performing the types of calculations needed for artificial intelligence (AI) and machine learning. As AI and machine learning play a growing role in defense applications, tensor cores are becoming critical for defense systems. In this white paper, you will discover how tensor cores are used in AI and machine learning, and how to incorporate them into extremely rugged applications.
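To make the tensor-core operation concrete: each tensor core performs a fused matrix multiply-accumulate, D = A × B + C, on small tiles, with half-precision (FP16) inputs and single-precision (FP32) accumulation. The sketch below is not vendor code; it simply mimics that numeric pattern on the CPU with NumPy (the 4x4 tile size and the FP16-in/FP32-accumulate scheme match the operation introduced on Volta-class GPUs).

```python
import numpy as np

def tensor_core_style_mma(a_fp16, b_fp16, c_fp32):
    """Illustrative sketch of a tensor-core tile operation:
    multiply half-precision inputs, accumulating in single precision."""
    # Widen the FP16 operands so the products and sums run at FP32,
    # as tensor cores do internally.
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input tile A
b = rng.standard_normal((4, 4)).astype(np.float16)  # FP16 input tile B
c = np.zeros((4, 4), dtype=np.float32)              # FP32 accumulator C

d = tensor_core_style_mma(a, b, c)
print(d.dtype)  # float32: FP16 products accumulated at full FP32 precision
```

Keeping the accumulation in FP32 is what lets mixed-precision training retain accuracy while the multiplications run at the much higher FP16 throughput the tensor cores provide.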
Enabling AI at the Network Edge of the Battlefield
This white paper describes how modern AI technology can benefit the warfighter as embedded solutions at the network edge are deployed for military and aerospace platforms. Edge computing brings data storage and computation closer to where it's needed. For the modern battlefield, this may be in a device, an unmanned vehicle, a manned vehicle, or with the warfighters themselves.