Rugged GPGPU processing cards for machine learning and artificial intelligence
For systems that require enhanced situational awareness or run deep learning frameworks for artificial intelligence (AI) applications, these highly engineered modules provide a field-proven hardware foundation. Answering the growing demand for AI and high-performance processing in deployed EW and ISR applications, our 3U VPX GPGPU modules deliver advanced capabilities on a highly rugged, SWaP-optimized board. These processing powerhouses leverage the latest NVIDIA GPGPU advancements for machine learning and AI. Equipped with NVIDIA CUDA and Tensor cores, our 3U VPX GPGPU boards offer multi-TFLOPS processing capability alongside high memory bandwidth for the most compute-intensive tasks.
| Product Name | Generation | Memory | Memory Bandwidth | Features | PCIe Configuration |
|---|---|---|---|---|---|
| VPX3-4935 GPU Processor Card, Aligned with the SOSA Technical Standard | NVIDIA Quadro Turing RTX5000E (3072 CUDA cores, 384 Tensor cores) | 16 GB GDDR6 | 448 GB/s | 4 video ports out (DP, DVI, or HDMI) | x16 Gen 3 |
| VPX3-4936 GPU Processor Card, Aligned with the SOSA Technical Standard | NVIDIA Quadro Turing RTX5000E (3072 CUDA cores, 384 Tensor cores) | 16 GB GDDR6 | 448 GB/s | 4 video ports out (DP, DVI, or HDMI) | x16 Gen 3 |
| VPX3-4933 GPU Processor with NVIDIA GP104/Pascal 5200 GPU | NVIDIA GP104/Pascal 5200 (2560 CUDA cores) | 16 GB GDDR5 | 243 GB/s | | x8 or x16 Gen 3 |
| VPX3-4925 GPU Processor with NVIDIA Quadro Turing TU106/RTX3000E | NVIDIA Quadro Turing TU106/RTX3000E (2304 CUDA cores, 288 Tensor cores) | 6 GB GDDR6 | 336 GB/s | 4 video ports out (DP, DVI, or HDMI) | x16 Gen 3 |
| VPX3-4924 3U VPX GPU Processor Card with NVIDIA Tesla Pascal P6 | NVIDIA Tesla Pascal P6 (2048 CUDA cores) | 16 GB GDDR5 | 192 GB/s | | x16 Gen 3 |
Reduce cost, risk, and time to market with COTS hardware
Our broad selection of open-architecture, commercial off-the-shelf (COTS) rugged embedded computing solutions processes data in real time to support mission-critical functions. Field-proven, highly engineered, and manufactured to stringent quality standards, Curtiss-Wright’s COTS boards leverage our extensive experience and expertise to reduce your program cost, development time, and overall risk.
The Role of Tensor Cores in Enabling AI and Machine Learning
Tensor cores are indispensable for performing the types of calculations needed for artificial intelligence (AI) and machine learning. As the role of AI and machine learning in defense applications grows, tensor cores are becoming increasingly important in deployed systems. In this white paper, you will discover how tensor cores are used in AI and machine learning, and ways to incorporate tensor cores into extremely rugged applications.
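The computation scheme that makes tensor cores fast, multiplying low-precision (FP16) matrices while accumulating results in FP32, can be emulated on any hardware. The NumPy sketch below is illustrative only (it runs on the CPU, not on tensor cores) and shows the accuracy trade-off that mixed precision introduces:

```python
import numpy as np

# Tensor cores multiply FP16 matrices and accumulate partial sums in FP32.
# Emulate that scheme: quantize the inputs to FP16, then do the
# multiply-accumulate in FP32, and compare against a full-FP32 reference.
rng = np.random.default_rng(0)
a32 = rng.standard_normal((128, 128)).astype(np.float32)
b32 = rng.standard_normal((128, 128)).astype(np.float32)

ref = a32 @ b32  # full-precision reference

# Round inputs to FP16 (the precision tensor cores consume), accumulate in FP32
mixed = a32.astype(np.float16).astype(np.float32) @ b32.astype(np.float16).astype(np.float32)

err = float(np.max(np.abs(mixed - ref)))  # small quantization error remains
```

For many inference and training workloads this small input-quantization error is acceptable, which is why FP16/Tensor-core execution can deliver several times the throughput of FP32 on the same silicon.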
How Can I Teach My Machine to Learn?
Read this white paper to learn about:
- Supervised, unsupervised, and semi-supervised approaches to machine learning
- Classification algorithms
- Regression analysis
- Dimensionality reduction
- Machine learning frameworks, including TensorFlow, Keras, PyTorch, MXNet and Gluon, and Caffe
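As a taste of the supervised-learning and regression-analysis topics listed above, here is a minimal sketch (not drawn from the white paper) that fits a linear model by gradient descent, the same optimization loop that frameworks such as TensorFlow and PyTorch automate at scale:

```python
import numpy as np

# Supervised learning in miniature: fit y ≈ w*x + b to labeled data
# by minimizing mean squared error with gradient descent.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(200)  # true w=2, b=1, plus noise

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2.0 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2.0 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b
# After training, (w, b) is close to the true (2.0, 1.0)
```

Deep learning frameworks generalize exactly this loop, replacing the hand-written gradients with automatic differentiation and running the tensor math on GPU hardware like the boards above.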