GPU Cards

Powerful Parallel GPU Co-Processing is the Cornerstone of Machine Learning

For applications requiring massive parallel compute capability, such as deep learning frameworks for AI applications, our highly engineered GPU co-processing engines provide a field-proven hardware foundation. Curtiss-Wright GPGPU co-processing engines leverage the latest NVIDIA Tensor Core technology for machine learning and are a critical component of the high-performance embedded computing (HPEC) ecosystem that delivers data center capability at the tactical edge.

3U VPX GPU Cards
Equipped with NVIDIA CUDA and Tensor machine learning cores, our 3U VPX GPU boards offer teraflops (TFLOPS) of processing capability alongside high memory bandwidth for the most compute-intensive tasks.
6U VPX GPU Modules
Answering the growing demand for artificial intelligence and high-performance processing in deployed EW and ISR applications, our 6U VPX GPU modules are designed to deliver advanced capabilities on a highly rugged board.
XMC GPU Modules
Delivering powerful capability in a minimal footprint, these mezzanine cards add GPU processing without occupying an additional system slot.

Reduce Cost, Risk, and Time to Market With COTS Hardware

Our broad selection of open-architecture, commercial off-the-shelf (COTS) rugged embedded computing solutions process data in real-time to support mission-critical functions. Field-proven, highly engineered, and manufactured to stringent quality standards, Curtiss-Wright’s COTS boards leverage our extensive experience and expertise to reduce your program cost, development time, and overall risk.

How Can I Teach My Machine to Learn?

This white paper examines supervised, unsupervised, and semi-supervised approaches to machine learning, as well as their accuracy and trade-offs.
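To make the distinction concrete, here is a toy sketch of the three paradigms the white paper compares. This is illustrative code only, not Curtiss-Wright software: it assumes a simple one-dimensional, centroid-based classifier, with the `labeled` and `unlabeled` datasets invented for the example.

```python
# Toy 1-D illustration (hypothetical data) of supervised, unsupervised,
# and semi-supervised learning using nearest-centroid classification.

def nearest(x, centroids):
    """Return the index of the centroid closest to x."""
    return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))

# Labeled data: (feature, class), where 0 = "low" and 1 = "high".
labeled = [(1.0, 0), (1.2, 0), (8.0, 1), (8.5, 1)]
unlabeled = [0.9, 1.1, 7.8, 8.2]  # features only, no labels

# Supervised: fit class centroids from labeled examples alone.
sup = [sum(x for x, y in labeled if y == c) / sum(1 for _, y in labeled if y == c)
       for c in (0, 1)]

# Unsupervised: 1-D k-means on the unlabeled pool (labels never used).
cents = [unlabeled[0], unlabeled[-1]]  # naive initialization
for _ in range(10):
    groups = [[], []]
    for x in unlabeled:
        groups[nearest(x, cents)].append(x)
    cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]

# Semi-supervised: pseudo-label the unlabeled points with the supervised
# model, then refit the centroids on labeled + pseudo-labeled data.
pseudo = [(x, nearest(x, sup)) for x in unlabeled]
both = labeled + pseudo
semi = [sum(x for x, y in both if y == c) / sum(1 for _, y in both if y == c)
        for c in (0, 1)]

print(nearest(2.0, semi))  # -> 0: a new point falls in the "low" cluster
```

The trade-off the white paper discusses shows up even here: the supervised model needs every point labeled, the unsupervised one recovers structure but not class names, and the semi-supervised one stretches a few labels across a larger pool at the risk of propagating wrong pseudo-labels.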
