

Powerful Parallel GPU Co-Processing is the Cornerstone of Machine Learning
For applications requiring massive parallel compute capability, such as deep learning frameworks for AI applications, our highly engineered GPU co-processing engines provide a field-proven hardware foundation. Curtiss-Wright GPGPU co-processing engines leverage the latest NVIDIA Tensor Core technology for machine learning and are a critical component of the high-performance embedded computing (HPEC) ecosystem that delivers data center capability at the tactical edge.
Reduce Cost, Risk, and Time to Market With COTS Hardware
Our broad selection of open-architecture, commercial off-the-shelf (COTS) rugged embedded computing solutions processes data in real time to support mission-critical functions. Field-proven, highly engineered, and manufactured to stringent quality standards, Curtiss-Wright's COTS boards leverage our extensive experience and expertise to reduce your program cost, development time, and overall risk.
How Can I Teach My Machine to Learn?
This white paper examines supervised, unsupervised, and semi-supervised approaches to machine learning, as well as their accuracy and trade-offs.
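To make the distinction between these approaches concrete, here is a minimal sketch in pure Python on toy 1-D data. In the supervised case, labeled examples guide the model; in the unsupervised case, structure is discovered from unlabeled data alone. The data, function names, and parameters below are illustrative assumptions, not drawn from the white paper itself.

```python
def fit_line(xs, ys):
    """Supervised learning: fit y = slope*x + intercept from
    labeled (x, y) pairs via ordinary least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def kmeans_1d(points, k=2, iters=20):
    """Unsupervised learning: group unlabeled points into k
    clusters (simple 1-D k-means)."""
    centers = sorted(points)[:k]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# Supervised: the labels ys tell the model the target relationship.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])

# Unsupervised: no labels; the two groups emerge from the data.
centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.3, 8.7])
```

Semi-supervised methods sit between these two: a small labeled set steers the model while a larger unlabeled set refines it, trading labeling cost against accuracy.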