White Papers

Enabling AI at the Network Edge of the Battlefield

January 08, 2021 | BY: Mike Southworth

Download PDF

The modern battlefield will soon be full of AI-enabled systems. Deep learning networks provide opportunities for everything from autonomous vehicles to smart weapons. The computing power required for such systems is substantial, which is why many AI tasks have traditionally been relegated to the data center. However, data center processing alone isn't enough. Modern system architects require significant processing power at the network's edge.

This white paper describes how modern AI technology can benefit the warfighter as embedded solutions at the network edge are deployed for military and aerospace platforms. Edge computing brings data storage and computation closer to where it's needed. For the modern battlefield, this may be in a device, an unmanned vehicle, a manned vehicle, or with the warfighters themselves.

Analyzing the large amounts of data required by complex machine learning and deep learning algorithms demands significant processing capability, often more than a general-purpose CPU can deliver. That's why system architects turn to other types of processors, including field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), to perform many of the necessary calculations.

FPGAs and GPUs both have their place in AI, each with its own pros and cons. As more of the computing horsepower moves out to edge devices, factors like performance per watt become critically important when architecting to minimize size, weight, and power (SWaP). GPUs, traditionally described as power hungry, now score significantly better on performance per watt. This boost in performance and power efficiency is why general-purpose GPUs (GPGPUs) have come to take center stage in AI development.
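As a minimal sketch of the performance-per-watt trade study described above, the snippet below compares hypothetical accelerator candidates by deep-learning throughput (TOPS) divided by power draw. The device names and figures are illustrative assumptions, not vendor specifications.

```python
# Illustrative SWaP trade study: rank candidate processors by TOPS per watt.
# All names and numbers below are hypothetical, not real device specs.

def perf_per_watt(tops: float, watts: float) -> float:
    """Deep-learning throughput (TOPS) divided by power draw (W)."""
    return tops / watts

# Hypothetical edge-class candidates for a SWaP-constrained platform
candidates = {
    "gpu_module_a": {"tops": 32.0, "watts": 30.0},
    "fpga_card_b":  {"tops": 10.0, "watts": 20.0},
}

# Pick the candidate with the best efficiency, not just the highest peak TOPS
best = max(candidates, key=lambda name: perf_per_watt(**candidates[name]))

for name, spec in candidates.items():
    print(f"{name}: {perf_per_watt(**spec):.2f} TOPS/W")
print("best fit:", best)
```

In a real trade study the ratio would be weighed alongside thermal limits, ruggedization, and the mix of workloads the platform must run.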

It's not always as simple as selecting a GPU alone, however. In many cases, GPUs are complemented by other processors, such as general-purpose CPUs and dedicated machine learning engines. The architecture for NVIDIA's Jetson AGX Xavier system on module (SoM), for example, pairs a powerful GPU with deep learning accelerators, a vision accelerator, an eight-core Arm CPU, and multimedia engines. Each processing element is optimized for particular tasks, and together they significantly increase the overall performance of a GPU-based solution. It's this class of performance that makes the Internet of Things (IoT) and AI at the network edge practical today.
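The heterogeneous division of labor described above can be sketched as a simple dispatch table that routes each pipeline stage to the engine best suited to it. The engine names and stage-to-engine mapping below are illustrative assumptions for demonstration, not NVIDIA's actual software interface.

```python
# Illustrative sketch of heterogeneous task dispatch on an SoM that pairs a
# GPU with dedicated accelerators. Engine names and the mapping are assumed.
TASK_TO_ENGINE = {
    "video_decode":  "multimedia_engine",          # hardware codec block
    "image_preproc": "vision_accelerator",          # resize, color convert
    "cnn_inference": "deep_learning_accelerator",   # fixed-function DL engine
    "custom_layers": "gpu",                         # flexible parallel compute
    "mission_logic": "cpu",                         # control and I/O
}

def dispatch(pipeline):
    """Map each pipeline stage to its best-fit engine (CPU as the fallback)."""
    return [(task, TASK_TO_ENGINE.get(task, "cpu")) for task in pipeline]

plan = dispatch(["video_decode", "image_preproc", "cnn_inference", "mission_logic"])
for task, engine in plan:
    print(f"{task} -> {engine}")
```

The design point this illustrates: offloading fixed workloads to dedicated engines frees the GPU and CPU for the tasks that actually need their flexibility, which is where the SWaP win comes from.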

Download the white paper to learn more about:

  • Machine learning vs deep learning
  • What it takes to process deep learning algorithms
  • Why AI is needed on the battlefield

 


Author’s Biography

Mike Southworth

Senior Product Manager

Mike Southworth serves as Senior Product Manager for Curtiss-Wright Defense Solutions where he is responsible for the small form-factor rugged mission computers and Ethernet networking subsystem product line targeting Size, Weight, and Power (SWaP)-constrained military and aerospace applications. Southworth has more than 15 years of experience in technical product management and marketing communications leadership roles. Mike holds an MBA from the University of Utah and a Bachelor of Arts in Public Relations from Brigham Young University.
