Enabling AI at the Network Edge of the Battlefield
The modern battlefield will soon be full of AI-enabled systems. Deep learning networks provide opportunities for everything from autonomous vehicles to smart weapons. The computing power these systems require is substantial, which is why many AI tasks have traditionally been relegated to the data center. However, data center processing alone isn't enough: modern system architects also require significant processing power at the network's edge.
This white paper describes how modern AI technology can benefit the warfighter as embedded solutions at the network edge are deployed for military and aerospace platforms. Edge computing brings data storage and computation closer to where it's needed. For the modern battlefield, this may be in a device, an unmanned vehicle, a manned vehicle, or with the warfighters themselves.
Analyzing the large amounts of data that complex machine learning and deep learning algorithms consume requires significant processing capability, and a general-purpose CPU is often not up to the task. That's why system architects turn to other types of processors, including field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), to perform many of the necessary calculations.
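To put that compute demand in perspective, here is a rough back-of-the-envelope sketch (the layer dimensions are illustrative placeholders, not figures from the white paper) of the multiply-accumulate (MAC) operations a single convolutional layer requires:

```python
# Rough MAC count for one 2-D convolutional layer.
# All dimensions below are illustrative, not taken from any specific network.
def conv_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """Multiply-accumulate operations for a standard 2-D convolution."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# Example: a 3x3 convolution with 256 input and output channels
# over a 56x56 feature map.
macs = conv_macs(56, 56, 256, 256, 3, 3)
print(f"{macs / 1e9:.1f} GMACs for one layer")  # prints: 1.8 GMACs for one layer
```

A deep network stacks dozens of such layers and runs them many times per second on streaming sensor data, which is why specialized parallel hardware is needed at the edge.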
FPGAs and GPUs both have their place in AI, each with its own pros and cons. As more computing horsepower moves out to edge devices, factors like performance per watt become critically important when architecting to minimize size, weight, and power (SWaP). GPUs, traditionally described as power-hungry, now score significantly better on performance per watt. This combination of raw performance and improved power efficiency is why general-purpose GPUs (GPGPUs) have taken center stage in AI development.
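As a simple illustration of this kind of SWaP trade study (the throughput and power figures below are hypothetical placeholders, not measured values for any real device), performance per watt can be compared across candidate processors like so:

```python
# Hypothetical trade study: rank candidate processors by performance per watt.
# TOPS and wattage figures are made up for illustration; use vendor
# datasheets and measured workloads for a real SWaP analysis.
candidates = {
    "gpu_module": {"tops": 32.0, "watts": 30.0},
    "fpga_card":  {"tops": 20.0, "watts": 25.0},
    "cpu_only":   {"tops": 1.5,  "watts": 45.0},
}

# Compute TOPS per watt for each candidate.
perf_per_watt = {name: spec["tops"] / spec["watts"]
                 for name, spec in candidates.items()}

for name, ppw in perf_per_watt.items():
    print(f"{name}: {ppw:.2f} TOPS/W")

best = max(perf_per_watt, key=perf_per_watt.get)
print(f"Best performance per watt: {best}")
```

In practice the comparison is done against the actual inference workload, since peak TOPS rarely translate directly into sustained throughput.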
It's not always as simple as selecting a GPU alone, however. In many cases, GPUs are complemented by other processors, such as general-purpose CPUs and dedicated machine learning engines. The architecture of NVIDIA's Jetson AGX Xavier system on module (SoM), for example, pairs a powerful GPU with deep learning accelerators, a vision accelerator, an eight-core Arm CPU, and multimedia engines. Each processing element is optimized for particular tasks, yet all work together to significantly increase the overall performance of a GPU-based solution. It's this class of performance that makes the Internet of Things (IoT) and AI at the network edge practical today.
Log in and download the white paper to learn more about:
- Machine learning vs. deep learning
- What it takes to process deep learning algorithms
- Why AI is needed on the battlefield