Avoiding Bottlenecks, Snail Threads, and Pitfalls in RADAR Software
October 05, 2016
Facing the specter of parts obsolescence, antiquated computer architecture, and closed proprietary systems, the customer wanted a technology refresh that would facilitate the possible retrofit of several existing radar systems.
For both the upgraded systems and new designs going forward, the customer wanted open, portable technologies that would improve performance, increase reuse, and lower costs. They believed that by leveraging the development infrastructure, technology roadmap, and investments of standardized Commercial Off-the-Shelf (COTS) vendors, they could achieve their hardware and software objectives.
The challenge was to design an Application Ready Processor (ARP) consisting of processing hardware, an operating system, and processor middleware for a pod-class radar system using available off-the-shelf hardware. The ARP was required to demonstrate performance against a set of Synthetic Aperture Radar (SAR) and Ground Moving Target Indicator (GMTI) benchmarks while meeting specified size, weight, and power (SWaP) constraints.
In addition to characterizing the currently available hardware, the customer wanted to quantify the processing gains expected from the next generation of hardware in the near term. As part of the solution, the benchmarks could be optimized to improve performance, but the accuracy of the optimized code had to pass verification.
The results of the study ultimately led to the recommendation of a radar processing system. The OpenHPEC tool suite identified pulse compression as the major bottleneck in GMTI and uncovered the root cause: the pulse compression function was consuming approximately 50% of the compute timeline.
Profiling further revealed that the forward and reverse FFTs each took about 11% of the compute time, while the convolutions for the FIR filter consumed 45%. By optimizing these sections of code and resolving the issues highlighted by the profiler, the overall processing times across the four code sections improved by an average of 60%.
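The forward FFT, spectrum-domain filtering, and inverse FFT described above form the classic fast-convolution structure of pulse compression. As a minimal sketch only (NumPy assumed; function and variable names are illustrative and not taken from the actual ARP code), matched filtering of a received pulse against a reference chirp looks like this:

```python
import numpy as np

def pulse_compress(rx, ref):
    """Pulse compression as fast convolution: forward FFT of the
    received data, multiply by the conjugate reference spectrum
    (the matched filter), then inverse FFT back to the time domain."""
    n = len(rx) + len(ref) - 1           # full correlation length
    nfft = 1 << (n - 1).bit_length()     # next power of two for FFT speed
    RX = np.fft.fft(rx, nfft)            # forward FFT
    REF = np.fft.fft(ref, nfft)          # reference spectrum
    out = np.fft.ifft(RX * REF.conj())   # reverse (inverse) FFT
    return out[:n]

# A linear-FM chirp buried in a longer record compresses to a sharp
# peak at the delay where the reference aligns with the echo.
t = np.linspace(0.0, 1.0, 256)
ref = np.exp(1j * np.pi * 50.0 * t**2)              # reference chirp
rx = np.concatenate([np.zeros(100), ref, np.zeros(100)])
peak = int(np.argmax(np.abs(pulse_compress(rx, ref))))  # peak at lag 100
```

Replacing a direct time-domain FIR convolution with this FFT-based form reduces the per-pulse cost from O(N·M) to O(N log N), which is the usual first step when convolution dominates a pulse-compression timeline as it did here.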
The hard real-time processing portion of the SAR scaled almost perfectly as the number of nodes increased. Profiling the SAR code pinpointed four sections as the top priorities for optimization.
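Near-perfect scaling is what one expects when the work divides into independent blocks with little communication, as SAR range processing does. A minimal sketch of that scatter/process/gather pattern (NumPy assumed; the per-node stage here is a stand-in range FFT, not the actual SAR kernel):

```python
import numpy as np

def process_lines(lines):
    """Stand-in for per-node SAR range processing (here, a range FFT).
    Each range line is independent of the others."""
    return np.fft.fft(lines, axis=1)

def scatter_process_gather(data, n_nodes):
    """Split range lines into contiguous blocks across n_nodes,
    process each block independently, then reassemble the image.
    Because the blocks share no data, the work scales with node count."""
    chunks = np.array_split(data, n_nodes, axis=0)
    return np.vstack([process_lines(c) for c in chunks])

rng = np.random.default_rng(0)
data = rng.standard_normal((64, 128))       # 64 range lines
serial = process_lines(data)                 # one-node result
parallel = scatter_process_gather(data, 8)   # eight-node partitioning
ok = bool(np.allclose(serial, parallel))     # identical output either way
```

In a real deployment each chunk would go to a separate compute node (e.g. via MPI) rather than a Python loop, but the partitioning logic, and the reason the speedup stays near-linear, is the same.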
Let us show you how we can save you time, schedule, and frustration.
To learn more about the solution and results, download the full case study here.