Using HPC tools to solve HPEC development challenges

The High-Performance Embedded Computing (HPEC) industry has turned to the same hardware building blocks (processors, GPUs, and fabrics) as the High-Performance Computing (HPC) industry. For example, Stampede, the supercomputer at the University of Texas at Austin, pairs dual Intel Xeon processors with NVIDIA Tesla GPUs and Mellanox FDR InfiniBand: the same hardware now appearing in embedded systems.

With more than 6,400 processors and 522,080 cores, Stampede draws a maximum of 4.5 megawatts. Managing a system of that scale requires a suite of tools, and those tools must also make debugging and optimizing application code easy and reliable. Over the past decade, the HPC industry has evolved a feature-rich set of software development tools, including math libraries, communications APIs, testing tools, and cluster managers, to utilize these massive systems effectively. No single tool solves every problem; instead, the industry has filled the void with an ecosystem of dependable tools that has allowed HPC to excel. Cluster managers, debuggers, and profilers work together to ease the setup and maintenance of system configurations and reduce the strain on developers.
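The math libraries mentioned above are one concrete benefit of that ecosystem. As an illustrative sketch (the white paper does not prescribe a specific library), NumPy's matrix product dispatches to an optimized BLAS implementation, the same class of tuned math library the HPC community maintains, and it computes the same result as a hand-rolled loop far faster:

```python
# Sketch: an HPC-style math library (BLAS, here reached via NumPy)
# versus a naive hand-written kernel. Both compute the same product;
# the library call is the path a tuned HPC math library provides.
import numpy as np

def naive_matmul(a, b):
    """Triple-loop reference implementation, for comparison only."""
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))

# The BLAS-backed product (a @ b) matches the naive reference.
assert np.allclose(naive_matmul(a, b), a @ b)
```

The point is not the arithmetic but the division of labor: application code stays simple while the library supplies the performance engineering.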

While the embedded market is not a mirror image of HPC, floating-point performance, throughput, latency, and standard software APIs are just a few of the concerns the two industries share. Given the similarity of the hardware, the maturity of the tools, and the larger installed user base, importing tools from the HPC community into the embedded market can reduce both cost and time to deployment.

Download the white paper, "Applying HPC Tools to the Embedded Market," to learn more about:

  • HPC applications for HPEC 
  • Cluster managers 
  • Debuggers 
  • Profilers 
  • Mapping 
  • A total solution