Deploying HPC Tools into Embedded Software Development to Save Time and Money
Estimating and managing a software development effort is a combination of art and science. A program's cost and schedule estimates are based on the predicted lines of code and on the number of lines that can be designed, coded, and debugged in a mythical man-month. Lines of code per man-month can vary greatly depending on code reuse, complexity, level of documentation, and even the skill set of the developers. For a project to be successful, it must be on time, within budget, and deliver the features and functions as promised. According to the Standish Group's CHAOS report on 50,000 software projects, only 19% of the projects were successful; 52% were completed but came in over cost, over time, and/or missing some of the specified features and functions; and a whopping 29% were canceled. Given these statistics, the choice of software development tools becomes even more important in today's multi-processor environment. For years in the embedded defense community, developers have used VxWorks and its associated tools, and vendors of embedded defense hardware have also built their own tools. These tools had varying levels of success with single-core processors, and of course the traditional "printf" was, and continues to be, the debugging favorite of most developers.
With the new generation of multi-core Central Processing Units (CPUs), Graphics Processing Units (GPUs), and field-programmable gate arrays (FPGAs) as building blocks, embedded defense systems have become even more complicated to develop and debug. To take advantage of these embedded "supercomputers", the code must be executed efficiently in parallel across many nodes. Using serial coding techniques would be comparable to buying a Ferrari and driving it around in first gear. Though the data interface may be high-speed serial, once the data is captured, it is more efficient to process it with multiple independent and concurrent activities. One could ask: how much improvement can be achieved if the price is paid to develop parallel code? The answer comes from Amdahl's law, which predicts the theoretical maximum speedup for a program using multiple processors. Amdahl's law states that if P is the proportion of a system or program that can be made parallel, and 1 - P is the proportion that remains serial, then the maximum speedup that can be achieved using N processors is 1 / ((1 - P) + (P / N)). As N approaches infinity, the maximum speedup approaches 1 / (1 - P). This means concurrency must be identified in the algorithm and exploited by breaking the computation into tasks that can be divided among the processors. Given this transition in our market space toward parallel programming, how can we more intelligently manage, develop, and debug increasingly complex code? Traditional tools such as trace analysis, serial debuggers, and even our dear friend the "printf" fall short of this increased challenge.
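As a quick illustration of Amdahl's law, the sketch below (the function name and the 95%-parallel figure are ours, chosen only for illustration) computes the predicted speedup from the formula above. It shows the diminishing returns of adding processors: the serial fraction caps the achievable speedup at 1 / (1 - P).

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical maximum speedup per Amdahl's law.

    p: proportion of the program that can be made parallel (0 <= p <= 1)
    n: number of processors
    """
    return 1.0 / ((1.0 - p) + p / n)

# For a hypothetical program that is 95% parallelizable, each doubling
# of processor count buys less and less additional speedup:
for n in (2, 8, 64, 1024):
    print(f"{n:5d} processors -> speedup {amdahl_speedup(0.95, n):.2f}")

# The 5% serial portion bounds the speedup below 1 / (1 - 0.95) = 20,
# no matter how many processors are added.
```

This is why the paragraph above stresses identifying concurrency in the algorithm first: increasing P (the parallel fraction) raises the ceiling far more than simply adding processors does.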
Some vendors and customers in the embedded defense space have tried to develop parallel tools, an effort they have found to be far from trivial. Most of these efforts are supported by teams of fewer than ten people and, when adopted, are used by only a handful of developers. As a result, the products are not as full-featured or reliable as tools with a much wider user base, and using these immature tools could negatively impact your program's cost and schedule. In this paper, we explore the potential role of High-Performance Computing (HPC) tools in the embedded market, speeding delivery time while decreasing costs. We will look at the power of combining HPC tools such as cluster managers, debuggers and profilers, and mapping solutions.
Log in and download the white paper to learn more.