Trusted Computing Article: Application Development, Testing, and Analysis for Optimal Security

June 27, 2018

Published in Military & Aerospace Electronics

Application software in military systems is what actually gets work done and enables warfighters to carry out their missions, so it's essential that this code is trusted and secure. All other hardware and software are designed only to start the application software securely; once it starts, application software relies on the system's fundamental security building blocks, but requires special attention to ensure it functions as intended.

This can be easier said than done, however, because typically only a minuscule number of system developers ever has a chance to look at the application code. Most application software is custom-built to execute a specific mission or run a particular algorithm, so far fewer software engineers will see it than would see open-source or even most commercial software.

This can result in undiscovered vulnerabilities, made worse because opportunities to review and update application code typically are few and far between. Application code in military systems is geared to a particular specification; once tested, the system often is deployed and has much less opportunity for re-testing than a general-purpose system would.

Complicating matters is the narrow technology refresh window of deployed systems. Limitations on time, budgets, and mission requirements can make it nearly impossible to update application software once it's in the field. Even if users discover code issues or security vulnerabilities, the cost of bringing a deployed system back for an update is often prohibitive.

On the other hand, it takes far less time and money to find, fix, and test software problems prior to deployment, so it's imperative for system developers to make the right decisions about application code from the very beginning.

Application software can fall into several categories: libraries, middleware, and custom-built code to perform specific functions. Custom-built middleware requires special scrutiny because it often is widely reused. Middleware provides the glue that holds libraries and applications together, and any middleware vulnerability could make the entire system susceptible to malicious cyber attack.

It's preferable for software engineers to organize systems so that system functions and inputs naturally fall into related buckets. It's important to allocate applications correctly between user space and kernel space. Designers should keep the amount of code running in kernel space to a minimum to reduce the impact of security vulnerabilities, since kernel space executes at a high privilege level, with broad access to system resources. It also behooves the developer to group related functions close together so they don't have to reach out to other parts of the application.

Moving large amounts of data also can create security concerns. Large data sets tend to move in the clear because it’s inefficient to encrypt a large amount of data moving at high speeds. Some systems, moreover, must share large pieces of data, especially when separate executable portions of an algorithm are located in multiple places; it's hard to verify and define this data flow in a secure manner.

A better solution is to divide the software so that the application using the algorithm treats it as a black box, with no knowledge of its internal workings. Applications can instruct these black boxes to run specific portions of the algorithm and receive a concise response in return. This provides a much better-defined information flow between applications. The data flow is easy to verify because the system can validate the information it sends and receives. This approach also helps encapsulate information better to enable encryption and authentication.
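
As a rough illustration, the sketch below shows what such a black-box interface might look like in C: the calling application sees only an opaque handle and bounded request/response messages. All of the type and function names are hypothetical, not drawn from any particular program.

```c
/* Hypothetical black-box algorithm interface: the caller sees only an
 * opaque handle and fixed-size request/response messages, never the
 * algorithm's internal state. */
#include <stddef.h>
#include <stdint.h>

typedef struct algo_ctx algo_ctx_t;   /* opaque: internals hidden from callers */

typedef struct {
    uint32_t command;                  /* which portion of the algorithm to run */
    uint32_t length;                   /* valid bytes in payload */
    uint8_t  payload[256];             /* bounded input, easy to validate */
} algo_request_t;

typedef struct {
    int32_t  status;                   /* success or a well-defined error code */
    uint32_t length;
    uint8_t  result[256];              /* bounded, concise response */
} algo_response_t;

/* The only entry points the application ever sees. */
algo_ctx_t *algo_open(void);
int         algo_execute(algo_ctx_t *ctx,
                         const algo_request_t *req,
                         algo_response_t *resp);
void        algo_close(algo_ctx_t *ctx);
```

Because every message is bounded and typed, the interface is straightforward to validate, log, and later wrap in encryption and authentication.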

Systems designers should define the information flow logically in terms of security boundaries. It’s best to minimize the number and complexity of logical interconnections because each connection is a possible point of infiltration. Designers must examine, lock down, test, and verify each connection, as well as consider a secure messaging mechanism that uses authentication to make sure the data comes from a trusted source.
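
As one hedged example, a receive-side check along the following lines, here using OpenSSL's HMAC-SHA-256, can confirm that a message was produced by a holder of the shared key; key distribution and the surrounding protocol are deliberately out of scope.

```c
/* Minimal sketch of receive-side message authentication with OpenSSL's
 * HMAC-SHA-256. Assumes the sender appends a 32-byte tag to each message
 * and that both ends share a pre-placed key; key management is out of scope. */
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/crypto.h>   /* CRYPTO_memcmp */

#define TAG_LEN 32

/* Returns 1 if 'tag' was produced by a holder of 'key', 0 otherwise. */
int message_is_authentic(const unsigned char *key, size_t key_len,
                         const unsigned char *msg, size_t msg_len,
                         const unsigned char *tag)
{
    unsigned char expected[TAG_LEN];
    unsigned int expected_len = 0;

    if (HMAC(EVP_sha256(), key, (int)key_len, msg, msg_len,
             expected, &expected_len) == NULL || expected_len != TAG_LEN)
        return 0;

    /* Constant-time comparison avoids leaking tag bytes through timing. */
    return CRYPTO_memcmp(expected, tag, TAG_LEN) == 0;
}
```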

Secure coding practices are an important part of designing-in application security from the beginning. Using standard secure coding practices can minimize security vulnerabilities from programmer errors. The best-known example of such an industry standard is the SEI CERT coding standard. Others include MISRA, DO-178C, IEC 61508, and ISO 26262.

Using secure coding practices can greatly reduce the likelihood of unintentional errors. A smart approach is to train all application programmers to know and follow the rules and to submit to peer reviews. Automation tools also are available to scan code and verify adherence to the rules. Being consistent with coding rules also enables programmers to move between projects without introducing unnecessary security issues.
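
The short before-and-after sketch below is illustrative of the kind of rule these standards enforce, in the spirit of CERT C's string-handling guidance: bound every copy and check the result rather than trusting input length.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LEN 32

/* Risky: unbounded copy; a long 'input' overflows 'name' (the kind of
 * defect CERT STR31-C-style rules and most MISRA/CERT checkers flag). */
void set_name_unsafe(char *name, const char *input)
{
    strcpy(name, input);
}

/* Safer: bounded copy with explicit truncation detection and a checked
 * return value, in the spirit of the coding standards above. */
int set_name_safe(char name[NAME_LEN], const char *input)
{
    int n = snprintf(name, NAME_LEN, "%s", input);
    return (n >= 0 && n < NAME_LEN) ? 0 : -1;   /* -1: input too long */
}
```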

Funding constraints can lead to program cutbacks, and when this happens, program managers might be tempted to eliminate security testing. Doing so, however, can introduce critical gaps and vulnerabilities. It’s important to budget for testing at the program’s front end.

One approach for analyzing how application components work together is static code analysis, which uses tools to check the code for any potential issues. Software engineers often can use the same plug-in tools they use to verify secure coding standards compliance to perform static code analysis. Examples of these tools include Coverity, cppcheck, Klocwork, lint, Parasoft, and Understand. Some of these tools work on several languages, some are language-specific, some are commercial, and some are available as free open-source software.
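
For illustration, the fragment below contains the sort of defects a static analyzer typically flags without ever running the code; the exact diagnostics vary by tool.

```c
#include <stdio.h>
#include <stdlib.h>

/* Typical defects a static analyzer reports from the source alone:
 * the fopen() result is never checked, and the early return leaks 'buf'.
 * Example invocation (cppcheck):  cppcheck --enable=all report.c */
int dump_report(const char *path)
{
    char *buf = malloc(1024);
    FILE *f = fopen(path, "r");          /* may be NULL: possible NULL dereference */

    if (fgets(buf, 1024, f) == NULL) {
        fclose(f);
        return -1;                        /* 'buf' is never freed: memory leak */
    }
    printf("%s", buf);
    fclose(f);
    free(buf);
    return 0;
}
```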

Using dynamic code analysis tools can help software developers analyze how the application will run under test conditions. Dynamic code analysis hooks into the running software to analyze what the system is doing, and works in the background to enable systems developers to validate their applications with normal inputs and outputs to make sure that everything works properly.

Static and dynamic code analysis each look at different things and produce different results. Static code analysis typically reports more false positives because it flags every possible error, no matter how unlikely. Dynamic code analysis, on the other hand, can verify that the same error never actually occurs under test conditions. Tools that provide dynamic code analysis include BoundsChecker, dmalloc, Parasoft Insure++, Purify, and Valgrind. Dynamic code analysis also can provide additional benefits such as thread coherence validation and code coverage analysis.
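
The small program below illustrates the difference: both defects are reported by a dynamic tool such as Valgrind only when the offending code path actually executes under test.

```c
#include <stdlib.h>
#include <string.h>

/* Two run-time defects a dynamic tool reports only when the code executes:
 * an off-by-one heap write and a block that is never freed.
 * Example run:  valgrind --leak-check=full ./app  */
int main(void)
{
    char *buf = malloc(8);
    if (buf == NULL)
        return 1;

    strcpy(buf, "12345678");   /* 9 bytes incl. '\0': invalid write of size 1 */

    char *leaked = malloc(64); /* never freed: shows up in the leak summary */
    (void)leaked;

    free(buf);
    return 0;
}
```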

Another important decision for application code developers is whether to use regression testing or continuous testing. Regression testing might not uncover failures until the application goes through acceptance testing or somewhere else further down the line.

The continuous method runs tests on a continuous basis to help find problems as early as possible. It should include a security-specific testing apparatus to quickly identify software changes that break some of the application's security constraints. While continuous testing may add costs at the front end, it can reduce costs at the back end.
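
A security-specific regression test can be as simple as the sketch below, run automatically on every change; the parse_command() function is a stand-in for the application's real input handler.

```c
#include <assert.h>
#include <string.h>

/* Sketch of a security-focused regression test run on every change by a
 * continuous-testing pipeline. parse_command() stands in for the real input
 * handler; the constraint being locked down is "overlong input is rejected,
 * never silently truncated or copied unbounded". */
static int parse_command(const char *input, char out[16])
{
    if (strlen(input) >= 16)
        return -1;                 /* reject rather than overflow or truncate */
    strcpy(out, input);
    return 0;
}

static void test_overlong_input_is_rejected(void)
{
    char out[16];
    assert(parse_command("short", out) == 0);
    assert(parse_command("this input is far too long to be valid", out) == -1);
}

int main(void)
{
    test_overlong_input_is_rejected();
    return 0;
}
```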

Some processors, such as those with Intel Software Guard Extensions (SGX), have built-in security features. Intel SGX allows software developers to create enclaves in the application to partition data securely. If programmers need a key to encrypt some data, they can place that key into a separate enclave. Using enclaves to separate the encryption key from other system functions helps prevent a cyber attacker from gaining access to it.
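
The fragment below is a conceptual sketch of enclave-side code under the Intel SGX SDK; the function names, the toy cipher, and the key handling are illustrative only, and the real ECALL boundary would be declared in the project's EDL file.

```c
/* Conceptual sketch of enclave-side (trusted) code under the Intel SGX SDK.
 * These functions would be exposed to the untrusted application as ECALLs
 * declared in the EDL file; names and the toy cipher are illustrative.
 * The key is generated and kept inside the enclave, so the untrusted host
 * application never sees it. */
#include <stddef.h>
#include <stdint.h>
#include <sgx_trts.h>          /* sgx_read_rand(): the SDK's in-enclave RNG */

static uint8_t g_key[32];      /* enclave memory: unreadable from outside */

void ecall_generate_key(void)
{
    sgx_read_rand(g_key, sizeof g_key);
}

/* Placeholder XOR transform standing in for a real cipher (e.g., AES-GCM
 * from the SDK's crypto library) so the sketch stays short. */
void ecall_encrypt(const uint8_t *in, size_t len, uint8_t *out)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ g_key[i % sizeof g_key];
}
```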

Another processor-specific security feature is Arm TrustZone, which enables an Arm processor to separate memory, operating system, and application into trusted and non-trusted areas. TrustZone ensures that only the trusted area can access trusted data.
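
On many platforms the trusted area is reached through a trusted execution environment such as OP-TEE; the hedged sketch below shows a normal-world application invoking a trusted application through the GlobalPlatform TEE Client API, with placeholder UUID and command values.

```c
/* Sketch of a normal-world (non-trusted) application asking a TrustZone
 * trusted application (TA) to do work on its behalf, via the GlobalPlatform
 * TEE Client API as implemented by OP-TEE. The command ID is a placeholder
 * for a real TA's value. */
#include <stdint.h>
#include <tee_client_api.h>

#define TA_CMD_SIGN  0   /* hypothetical command understood by the TA */

int sign_in_secure_world(const TEEC_UUID *ta_uuid)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_Operation op = {0};
    uint32_t origin;

    if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
        return -1;

    if (TEEC_OpenSession(&ctx, &sess, ta_uuid, TEEC_LOGIN_PUBLIC,
                         NULL, NULL, &origin) != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return -1;
    }

    /* The private key and the signing operation stay in the secure world;
       only the request and the result cross the TrustZone boundary. */
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_NONE, TEEC_NONE,
                                     TEEC_NONE, TEEC_NONE);
    TEEC_InvokeCommand(&sess, TA_CMD_SIGN, &op, &origin);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return 0;
}
```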

One of the major tenets of cryptography is don’t try to build it on your own; it’s complex and easy to get wrong. It’s much safer and wiser to use something that already exists, is readily available, and is kept up to date continually from the feedback of many users. Take advantage of the security libraries already available within your operating system to implement common security concepts, such as authentication, secure communication, encryption, and key derivation. Examples of robust security libraries include OpenSSL and IPsec.
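
As a minimal sketch of that advice, the function below performs authenticated encryption (AES-256-GCM) through OpenSSL's EVP interface rather than hand-rolled cryptography; key generation, IV uniqueness, and error reporting are simplified for brevity.

```c
/* Minimal sketch of authenticated encryption (AES-256-GCM) using OpenSSL's
 * EVP interface. Returns the ciphertext length on success, -1 on failure,
 * and writes a 16-byte authentication tag into 'tag'. */
#include <openssl/evp.h>

int encrypt_gcm(const unsigned char *key,            /* 32 bytes */
                const unsigned char *iv,             /* 12 bytes, never reused */
                const unsigned char *pt, int pt_len,
                unsigned char *ct, unsigned char *tag)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, ct_len = -1;

    if (ctx == NULL)
        return -1;

    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv) == 1 &&
        EVP_EncryptUpdate(ctx, ct, &len, pt, pt_len) == 1) {
        ct_len = len;
        if (EVP_EncryptFinal_ex(ctx, ct + len, &len) == 1 &&
            EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1)
            ct_len += len;
        else
            ct_len = -1;
    }
    EVP_CIPHER_CTX_free(ctx);
    return ct_len;
}
```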

Mandatory Access Controls (MACs), which many operating systems support, enable developers to create configuration files that define how different people can use the application and its resources. Examples of such mechanisms include SELinux and Windows Integrity Levels.

More information on Curtiss-Wright’s Trusted COTS program for protecting critical technologies and data in deployed embedded computing systems is online.


Steve Edwards

Director and Technical Fellow

Steve has over 25 years of experience in the embedded systems industry. He leads Curtiss-Wright Defense Solutions' efforts in addressing physical and cyber security on their COTS products and represents the company at defense-related security conferences. Steve has worked collaboratively in several standards bodies, including a term chairing the VITA 65 OpenVPX working group and serving as lead for the Sensor Open Systems Architecture (SOSA) Security Subcommittee. Steve led the design of Curtiss-Wright's first rugged multiprocessor and FPGA products and was involved in the architecture, management, and evangelization of the industry's first VPX products. He has a Bachelor of Science in Electrical Engineering from Rutgers University.

Trusted Computing for Defense & Aerospace

Curtiss-Wright goes well beyond standard approaches to Trusted Computing to provide truly secure solutions for air, ground, and sea platforms. We keep cybersecurity and physical protection in mind, from design and testing to supply chain and manufacturing. This comprehensive, end-to-end approach creates an effective mesh of protection layers that integrate to ensure reliability of Curtiss-Wright products in the face of attempted compromise.