Decomposing System Security to Prevent Cyber Attacks in Trusted Computing Architectures
Published in Military & Aerospace Electronics
Trusted computing systems designers should consider system security early in the design process to prevent cyber attacks
ASHBURN, Va. – High-level security requirements in trusted computing systems can flow down to the system from several sources, such as a request for proposal (RFP) or from reference documents that provide system design guidance. The best approach for dealing with security requirements is to follow a framework that guides the design process based on the necessary security levels.
This framework will not only specify how to implement the necessary security, but also lead the designer through the thought process. For example, the Committee on National Security Systems (CNSS) at Fort Meade, Md., provides direction on the controls that designers should consider for system confidentiality, integrity, and availability.
Similarly, the CNSS provides a path for designing authorization capabilities or message integrity into system security. Although some security guidance documents for U.S. military systems are classified, similar documents exist that tell the system security designer what to think about during the architecture and design process.
After establishing frameworks for high-level requirements, the customer and integrator must agree on how best to meet those requirements. That agreement should consider the system's context and the use cases in which it will operate, informed by a risk analysis of where and how the system will be deployed.
Will there be armed guards always present, for example, or will the system operate unattended for weeks or years? The risk analysis will evaluate how these different use cases influence the risks that different types of cyberattacks pose.
High-level architecture development
The first step in risk analysis is creating a high-level logical system description to model data and flows; this should happen while the system architecture is still being defined, so that tradeoffs between security requirements and the implementation can still be shaped.
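As an illustration, a logical system description can be captured as a simple machine-readable model of functions and the data flows between them, tagged with classification levels. This is a hypothetical sketch, not a prescribed tool; the component and flow names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataFlow:
    source: str          # logical function producing the data
    destination: str     # logical function consuming the data
    data: str            # description of the data carried
    classification: str  # e.g. "unclassified", "secret"

@dataclass
class LogicalModel:
    functions: set = field(default_factory=set)
    flows: list = field(default_factory=list)

    def add_flow(self, flow: DataFlow):
        self.functions.update({flow.source, flow.destination})
        self.flows.append(flow)

    def classified_flows(self):
        # Flows carrying protected data are early candidates for
        # remediations such as encryption on the data path.
        return [f for f in self.flows if f.classification != "unclassified"]

# Invented UAV-style example
model = LogicalModel()
model.add_flow(DataFlow("sensor", "image_processor", "raw imagery", "secret"))
model.add_flow(DataFlow("image_processor", "datalink", "processed imagery", "secret"))
model.add_flow(DataFlow("gps", "flight_controller", "position", "unclassified"))
```

Even a model this small makes the later mapping steps concrete: each flow is something to protect, and each function is something that must eventually land on physical hardware.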
It’s important to have the system security engineers active and involved early to optimize design trade-offs. There will be fewer opportunities to mitigate security vulnerabilities once the physical implementation is set, which makes any needed modifications more difficult and costly. Even worse, systems designers may be forced to accept some vulnerability in the system if they don't take care of security concerns early.
Identify what to protect
Next, the system security architect must identify two key aspects of system security: determining what he needs to protect and identifying the threats he must protect against. What needs protecting can involve specific algorithms, data, interfaces, and capabilities. Anticipated threats, meanwhile, can include attempts by attackers to view or modify certain algorithms -- especially if systems or software developers need to share those algorithms.
Some systems call for protecting the actual data -- especially in implementations like signals intelligence or reconnaissance imagery from an unmanned aerial vehicle (UAV) to protect against remote hacking. The security team also must consider other applications with which their system will communicate and their classification levels.
Systems designers also must take planned system availability into consideration when they identify potential threats. It’s important to mitigate a denial-of-service (DoS) attack that could bring the system down.
Next, the designer should identify expected types of system attacks, like remote attacks, local attacks, or insider threats. Will potential attackers be typical hackers, resource-rich organized criminals, or nation-states?
To protect against an insider threat, it’s important to ensure that no backdoor routes were developed in the system software. Similarly, systems designers must protect hardware from malicious components that could be triggered to fail when an attacker sends a specific key phrase or command. The entire hardware supply chain, moreover, must be tested and verified.
Remote attacks can include attempts to access a UAV video feed. An attacker may gain access to the system data to modify its operation or deny availability -- especially if it's connected to the Internet. Local attacks can be as simple, yet dangerous, as a malicious actor plugging in a USB key to capture data.
Map logical attacks to physical components
System security engineers can map a nominal physical system architecture to the logical system architecture to identify potential physical avenues of attack. The system logical diagram defines what the system does and what functions it needs to perform.
The next step is to decompose that information to an actual physical architecture that identifies the circuit boards on which those functions reside, and how those boards communicate with each other. The designer needs to define the backplane, network interfaces, and how data will flow throughout the system. This will enable system security engineers to map system operation to potential attack vectors, and propose remediations for each anticipated attack vector.
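The logical-to-physical mapping described above can be sketched in code: each logical function is assigned to a board, and each board exposes a set of interfaces, so the attack surface of any function falls out of the mapping. The function, board, and interface names below are invented for illustration:

```python
# Hypothetical mapping of logical functions to the physical boards
# that host them.
function_to_board = {
    "image_processing": "dsp_board",
    "datalink_crypto":  "crypto_board",
    "mission_control":  "sbc_board",
}

# Interfaces reachable on each board -- each one is a potential
# avenue of attack against every function hosted on that board.
board_interfaces = {
    "dsp_board":    ["backplane"],
    "crypto_board": ["backplane", "rf_datalink"],
    "sbc_board":    ["backplane", "ethernet", "usb"],
}

def attack_vectors(function):
    """Interfaces through which a logical function could be attacked."""
    board = function_to_board[function]
    return board_interfaces[board]

# A function inherits every interface on its host board.
print(attack_vectors("mission_control"))  # ['backplane', 'ethernet', 'usb']
```

Walking this mapping for every function produces the list of anticipated attack vectors for which the security team must propose remediations.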
Defining the final system security plan
When systems designers finish all this, they can start working with the customer to make changes to the system’s physical architecture. It may be necessary to remove some connections from a single-board computer, boost security by adding an encryptor into a data path, or add an encrypted hard drive to store data. The designer may want to apply additional logical constraints to provide remediation. In this way, they can change the security aspects of the way the system operates, instead of making changes to the physical architecture. They could implement software to verify the current functionality of the system.
Designers should weigh the cost of remediation for each type of anticipated attack as they identify which techniques to apply. Equally important, however, is determining whether new remediations have created new system vulnerabilities. If so, are there further remediations that need to be applied?
Designers must weigh the risks of different attack vectors with the costs of remediation; this will influence how best to proceed. One cost-effective remediation might protect against many attacks, while it may be difficult to justify using a very expensive remediation to protect against one attack vector.
Risk analysis might reveal a potential attack type that isn’t very probable or a potential attack with little chance of system access. In this case, system security engineers may decide not to protect against this kind of attack, or to mitigate it with an inexpensive less-effective approach. A low-risk potential attack vector may be acceptable. Designers might apply additional remediation to processes and procedures, like checking the system regularly after deployment to ensure that it’s still in good shape.
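The weighing described in the preceding paragraphs can be sketched as a simple screen: score each attack vector by likelihood times impact, then rank candidate remediations by how much risk they buy down per unit cost. All attack names, probabilities, and cost figures below are invented for illustration:

```python
# Hypothetical risk scores: likelihood (0-1) x impact (0-10).
attacks = {
    "remote_datalink_hijack": {"likelihood": 0.4, "impact": 9},
    "usb_data_exfiltration":  {"likelihood": 0.7, "impact": 6},
    "supply_chain_trojan":    {"likelihood": 0.1, "impact": 10},
}

# Candidate remediations: (name, relative cost, attacks mitigated).
remediations = [
    ("link_encryptor",    50, ["remote_datalink_hijack"]),
    ("disable_usb_ports",  5, ["usb_data_exfiltration"]),
    ("vendor_vetting",   200, ["supply_chain_trojan"]),
]

def risk(attack_name):
    a = attacks[attack_name]
    return a["likelihood"] * a["impact"]

def value_per_cost(rem):
    # Total risk bought down, divided by what the remediation costs.
    name, cost, mitigated = rem
    return sum(risk(a) for a in mitigated) / cost

# Rank remediations: cheapest risk reduction first.
ranked = sorted(remediations, key=value_per_cost, reverse=True)
for name, cost, _ in ranked:
    print(name, cost)
```

In this toy example the cheap USB lockout ranks first, while the expensive supply-chain remediation against a low-probability attack ranks last, mirroring the judgment call described above: a low-risk attack vector may be accepted or mitigated inexpensively rather than engineered away at high cost.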
In an ideal world, all the groups involved in developing the system would work together; unfortunately, that’s not always possible. Different groups are available at different times; contracts may start and stop; engineers may not address security considerations before defining the physical architecture, forcing them to layer security on top of existing architecture.
Government customers may require systems designers to accept all identified risks. There may be clashes between requirements for cyber security and system certification. For these reasons, it's crucial to get system security engineers involved as early as possible, before the architecture gets finalized. Just as early, the security team should engage its suppliers so its members can understand the solutions that are available and the steps they can take to meet their security and performance goals.