The Trusted Computing Implications of Interfaces, and How They Can Influence System Performance

Article
August 22, 2018

Published in Military & Aerospace Electronics

A trusted computing system can ensure security at the potentially vulnerable entry points that system interfaces create, yet doing so may compromise performance through design trade-offs that systems designers must recognize, understand, mitigate, and compensate for.

Systems development often involves several different engineering groups, and the team dealing with security isn't necessarily the one responsible for project performance.

The group concerned with security will identify which parts of the system need protecting to meet the program’s security plan. An entirely different team may dictate performance requirements involving processor speed, compute power, memory, and I/O bandwidth, all of which affect system hardware. It’s not unusual for these disparate teams to be out of sync with each other. That matters for trusted computing systems because decisions made about system security inevitably affect system performance.

Design teams must have conversations internally at the highest level to understand trade-offs that implementing security can create. Authentication and encryption can influence processor use and available data bandwidth, which can force designers into augmenting the system’s processing power to make up for security's effects on overall performance. It also could force system designers to consider relaxing security requirements to maintain performance.

Designers must consider how to define security at the I/O boundary at the board and subsystem level. At the subsystem level are interfaces like Ethernet, MIL-STD-1553, and others that communicate outside the box. As these interfaces send and receive information from other equipment, designers need authentication to ensure that communications happen only among authorized entities and that only trusted data flows in both directions.
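
As a concrete illustration of authenticating the entities on either end of a link, the sketch below shows a simple challenge-response exchange built on a pre-shared key and HMAC. It is a minimal Python example, not a specific product or standard protocol; the key provisioning, nonce size, and function names are assumptions made for illustration.

```python
# Minimal sketch of challenge-response link authentication with a pre-shared
# key. Key handling, nonce size, and naming are illustrative assumptions.
import hashlib
import hmac
import secrets

PRE_SHARED_KEY = secrets.token_bytes(32)  # in practice, provisioned out of band

def make_challenge() -> bytes:
    """Verifier side: issue a fresh random challenge."""
    return secrets.token_bytes(16)

def respond(challenge: bytes, key: bytes = PRE_SHARED_KEY) -> bytes:
    """Prover side: prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = PRE_SHARED_KEY) -> bool:
    """Verifier side: constant-time comparison against the expected response."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# One authentication exchange at link bring-up (the power-up option).
challenge = make_challenge()
assert verify(challenge, respond(challenge))
```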

When design teams discuss security and performance trade-offs, they must make choices about how to implement authentication. Will it occur only once, at power-up, or every time a message is sent? What kind of authentication should be performed? Should some sort of key exchange be used to pass keys back and forth? Such decisions have associated overhead costs. Depending on the system architecture, those overhead costs may lengthen the startup timeline, reduce overall system throughput, or introduce additional system latency.
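
To make the per-message option concrete, here is a minimal Python sketch in which every frame carries an HMAC tag over its payload and a sequence counter. The frame layout, field widths, and tag size are illustrative assumptions; the point is to show where the per-message overhead comes from: a few dozen bytes added to each frame plus a MAC computation on each end.

```python
# Sketch of the "authenticate every message" option: each frame carries an
# HMAC tag over a small header (sequence counter, length) and the payload.
# Field sizes and layout are illustrative assumptions.
import hashlib
import hmac
import struct

def tag_frame(key: bytes, seq: int, payload: bytes) -> bytes:
    header = struct.pack(">IH", seq, len(payload))  # 6-byte header
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()  # 32-byte tag
    return header + payload + tag

def check_frame(key: bytes, frame: bytes) -> bytes:
    header, rest = frame[:6], frame[6:]
    payload, tag = rest[:-32], rest[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed: frame rejected")
    return payload

key = b"\x00" * 32  # placeholder; a real key comes from provisioning or key exchange
frame = tag_frame(key, seq=1, payload=b"sensor reading")
assert check_frame(key, frame) == b"sensor reading"
```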

Authentication is just one issue involved in security and performance trade-offs. Calculating the implications of complex processes, like cryptography and the acknowledgments generated as data passes through the system, also can be difficult. Designers may opt to build and test a system mockup to gauge the overhead that security will impose, yet the most important thing is to learn the different costs of implementing authentication.
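
In that spirit, a mockup can start as small as a micro-benchmark run on representative hardware. The sketch below times HMAC-SHA256 over an assumed 1 KiB message and converts the result into per-message latency and CPU load at an assumed message rate; the message size, rate, and algorithm are placeholders to be replaced with the system's real traffic profile.

```python
# Mockup-style micro-benchmark for gauging per-message authentication overhead
# before committing to a design. Message size, rate, and algorithm choice are
# assumptions; substitute the system's real traffic profile.
import hashlib
import hmac
import time

key = b"\x01" * 32
message = b"\x00" * 1024          # assumed 1 KiB payload
iterations = 100_000

start = time.perf_counter()
for _ in range(iterations):
    hmac.new(key, message, hashlib.sha256).digest()
elapsed = time.perf_counter() - start

per_msg_us = elapsed / iterations * 1e6
print(f"HMAC-SHA256 over 1 KiB: {per_msg_us:.1f} us/message")
# CPU fraction consumed by authentication alone at an assumed 10,000 msg/s
print(f"Load at 10,000 msg/s: {per_msg_us * 10_000 / 1e6:.1%} of one core")
```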

Encryption is a two-way street when it comes to processing costs. Encrypted data must be decrypted, which adds processor overhead on the receiving side as well. Key exchanges introduce overhead when creating ephemeral session keys, and throughput hiccups caused by key renegotiation can impose still more.
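
The sketch below walks through those costs under stated assumptions: an ephemeral X25519 key agreement to derive a session key (the setup and renegotiation overhead), followed by AES-GCM encryption on the sending side and the matching decryption on the receiving side (the per-message, two-way overhead). It uses the third-party Python cryptography package; the key sizes and labels are illustrative, not a recommendation for a specific system.

```python
# Sketch of session setup plus two-way per-message cost. Uses the third-party
# `cryptography` package; parameters are illustrative assumptions.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Ephemeral key exchange: overhead paid at session setup and on renegotiation.
a_priv, b_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared_a = a_priv.exchange(b_priv.public_key())
shared_b = b_priv.exchange(a_priv.public_key())
assert shared_a == shared_b

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"link session").derive(shared_a)

# Per-message overhead is paid twice: once to encrypt, once to decrypt.
aead = AESGCM(session_key)
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"telemetry frame", None)   # sender cost
plaintext = aead.decrypt(nonce, ciphertext, None)            # receiver cost
assert plaintext == b"telemetry frame"
```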

Another often-overlooked consideration is the availability, or lack, of existing industry standards that define how to ensure security over data interfaces. System integrators need to understand which interfaces their system will use and determine whether standard security protocols exist for them. Ethernet, for example, has standard security protocols like Internet Protocol Security (IPsec) and Transport Layer Security (TLS). Designers might implement such standard protocols at different levels of the IP stack, and their impact on system performance will differ depending on where in the stack the security standard sits. What’s being authenticated and what’s being encrypted also influence system performance.
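
For example, TLS can be applied at the transport/application boundary with nothing more than the Python standard library, as in the hedged sketch below; the host name is a placeholder and certificate provisioning is out of scope. Applying IPsec instead would push similar protections down to the network layer, with a different performance profile.

```python
# Minimal sketch of using a standard protocol (TLS) near the top of the IP
# stack via Python's standard ssl module. The host name is a placeholder.
import socket
import ssl

context = ssl.create_default_context()          # system trust store, modern defaults
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.org", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls_sock:
        # The handshake (authentication plus key exchange) has already happened
        # here; everything written to tls_sock is encrypted and integrity-protected.
        print("negotiated:", tls_sock.version(), tls_sock.cipher()[0])
```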

Using standard interface security protocols has clear benefits. For one, they can be implemented interoperably on hardware modules from several vendors. When no pre-packaged security protocol exists for a particular interface, the designer, the program, or the standards body must define a secure approach for using that interface.

Sometimes the system designer must use an older interface that wasn’t designed with security in mind. Doing so brings related concerns, like deciding whether to layer additional security code on top of the interface or to design a unique solution.
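
One common layering approach is to append an integrity tag to the legacy frame format, accepting whatever payload budget the interface leaves. The sketch below is illustrative only: it truncates an HMAC tag to eight bytes to fit a tight frame size, a security-versus-overhead trade-off the design team would have to accept explicitly.

```python
# Illustrative sketch of layering integrity protection on a legacy,
# size-constrained interface: a truncated HMAC tag is appended to the payload.
# The 8-byte tag length and frame layout are assumptions, not a standard.
import hashlib
import hmac

TAG_LEN = 8  # bytes sacrificed from the legacy payload budget

def wrap_legacy(key: bytes, payload: bytes) -> bytes:
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def unwrap_legacy(key: bytes, frame: bytes) -> bytes:
    payload, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(expected, tag):
        raise ValueError("legacy frame failed integrity check")
    return payload

key = b"\x02" * 32
assert unwrap_legacy(key, wrap_legacy(key, b"\x10\x20\x30\x40")) == b"\x10\x20\x30\x40"
```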

The target platform itself may drive many of these interface and performance issues. If an Ethernet network is available, the designer can use its built-in security. If a sensor must communicate over MIL-STD-1553, there won’t be as rich an ecosystem to support security. MIL-STD-1553 has been around for many decades and is common in military systems. A lot of deployed equipment can’t support modern trusted computing authentication techniques.

If a legacy sensor cannot perform authentication over MIL-STD-1553, the designer must decide how, or whether, to implement authentication, and must weigh the risks and vulnerabilities of leaving that link unauthenticated.

The designer should identify and understand not only all interfaces in a system but also any associated potential security concerns. This includes the module interfaces that already are enabled, as well as those interfaces that possibly could be enabled. Common interfaces can have serious security implications, so design teams should not overlook them during security reviews. Designers also should consider debugging interfaces and maintenance interfaces.

Steve Edwards

Director and Technical Fellow

Steve has over 25 years of experience in the embedded systems industry. He leads Curtiss-Wright Defense Solutions’ efforts in addressing physical and cyber security on their COTS products and represents the company at defense-related security conferences. Steve has worked collaboratively in several standards bodies, including a period chairing VITA 65 (OpenVPX) and serving as lead for the Sensor Open Systems Architecture (SOSA) Security Subcommittee. Steve led the design of Curtiss-Wright’s first rugged multiprocessor and FPGA products and was involved in the architecture, management, and evangelization of the industry’s first VPX products. He has a Bachelor of Science in Electrical Engineering from Rutgers University.

Trusted Computing for Defense & Aerospace

Curtiss-Wright goes well beyond standard approaches to Trusted Computing to provide truly secure solutions for air, ground, and sea platforms. We keep cybersecurity and physical protection in mind, from design and testing to supply chain and manufacturing. This comprehensive, end-to-end approach creates an effective mesh of protection layers that integrate to ensure reliability of Curtiss-Wright products in the face of attempted compromise.