Augmented Reality and Video-Management Systems
Author: Kevin Rooney
Published in Military Embedded Systems
Over the last few years, the concept of augmented reality, in which computer-generated imagery is combined with views of the real world, has become mainstream. Formerly found only in very high-end applications, such as helmet-based heads-up displays for fighter-jet pilots, this next-generation graphics capability is poised to revolutionize applications such as search and rescue (SAR) and airborne surveillance.
As the operator is asked to absorb more and more information, display sizes are growing ever larger. Part of the problem is that it’s very difficult to sit close to a large display and effectively absorb all of the information it presents; it’s akin to watching a movie from the front row of the theatre. On platforms such as helicopters and fixed-wing aircraft, space constraints make it unlikely that the operator can sit any real distance from the video screen. As a rule of thumb, for a person with 20/20 vision seated 18 inches from a screen, the largest display that can be viewed comfortably is about 17 inches. As displays increasingly adopt HD, 2K, and 4K formats, the level of information and detail they deliver will be difficult to take in if the operator sits too close to the screen.
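The 18-inch/17-inch rule of thumb can be expressed as simple geometry. The sketch below, a hypothetical illustration rather than an industry formula, assumes a comfortable viewing cone of roughly 50 degrees, a value chosen so that the result matches the article’s figures:

```python
import math

def max_display_diagonal(distance_in, cone_deg=50.0):
    """Largest display diagonal (inches) that fits inside a comfortable
    viewing cone at the given eye-to-screen distance.

    The 50-degree default is an assumption picked to reproduce the
    article's 18-inch / 17-inch rule of thumb, not a published standard.
    """
    return 2.0 * distance_in * math.tan(math.radians(cone_deg / 2.0))

# At an 18-inch seating distance the comfortable diagonal works out
# to roughly 17 inches, matching the rule of thumb.
print(round(max_display_diagonal(18), 1))
```

Larger displays, or higher resolutions packed into the same cone, simply push detail outside the region the eye can scan comfortably.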
Augmented reality promises to solve the constrained-space issue, provide access to more useful and actionable data, and enable a more natural and effective interaction with the real world. Overlaying moving-map data, license-plate identification, and other powerful mission information onto a helmet visor or glasses takes the LCD display out of the equation. Previously, operators who turned their head away from the video screen paid the penalty of missing sensor data.
With augmented reality displayed properly on a visor or glasses, the operator’s viewing experience is optimized, enabling the operator to access additional data resources, whenever and wherever they turn their head. For the operator, the ability to view the real world and have overlaid information (such as Google Maps) makes them much more effective, while essentially removing the technology as an intermediary barrier. As information is delivered to the operator in a more human, seamless, and intuitive way, the result will be more successful missions. Augmented reality can also take advantage of new features, such as the ability to dynamically “annunciate” or highlight any important changes. For example, if a target moves, the new state can be flagged to draw the operator’s attention.
One of the challenges for designers of augmented-reality display systems is developing a synchronized vision system that keeps latency to two frames or fewer. The disorientation caused by a disparity between what the eyes see and how the body moves – well known to users of virtual-reality goggles – can bring on motion sickness. Another key element of delivering augmented reality is ensuring that latency is consistent: When latency is consistent and predictable, rather than random, the brain is able to develop a sort of muscle memory that enables it to conform and react to the disparity between what the eyes see and what the body feels.
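A two-frame budget translates directly into milliseconds once the display refresh rate is known. The short sketch below shows that arithmetic for a few common refresh rates; the function name and the rates chosen are illustrative, not taken from any particular system:

```python
def latency_budget_ms(frames=2, refresh_hz=60.0):
    """Worst-case motion-to-photon budget in milliseconds for a
    given number of frames at a given display refresh rate."""
    return frames * 1000.0 / refresh_hz

# Two frames at 60 Hz is about 33.3 ms; higher refresh rates
# shrink the budget the vision pipeline must meet.
for hz in (30, 60, 90):
    print(f"{hz} Hz: {latency_budget_ms(2, hz):.1f} ms")
```

The takeaway is that the latency target is not a fixed number: the same two-frame rule becomes a much tighter millisecond budget as displays move to faster refresh rates.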
The opportunity for commercial off-the-shelf (COTS) vendors is to take this high-end technology from the realm of specialized applications and make it a pervasive and cost-effective solution.
Existing modular and scalable video-gateway products, which support digital and analog switching and video-format conversion, can serve as the building blocks for affordable, practical, and deployable COTS augmented-reality systems. (Figure 1.)
Figure 1: Curtiss-Wright’s RVG-SA1 analog video switch, a compact nonblocking crosspoint switch, is an example of a COTS element that can be used to integrate a rugged deployable augmented-reality solution.
Ethernet gateways will enable video over IP, while coming technological advances will support the safe use of wireless video in aircraft environments. Also required is recording and archiving of the augmented-reality data, with support for time stamping, so that the captured video can be used in an evidence chain in court. Augmented-reality systems also offer an effective tool for crew training and scenario simulation.
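Time-stamped archiving for an evidence chain implies records that are both dated and tamper-evident. The sketch below is one minimal way to illustrate the idea – a UTC timestamp plus a hash chained to the previous record – and is a hypothetical example, not a certified evidentiary scheme or any vendor’s actual implementation:

```python
import hashlib
import datetime

def stamp_frame(frame_bytes, prev_digest=""):
    """Build a record for one captured frame: a UTC timestamp plus a
    SHA-256 digest chained to the previous record's digest, so that
    altering any earlier frame breaks every later digest.

    Illustrative sketch only; a real evidentiary system would add
    signing, secure time sources, and audited storage.
    """
    return {
        "utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "digest": hashlib.sha256(prev_digest.encode() + frame_bytes).hexdigest(),
    }

# Chain two frames: each record's digest depends on the one before it.
r1 = stamp_frame(b"frame-1")
r2 = stamp_frame(b"frame-2", r1["digest"])
```

Because each digest folds in its predecessor, an archive built this way lets an auditor detect after the fact whether any frame in the sequence was altered or removed.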
Just as it is natural to use a rear-view mirror while driving a car, it may soon become equally seamless to access real-time streaming video data that makes the operator more effective, rather than removing them from the real world by placing a video screen in their field of vision. The next two years should see the emergence of new and practical augmented-reality solutions for a wide variety of applications. We are already seeing military implementation of this technology; we expect that the SAR and law-enforcement markets will follow very shortly.
SWaP-Optimized: The Right Way to Add Advanced Surveillance Capabilities to Rotorcraft
Making modern video equipment work with a platform’s legacy systems can be costly and complex. The time and effort needed to integrate a myriad of legacy and modern video formats and resolutions can add program risk and delay deployment.
Cutting-Edge Video Solutions Essential for Enhanced Situational Awareness
Situational awareness is a critical capability in the battlefield. Tools and technologies, including real-time sensors, video displays, mission computers, and video-distribution systems, are evolving to bring new advanced solutions to the warfighter.
Elements of a Video Management System for Situational Awareness
We look at the growing challenge of how best to provide operators with as much usable visual information as possible while ensuring that the data is readable and actionable in real time.
Kevin Rooney joined Curtiss-Wright at the beginning of 2013 as Managing Director of the Video Management and Systems business unit. Kevin has 36 years of experience in aerospace and defense. He received his training in the Royal Air Force, specializing in flight-simulator engineering, and completed postgraduate studies in management at Lancaster University and the University of Limerick. Before joining Curtiss-Wright, he was Managing Director of Northrop Grumman’s Unmanned Ground Systems (UK). Previously, Kevin held business-leadership roles at Thales in information security/communications and at Airbus in ISR and experimental UAVs. VDS designs and deploys mission-critical, intelligent, application-ready video management systems and is located in Letchworth Garden City, UK.
Video Management Systems
Reaping the benefits of the advancements in sensor and camera technology requires a video system that delivers interoperability, reliability, and SWaP optimization. At the heart of a modern video management system (VMS) is video-distribution technology that acts as the central hub for the proliferating number of video sources on board today’s mobile vehicles. Our rugged line of small-form-factor video management solutions includes both analog and digital switches, as well as the smallest and lightest format converter in its class on the market today.