Today, the term VR is frequently misused to describe what is presented as a new form of simulation. In fact, virtual simulation has been with us for decades and describes the process of replicating a real environment or piece of equipment with a virtual representation. By that definition, the Link Trainer of the 1920s was a virtual simulation of a real aircraft cockpit.
Surely, some would argue, VR is all about visualisation and the creation of environments that attempt to ‘suspend disbelief’ and seduce the trainee into thinking that they are flying in combat over Syria when, in fact, they are at their home base in Western Europe or the US.
Even accepting this caveat in describing VR, the use of model boards, analogue and then digital image generators (IGs) can be traced back to the 1940s. Visual simulation is not new; it has just benefitted from improved performance driven by new technologies over time.
These points are important and provide a fundamental point of departure for this analysis. For a number of years now, visualisation has comprised three key elements.
These elements – the hardware and software creating the IG; the software defining the visual database; and finally, the display system – have been described by Philippe Perey, CAE Defence and Security’s head of technology, as ‘a three-legged stool’: if one leg fails, the stool will topple.
Today, system integrators and users work with technology that bears little resemblance to the first digital IGs that came online 30 years ago.
In the past, such systems were niche products that were hardware heavy and filled rooms with computer racks and air conditioning units along with technicians to keep them running. They were also extremely expensive and provided poor visual representations of the real world compared to what is available today.
In many ways, the IG has now been commoditised. In part, this is due to the miniaturisation of electronics and the effect of Moore’s Law on processing power but also, to an extent, by the user’s expectations.
Orlando, Florida-based AVT Simulation said that, today, ‘desktop personal computers (PCs), one or multiple racks of computers, tablets, game consoles and anything else that can display an image can be considered an image generator’.
Modern recruits to the armed forces have been raised playing games on high-performance consoles where the quality of graphics is extremely high. Their levels of expectation have been pushed ever higher, but that is not the only challenge for the companies involved in providing visualisation solutions.
In the US, the army’s Synthetic Training Environment (STE) programme is pushing industry to take visualisation to the next level. Instead of discrete IGs, each supporting a single simulator, STE envisages individuals and units logging in to a rich and diverse terrain and environmental database that allows them to conduct exercises anywhere on the globe.
‘We currently provide lots of support to the back end of the simulator in terms of planning, performance and how we run exercises,’ explained MG Maria Gervais, STE Cross-Functional Team leader at US Army Futures Command.
‘We are working towards a common architecture to enable that vision, through more software and less hardware. We’ve got to ask lots of questions such as how can we scale games engines to enable us to point at a place on the globe and have One World Terrain let us exercise there,’ she explained.
Gervais’ view of One World Terrain is an important one as, according to her, the US Army currently operates 57 IG database standards. The task of adopting a single solution to provide a global image generation source is a challenge, but many companies are on the case.
At last year’s virtual I/ITSEC, for example, a team comprising Cesium, Epic Games, Microsoft and NVIDIA launched Project Anywhere, ‘an [Epic Games] Unreal Engine proof-of-concept application running in the cloud and accessible by any device, anytime. Project Anywhere provides real-time access to 3D Tiles visuals and data of the entire world from Cesium, with underpinning technology from Microsoft Azure and NVIDIA.’
Another approach to global visualisation is being provided by MAK Technologies with its Legion product that is aimed ‘at future [distributed] virtual training environments’ that are scalable and able to support ‘higher entity counts and more complex scenarios’.
Under development since July 2019, version 1.0 should be released in the next few months. The company is expected to propose the Legion interface standard to the Simulation Interoperability Standards Organization for adoption as an open standard.
MAK has already demonstrated Legion with 3.8 million cloud-hosted entities and integrated it with the VBS4, Unreal Engine and VR-Vantage visualisation systems. The product has also been integrated with computer-generated forces (CGF) applications such as OneSAF and VR-Forces. Len Granowetter, MAK Technologies’ CTO, describes Legion as a ‘scalable interoperability framework’ that can be viewed as ‘disruptive technology’.
So, are we moving away from discrete IGs? Looking at Legion and games-based solutions such as Bohemia Interactive Simulations’ VBS Blue IG, the answer might be yes.
VBS Blue IG was launched in September 2017 and is a 3D whole-earth (WGS-84) IG that has been designed to support multi-domain training from the individual to the collective level.
The company says that ‘VBS Blue IG includes a baseline, geospecific global terrain that is procedurally enhanced based on real-world metadata. Developers have the flexibility to use procedural content, satellite imagery, high-resolution terrain or geospecific features to enhance areas of special interest.’
Such systems provide the user with a great deal of flexibility, and that is the order of the day, certainly as far as the US Air Force (USAF) is concerned. But what were historically discrete and proprietary IG systems are changing. For example, Collins Aerospace’s EP-8100 IG is ‘leveraging game-like concepts’, according to promotional material, while its PC COTS-based EP-80 is described as ‘scalable’.
What is clear is that modern IGs can take on additional tasks, challenging what may be termed games-based solutions. For MetaVR and its flagship Virtual Reality Scene Generator (VRSG), the market remains vibrant.
‘2020 was an extremely good year for us – we had a record year in sales with a significant increase in orders from both public and undisclosed customers,’ explained Garth Smith, president of MetaVR.
In January 2021, the company released VRSG version 6.5 that supports a number of Varjo head-mounted displays, the HP Reverb VR product and Collins Aerospace’s Coalescence mixed-reality system.
‘A new direct video streaming option using the Real-time Transport Protocol network protocol (Combat Air Force Distributed Mission Operations [CAF DMO] compliant) has also been added,’ explained Smith.
‘Supporting CAF DMO requirements is a significant continued force multiplier for our users, enabling them to train strategically. It essentially enables different simulators that use VRSG and semi-automated force software such as Battlespace Simulations’ Modern Air Combat Environment [MACE] to train jointly in a simulated training environment that identically replicates real-world missions.
‘For example, the MALET-JSIL Aircrew Trainer [MJAT], and US Air Force JTAC [joint terminal attack controller] simulators that use VRSG and MACE, allow Reaper operators and JTACs to train together in a common 3D world rendered in VRSG.’
In April, the company received an order from the Joint Systems Integration Laboratory (JSIL) for 171 VRSG licences for the MJAT training capability. The licences are coupled with government off-the-shelf software to create a ground control station simulator for training pilots and sensor operators of the General Atomics MQ-9 Reaper, as an upgrade to the USAF’s Predator/Reaper Mission Aircrew Training System (PMATS) programme.
‘The existing PMATS devices are being progressively upgraded with the newly updated simulation software suite,’ explained Smith.
It is clear that projects such as STE and the USAF’s Simulator Common Architecture Requirements and Standards (SCARS) programme aim to provide a common network to enable real-time collaborative training. STE relies on a ubiquitous, common IG database, while SCARS emphasises common standards for IGs and network protocols to link extant simulators.
This approach stresses the importance of high-end IGs forming part of a tailored, high-fidelity simulator that, where feasible, mimics the actual platform as closely as possible. An example is the CAE Medallion MR e-Series visual system, which provides 4K high-definition, 3D images. This visual solution can provide up to a 360° field of view (FOV) and is specifically designed for fast-jet pilot training.
Operating at 120Hz, the visual solution features a head-mounted motion compensation system designed to reduce parallax errors. The Medallion MR e-Series also uses active eyewear to provide 3D depth perception – a bonus when conducting tasks such as air-to-air refuelling.
Could a cloud-based, common visual system provide such fidelity? The answer today is no, and so despite the allure of such systems, it remains a case of ‘horses for courses’. Both types of IG/visual systems have their uses.
One thing is clear, though: the fidelity of the modern IG continues to improve, and when the other two legs of the stool are added, visual acuity has never been better.