Autonomous 3D Visualization for Simulation Exercise Support
Conference paper
Participants and spectators in simulated exercises often find it challenging to acquire good situation awareness and to maintain it during the often hectic sequence of events. The tools used to support them in this regard include 2D plan-view displays and 3D stealth viewers. However, manually controlled 2D or 3D visualizations often fail to provide a timely overview of, and insight into, the events that occur within the simulation. This is particularly the case when the environment is complex and dynamic, contains a large number of entities, or is geographically spread out.
ScreenPlay is an experimentation platform that acts as a virtual director, autonomously controlling camera viewpoint positioning and movement in a 3D virtual environment. The system combines a rough behavior description with several view descriptions to produce the desired result, based on storytelling considerations, scenario context, and the events that occur within the simulation. By relying on assumptions about the event patterns expected to occur in the simulated environment, rather than needing to know exactly what will happen, ScreenPlay offers a large degree of flexibility with regard to the occurrence and timing of events.
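The abstract gives no implementation details, but the pattern-driven selection idea can be illustrated with a minimal sketch. The following Python snippet shows one way a virtual director could map expected event patterns to view descriptions; all names (Event, ViewRule, Director) and the priority-based selection policy are illustrative assumptions, not ScreenPlay's actual design.

```python
# Minimal sketch of an event-pattern-driven virtual director, in the
# spirit of the abstract above. Names and the selection policy are
# hypothetical, not taken from the ScreenPlay system itself.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Event:
    """A simulation event, e.g. a missile launch or an intercept."""
    kind: str
    entity: str
    position: Tuple[float, float, float]  # scenario coordinates


@dataclass
class ViewRule:
    """Couples an expected event pattern to a named view description."""
    pattern: Callable[[Event], bool]  # does this event match the pattern?
    view: str                         # camera behavior, e.g. "track_missile"
    priority: int                     # storytelling importance


class Director:
    """Selects the highest-priority view whose pattern matches an event.

    Because rules describe patterns of events rather than a fixed script,
    the director tolerates variation in event timing and ordering.
    """

    def __init__(self, rules: List[ViewRule]):
        self.rules = sorted(rules, key=lambda r: -r.priority)
        self.current_view: Optional[str] = None

    def on_event(self, event: Event) -> Optional[str]:
        for rule in self.rules:
            if rule.pattern(event):
                self.current_view = rule.view
                return rule.view
        return self.current_view  # no match: keep the current view


rules = [
    ViewRule(lambda e: e.kind == "launch", view="track_missile", priority=10),
    ViewRule(lambda e: e.kind == "intercept", view="orbit_impact", priority=20),
    ViewRule(lambda e: e.kind == "move", view="overview", priority=1),
]

director = Director(rules)
print(director.on_event(Event("launch", "SCUD-1", (10.0, 42.0, 0.0))))
# -> track_missile
```

Because each rule matches a pattern rather than a scripted moment, a director of this kind keeps working when events arrive in a different order, or at different times, than anticipated.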
In this paper, we evaluate how autonomous 3D visualization can be used to support large-scale simulated exercises. The approach was initially tested in small experiments in 2009. In 2010 it was used to support visualization for analysis and after-action reviews during the JPOW Joint Theatre Missile Defence exercise, as well as in a number of simulated experiments for concept development and experimentation in the naval domain. For these cases, we evaluate the process of configuring autonomous visualization routines for an exercise, and we discuss and compare the experiences and results obtained from its application during both mission execution and debriefing.
TNO Identifier
426504
Article nr.
10035
Source title
Proceedings I/ITSEC 2010. Interservice/Industry Training, Simulation and Education Conference, Orlando, Florida, November 29 - December 2, 2010: 'Training centric - readiness focused'
Pages
2261-2273