Evaluating a multimodal interface for firefighting rescue tasks
Firefighters searching for victims work in hazardous environments with limited visibility, obstacles, and uncertain navigation paths. In rescue tasks, additional sensor information from infrared cameras, indoor radar, and gas sensors could improve vision, orientation, and navigation. We propose a visual and tactile interface concept that integrates this sensor information and presents it on a head-mounted display and a tactile belt. Sixteen trained participants performed a firefighting rescue task with and without the prototype interface; we measured task performance, mental effort, orientation, and preference. We found no difference in task performance or orientation, but a significantly higher preference for the prototype over the baseline. Participants' remarks suggest that the interface overloaded them with information, reducing its potential benefit for orientation and performance. Implications for the design of the prototype are outlined.
To reference this document use:
Proceedings of the 56th Annual Meeting of the Human Factors and Ergonomics Society, Boston, MA, 277-281.