Evaluating a multimodal interface for firefighting rescue tasks
Conference paper
Firefighters searching for victims work in hazardous environments with limited visibility, obstacles and uncertain navigation paths. In rescue tasks, additional sensor information from infrared cameras, indoor radar and gas sensors could improve vision, orientation and navigation. A visual and tactile interface concept is proposed that integrates this sensor information and presents it on a head-mounted display and a tactile belt. Sixteen trained participants performed a firefighting rescue task with and without the prototype interface, while task performance, mental effort, orientation and preference were measured. We found no difference in task performance or orientation, but a significantly higher preference for the prototype compared to the baseline. Participants' remarks suggest that the interface overloaded them with information, reducing its potential benefit for orientation and performance. Implications for the design of the prototype are outlined.
TNO Identifier
462075
Source title
Proceedings of the 56th Annual Meeting of the Human Factors and Ergonomics Society. Boston, MA.
Pages
277-281
Files
To receive the publication files, please send an e-mail request to TNO Repository.