Multimodal interfaces: a framework based on modality appropriateness

Conference paper
Our sensory modalities are specialized in perceiving different attributes of an object or event. This fact is the basis of the approach to multimodal interfaces described in this paper. We rated the match between 20 information attributes common in human-computer interaction and the visual, auditory, and tactile sensory systems. We refer to this match as modality appropriateness. Preferably, an information chunk is allocated to the most appropriate modality; when information consists of several attributes, these may be allocated to different modalities. This approach contrasts with the more common one, in which multimodality is implemented as a redundant presentation of the same information to two or more sensory modalities. That approach can help mitigate the risk of sensory overload and make an interface accessible to people with sensory impairments, but it does not exploit possible synergy between the senses. The supposed synergy may, however, also involve costs, for example the time required to switch between modalities and the additional noise introduced in cross-modal comparisons relative to unimodal ones. We discuss both the opportunities and the potential costs of applying the modality appropriateness framework.
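The allocation principle described above can be sketched in a few lines: given a table of appropriateness ratings, each attribute is assigned to the modality with the highest rating. The attribute names and numeric ratings below are hypothetical placeholders, not the paper's actual 20-attribute ratings.

```python
# Minimal sketch of modality-appropriateness allocation.
# Ratings are illustrative assumptions, NOT the values from the paper.
MODALITIES = ("visual", "auditory", "tactile")

# Hypothetical appropriateness ratings (0-10) for a few example attributes.
RATINGS = {
    "spatial location": {"visual": 9, "auditory": 5, "tactile": 6},
    "urgency/alarm":    {"visual": 5, "auditory": 9, "tactile": 7},
    "surface texture":  {"visual": 6, "auditory": 2, "tactile": 9},
}

def allocate(attribute: str) -> str:
    """Return the most appropriate modality for a given attribute."""
    scores = RATINGS[attribute]
    return max(MODALITIES, key=lambda m: scores[m])

if __name__ == "__main__":
    # Each attribute of a multi-attribute message may go to a
    # different modality, as the framework proposes.
    for attr in RATINGS:
        print(f"{attr} -> {allocate(attr)}")
```

Note that such a per-attribute argmax ignores the switching and cross-modal comparison costs the abstract mentions; a fuller model would penalize allocations that split closely related attributes across modalities.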
TNO Identifier
16561
Source title
Proceedings of the Human Factors and Ergonomics Society
Pages
1542-1546
Files
To receive the publication files, please send an e-mail request to TNO Repository.