Explainable AI for Digital (Engineering) Assistants (XAI4DEA)
Report
When humans rely on interacting with AI-enabled systems, notably to support a critical decision-making process, it is crucial that the human interactors understand the outcomes in an explainable way. While this is less important for non-critical applications, business-, safety-, and mission-critical applications require this transparency when accountability and compliance with regulations are demanded. Examples can be found in the medical, finance, automotive, and aviation domains. Although many emerging AI technologies prove to be very powerful, many lack this transparency.

Digital Assistants are software-based services designed to help users by performing tasks, providing information, and managing various activities. Common examples include virtual assistants like Siri, Alexa, and Google Assistant, which can handle tasks such as setting reminders, answering questions, and controlling smart home devices. In the professional domain, AI-enabled Digital Assistants may support architects, designers, engineers, and many other experts in doing their jobs more effectively.

For many AI technologies, it is unclear how their outcomes relate to their inputs. This makes it difficult for users to take responsibility, and hence they tend not to rely on these technologies in critical situations. To overcome this, we need more trustworthiness, which is built on top of the Explainability of the AI technology.

We investigated the question of what is needed for a successful cohabitation of humans and Digital Assistants working together in a seamless way: what are the preconditions and considerations for success? The research, as reflected in this document, has been built around a set of interviews with people from the high-tech and medical fields. We also found that an overwhelming amount of knowledge on the topic is available in the literature and research, so we complemented the interviews with a limited literature scan, which gives pointers to more information.

We found that the level to which the human interactor trusts the AI's outcomes is highly relevant, and that this has a strong connection with socio-technical systems. The interviews consistently highlighted that AI is not a magical solution for poorly understood problems. The interviewees were less concerned about the Explainable AI technologies themselves; most challenges lie in organizational embedding and Human-Machine acceptance. Applying AI in an organization that is not prepared may result in disappointing performance that does not bring the expected value. AI-readiness leveling systems can therefore assist such organizations with a solid step-by-step approach and help them avoid pitfalls.
TNO Identifier
1002115
Publisher
TNO
Collation
27 p.