Searched for: subject:"Explainability"
(1 - 13 of 13)
document
Schoonderwoerd, T.A.J. (author), Jorritsma, W. (author), Neerincx, M.A. (author), van den Bosch, K. (author)
Much of the research on eXplainable Artificial Intelligence (XAI) has centered on making machine learning models transparent. More recently, the focus on human-centered approaches to XAI has increased. Yet there is a lack of practical methods and examples for integrating human factors into the development processes of AI-generated...
article 2021
document
Verhagen, R.S. (author), Neerincx, M.A. (author), Tielman, M.L. (author)
Because of recent and rapid developments in Artificial Intelligence (AI), humans and AI-systems increasingly work together in human-agent teams. However, in order to effectively leverage the capabilities of both, AI-systems need to be understandable to their human teammates. The branch of eXplainable AI (XAI) aspires to make AI-systems more...
conference paper 2021
document
de Greeff, J. (author), de Boer, M.H.T. (author), Hillerström, F.H.J. (author), Bomhoff, F.W. (author), Jorritsma, W. (author), Neerincx, M. (author)
AI tools are becoming more commonly used in a variety of application domains. In this paper, we describe a system named FATE that combines state-of-the-art AI tools. The goal of the FATE system is decision support through ongoing human-AI co-learning, explainable AI, and fair, bias-free, and secure usage of data. These topics are societally...
conference paper 2021
document
van der Waa, J. (author), Verdult, S. (author), van den Bosch, K. (author), van Diggelen, J. (author), Haije, T. (author), van der Stigchel, B. (author), Cocu, I. (author)
With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents have no legal standing, humans should be held accountable if an agent's actions do not comply, implying that humans need to exercise control. This is often labeled as Meaningful...
article 2021
document
Kaptein, F. (author), Broekens, J. (author), Hindriks, K. (author), Neerincx, M.A. (author)
Current developments in Artificial Intelligence (AI) have led to a resurgence of Explainable AI (XAI). New methods are being researched to obtain information from AI systems in order to generate explanations for their output. However, there is an overall lack of valid and reliable evaluations of the effects on users' experience of, and behavior in...
article 2021
document
Waa, J.V.D. (author), Schoonderwoerd, T. (author), Diggelen, J.V. (author), Neerincx, M. (author)
Decision support systems (DSS) have improved significantly, but recent advances in Artificial Intelligence have also made them more complex. Current XAI methods generate explanations of model behaviour to facilitate a user's understanding, which fosters trust in the DSS. However, little attention has been paid to the development of methods that establish and...
article 2020
document
Voogd, J.M. (author), de Heer, P.B.U.L. (author), Veltman, H.J. (author), Hanckmann, P. (author), van Lith, J.M. (author)
In decision support systems, information from many different sources must be integrated and interpreted to aid the process of gaining situational understanding. These systems assist users in making the right decisions, for example when under time pressure. In this work, we discuss a controlled automated support tool for gaining situational...
conference paper 2019
document
Sileno, G. (author), Boer, A. (author), van Engers, T. (author)
Contemporary technology, being potentially destructive, in practice incomprehensible, and for the most part unintelligible, poses serious challenges to our society. New conception methods are urgently required. Reorganizing ideas and discussions presented in AI and related fields, this position paper aims to highlight the importance of normware...
conference paper 2019
document
Adhikari, A. (author), Tax, D.M.J. (author), Satta, R. (author), Faeth, M. (author)
Explainable Artificial Intelligence (XAI) is an emerging research field that tries to cope with the lack of transparency of AI systems by providing human-understandable explanations for the underlying Machine Learning models. This work presents a new explanation extraction method called LEAFAGE. Explanations are provided both in terms of...
conference paper 2019
document
van den Bosch, K. (author), Schoonderwoerd, T.A.J. (author), Blankendaal, R.A.M. (author), Neerincx, M.A. (author)
The increasing use of ever-smarter AI-technology is changing the way individuals and teams learn and perform their tasks. In hybrid teams, people collaborate with artificially intelligent partners. To make use of the complementary strengths and weaknesses of human and artificial intelligence, a hybrid team should be designed upon the principles that...
conference paper 2019
document
Neerincx, M.A. (author), van der Waa, J. (author), Kaptein, F. (author), van Diggelen, J. (author)
conference paper 2018
document
van der Waa, J.S. (author), van Diggelen, J. (author), Neerincx, M.A. (author)
Explainable AI becomes increasingly important as the use of intelligent systems becomes more widespread in high-risk domains. In these domains it is important that the user knows to what degree the system's decisions can be trusted. To facilitate this, we present the Intuitive Confidence Measure (ICM): a lazy learning meta-model that can...
conference paper 2018
document
Kaptein, N.A. (author), van Hattum, T. (author), Horst, R. (author), TNO Technische Menskunde (author)
Several studies indicate that road users are not able to distinguish roads according to official road categories. As a means of designing a sustainably safe traffic environment, the concept of Self-Explaining Roads (SER) has been developed. In order to support safe driving behaviour and appropriate speed choice, drivers should be enabled to...
report 1998