Using Relational Concept Networks for Explainable Decision Support

Title: Using Relational Concept Networks for Explainable Decision Support
Authors: Voogd, J.M.; de Heer, P.B.U.L.; Veltman, H.J.; Hanckmann, P.; van Lith, J.M.
Publication year: 2019

Abstract: In decision support systems, information from many different sources must be integrated and interpreted to aid the process of gaining situational understanding. These systems assist users in making the right decisions, for example when under time pressure. In this work, we discuss a controlled automated support tool for gaining situational understanding, in which multiple sources of information are integrated. In the domain of operational safety and security, the available data is often limited and insufficient for sub-symbolic approaches such as neural networks. Experts generally have high-level (symbolic) knowledge but may lack the ability to adapt and apply that knowledge to the current situation. In this work, we combine sub-symbolic information and technologies (machine learning) with symbolic knowledge and technologies (from experts or ontologies). This combination offers the potential to steer the interpretation of the little available data with the knowledge of the expert. We created a framework that consists of concepts and relations between those concepts, for which the exact relational importance is not necessarily specified. A machine-learning approach is used to determine the relation weights that fit the available data. The use of symbolic concepts allows for properties such as explainability and controllability. The framework was tested with expert rules on an attribute dataset of vehicles, and its performance with incomplete inputs or smaller training sets was compared to that of a traditional fully connected neural network. The results show the framework to be a viable alternative when data is limited or incomplete, and show that more semantic meaning can be extracted from the activations of its concepts.
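The core idea described in the abstract, a graph of expert-defined concepts in which only the relation weights are learned from data, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the class, the concept names, and the per-concept logistic training scheme are all assumptions made for the example.

```python
import math

class ConceptNetwork:
    """Illustrative sketch of a relational concept network: experts supply
    WHICH relations exist between concepts; training fits HOW important
    each relation is. (Hypothetical design, not the authors' code.)"""

    def __init__(self, relations):
        # relations: {target_concept: [source_concepts]}, listed in
        # topological order so intermediate concepts activate before
        # the concepts that depend on them.
        self.relations = relations
        self.weights = {t: {s: 0.0 for s in srcs} for t, srcs in relations.items()}
        self.bias = {t: 0.0 for t in relations}

    def activate(self, inputs):
        # Propagate activations from observed input concepts through the
        # expert-defined graph; every derived concept gets a value in (0, 1).
        act = dict(inputs)
        for target, sources in self.relations.items():
            z = self.bias[target] + sum(self.weights[target][s] * act[s]
                                        for s in sources)
            act[target] = 1.0 / (1.0 + math.exp(-z))
        return act

    def fit(self, examples, epochs=500, lr=0.5):
        # Fit each relation's weight by simple per-concept logistic
        # regression (SGD); examples give activations for every concept.
        for _ in range(epochs):
            for ex in examples:
                for target, sources in self.relations.items():
                    z = self.bias[target] + sum(self.weights[target][s] * ex[s]
                                                for s in sources)
                    p = 1.0 / (1.0 + math.exp(-z))
                    err = ex[target] - p
                    self.bias[target] += lr * err
                    for s in sources:
                        self.weights[target][s] += lr * err * ex[s]

# Toy vehicle-attribute example (invented data): the expert states that
# "motorized" depends on "has_engine" and "has_wheels"; learning decides
# how much each relation matters.
net = ConceptNetwork({"motorized": ["has_engine", "has_wheels"]})
data = [
    {"has_engine": 1, "has_wheels": 1, "motorized": 1},
    {"has_engine": 1, "has_wheels": 0, "motorized": 0},
    {"has_engine": 0, "has_wheels": 1, "motorized": 0},
    {"has_engine": 0, "has_wheels": 0, "motorized": 0},
]
net.fit(data)
act = net.activate({"has_engine": 1, "has_wheels": 1})
```

Because concepts are named symbols, the learned weights in `net.weights["motorized"]` can be read off directly as the importance of each relation, which is the kind of explainability the abstract refers to.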
© IFIP International Federation for Information Processing 2019.

Subject: Decision support; Explainability; Graph-based machine learning; Neural networks; Symbolic AI; Decision support systems; Engineering education; Extraction; Graphic methods; Machine learning; Semantics; Fully connected neural network; Information and technologies; Machine learning approaches; Sub-symbolic approach; Symbolic knowledge; Data mining
To reference this document use: http://resolver.tudelft.nl/uuid:ac784263-5a1e-4195-9ba0-9ba8964e16c2
TNO identifier: 869497
Publisher: Springer Verlag
ISBN: 9783030297251
ISSN: 0302-9743
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3rd IFIP Cross Domain Conference for Machine Learning and Knowledge Extraction, CD-MAKE 2019, 26–29 August 2019, pp. 78–93
Document type: conference paper
Files: To receive the publication files, please send an e-mail request to the TNO Library.