Title
Hybrid Strategies Towards Safe “Self-Aware” Superintelligent Systems
Author
Aliman, N.M.
Kester, L.
Contributor
Iklé, M. (editor)
Franz, A. (editor)
Rzepka, R. (editor)
Goertzel, B. (editor)
Publication year
2018
Abstract
Against the backdrop of increasing progress in AI research, paired with a rise of AI applications in decision-making processes, security-critical domains, and ethically relevant settings, a large-scale debate on possible safety measures encompassing corresponding long-term and short-term issues has emerged across different disciplines. One pertinent topic in this context, addressed by various AI Safety researchers, is the AI alignment problem, for which no final consensus has been achieved yet. In this paper, we present a multidisciplinary toolkit of AI Safety strategies combining considerations from AI and Systems Engineering as well as from Cognitive Science with a security mindset as often relevant in Cybersecurity. We elaborate on how AGI “Self-awareness” could complement different AI Safety measures in a framework extended by a jointly performed Human Enhancement procedure. Our analysis suggests that this hybrid framework could contribute to tackling the AI alignment problem from a new holistic perspective through the security-building synergetic effects emerging from it, and could help to increase the odds of a safe future transition towards superintelligent systems.
Subject
Defence, Safety and Security
Self-awareness
AI Safety
Human enhancement
AI alignment
Superintelligence
To reference this document use:
http://resolver.tudelft.nl/uuid:98bc4394-fa32-489d-a5b0-ccab8cb09f20
TNO identifier
867739
Publisher
Springer
Source
Artificial General Intelligence. AGI 2018. Lecture Notes in Computer Science, vol 10999.
Document type
conference paper