Hybrid Strategies Towards Safe “Self-Aware” Superintelligent Systems

Conference paper
Against the backdrop of increasing progress in AI research, paired with a rise of AI applications in decision-making processes, in security-critical domains, and in ethically relevant settings, a large-scale debate on possible safety measures encompassing the corresponding short-term and long-term issues has emerged across different disciplines. One pertinent topic in this context, addressed by various AI Safety researchers, is the AI alignment problem, for which no final consensus has been achieved yet. In this paper, we present a multidisciplinary toolkit of AI Safety strategies combining considerations from AI and Systems Engineering as well as from Cognitive Science with a security mindset as is often relevant in Cybersecurity. We elaborate on how AGI “Self-awareness” could complement different AI Safety measures within a framework extended by a jointly performed Human Enhancement procedure. Our analysis suggests that this hybrid framework could contribute to addressing the AI alignment problem from a new holistic perspective through the security-building synergetic effects emerging from it, and could help to increase the odds of a possible safe future transition towards superintelligent systems.
TNO Identifier
867739
Publisher
Springer
Source title
Artificial General Intelligence. AGI 2018. Lecture Notes in Computer Science, vol. 10999.
Editor(s)
Iklé M., Franz A., Rzepka R., Goertzel B.
Files
To receive the publication files, please send an e-mail request to TNO Repository.