Hybrid Strategies Towards Safe “Self-Aware” Superintelligent Systems
Iklé, M. (editor)
Franz, A. (editor)
Rzepka, R. (editor)
Goertzel, B. (editor)
Against the backdrop of increasing progress in AI research, paired with a rise of AI applications in decision-making processes, security-critical domains, and ethically relevant settings, a large-scale debate on possible safety measures encompassing corresponding short-term and long-term issues has emerged across different disciplines. One pertinent topic in this context, addressed by various AI Safety researchers, is the AI alignment problem, for which no final consensus has been reached yet. In this paper, we present a multidisciplinary toolkit of AI Safety strategies combining considerations from AI and Systems Engineering as well as from Cognitive Science with a security mindset, as is often relevant in Cybersecurity. We elaborate on how AGI “Self-awareness” could complement different AI Safety measures within a framework extended by a jointly performed Human Enhancement procedure. Our analysis suggests that this hybrid framework could help to tackle the AI alignment problem from a new holistic perspective through the security-building synergetic effects emerging from it, and could increase the odds of a safe future transition towards superintelligent systems.
Defence, Safety and Security
To reference this document use:
Artificial General Intelligence. AGI 2018. Lecture Notes in Computer Science, vol 10999.