Telling autonomous systems what to do
Recent progress in Artificial Intelligence, sensing and network technology, robotics, and (cloud) computing has enabled the development of intelligent autonomous machine systems. Telling such autonomous systems "what to do" in a responsible way is a non-trivial task. For intelligent autonomous machines to function in human society and collaborate with humans, we see three challenges ahead that affect meaningful control of autonomous systems. First, autonomous machines are not yet capable of handling failures and unexpected situations. Providing procedures for every possible failure and situation is infeasible because the state-action space would explode. Machines should therefore become self-aware (capable of self-assessment and self-management), enabling them to handle unexpected situations as they arise. This is a challenge for the computer science community. Second, to keep (meaningful) control, humans take on a new role: providing intelligent autonomous machines with objectives or goal functions (including rules, norms, constraints, and moral values) that specify the utility of every possible outcome of the machines' actions. Third, to collaborate with humans, autonomous systems will require an understanding of humans (i.e., our social, cognitive, affective, and physical behaviors) and the ability to engage in partnership interactions (such as explaining task performance and establishing joint goals and work agreements). These are new challenges for the cognitive ergonomics community. © 2018 ACM.
To reference this document use:
Association for Computing Machinery
ACM International Conference Proceeding Series: 36th European Conference on Cognitive Ergonomics, ECCE 2018, 5–7 September 2018