Efficient abstraction selection in reinforcement learning
Conference paper
This paper introduces a novel approach to abstraction selection in reinforcement learning problems modelled as factored Markov decision processes (MDPs), in which a state is described by a set of state components. In abstraction selection, an agent must choose an abstraction from a set of candidate abstractions, each built from a different combination of state components. Copyright © 2013 Association for the Advancement of Artificial Intelligence.
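A minimal sketch of the problem setup described in the abstract, not of the paper's actual selection method: a factored state is a mapping from component names to values, and each candidate abstraction is a subset of those components that projects the full state onto an abstract state. All names below (State, Abstraction, project, the example components) are hypothetical illustrations.

```python
from itertools import combinations
from typing import Dict, FrozenSet, Tuple

State = Dict[str, int]          # factored state: component name -> value
Abstraction = FrozenSet[str]    # an abstraction = a subset of state components


def candidate_abstractions(components: Tuple[str, ...]) -> list:
    """Enumerate candidate abstractions as non-empty subsets of components."""
    return [frozenset(c)
            for r in range(1, len(components) + 1)
            for c in combinations(components, r)]


def project(state: State, abstraction: Abstraction) -> Tuple[Tuple[str, int], ...]:
    """Map a full factored state to the abstract state induced by an abstraction."""
    return tuple(sorted((k, v) for k, v in state.items() if k in abstraction))


if __name__ == "__main__":
    components = ("x_pos", "y_pos", "battery")
    s: State = {"x_pos": 3, "y_pos": 7, "battery": 2}
    for a in candidate_abstractions(components):
        print(sorted(a), "->", project(s, a))
```

An abstraction-selection method would then score these candidates (e.g. by how well each supports learning) and commit to one; that scoring criterion is the paper's contribution and is not reproduced here.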
TNO Identifier
489088
ISBN
9781577356301
Source title
10th Symposium on Abstraction, Reformulation, and Approximation, SARA 2013, 11-12 July 2013, Leavenworth, WA, USA
Pages
123-127
Files
To receive the publication files, please send an e-mail request to TNO Repository.