Title
Switching between Representations in Reinforcement Learning
Author
van Seijen, H.
Whiteson, S.
Kester, L.
TNO Defensie en Veiligheid
Contributor
Babuska, R. (editor)
Groen, F.C.A. (editor)
Publication year
2010
Abstract
This chapter presents and evaluates an online representation selection method for factored Markov decision processes (MDPs). The method addresses a special case of the feature selection problem that considers only certain subsets of features, which we call candidate representations. A motivation for the method is that it can potentially handle problems where other structure learning algorithms are infeasible because the associated dynamic Bayesian network has a large degree. Our method uses switch actions to select a representation and uses off-policy updating to improve the policies of representations that were not selected. We demonstrate the validity of the method by showing, for both a contextual bandit task and a regular MDP, that given a feature set containing only a single relevant feature, the switch method can identify this feature very efficiently. We also show, for a contextual bandit task, that switching between a set of relevant features and a subset of these features can outperform each of the individual representations, because the switch method combines the fast initial performance gains of the small representation with the high asymptotic performance of the large representation.
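The switching mechanism described in the abstract can be sketched in a few lines of Python. Everything in the snippet is an illustrative assumption rather than the chapter's actual implementation: the Representation and SwitchAgent classes, tabular Q-estimates, epsilon-greedy exploration, and the learning rates are all hypothetical. It shows the two core ideas only: a switch action picks one candidate representation to act with, and the observed (context, action, reward) triple is then used to update every candidate representation off-policy, not just the selected one.

```python
import random
from collections import defaultdict

class Representation:
    """A candidate representation: a subset of the context features (hypothetical sketch)."""
    def __init__(self, feature_idx, n_actions, alpha=0.1):
        self.feature_idx = feature_idx  # indices of the features this representation observes
        self.n_actions = n_actions
        self.alpha = alpha
        self.q = defaultdict(float)     # tabular Q[(state, action)] estimates
        self.v = 0.0                    # running estimate of the value of following this representation

    def state(self, context):
        return tuple(context[i] for i in self.feature_idx)

    def greedy_action(self, context):
        s = self.state(context)
        return max(range(self.n_actions), key=lambda a: self.q[(s, a)])

    def update(self, context, action, reward):
        # Off-policy update: applied to every candidate, whichever one chose the action.
        key = (self.state(context), action)
        self.q[key] += self.alpha * (reward - self.q[key])

class SwitchAgent:
    """Selects a representation via a switch action, then acts with it (hypothetical sketch)."""
    def __init__(self, reps, epsilon=0.1, alpha=0.05):
        self.reps = reps
        self.epsilon = epsilon
        self.alpha = alpha

    def act(self, context):
        # Switch action: epsilon-greedy over the candidate representations.
        if random.random() < self.epsilon:
            rep = random.choice(self.reps)
        else:
            rep = max(self.reps, key=lambda r: r.v)
        # Epsilon-greedy action within the chosen representation.
        if random.random() < self.epsilon:
            action = random.randrange(rep.n_actions)
        else:
            action = rep.greedy_action(context)
        return rep, action

    def learn(self, rep, context, action, reward):
        rep.v += self.alpha * (reward - rep.v)  # value of the chosen switch action
        for r in self.reps:                     # off-policy: all candidates learn from the sample
            r.update(context, action, reward)
```

In this sketch, a small representation (few features) aggregates experience quickly and so improves fast, while a larger representation learns more slowly but can reach a higher asymptote; the switch action lets the agent favor whichever currently has the higher estimated value.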
Subject
Psychology
To reference this document use:
http://resolver.tudelft.nl/uuid:c621a73f-c291-45f8-952c-b7297615eedc
DOI
https://doi.org/10.1007/978-3-642-11688-9_3
TNO identifier
347448
Publisher
Springer, Berlin, Heidelberg
Source
Interactive Collaborative Information Systems, vol. 281, pp. 65-84
Series
Studies in Computational Intelligence
Document type
Book part