Neural Network Ensembles in Reinforcement Learning

Creators: Faußer, Stefan A. and Schwenker, Friedhelm
Title: Neural Network Ensembles in Reinforcement Learning
Item Type: Article or issue of a publication series
Journal or Series Title: Neural Processing Letters
Page Range: pp. 55-69
Date: 2015
Divisions: Informationsmanagement
Abstract: The integration of function approximation methods into reinforcement learning models allows learning of state and state-action values in large state spaces. Model-free methods, like temporal-difference learning and SARSA, yield good results for problems where the Markov property holds. However, temporal-difference-based methods are known to be unstable estimators of the value functions when combined with function approximation; this instability depends on the Markov chain, the discount factor, and the chosen function approximator. In this paper, we propose a meta-algorithm to learn state or state-action values in a neural network ensemble, formed by a committee of multiple agents. The agents learn from joint decisions, and it is shown that the committee benefits from the diversity in its value estimates. We empirically evaluate our algorithm on a generalized maze problem and on SZ-Tetris. The empirical evaluations confirm our analytical results.
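The committee idea in the abstract — several diverse value estimators acting through joint decisions while each learns from the shared experience — can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the authors' algorithm: a toy chain MDP replaces the maze/SZ-Tetris tasks, tabular tables stand in for neural networks, and the joint decision is taken to be a one-step lookahead on the averaged (committee) value.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_AGENTS = 5, 3            # toy sizes, chosen for illustration
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1    # discount factor, step size, exploration
GOAL = N_STATES - 1

# Each agent keeps its own value estimate (a tabular stand-in for a neural
# network); different random initializations create diversity in the committee.
values = [rng.normal(0.0, 0.1, N_STATES) for _ in range(N_AGENTS)]

def committee_value(s):
    """Average of the agents' estimates -- the committee's joint value."""
    return sum(v[s] for v in values) / N_AGENTS

def step(s, a):
    """Chain walk: move left (-1) or right (+1); reward 1 on reaching the goal."""
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

def lookahead(s, a):
    """Score an action by simulated reward plus discounted committee value."""
    s2, r = step(s, a)
    return r + GAMMA * committee_value(s2)

for episode in range(300):
    s = 0
    for _ in range(100):                       # cap episode length
        if s == GOAL:
            break
        if rng.random() < EPS:                 # occasional random exploration
            a = rng.choice([-1, 1])
        else:
            # Joint decision: the whole committee shares one greedy action.
            a = max((-1, 1), key=lambda act: lookahead(s, act))
        s2, r = step(s, a)
        for v in values:                       # every agent runs its own TD(0)
            v[s] += ALPHA * (r + GAMMA * v[s2] - v[s])   # update on the shared move
        s = s2
```

After training, the committee's value estimates increase along the chain toward the goal, even though each individual agent started from a different random initialization — the shared trajectory generated by joint decisions is what every member learns from.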
Forthcoming: No
Citation:

Faußer, Stefan A. and Schwenker, Friedhelm (2015) Neural Network Ensembles in Reinforcement Learning. Neural Processing Letters, 41. pp. 55-69. ISSN 1573-773X
