Policy Search in Reproducing Kernel Hilbert Space

Vien Anh Ngo, Peter Englert, Marc Toussaint

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

Modeling policies in reproducing kernel Hilbert space (RKHS) renders policy gradient reinforcement learning algorithms non-parametric. As a result, the policies become very flexible and have a rich representational potential without a pre-defined set of features. However, their performance can be non-covariant under re-parameterization of the chosen kernel, or very sensitive to step-size selection. In this paper, we propose a general framework for deriving new RKHS policy search techniques. The derivation leads to both a natural RKHS actor-critic algorithm and an RKHS expectation-maximization (EM) policy search algorithm. Further, we show that kernelization enables learning in partially observable (POMDP) tasks, which is considered daunting for parametric approaches; via sparsification, a small set of "support vectors" representing the history is effectively discovered. For evaluation, we use three simulated (PO)MDP reinforcement learning tasks and a simulated PR2 robotic manipulation task. The results demonstrate the effectiveness of the new RKHS policy search framework in comparison to plain RKHS actor-critic, episodic natural actor-critic, plain actor-critic, and PoWER approaches.
Original language: English
Title of host publication: The Twenty-Fifth International Joint Conference on Artificial Intelligence
Subtitle of host publication: IJCAI
Pages: 2089-2096
Number of pages: 8
Publication status: Published - 15 Jul 2016
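
To make the idea of a non-parametric RKHS policy concrete, the following is a minimal sketch (not the authors' implementation) of a Gaussian policy whose mean lives in an RKHS, mu(s) = sum_i alpha_i k(s_i, s), updated with a plain REINFORCE-style functional gradient rather than the natural or EM updates derived in the paper. The RBF kernel, the bandwidth, the step size, and the toy 1-D task are illustrative assumptions.

import numpy as np

def rbf(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel between two states (scalars or vectors).
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-0.5 * np.dot(d, d) / bandwidth ** 2))

class RKHSGaussianPolicy:
    # pi(a|s) = N(a; mu(s), sigma^2) with mu(s) = sum_i alpha_i * k(s_i, s).
    def __init__(self, sigma=0.5, bandwidth=1.0):
        self.centers = []   # kernel centers ("support vectors")
        self.alphas = []    # corresponding weights
        self.sigma = sigma
        self.bandwidth = bandwidth

    def mean(self, s):
        return sum(a * rbf(c, s, self.bandwidth)
                   for c, a in zip(self.centers, self.alphas))

    def sample(self, s, rng):
        return self.mean(s) + self.sigma * rng.normal()

    def functional_gradient_step(self, trajectory, step_size=0.05):
        # REINFORCE-style functional policy gradient: the gradient of
        # log pi(a|s) with respect to the mean function is
        # ((a - mu(s)) / sigma^2) * k(s, .), so every visited state is added
        # as a new kernel center and the representation grows with the data.
        ret = sum(r for _, _, r in trajectory)   # total episode return
        new_centers = [s for s, _, _ in trajectory]
        new_alphas = [step_size * ret * (a - self.mean(s)) / self.sigma ** 2
                      for s, a, _ in trajectory]
        self.centers.extend(new_centers)
        self.alphas.extend(new_alphas)

# Usage on a toy 1-D task: actions should push the state toward the origin.
rng = np.random.default_rng(0)
policy = RKHSGaussianPolicy()
for episode in range(50):
    s = rng.uniform(-2.0, 2.0)
    trajectory = []
    for _ in range(10):
        a = policy.sample(s, rng)
        s_next = s + 0.1 * a
        trajectory.append((s, a, -s_next ** 2))   # reward: negative squared distance
        s = s_next
    policy.functional_gradient_step(trajectory)

Each gradient step appends the visited states as new kernel centers, so the kernel expansion grows without a pre-defined feature set; the sparsification discussed in the abstract would prune this growing set of "support vectors" down to a small representative subset.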
