Deep Hierarchical Reinforcement Learning Algorithm in Partially Observable Markov Decision Processes

Tuyen P. Le, Vien Ngo, Md. Abu Layek, TaeChoong Chung

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)
455 Downloads (Pure)


In recent years, reinforcement learning (RL) has achieved remarkable success due to the growing adoption of deep learning techniques and the rapid growth of computing power. Nevertheless, it is well known that flat reinforcement learning algorithms often struggle to learn, and are data-inefficient, on tasks having hierarchical structures, e.g. those consisting of multiple subtasks. Hierarchical reinforcement learning is a principled approach that can tackle such challenging tasks. On the other hand, many real-world tasks have only partial observability, in which state measurements are often imperfect and incomplete. RL problems in such settings can be formulated as a partially observable Markov decision process (POMDP). In this paper, we study hierarchical RL in a POMDP in which the tasks have only partial observability and possess hierarchical properties. We propose a hierarchical deep reinforcement learning approach for learning in hierarchical POMDPs. The proposed deep hierarchical RL algorithm applies to both MDP and POMDP learning domains. We evaluate the proposed algorithm on various challenging hierarchical POMDPs.
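To make the hierarchical-POMDP setting concrete, the sketch below shows the general decomposition the abstract describes: a high-level meta-controller issues subgoals and a low-level controller executes primitive actions, on a toy task where part of the state (key possession) is hidden from the agent. This is an illustrative sketch only, not the paper's algorithm; the environment, class names, and fixed subgoal plan are all invented for the example.

```python
class KeyDoorEnv:
    """Toy partially observable task: pick up a key at cell 2, then reach
    the door at the last cell. The agent observes only its own position;
    whether it holds the key is hidden (the partial-observability part)."""
    def __init__(self, size=5):
        self.size = size
        self.reset()

    def reset(self):
        self.pos, self.has_key = 0, False
        return self.observe()

    def observe(self):
        return self.pos  # partial observation: key possession is not visible

    def step(self, action):  # action: 0 = move left, 1 = move right
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action else -1)))
        if self.pos == 2:
            self.has_key = True
        done = self.pos == self.size - 1 and self.has_key
        return self.observe(), (1.0 if done else 0.0), done


class Controller:
    """Low-level policy: moves greedily toward the current subgoal cell."""
    def act(self, obs, subgoal):
        return 1 if obs < subgoal else 0

    def reached(self, obs, subgoal):
        return obs == subgoal


class MetaController:
    """High-level policy: issues subgoals in a fixed order (key, then door).
    In a learned hierarchical agent this choice would itself be trained."""
    def __init__(self, key_cell, door_cell):
        self.plan = [key_cell, door_cell]
        self.i = 0

    def next_subgoal(self):
        g = self.plan[min(self.i, len(self.plan) - 1)]
        self.i += 1
        return g


def run_episode(env, meta, ctrl, max_steps=50):
    obs, total, done = env.reset(), 0.0, False
    subgoal, steps = meta.next_subgoal(), 0
    while not done and steps < max_steps:
        obs, r, done = env.step(ctrl.act(obs, subgoal))
        total += r
        if ctrl.reached(obs, subgoal) and not done:
            subgoal = meta.next_subgoal()
        steps += 1
    return total


reward = run_episode(KeyDoorEnv(), MetaController(key_cell=2, door_cell=4),
                     Controller())
print(reward)  # → 1.0
```

A flat policy observing only the position cannot distinguish "at the door with the key" from "at the door without it"; the subgoal decomposition sidesteps this by encoding task progress in the meta-controller's state, which is the intuition behind hierarchical RL for such tasks.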
Original language: English
Pages (from-to): 49089-49102
Number of pages: 14
Journal: IEEE Access
Publication status: Published - 27 Jul 2018

