Omega-Regular Objectives in Model-Free Reinforcement Learning

Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi (University of Liverpool)

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)
48 Downloads (Pure)

Abstract

We provide the first solution for model-free reinforcement learning of ω-regular objectives for Markov decision processes (MDPs). We present a constructive reduction from the almost-sure satisfaction of ω-regular objectives to an almost-sure reachability problem, and extend this technique to learning how to control an unknown model so that the chance of satisfying the objective is maximized. We compile ω-regular properties into limit-deterministic Büchi automata instead of the traditional Rabin automata; this choice sidesteps difficulties that have marred previous proposals. Our approach allows us to apply model-free, off-the-shelf reinforcement learning algorithms to compute optimal strategies from the observations of the MDP. We present an experimental evaluation of our technique on benchmark learning problems.
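
To make the construction concrete, below is a minimal Python sketch, not the authors' implementation. It assumes a toy objective ("visit state 2 infinitely often") whose limit-deterministic Büchi automaton has a single state with an accepting self-loop, so the product MDP coincides with the MDP itself, and it runs tabular Q-learning on the augmented MDP of the reduction, in which each accepting transition is redirected, with probability 1 - ZETA, to an absorbing target state that pays reward 1. The slippery chain, the constant ZETA, and all hyperparameters are illustrative assumptions, not values from the paper.

import random
from collections import defaultdict

ZETA = 0.99                  # continuation probability on accepting transitions
ALPHA, GAMMA, EPS = 0.1, 0.999, 0.1   # illustrative learning hyperparameters
TARGET = "target"            # fresh absorbing target state of the augmented MDP
ACTIONS = ("left", "right")

def step(state, action):
    """Slippery 3-state chain {0, 1, 2}: the intended move succeeds w.p. 0.9."""
    move = 1 if action == "right" else -1
    if random.random() < 0.1:
        move = -move
    return min(2, max(0, state + move))

def augmented_step(state, action):
    """One step of the augmented MDP from the reduction: every accepting
    transition (here, any move out of the 'good' state 2) is redirected
    with probability 1 - ZETA to the absorbing target, paying reward 1."""
    if state == 2 and random.random() > ZETA:
        return TARGET, 1.0
    return step(state, action), 0.0

Q = defaultdict(float)

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[state, a])

for _ in range(3000):
    state = 0
    for _ in range(500):     # cap episode length; episodes end at the target
        if state == TARGET:
            break
        action = random.choice(ACTIONS) if random.random() < EPS else greedy(state)
        nxt, reward = augmented_step(state, action)
        best_next = 0.0 if nxt == TARGET else max(Q[nxt, a] for a in ACTIONS)
        Q[state, action] += ALPHA * (reward + GAMMA * best_next - Q[state, action])
        state = nxt

# Q[s, greedy(s)] estimates the (near-undiscounted) probability of reaching
# the target from s, which tracks the satisfaction probability of the objective.
print({s: (greedy(s), round(Q[s, greedy(s)], 3)) for s in (0, 1, 2)})

As ZETA approaches 1, the maximal probability of reaching the target approaches the maximal probability of satisfying the Büchi objective, which is why an off-the-shelf reachability learner suffices.
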
Original language: English
Title of host publication: TACAS: International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Publisher: Springer Lecture Notes in Computer Science (LNCS)
Pages: 395-412
Volume: 11427
DOI: 10.1007/978-3-030-17462-0_27
Publication status: Published - 04 Apr 2019

Publication series

Name: Lecture Notes in Computer Science
Volume: 11427
ISSN (Print): 0302-9743

Cite this

Hahn, E. M., Perez, M., Schewe, S., Somenzi, F., & Trivedi, A. (2019). Omega-Regular Objectives in Model-Free Reinforcement Learning. In TACAS: International Conference on Tools and Algorithms for the Construction and Analysis of Systems (Vol. 11427, pp. 395-412). (Lecture Notes in Computer Science; Vol. 11427). Springer Lecture Notes in Computer Science (LNCS). https://doi.org/10.1007/978-3-030-17462-0_27