ICML-07 Tutorial on Bayesian Methods for Reinforcement Learning

Tutorial Slides

Summary and Objectives

Although Bayesian methods for Reinforcement Learning can be traced back to the 1960s (Howard's work in Operations Research), they have only been used sporadically in modern Reinforcement Learning. This is in part because non-Bayesian approaches tend to be much simpler to work with. However, recent advances have shown that Bayesian approaches do not need to be as complex as initially thought and offer several theoretical advantages. For instance, by keeping track of full distributions (instead of point estimates) over the unknowns, Bayesian approaches permit a more comprehensive quantification of the uncertainty regarding the transition probabilities, the rewards, the value function parameters and the policy parameters. Such distributional information can be used to optimize (in a principled way) the classic exploration/exploitation tradeoff, which can speed up the learning process. Similarly, active learning for reinforcement learning can be naturally optimized. The gradient of the performance with respect to value function and/or policy parameters can also be estimated more accurately while using less data. Bayesian approaches also facilitate the encoding of prior knowledge and the explicit formulation of domain assumptions.
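
To make the idea of "keeping track of full distributions over the unknowns" concrete, the following is a minimal sketch (not an algorithm from the tutorial) of model-based Bayesian RL: a Dirichlet posterior is maintained over the transition probabilities, and one model is sampled from the posterior at each step (Thompson sampling) to balance exploration and exploitation. The state/action sizes, reward table, and simulated environment are illustrative assumptions, not part of the original material.

    import numpy as np

    # Sketch: Dirichlet posterior over transition probabilities + Thompson sampling.
    # All sizes, rewards, and the simulated environment below are assumptions.

    n_states, n_actions, gamma = 5, 2, 0.95
    rng = np.random.default_rng(0)

    # "True" environment, used only to simulate experience in this example.
    P_true = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.uniform(size=(n_states, n_actions))        # rewards assumed known

    # Dirichlet prior counts over next-state distributions, one per (s, a).
    alpha = np.ones((n_states, n_actions, n_states))

    def sample_model(alpha):
        """Draw one transition model from the current posterior."""
        return np.array([[rng.dirichlet(alpha[s, a]) for a in range(n_actions)]
                         for s in range(n_states)])

    def greedy_policy(P, n_iters=200):
        """Value iteration on the sampled MDP; returns its greedy policy."""
        V = np.zeros(n_states)
        for _ in range(n_iters):
            Q = R + gamma * P @ V    # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
            V = Q.max(axis=1)
        return Q.argmax(axis=1)

    s = 0
    for t in range(1000):
        policy = greedy_policy(sample_model(alpha))    # Thompson sampling step
        a = policy[s]
        s_next = rng.choice(n_states, p=P_true[s, a])  # simulated transition
        alpha[s, a, s_next] += 1                       # conjugate posterior update
        s = s_next

Because the Dirichlet prior is conjugate to the multinomial transition model, the posterior update is a simple count increment, and the sampled models automatically explore more in poorly visited (s, a) pairs where the posterior is still diffuse.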

The primary goal of this tutorial is to raise the awareness of the research community with regard to Bayesian methods, their properties and potential benefits for the advancement of Reinforcement Learning. An introduction to Bayesian learning will be given, followed by a historical account of Bayesian Reinforcement Learning and a description of existing Bayesian methods for Reinforcement Learning. The properties and benefits of Bayesian techniques for Reinforcement Learning will be discussed, analyzed and illustrated with case studies.

Outline

  1. Introduction to Reinforcement Learning and Bayesian learning

  2. History of Bayesian RL

  3. Model-based Bayesian RL

    3.1  Policy optimization techniques

    3.2  Encoding of domain knowledge

    3.3  Exploration/exploitation tradeoff and active learning

    3.4  Bayesian imitation learning in RL

    3.5  Bayesian multi-agent coordination and coalition formation in RL

  4. Model-free Bayesian RL

    4.1  Gaussian process temporal difference (GPTD)

    4.2  Gaussian process SARSA

    4.3  Bayesian policy gradient

    4.4  Bayesian actor-critic algorithms

  5. Demo

    5.1  Control of an octopus arm using GPTD

Presenters