VIDEO DOI: https://doi.org/10.48448/x23v-5v22
KEYWORDS: Model-based reinforcement learning, Spectral method, Sample complexity, Computational complexity

technical paper

AAMAS 2020

May 11, 2020

Live on Underline

Can Agents Learn by Analogy? An Inferable Model for PAC Reinforcement Learning

Model-based reinforcement learning algorithms make decisions by building and utilizing a model of the environment. However, none of the existing algorithms attempts to infer the dynamics of a state-action pair from known state-action pairs before visiting it sufficiently many times. We propose a new model-based method called Greedy Inference Model (GIM) that infers unknown dynamics from known dynamics based on the internal spectral properties of the environment. In other words, GIM can "learn by analogy". We further introduce a new exploration strategy that ensures the agent rapidly and evenly visits unknown state-action pairs. GIM is much more computationally efficient than state-of-the-art model-based algorithms, as its number of dynamic programming operations is independent of the environment size. Under mild conditions, GIM also achieves lower sample complexity than methods without inference. Experimental results demonstrate the effectiveness and efficiency of GIM on a variety of real-world tasks.
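To make the "learn by analogy" idea concrete, here is a minimal sketch of one way to infer unvisited dynamics from visited ones when the transition matrix over (state, action) pairs is approximately low-rank. This is an illustration only, not the paper's actual GIM algorithm: the matrix layout, the hard-impute loop, and the name infer_dynamics are all assumptions for this sketch.

```python
# Minimal sketch (assumed, not the authors' GIM algorithm): spectral
# inference of unknown dynamics via low-rank completion of the
# empirical transition matrix.
import numpy as np

def infer_dynamics(P_hat, known_mask, rank, n_iters=200):
    """Fill unknown rows of an empirical transition matrix.

    P_hat      : (|S||A|, |S|) empirical transition estimates;
                 rows for unknown pairs may be arbitrary (e.g. uniform).
    known_mask : boolean (|S||A|,), True for well-visited pairs.
    rank       : assumed spectral rank of the true dynamics.
    """
    P = P_hat.copy()
    for _ in range(n_iters):
        # Project onto rank-`rank` matrices via truncated SVD.
        U, s, Vt = np.linalg.svd(P, full_matrices=False)
        P = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Keep the trusted (well-visited) rows fixed.
        P[known_mask] = P_hat[known_mask]
        # Renormalize rows into valid probability distributions.
        P = np.clip(P, 0.0, None)
        row_sums = P.sum(axis=1, keepdims=True)
        P = np.where(row_sums > 0, P / row_sums, 1.0 / P.shape[1])
    return P
```

The alternating projection here (truncate to low rank, then restore the trusted rows) is a standard hard-impute completion scheme; the paper's spectral method and its exploration strategy for visiting unknown state-action pairs are developed rigorously and differ in detail.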

Downloads

Slides · Transcript, English (automatic)

Next from AAMAS 2020

technical paper

Green Security Game with Community Engagement

AAMAS 2020

Taoan Huang and 5 other authors

11 May 2020

Similar lecture

poster

Spectral Feature Augmentation for Graph Contrastive Learning and Beyond

AAAI 2023

Yifei Zhang and 4 other authors

10 February 2023

