Keywords: feedback-based reward shaping, deep reinforcement learning, human feedback

technical paper

AAMAS 2020

May 11, 2020


FRESH: Interactive Reward Shaping in High-Dimensional State Spaces using Human Feedback

Reinforcement learning has been successful in training autonomous agents to accomplish goals in complex environments. Although it has been applied in settings ranging from robotics to computer games, human players often obtain higher rewards than reinforcement learning algorithms in some environments. This is especially true in high-dimensional state spaces where the reward obtained by the agent is sparse or extremely delayed. In this presentation, we introduce the FRESH (Feedback-based REward SHaping) framework, which effectively integrates feedback signals supplied by a human operator into deep reinforcement learning algorithms in high-dimensional state spaces. During training, a human operator is presented with trajectories from a replay buffer and provides feedback on states and actions in each trajectory. To generalize the operator's feedback to previously unseen states and actions at test time, we train a feedback neural network. We use an ensemble of neural networks with a shared architecture to represent model uncertainty and the network's confidence in its output. The output of the feedback neural network is converted to a shaping reward that is added to the reward provided by the environment. We evaluate our approach on the Bowling and Skiing Atari games in the Arcade Learning Environment. Although human experts achieve high scores in these environments, state-of-the-art deep reinforcement learning algorithms perform poorly. We observe that FRESH achieves much higher scores than state-of-the-art deep learning algorithms in both environments. FRESH also achieves a 21.4% higher score than a human expert in Bowling and matches expert performance in Skiing.
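The reward-augmentation step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the linear ensemble members, the majority-vote feedback labels, and the `agreement_threshold` confidence gate are all illustrative assumptions standing in for the trained feedback networks.

```python
import numpy as np

class FeedbackEnsemble:
    """Toy stand-in for the ensemble of feedback networks: each member is a
    linear scorer over state features whose sign gives a feedback label in
    {-1, 0, +1} (negative / neutral / positive human feedback)."""

    def __init__(self, n_members=5, state_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(n_members, state_dim))

    def predict(self, state):
        # Each ensemble member votes a feedback label for the given state.
        scores = self.weights @ state
        return np.sign(scores)


def shaped_reward(env_reward, state, ensemble, scale=1.0, agreement_threshold=0.8):
    """Augment the environment reward with a feedback-based shaping term,
    applied only when the ensemble members largely agree -- a simple proxy
    for the model-uncertainty / confidence estimate described above."""
    votes = ensemble.predict(state)
    mean_vote = votes.mean()
    agreement = abs(mean_vote)  # 1.0 = unanimous, 0.0 = evenly split
    if agreement >= agreement_threshold:
        return env_reward + scale * mean_vote
    return env_reward  # low confidence: fall back to the raw reward
```

A confident, unanimous ensemble shifts the reward by the (scaled) feedback label, while a split ensemble leaves the environment reward untouched; gating on agreement keeps unreliable generalizations of the human feedback from distorting training.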





