May 11, 2020
Live on Underline
FRESH: Interactive Reward Shaping in High-Dimensional State Spaces using Human Feedback
Reinforcement learning has been successful in training autonomous agents to accomplish goals in complex environments. Although these methods have been applied in multiple settings, including robotics and computer games, human players often find it easier to obtain higher rewards in some environments than reinforcement learning algorithms do. This is especially true of high-dimensional state spaces where the reward obtained by the agent is sparse or extremely delayed. In this presentation, we introduce the FRESH (Feedback-based REward SHaping) framework, which effectively integrates feedback signals supplied by a human operator with deep reinforcement learning algorithms in high-dimensional state spaces. During training, a human operator is presented with trajectories from a replay buffer and provides feedback on states and actions in those trajectories. To generalize the operator's feedback signals to previously unseen states and actions at test time, we use a feedback neural network. We use an ensemble of neural networks with a shared network architecture to represent model uncertainty and the confidence of the neural network in its output. The output of the feedback neural network is converted to a shaping reward that augments the reward provided by the environment. We evaluate our approach on the Bowling and Skiing Atari games in the Arcade Learning Environment. Although human experts have achieved high scores in these environments, state-of-the-art deep learning algorithms perform poorly. We observe that FRESH achieves much higher scores than state-of-the-art deep learning algorithms in both environments. FRESH also achieves a 21.4% higher score than a human expert in Bowling and does as well as an expert in Skiing.
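The mechanism described above, converting ensemble feedback predictions into a shaping reward gated by the ensemble's confidence, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the class and function names are hypothetical, and simple linear models with a tanh squashing stand in for the shared-architecture deep networks. The ensemble's disagreement (standard deviation of member predictions) serves as the uncertainty estimate; shaping is applied only when the ensemble is confident.

```python
import math
import random
import statistics

class FeedbackEnsemble:
    """Hypothetical stand-in for FRESH's ensemble of feedback networks.

    Each member maps a (state, action) feature vector to a predicted
    human-feedback value in (-1, 1). Here each member is a random linear
    model, purely to illustrate the mechanism.
    """

    def __init__(self, n_models, dim, seed=0):
        rnd = random.Random(seed)
        self.weights = [[rnd.gauss(0.0, 1.0) for _ in range(dim)]
                        for _ in range(n_models)]

    def predict(self, features):
        # Each ensemble member outputs a feedback estimate squashed into (-1, 1).
        return [math.tanh(sum(w * f for w, f in zip(ws, features)))
                for ws in self.weights]

def shaping_reward(ensemble, features, disagreement_threshold=0.1, scale=1.0):
    """Confidence-gated shaping term (illustrative, names are assumptions)."""
    preds = ensemble.predict(features)
    # High disagreement among members = low confidence: do not shape.
    if statistics.pstdev(preds) > disagreement_threshold:
        return 0.0
    # Confident ensemble: use the mean prediction as the shaping reward.
    return scale * statistics.mean(preds)

# The shaping term augments the (often sparse) environment reward.
ensemble = FeedbackEnsemble(n_models=5, dim=8)
features = [1.0] * 8
env_reward = 0.0  # sparse environments give 0 at most time steps
total_reward = env_reward + shaping_reward(ensemble, features)
```

In this sketch, gating on ensemble disagreement means the agent falls back to the unmodified environment reward in regions of the state space where the human feedback does not generalize reliably.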