Task-Relevant Embeddings for Robust Perception in Reinforcement Learning

Abstract

Reinforcement learning is unreasonably sample-inefficient in many real-world visual domains that require relatively simple control behaviors but pose challenging perception problems. We show that even simple visual noise added to common reinforcement learning benchmark environments can significantly degrade learning efficiency and break common approaches such as autoencoders. We propose new methods for learning task-relevant state representations, and show that they discover image embeddings that are substantially more effective when robust perception is required.

Eric Liang, Roy Fox, Joseph Gonzalez, and Ion Stoica, PGMRL Workshop, ICML 2018