Conference Proceedings

An Algorithm and User Study for Teaching Bilateral Manipulation via Iterated Best Response Demonstrations

Multilateral demonstrations can be difficult for human supervisors to provide because they require divided attention. We propose Bilateral Iterated Best Response (BIBR), a new algorithm that reduces supervisor burden by iteratively demonstrating each manipulator unilaterally while rolling out an estimated robot policy for the other manipulator. We present a web-based user study of a two-agent gridworld domain. We confirm prior findings that bilateral demonstrations are noisier and longer when the task is asymmetric, and show that BIBR improves success rate in the asymmetric task while learning policies with shorter and smoother trajectories.
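The alternation at the heart of BIBR can be sketched as a small loop; the function names and interfaces below are ours for illustration, not the paper's API:

```python
def bibr(demonstrate_arm, fit_policy, init_policy, n_iters):
    """Iterated best response for bilateral demonstration (illustrative sketch).

    The supervisor demonstrates one manipulator at a time while the other
    manipulator is rolled out under its current estimated policy; the
    demonstrated arm's policy is then refit as a best response.
    """
    policies = {"left": init_policy, "right": init_policy}
    for i in range(n_iters):
        # Alternate which arm the human demonstrates.
        arm, other = ("left", "right") if i % 2 == 0 else ("right", "left")
        # Collect unilateral demonstrations while the other arm runs its policy.
        demos = demonstrate_arm(arm, policies[other])
        # Refit the demonstrated arm's policy from the new data.
        policies[arm] = fit_policy(demos)
    return policies
```

With placeholder callables for the demonstration interface and the learner, the loop simply converges to whatever each unilateral round teaches.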

Carolyn Chen, Sanjay Krishnan, Michael Laskey, Roy Fox, and Ken Goldberg, CASE, 2017

Statistical Data Cleaning for Deep Learning of Automation Tasks from Demonstrations

Human demonstrators of deep learning tasks are prone to inconsistencies and errors that can delay or degrade learning. We improve task performance by characterizing supervisor inconsistency and correcting for it using data cleaning techniques. In human demonstrations of a planar part extraction task on a 2DOF robot, trained CNN models show an improvement of 11.2% in mean absolute success rate after data cleaning.

Caleb Chuck, Michael Laskey, Sanjay Krishnan, Ruta Joshi, Roy Fox, and Ken Goldberg, CASE, 2017

Minimum-Information LQG Control — Part I: Memoryless Controllers

We consider the problem of controlling a linear system with Gaussian noise and quadratic cost (LQG), using a memoryless controller with limited capacity on the channel connecting its sensor to its actuator. We formulate this setting as a sequential rate-distortion (SRD) problem, in which we minimize the rate of information required for the controller’s operation under a constraint on its external cost. We present the optimality principle and study the interesting and useful phenomenology of the optimal controller, such as the principled reduction of its order.

Roy Fox and Naftali Tishby, CDC, 2016

Minimum-Information LQG Control — Part II: Retentive Controllers

We consider the case where the controller is retentive (memory-utilizing). We can view the memory reader as one more sensor, and the memory writer as one more actuator. We can then formulate the problem of control under communication limitations, again as a sequential rate-distortion (SRD) problem of minimizing the rate of information required for the controller’s operation, under a constraint on its external cost. We show that this problem can be reduced to the memoryless case, studied in Part I. We then further investigate the form of the resulting optimal solution, and demonstrate its interesting phenomenology.

Roy Fox and Naftali Tishby, CDC, 2016

Principled Option Learning in Markov Decision Processes

We characterize a good set of prior options as the centroids of clusters of control options that are optimized for a set of subtasks. We formulate this insight as an optimization problem and derive an optimization algorithm that alternates between planning given the set of prior options and clustering the set of control options. We illustrate this approach in a simple two-room simulation.

*Roy Fox, *Michal Moshkovitz, and Naftali Tishby, EWRL, 2016
* Equal contribution

Taming the Noise in Reinforcement Learning via Soft Updates

We identify a shortcoming of off-policy reinforcement learning algorithms, in which the optimization over noisy estimates introduces bias during updates. We propose G-learning, a new off-policy learning algorithm that regularizes the updates by introducing an informational cost. We show how these soft updates reduce the accumulation of bias and lead to faster convergence. We discuss additional benefits of G-learning, such as the ability to naturally incorporate any available prior domain knowledge, and to avoid some exploration costs. We illustrate its benefits in several examples where G-learning results in significant improvements of the learning rate and the learning cost.
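The soft update behind G-learning replaces the hard max of Q-learning with a prior-weighted log-sum-exp, which damps the bias that arises from maximizing over noisy estimates. A minimal single-state sketch (function and variable names are ours, not the paper's):

```python
import math

def soft_value(q_values, prior, beta):
    # Soft update: V = (1/beta) * log sum_a prior(a) * exp(beta * Q(a)).
    # As beta grows this approaches max_a Q(a); for small beta it stays
    # near the prior-weighted mean, so noise in the estimates is averaged
    # out rather than amplified by a hard max.
    return (1.0 / beta) * math.log(
        sum(p * math.exp(beta * q) for p, q in zip(prior, q_values))
    )

q = [1.0, 2.0, 0.5]       # noisy action-value estimates for one state
prior = [1/3, 1/3, 1/3]   # uniform prior policy
v_soft = soft_value(q, prior, beta=1.0)
v_hard = max(q)           # the hard max used by standard Q-learning
```

The soft value always lies between the prior-weighted mean and the hard max, interpolating between them as beta increases.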

*Roy Fox, *Ari Pakman, and Naftali Tishby, UAI, 2016
* Equal contribution

A Multi-Agent Control Framework for Co-Adaptation in Brain-Computer Interfaces

We model the process by which the brain and the computer in a brain-computer interface (BCI) co-adapt to one another. We show in this simplified Linear-Quadratic-Gaussian (LQG) model how the brain’s neural encoding can adapt to the task of controlling the computer, at the same time that the computer’s adaptive decoder can adapt to the task of estimating the intention signal, leading to improvement in the system’s performance. We then propose an encoder-aware decoder adaptation scheme, which allows the computer to drive improvement forward faster by anticipating the brain’s adaptation.

*Josh Merel, *Roy Fox, Tony Jebara, and Liam Paninski, NIPS, 2013
* Equal contribution

Bounded Planning in Passive POMDPs

We define Passive POMDPs, where actions do not affect the world state, but still incur costs. We present a variational principle for the problem of maintaining in memory the information state that is most useful for minimizing the cost, leading to a trade-off between memory and sensing, similar to multi-terminal source coding. We analyze the problem as an equivalent joint-state MDP, and introduce an efficient and simple algorithm for finding an optimum.

Roy Fox and Naftali Tishby, ICML, 2012

A Reinforcement Learning Algorithm with Polynomial Interaction Complexity for Only-Costly-Observable MDPs

We define Only-Costly-Observable Markov Decision Processes (OCOMDPs), an interesting subclass of POMDPs in which the state is unobservable except through a costly action that completely reveals it. This is an extension of Unobservable MDPs, where planning is known to be NP-complete. Despite this computational complexity, we give an algorithm for PAC-learning OCOMDPs with polynomial interaction complexity, given a planning oracle.

Roy Fox and Moshe Tennenholtz, AAAI, 2007


Information-Theoretic Methods for Planning and Learning in Partially Observable Markov Decision Processes

We formulate the problem of optimizing an agent under both extrinsic and intrinsic constraints on its operation in a dynamical system and develop the main tools for solving it. We identify the challenging convergence properties of the optimization algorithm, such as the bifurcation structure of the update operator near phase transitions. We study the special case of linear-Gaussian dynamics and quadratic cost (LQG), where the optimal solution has a particularly simple and solvable form. We also explore the learning task, where the model of the world dynamics is unknown and sample-based updates are used instead.

Roy Fox, PhD Thesis, 2016

Reinforcement Learning in Partially Observable Decision Processes

We study the interaction complexity of several partially observable decision processes. We present a simple subclass of POMDPs that is learnable but requires exponential interaction complexity. We discuss the previously studied OPOMDPs, and show that they are PAC-learnable with polynomial interaction complexity. We then define a more general and complex subclass of POMDPs, which we call OCOMDPs, and give an algorithm for PAC-learning them with polynomial interaction complexity, given a planning oracle.

Roy Fox, MSc Thesis, 2008


Multi-Level Discovery of Deep Options

Discovery of Deep Options (DDO) discovers parametrized options from demonstrations, and can recursively discover additional levels of the hierarchy. It scales to multi-level hierarchies by decoupling low-level option discovery from high-level meta-control policy learning, facilitated by under-parametrization of the high level. We demonstrate that augmenting the action space of Deep Q-Networks with the discovered options accelerates learning by guiding exploration in tasks where random actions are unlikely to reach valuable states.

*Roy Fox, *Sanjay Krishnan, Ion Stoica, and Ken Goldberg, 2017
* Equal contribution

Iterative Noise Injection for Scalable Imitation Learning

Imitation learning algorithms suffer from covariate shift when the agent visits different states than the supervisor. Active approaches such as DAgger collect data from the current agent distribution, which can change in each iteration even with very large batch sizes. We propose injecting artificial noise into the supervisor’s policy, prove an improved bound on the loss due to the covariate shift, and introduce an algorithm to estimate the level of ε-greedy noise to inject. Our algorithm, Dart, achieves better performance than DAgger in a driving simulator.
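The injection step itself is simple: with some probability ε, the supervisor's chosen action is replaced by a random one, so the demonstrations cover states the learner is likely to reach. A minimal sketch (names are ours; estimating ε is the substantive part of the paper and is not shown here):

```python
import random

def noisy_supervisor_action(supervisor_action, n_actions, eps, rng=random):
    # epsilon-greedy noise injection: with probability eps, replace the
    # supervisor's action with a uniformly random action; otherwise keep
    # the supervisor's action unchanged.
    if rng.random() < eps:
        return rng.randrange(n_actions)
    return supervisor_action
```

Rolling out the supervisor with this wrapper yields demonstrations from a distribution closer to the one the trained agent will induce.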

Michael Laskey, Jonathan Lee, Wesley Hsieh, Richard Liaw, Jeffrey Mahler, Roy Fox, and Ken Goldberg, 2017

Optimal Selective Attention in Reactive Agents

We present the minimum-information principle for selective attention in reactive agents. We motivate this approach by reducing the general problem of optimal control in POMDPs to reactive control with complex observations. We introduce a forward-backward algorithm for finding optimal selective-attention policies, and illustrate it with several examples. Finally, we analyze and explore the newly discovered phenomenon of period-doubling bifurcations in this optimization process.

Roy Fox and Naftali Tishby, technical report, 2015