One of my core research areas is understanding the computational mechanisms that enable learning to perform complex tasks primarily from experience and feedback. This topic, called Reinforcement Learning, has a complex history tying together fields as diverse as neuroscience, behavioural and developmental psychology, economics, and computer science. I approach it as a computational researcher aiming to build Artificial Intelligence agents that learn the way humans do, not through any correspondence between their "brain" and its "neural" structure, but through the algorithms they both use to learn to act in a complex, mysterious world.
Learning Resources
Courses and Texts
Seminal Deep RL Papers
- Playing Atari with Deep Reinforcement Learning (Mnih et al., the original DQN paper): https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
Our Papers on Reinforcement Learning
Scientific Discovery and the Cost of Measurement – Balancing Information and Cost in Reinforcement Learning
Bellinger, Colin,
Drozdyuk, Andriy,
Crowley, Mark,
and Tamblyn, Isaac
In 1st Annual AAAI Workshop on AI to Accelerate Science and Engineering (AI2ASE)
2022
The use of reinforcement learning (RL) in scientific applications, such as materials design and automated chemistry, is increasing. A major challenge, however, lies in the fact that measuring the state of the system is often costly and time-consuming in scientific applications, whereas policy learning with RL requires a measurement after each time step. In this work, we make the measurement costs explicit in the form of a costed reward and propose a framework that enables off-the-shelf deep RL algorithms to learn a policy for both selecting actions and determining whether or not to measure the current state of the system at each time step. In this way, the agents learn to balance the need for information with the cost of information. Our results show that when trained under this regime, the Dueling DQN and PPO agents can learn optimal action policies whilst making up to 50% fewer state measurements, and recurrent neural networks can produce a greater than 50% reduction in measurements. We postulate that these reductions can help to lower the barrier to applying RL to real-world scientific applications.
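A concrete way to picture the framework: the agent's output is augmented with a binary "measure" decision, and the reward it optimizes is the task reward minus a fixed charge whenever it chooses to measure. The sketch below is only illustrative; the wrapper name, the cost value, and the Gymnasium-style step signature are assumptions rather than the paper's actual code.

```python
# Illustrative sketch (not the paper's implementation): wrap a Gymnasium-style
# environment so the agent chooses a control action *and* whether to pay for a
# measurement of the next state.
class CostedMeasurementWrapper:
    def __init__(self, env, measure_cost=0.1):
        self.env = env
        self.measure_cost = measure_cost  # assumed fixed cost per state measurement
        self.last_obs = None

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.last_obs = obs
        return obs, info

    def step(self, action, measure: bool):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if measure:
            reward -= self.measure_cost  # costed reward: task reward minus measurement cost
            self.last_obs = obs
            return obs, reward, terminated, truncated, info
        # Without a measurement, the agent only sees its most recent observation.
        return self.last_obs, reward, terminated, truncated, info
```

An off-the-shelf agent such as Dueling DQN or PPO can then be trained on the augmented action space and learn when the information is worth its price.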
Decentralized Mean Field Games
Ganapathi Subramanian, Sriram,
Taylor, Matthew,
Crowley, Mark,
and Poupart, Pascal
In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-2022)
2022
Multiagent reinforcement learning algorithms have not been widely adopted in large scale environments with many agents as they often scale poorly with the number of agents. Using mean field theory to aggregate agents has been proposed as a solution to this problem. However, almost all previous methods in this area make a strong assumption of a centralized system where all the agents in the environment learn the same policy and are effectively indistinguishable from each other. In this paper, we relax this assumption about indistinguishable agents and propose a new mean field system known as Decentralized Mean Field Games, where each agent can be quite different from others. All agents learn independent policies in a decentralized fashion, based on their local observations. We define a theoretical solution concept for this system and provide a fixed point guarantee for a Q-learning based algorithm in this system. A practical consequence of our approach is that we can address a ‘chicken-and-egg’ problem in empirical mean field reinforcement learning algorithms. Further, we provide Q-learning and actor-critic algorithms that use the decentralized mean field learning approach and give stronger performance than common baselines in this area. In our setting, agents do not need to be clones of each other and learn in a fully decentralized fashion. Hence, for the first time, we show the application of mean field learning methods in fully competitive environments, large-scale continuous action space environments, and other environments with heterogeneous agents. Importantly, we also apply the mean field method in a ride-sharing problem using a real-world dataset. We propose a decentralized solution to this problem, which is more practical than existing centralized training methods.
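The core departure from earlier mean field methods is that every agent keeps its own local estimate of the mean field and its own Q-function, learned purely from what it can observe. The tabular sketch below shows that structure; the discretization of the mean action and all hyperparameters are illustrative assumptions, not the algorithm from the paper.

```python
# Illustrative sketch of a decentralized mean field Q-learner: each agent conditions
# its Q-values on its own (discretized) estimate of its neighbours' mean action.
import numpy as np
from collections import defaultdict

class DecentralizedMeanFieldQAgent:
    def __init__(self, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.Q = defaultdict(float)  # keyed by (state, action, discretized mean action)
        self.mean_action = np.ones(n_actions) / n_actions  # local mean field estimate

    def update_mean_field(self, neighbour_actions):
        # Estimate the mean field only from the neighbours this agent can see.
        counts = np.bincount(neighbour_actions, minlength=self.n_actions)
        self.mean_action = counts / max(1, counts.sum())

    def act(self, state):
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        mf = tuple(np.round(self.mean_action, 1))
        return int(np.argmax([self.Q[(state, a, mf)] for a in range(self.n_actions)]))

    def learn(self, state, action, reward, next_state):
        mf = tuple(np.round(self.mean_action, 1))
        best_next = max(self.Q[(next_state, a, mf)] for a in range(self.n_actions))
        target = reward + self.gamma * best_next
        self.Q[(state, action, mf)] += self.alpha * (target - self.Q[(state, action, mf)])
```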
Investigation of Independent Reinforcement Learning Algorithms in Multi-Agent Environments
Lee, Ken Ming,
Ganapathi Subramanian, Sriram,
and Crowley, Mark
In NeurIPS 2021 Deep Reinforcement Learning Workshop
2021
Independent reinforcement learning algorithms have no theoretical guarantees for finding the best policy in multi-agent settings. However, in practice, prior works have reported good performance with independent algorithms in some domains and bad performance in others. Moreover, a comprehensive study of the strengths and weaknesses of independent algorithms is lacking in the literature. In this paper, we carry out an empirical comparison of the performance of independent algorithms on four PettingZoo environments that span the three main categories of multi-agent environments, i.e., cooperative, competitive, and mixed. We show that in fully-observable environments, independent algorithms can perform on par with multi-agent algorithms in cooperative and competitive settings. For the mixed environments, we show that agents trained via independent algorithms learn to perform well individually, but fail to learn to cooperate with allies and compete with enemies. We also show that adding recurrence improves the learning of independent algorithms in cooperative partially observable environments.
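"Independent" here means that each agent runs an ordinary single-agent learner and simply treats every other agent as part of the environment. The training-loop sketch below assumes a PettingZoo-style parallel API (dictionaries keyed by agent name); exact return signatures differ across PettingZoo versions, so treat it as a shape rather than a drop-in script.

```python
# Minimal sketch of independent multi-agent training: no joint value function and no
# shared parameters, just one single-agent learner per agent name.
def train_independent(env, agents, episodes=1000):
    for _ in range(episodes):
        observations, _ = env.reset()
        while env.agents:  # PettingZoo-style envs drop finished agents from env.agents
            actions = {name: agents[name].act(observations[name]) for name in env.agents}
            next_obs, rewards, terminations, truncations, _ = env.step(actions)
            for name in actions:
                # Each agent updates from its own local experience only.
                agents[name].learn(observations[name], actions[name],
                                   rewards[name], next_obs.get(name))
            observations = next_obs
```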
Multi-Agent Advisor Q-Learning
Ganapathi Subramanian, Sriram,
Larson, Kate,
Taylor, Matthew,
and Crowley, Mark
Journal of Artificial Intelligence Research (JAIR)
2022
A Complementary Approach to Improve Wildfire Prediction Systems
Ganapathi Subramanian, Sriram,
and Crowley, Mark
In Neural Information Processing Systems (AI for Social Good Workshop)
2018
Partially Observable Mean Field Reinforcement Learning
Ganapathi Subramanian, Sriram,
Taylor, Matthew,
Crowley, Mark,
and Poupart, Pascal
In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS)
2021
Traditional multi-agent reinforcement learning algorithms are not scalable to environments with more than a few agents, since these algorithms are exponential in the number of agents. Recent research has introduced successful methods to scale multi-agent reinforcement learning algorithms to many agent scenarios using mean field theory. Previous work in this field assumes that an agent has access to exact cumulative metrics regarding the mean field behaviour of the system, which it can then use to take its actions. In this paper, we relax this assumption and maintain a distribution to model the uncertainty regarding the mean field of the system. We consider two different settings for this problem. In the first setting, only agents in a fixed neighbourhood are visible, while in the second setting, the visibility of agents is determined at random based on distances. For each of these settings, we introduce a Q-learning based algorithm that can learn effectively. We prove that this Q-learning estimate stays very close to the Nash Q-value (under a common set of assumptions) for the first setting. We also empirically show our algorithms outperform multiple baselines in three different games in the MAgents framework, which supports large environments with many agents learning simultaneously to achieve possibly distinct goals.
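One simple way to picture "maintaining a distribution over the mean field" is a Dirichlet belief over the mean action, updated only from the neighbours an agent can currently see. The sketch below illustrates that idea; it is not the paper's update rule, and it does not reproduce the two visibility settings studied there.

```python
# Illustrative belief over the mean action under partial observability: a Dirichlet
# whose pseudo-counts come only from the actions of currently visible neighbours.
import numpy as np

class MeanFieldBelief:
    def __init__(self, n_actions, prior=1.0):
        self.alpha = np.full(n_actions, prior)  # Dirichlet pseudo-counts

    def observe_neighbours(self, visible_actions):
        # Only agents inside the visibility neighbourhood contribute evidence.
        self.alpha += np.bincount(visible_actions, minlength=len(self.alpha))

    def sample_mean_action(self, rng=None):
        # Draw a plausible mean action to feed into a mean field Q-update.
        if rng is None:
            rng = np.random.default_rng()
        return rng.dirichlet(self.alpha)

    def expected_mean_action(self):
        return self.alpha / self.alpha.sum()
```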
Active Measure Reinforcement Learning for Observation Cost Minimization: A framework for minimizing measurement costs in reinforcement learning
Bellinger, Colin,
Coles, Rory,
Crowley, Mark,
and Tamblyn, Isaac
In Canadian Conference on Artificial Intelligence
2021
Markov Decision Processes (MDP) with explicit measurement cost are a class of environments in which the agent learns to maximize the costed return. Here, we define the costed return as the discounted sum of rewards minus the sum of the explicit cost of measuring the next state. The RL agent can freely explore the relationship between actions and rewards but is charged each time it measures the next state. Thus, an optimal agent must learn a policy without making a large number of measurements. We propose the active measure RL framework (Amrl) as a solution to this novel class of problems, and contrast it with standard reinforcement learning under full observability and planning under partial observability. We demonstrate that Amrl-Q agents learn to shift from a reliance on costly measurements to exploiting a learned transition model in order to reduce the number of real-world measurements and achieve a higher costed return. Our results demonstrate the superiority of Amrl-Q over standard RL methods, Q-learning and Dyna-Q, and POMCP for planning under a POMDP in environments with explicit measurement costs.
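The sketch below illustrates the Amrl-Q idea in tabular form: the agent carries both a Q-table and a learned transition model, pays the cost only when it actually measures, and otherwise rolls its update forward on the model's prediction. The count-based model, the measurement-preference decay, and all names are simplifying assumptions rather than the paper's implementation.

```python
# Illustrative Amrl-Q-style agent: Q-learning plus a learned transition model, with a
# learned tendency to stop paying for measurements as the model becomes reliable.
import numpy as np
from collections import defaultdict, Counter

class AmrlQAgent:
    def __init__(self, n_actions, measure_cost=0.05, alpha=0.1, gamma=0.95):
        self.Q = defaultdict(lambda: np.zeros(n_actions))
        self.model = defaultdict(Counter)              # (state, action) -> next-state counts
        self.measure_pref = defaultdict(lambda: 1.0)   # probability of choosing to measure
        self.measure_cost, self.alpha, self.gamma = measure_cost, alpha, gamma

    def should_measure(self, state, action):
        return np.random.random() < self.measure_pref[(state, action)]

    def predict_next(self, state, action):
        counts = self.model[(state, action)]
        return counts.most_common(1)[0][0] if counts else state

    def step_update(self, state, action, reward, measured_next=None):
        if measured_next is not None:
            # Real measurement: pay the cost, update the model, trust the observation.
            reward -= self.measure_cost
            self.model[(state, action)][measured_next] += 1
            next_state = measured_next
            self.measure_pref[(state, action)] *= 0.99  # rely less on measurement over time
        else:
            # No measurement: fall back on the learned transition model.
            next_state = self.predict_next(state, action)
        td_target = reward + self.gamma * self.Q[next_state].max()
        self.Q[state][action] += self.alpha * (td_target - self.Q[state][action])
        return next_state
```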
Deep Multi Agent Reinforcement Learning for Autonomous Driving
Bhalla, Sushrut,
Ganapathi Subramanian, Sriram,
and Crowley, Mark
In Canadian Conference on Artificial Intelligence
2020
Learning Multi-Agent Communication with Reinforcement Learning
Bhalla, Sushrut,
Ganapathi Subramanian, Sriram,
and Crowley, Mark
In Conference on Reinforcement Learning and Decision Making (RLDM-19)
2019
Training Cooperative Agents for Multi-Agent Reinforcement Learning
Bhalla, Sushrut,
Ganapathi Subramanian, Sriram,
and Crowley, Mark
In Proc. of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019)
2019
Learning Forest Wildfire Dynamics from Satellite Images Using Reinforcement Learning
Ganapathi Subramanian, Sriram,
and Crowley, Mark
In Conference on Reinforcement Learning and Decision Making
2017
Policy Gradient Optimization Using Equilibrium Policies for Spatial Planning Domains
Crowley, Mark
In 13th INFORMS Computing Society Conference
2013
Equilibrium Policy Gradients for Spatiotemporal Planning
Crowley, Mark
PhD Thesis, University of British Columbia
2011
In spatiotemporal planning, agents choose actions at multiple locations in space over some planning horizon to maximize their utility and satisfy various constraints. In forestry planning, for example, the problem is to choose actions for thousands of locations in the forest each year. The actions at each location could include harvesting trees, treating trees against disease and pests, or doing nothing. A utility model could place value on sale of forest products, ecosystem sustainability or employment levels, and could incorporate legal and logistical constraints such as avoiding large contiguous areas of clearcutting and managing road access. Planning requires a model of the dynamics. Existing simulators developed by forestry researchers can provide detailed models of the dynamics of a forest over time, but these simulators are often not designed for use in automated planning. This thesis presents spatiotemporal planning in terms of factored Markov decision processes. A policy gradient planning algorithm optimizes a stochastic spatial policy using existing simulators for dynamics. When a planning problem includes spatial interaction between locations, deciding on an action to carry out at one location requires considering the actions performed at other locations. This spatial interdependence is common in forestry and other environmental planning problems and makes policy representation and planning challenging. We define a spatial policy in terms of local policies defined as distributions over actions at one location conditioned upon actions at other locations. A policy gradient planning algorithm using this spatial policy is presented which uses Markov Chain Monte Carlo simulation to sample the landscape policy, estimate its gradient and use this gradient to guide policy improvement. Evaluation is carried out on a forestry planning problem with 1880 locations using a variety of value models and constraints. The distribution over joint actions at all locations can be seen as the equilibrium of a cyclic causal model. This equilibrium semantics is compared to Structural Equation Models. We also define an algorithm for approximating the equilibrium distribution for cyclic causal networks which exploits graphical structure and analyse when the algorithm is exact.
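As a rough illustration of the sampling step described above, the sketch below represents a spatial policy by local conditional distributions at each cell of a grid, conditioned on the actions of the four neighbouring cells, and uses Gibbs sampling to draw one joint action for the whole landscape. The grid size, the count features, and the softmax parameterization are illustrative assumptions, not the thesis's forestry model.

```python
# Illustrative Gibbs sampler over a grid of locations: each cell's action is resampled
# from a softmax conditioned on the current actions of its neighbours, so repeated
# sweeps approximate a sample from the equilibrium joint action distribution.
import numpy as np

def gibbs_sample_landscape(theta, grid_shape=(30, 30), n_actions=3, sweeps=20, rng=None):
    # theta is assumed to be an (n_actions, n_actions) weight matrix mapping
    # neighbour-action counts to logits for this cell's action.
    if rng is None:
        rng = np.random.default_rng()
    H, W = grid_shape
    actions = rng.integers(n_actions, size=(H, W))  # initial joint action
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                neigh = [actions[x, y]
                         for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < H and 0 <= y < W]
                counts = np.bincount(neigh, minlength=n_actions)
                logits = theta @ counts
                p = np.exp(logits - logits.max())
                actions[i, j] = rng.choice(n_actions, p=p / p.sum())
    return actions  # one MCMC sample of the joint action over all locations
```

In a full policy gradient loop, many such samples would be scored by the simulator's value model and used to estimate a gradient with respect to theta.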