The task of Forest Fire Management presents unique challenges that push the boundaries of what is possible with existing AI/ML algorithms. These include:
- reasoning over multiple spatio-temporal scales at once
- tradeoffs between individual and social good
- a paucity of supervised training data
- coordinating decisions among a large number of agents (distributed nationwide) without frequent, extensive communication.
Work relevant to this project has recently been progressing on theoretical models of multi-agent decision making.
This research explores the intersection of Game Theory and Multi-Agent Reinforcement Learning (MARL), two fields that study how multiple agents can adapt and optimize their decision-making policies in response to changes in the world around them and to the actions of other decision makers.
News
- ML for Forest Fire Journal Paper: In 2020, in collaboration with other investigators in this strategic network, we published a review article in the journal Environmental Reviews \cite{jain2020review} analysing the relevance of various ML algorithms to the domain and exhaustively surveying the existing research using ML for forest fire management.
- NSERC/CANADA WILDFIRE STRATEGIC NETWORK: In 2019, an NSERC Strategic Network grant in which we are involved was confirmed, supporting computational research into Forest Fire Management practices. This includes the AI/ML/RL focus of our research, together with our computer science colleague Prof. Kate Larson.
- In 2019, we were awarded a computing grant on the topic of “Wildfire management: Disaster response, climate change” from the Waterloo Artificial Intelligence Institute (WAII) in affiliation with the Microsoft “AI for Social Good” program.
Our Papers on Forest Fire Management
A Complementary Approach to Improve WildFire Prediction Systems.
Subramanian, Sriram Ganapathi,
and Crowley, Mark
In Neural Information Processing Systems (AI for social good workshop)
2018
Combining MCTS and A3C for prediction of spatially spreading processes in forest wildfire settings
Ganapathi Subramanian, Sriram,
and Crowley, Mark
In Canadian Conference on Artificial Intelligence
2018
In recent years, Deep Reinforcement Learning (RL) algorithms have shown super-human performance in a variety of Atari and classic board games like chess and Go. Research into applications of RL in other domains with spatial considerations, like environmental planning, is still in its nascent stages. In this paper, we introduce a novel combination of Monte-Carlo Tree Search (MCTS) and A3C algorithms on an online simulator of a wildfire, on a pair of forest fires in Northern Alberta (Fort McMurray and Richardson fires) and on historical Saskatchewan fires previously compared by others to a physics-based simulator. We conduct several experiments to predict fire spread for several days before and after the given spatial information of fire spread and ignition points. Our results show that the advancements in Deep RL applications in the gaming world have advantages in spatially spreading real-world problems like forest fires.
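As a rough illustration of the tree-search half of this combination, the sketch below implements plain UCT (the standard MCTS selection rule) on a toy problem. The toy dynamics, horizon, and rewards are invented for illustration only and are not taken from the paper, which couples MCTS with A3C on real fire data.

```python
import math
import random

# Toy problem (invented for illustration): a fire front on a number line
# starts at 0 and moves +1 or -1 each step; after HORIZON steps the reward
# is the final position, so always choosing +1 is optimal.
ACTIONS = (+1, -1)
HORIZON = 5

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children = {}          # action -> child Node
        self.visits, self.total = 0, 0.0

def ucb1(parent, child, c=1.4):
    # upper confidence bound used to pick a child during selection
    return (child.total / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def rollout(state, depth, rng):
    # random playout to the horizon; return the terminal reward
    while depth < HORIZON:
        state += rng.choice(ACTIONS)
        depth += 1
    return state

def uct_search(n_iter=2000, seed=0):
    rng = random.Random(seed)
    root = Node(0, 0)
    for _ in range(n_iter):
        node, path = root, [root]
        # selection: walk down while every action at this node has been tried
        while node.depth < HORIZON and len(node.children) == len(ACTIONS):
            node = max(node.children.values(), key=lambda ch: ucb1(node, ch))
            path.append(node)
        # expansion: add one untried action, if any remain
        if node.depth < HORIZON:
            a = next(a for a in ACTIONS if a not in node.children)
            child = Node(node.state + a, node.depth + 1)
            node.children[a] = child
            path.append(child)
            node = child
        # simulation + backpropagation
        value = rollout(node.state, node.depth, rng)
        for n in path:
            n.visits += 1
            n.total += value
    # recommend the most-visited root action
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

In the paper's combination, the random `rollout` is where a learned policy/value network such as A3C's can replace blind playouts; here it is left as a uniform playout to keep the sketch self-contained.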
Using Spatial Reinforcement Learning to Build Forest Wildfire Dynamics Models From Satellite Images
Ganapathi Subramanian, Sriram,
and Crowley, Mark
Frontiers in ICT
2018
Machine learning algorithms have increased tremendously in power in recent years but have yet to be fully utilized in many ecology and sustainable resource management domains such as wildlife reserve design, forest fire management and invasive species spread. One thing these domains have in common is that they contain dynamics that can be characterized as a Spatially Spreading Process (SSP) which requires many parameters to be set precisely to model the dynamics, spread rates and directional biases of the elements which are spreading. We present related work in Artificial Intelligence and Machine Learning for SSP sustainability domains including forest wildfire prediction. We then introduce a novel approach for learning in SSP domains using Reinforcement Learning (RL) where fire is the agent at any cell in the landscape and the set of actions the fire can take from a location at any point in time includes spreading North, South, East, West or not spreading. This approach inverts the usual RL setup since the dynamics of the corresponding Markov Decision Process (MDP) is a known function for immediate wildfire spread. Meanwhile, we learn an agent policy for a predictive model of the dynamics of a complex spatially-spreading process. Rewards are provided for correctly classifying which cells are on fire or not compared to satellite and other related data. We examine the behaviour of five RL algorithms on this problem: Value Iteration, Policy Iteration, Q-Learning, Monte Carlo Tree Search and Asynchronous Advantage Actor-Critic (A3C). We compare to a Gaussian process based supervised learning approach and discuss the relation of our approach to manually constructed, state-of-the-art methods from forest wildfire modelling.
We validate our approach with satellite image data of two massive wildfire events in Northern Alberta, Canada, the Fort McMurray fire of 2016 and the Richardson fire of 2011. The results show that we can learn predictive, agent-based policies as models of spatial dynamics using RL on readily available satellite images, and that these policies have many additional advantages over other methods in terms of generalizability and interpretability.
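The fire-as-agent setup described in this abstract can be sketched with tabular Q-learning on a toy grid. Everything concrete below (the 5x5 landscape, the observed burn mask, the reward values, the hyperparameters) is invented for illustration; the paper works with real satellite data and the richer algorithms listed above.

```python
import random
import numpy as np

# Hypothetical toy instance: a 5x5 landscape where the "satellite" mask says
# the fire burned straight east along row 2 from the ignition point.
H, W = 5, 5
observed = np.zeros((H, W), dtype=bool)
observed[2, :] = True
ignition = (2, 0)

# The fire is the agent: from a burning cell it spreads N/S/W/E or stays.
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
STAY = 4

def move(pos, a):
    dr, dc = ACTIONS[a]
    return (min(max(pos[0] + dr, 0), H - 1),
            min(max(pos[1] + dc, 0), W - 1))

def reward(new, pos, burned):
    if new == pos:               # chose not to spread (or hit the edge)
        return 0.0
    if new in burned:            # spreading into already-burned fuel
        return -1.0
    return 1.0 if observed[new] else -1.0  # reward for matching the mask

Q = np.zeros((H, W, len(ACTIONS)))
alpha, gamma, eps = 0.3, 0.9, 0.2
rng = random.Random(0)

for episode in range(2000):
    pos, burned = ignition, {ignition}
    for _ in range(10):
        a = rng.randrange(5) if rng.random() < eps else int(np.argmax(Q[pos]))
        nxt = move(pos, a)
        Q[pos][a] += alpha * (reward(nxt, pos, burned)
                              + gamma * np.max(Q[nxt]) - Q[pos][a])
        burned.add(nxt)
        pos = nxt

# Greedy rollout: the learned policy should reproduce the eastward spread.
pos, burned = ignition, {ignition}
for _ in range(10):
    a = int(np.argmax(Q[pos]))
    if a == STAY:
        break
    pos = move(pos, a)
    burned.add(pos)
```

After training, the greedy policy traced from the ignition point burns exactly the cells in the observed mask, which is the sense in which the learned policy acts as a predictive model of the spread dynamics.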
Learning Forest Wildfire Dynamics from Satellite Images Using Reinforcement Learning
Subramanian, Sriram Ganapathi,
and Crowley, Mark
In Conference on Reinforcement Learning and Decision Making
2017
Allowing a wildfire to burn: Estimating the effect on future fire suppression costs
Houtman, Rachel M.,
Montgomery, Claire A.,
Gagnon, Aaron R.,
Calkin, David E.,
Dietterich, Thomas G.,
McGregor, Sean,
and
Crowley, Mark
International Journal of Wildland Fire
2013
Where a legacy of aggressive wildland fire suppression has left forests in need of fuel reduction, allowing wildland fire to burn may provide fuel treatment benefits, thereby reducing suppression costs from subsequent fires. The least-cost-plus-net-value-change model of wildland fire economics includes benefits of wildfire in a framework for evaluating suppression options. In this study, we estimated one component of that benefit – the expected present value of the reduction in suppression costs for subsequent fires arising from the fuel treatment effect of a current fire. To that end, we employed Monte Carlo methods to generate a set of scenarios for subsequent fire ignition and weather events, which are referred to as sample paths, for a study area in central Oregon. We simulated fire on the landscape over a 100-year time horizon using existing models of fire behaviour, vegetation and fuels development, and suppression effectiveness, and we estimated suppression costs using an existing suppression cost model. Our estimates suggest that the potential cost savings may be substantial. Further research is needed to estimate the full least-cost-plus-net-value-change model. This line of research will extend the set of tools available for developing wildfire management plans for forested landscapes.
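The Monte Carlo sample-path idea in this abstract can be sketched in a few lines: draw many random ignition paths, price each path's suppression costs at a discount rate, and compare a treated landscape against an untreated one using the same random draws. All numbers below (ignition probability, costs, discount rate, the 60% treated-cost factor) are made up for illustration; the study itself uses full fire-behaviour, vegetation, and suppression-cost models.

```python
import random

def path_pv(ignitions, cost_per_fire, discount=0.04):
    # discounted present value of suppression costs along one sample path
    return sum(cost_per_fire / (1 + discount) ** t
               for t, fire in enumerate(ignitions) if fire)

def expected_savings(n_paths=10_000, years=100, p_ignite=0.02,
                     base_cost=1.0, treated_factor=0.6, seed=0):
    """Monte Carlo estimate of the expected present value of the reduction
    in future suppression costs from a fuel-treatment effect (toy numbers)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        # common random numbers: the same ignition path under both scenarios
        ignitions = [rng.random() < p_ignite for _ in range(years)]
        untreated = path_pv(ignitions, base_cost)
        treated = path_pv(ignitions, base_cost * treated_factor)
        total += untreated - treated
    return total / n_paths
```

Using the same ignition draws for both scenarios (common random numbers) sharply reduces the variance of the estimated difference, which is why the paired structure matters more than the particular toy parameters.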