One of the broad goals of my research is to find ways to use technology to free decision makers to think more about values rather than optimization methods. There is a vast literature of optimization methods to choose from in Artificial Intelligence, Operations Research and other fields. When choosing an optimization algorithm for a spatiotemporal planning problem, one of the difficult computational trade-offs to consider is between the number of locations where actions must be taken and the correlation between the actions at those locations.
Below is an informal map, from my recent IEEE Transactions paper, of the relative power of some common optimization algorithms on spatiotemporal planning problems. Assuming a fixed state-space size, the algorithms are mapped out qualitatively according to their relative ability to solve problems with an increasing number of locations (vertical) and the amount of correlation between actions due to spatial constraints, rewards and dynamics (distance from the right edge). Note that for a single location, correlation between actions is meaningless, and the total number of actions in the MDP increases exponentially as C increases down the graph.
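To make the exponential blow-up concrete, here is a minimal sketch (illustrative numbers and action names, not from the paper): if each of n locations has k possible local actions, the joint action space of the MDP contains k^n actions.

```python
from itertools import product

def joint_actions(n_locations, local_actions):
    """Enumerate every joint action: one local action per location.

    With k local actions and n locations this returns k**n tuples,
    which is why the action space grows exponentially in n.
    """
    return list(product(local_actions, repeat=n_locations))

# Hypothetical example with k = 3 local actions per location.
local = ["cut", "burn", "wait"]
for n in range(1, 5):
    print(n, len(joint_actions(n, local)))  # 3, 9, 27, 81
```

Even this tiny example shows why algorithms that enumerate joint actions directly fall off quickly as the number of locations grows.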
Distance from the top indicates an algorithm that can optimize policies over larger action spaces; distance from the right edge indicates how correlated those actions can be without causing serious performance problems for the algorithm. A detailed overview by Warren Powell inspired the layout of this map. All of these problems can be represented as MDPs where the goal is to choose optimal actions measured against some value model, although researchers in different applied fields often do not use the language of MDPs.
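As a sketch of what "optimal actions measured against some value model" means in MDP terms, here is a toy spatiotemporal example (the model and all its numbers are invented for illustration, not taken from the paper): two locations, each healthy (0) or degraded (1); at each location we either "treat" or "wait", and a degraded, untreated location degrades its untreated neighbour, which correlates the optimal actions across locations.

```python
import itertools

STATES = list(itertools.product([0, 1], repeat=2))            # per-location condition
ACTIONS = list(itertools.product(["treat", "wait"], repeat=2))  # joint actions
GAMMA = 0.9

def step(state, action):
    """Deterministic toy dynamics: treating restores a location;
    a degraded neighbour degrades an untreated location."""
    nxt = []
    for i, (s, a) in enumerate(zip(state, action)):
        if a == "treat":
            nxt.append(0)
        else:
            nxt.append(1 if (s == 1 or state[1 - i] == 1) else 0)
    return tuple(nxt)

def reward(state, action):
    """Toy value model: each healthy location is worth 1; treating costs 0.4."""
    healthy = sum(1 - s for s in step(state, action))
    return healthy - 0.4 * sum(a == "treat" for a in action)

def value_iteration(iters=200):
    """Compute the optimal value of every state by repeated Bellman backups."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS)
             for s in STATES}
    return V
```

Running `value_iteration()` shows the all-healthy state is worth more than the all-degraded one; the same Bellman-backup structure underlies the larger problems on the map, where the number of locations and the coupling between them determine which algorithms remain tractable.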
If you want to read more about this, see my paper, and please comment here if you have suggestions, questions or ideas about how to improve this map.