Warren B. Powell December 8, 2009
Approximate dynamic programming is a powerful class of algorithmic strategies for solving stochastic optimization problems where optimal decisions can be characterized using Bellman's optimality equation, but where the characteristics of the problem make solving Bellman's equation computationally intractable. This brief chapter provides an introduction to the basic concepts of approximate dynamic programming while building bridges between the different communities that have contributed to this field. We cover basic approximate value iteration (temporal difference learning), policy approximation, and a brief introduction to strategies for approximating value functions. We also cover Q-learning and the use of the post-decision state for solving problems with vector-valued decisions. The approximate linear programming method is introduced, along with a discussion of stepsize selection issues. The presentation closes with a discussion of some practical issues that arise in the implementation of ADP techniques.
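For reference, the Bellman optimality equation around which the chapter is organized can be stated, in one standard discrete-state form (the notation here is generic, not necessarily the chapter's own), as

\[
V(s) \;=\; \max_{a \in \mathcal{A}} \left\{ C(s,a) \;+\; \gamma \sum_{s' \in \mathcal{S}} \mathbb{P}(s' \mid s, a)\, V(s') \right\},
\]

where $V(s)$ is the value of being in state $s$, $C(s,a)$ is the contribution (reward) of taking action $a$ in state $s$, $\gamma \in (0,1)$ is a discount factor, and $\mathbb{P}(s' \mid s, a)$ is the one-step transition probability. Computing the expectation over $s'$ and the maximum over $a$ exactly is what becomes intractable for large state, outcome, or action spaces, motivating the approximation strategies surveyed in the chapter.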