Optimal and Robust Control
B3M35ORR + BE3M35ORR + BE3M35ORC
Assigned reading, recommended further reading
Assigned (compulsory) reading
No compulsory reading.
Recommended (not compulsory) further reading
Our introductory treatment of dynamic programming is to a large extent based on Chapter 6 in [1]. Note, however, that a newer edition of the book is now on the market.
Some other classics that are perhaps more accessible (because cheaper), such as [2], also contain a comparable introductory exposition. Another classic, [3], actually uses dynamic programming as the only "vehicle" to derive all the LQ-optimal regulation and tracking results. A few copies of this book are available in the faculty library at NTK. A fairly detailed treatment is given in the two-volume [4].
In our introductory lecture we mainly regarded dynamic programming as a theoretical concept that enabled us to (re)derive some analytical results such as discrete-time and continuous-time LQ-optimal control. The usefulness of dynamic programming as a practical computational scheme is fairly limited because of the curse of dimensionality: the computational complexity and memory requirements grow quickly with the dimension of the state space. These deficiencies of dynamic programming are attacked by methodologies known under various names such as neuro-dynamic programming, approximate dynamic programming, or reinforcement learning. We will not cover these in our course, but an interested reader can find an accessible introduction in [5] and [8]. Full expositions are then given in [6] and [7]. The recent edition of [4] also has some material on the topic.
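To give a concrete flavour of the analytical results mentioned above, here is a minimal sketch (not part of the assigned reading, and only one of several possible formulations) of how the dynamic programming recursion reduces to the backward Riccati recursion for a finite-horizon discrete-time LQ problem. The system matrices and weights in the example are arbitrary placeholders.

```python
import numpy as np

def dlqr_finite_horizon(A, B, Q, R, QN, N):
    """Finite-horizon discrete-time LQR via the dynamic programming
    (backward Riccati) recursion.

    Minimizes sum_{k=0}^{N-1} (x_k' Q x_k + u_k' R u_k) + x_N' QN x_N
    subject to x_{k+1} = A x_k + B u_k.
    Returns the feedback gains K_0, ..., K_{N-1} (with u_k = -K_k x_k)
    and the cost-to-go matrices P_0, ..., P_N.
    """
    P = QN                      # terminal cost-to-go: V_N(x) = x' QN x
    Ps, Ks = [P], []
    for _ in range(N):          # sweep backwards in time
        # minimizing over u in the Bellman equation gives the gain
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update of the quadratic cost-to-go matrix
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
        Ps.append(P)
    return Ks[::-1], Ps[::-1]   # reorder so index 0 corresponds to time 0

if __name__ == "__main__":
    # toy double-integrator example (placeholder data)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    QN = 10 * np.eye(2)
    K, P = dlqr_finite_horizon(A, B, Q, R, QN, N=50)
    print("Gain at time 0:", K[0])
```

As the horizon N grows, the gain at time 0 approaches the stationary gain of the infinite-horizon problem, which is the connection exploited in the lecture when (re)deriving the LQ-optimal regulator.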
[1] F. L. Lewis and V. L. Syrmos. Optimal Control. 2nd ed., Wiley-Interscience, 1995.
[2] D. E. Kirk. Optimal Control Theory: An Introduction. Dover Publications, 2004.
[3] B. D. O. Anderson and J. B. Moore. Optimal Control: Linear Quadratic Methods. Dover Publications, 2007.
[4] D. P. Bertsekas. Dynamic Programming and Optimal Control, Vol. I and II. Athena Scientific, 2017.
[5] F. L. Lewis, D. Vrabie, and K. Vamvoudakis. Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers. IEEE Control Systems, December 2012. https://doi.org/10.1109/MCS.2012.2214134
[6] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[7] W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality. 2nd ed., Wiley, 2011.
[8] F. Wang, H. Zhang, and D. Liu. Adaptive Dynamic Programming: An Introduction. IEEE Computational Intelligence Magazine, vol. 4, no. 2, pp. 39-47, May 2009.