Optimal and Robust Control
BE3M35ORR + B3M35ORR + BE3M35ORC
This course is part of an already archived semester and is therefore available in read-only mode.
Learning goals
Completion requirements
Knowledge (I memorize and understand)
- Understand and explain Bellman's principle of optimality.
- Show how dynamic programming and Bellman's principle of optimality can be used to obtain an analytical solution to the discrete-time LQ-optimal control problem on a finite control interval (a sketch of the resulting recursion follows after this list).
- State the Hamilton-Jacobi-Bellman (HJB) equation and explain it as a reformulation of the principle of optimality for continuous-time systems. Also give its version featuring the Hamiltonian function.
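For reference, a brief sketch of these two results in conventional notation (the symbols below are standard textbook choices, not taken from the course page). For the discrete-time LQ problem with dynamics $x_{k+1} = A x_k + B u_k$ and cost $J = \tfrac{1}{2} x_N^\top S_N x_N + \tfrac{1}{2}\sum_{k=0}^{N-1}\bigl(x_k^\top Q x_k + u_k^\top R u_k\bigr)$, dynamic programming yields a quadratic value function $V_k(x) = \tfrac{1}{2} x^\top S_k x$ with the backward Riccati recursion

\[ S_k = Q + A^\top S_{k+1} A - A^\top S_{k+1} B\,(R + B^\top S_{k+1} B)^{-1} B^\top S_{k+1} A, \]

and the optimal state feedback $u_k = -(R + B^\top S_{k+1} B)^{-1} B^\top S_{k+1} A\, x_k$. For a continuous-time system $\dot{x} = f(x,u,t)$ with cost $\phi(x(t_f)) + \int_{t_0}^{t_f} L(x,u,t)\,\mathrm{d}t$, the HJB equation reads

\[ -\frac{\partial V}{\partial t}(x,t) = \min_{u}\Bigl[\, L(x,u,t) + \frac{\partial V}{\partial x}(x,t)^\top f(x,u,t) \Bigr], \qquad V(x,t_f) = \phi(x), \]

or, introducing the Hamiltonian $H(x,u,\lambda,t) = L(x,u,t) + \lambda^\top f(x,u,t)$,

\[ -\frac{\partial V}{\partial t} = \min_u H\Bigl(x, u, \tfrac{\partial V}{\partial x}, t\Bigr). \]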
Skills (I can use the knowledge to solve a problem)
- Use dynamic programming to design an optimal feedback controller in the form of a lookup table for a general (possibly nonlinear) discrete-time dynamical system (a minimal code sketch follows below).
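A minimal sketch of that skill, assuming an illustrative scalar nonlinear plant, cost terms, horizon, and grids that are not part of the course materials: backward dynamic programming over a gridded state/input space produces a lookup table that is then used as a feedback controller.

```python
# Backward dynamic programming (Bellman recursion) for x[k+1] = f(x[k], u[k]).
# The dynamics f, the costs, the horizon N, and the grids are illustrative assumptions.
import numpy as np

f = lambda x, u: x + 0.1 * (-x**3 + u)        # discrete-time dynamics (assumed)
stage_cost = lambda x, u: x**2 + 0.1 * u**2   # l(x, u) (assumed)
terminal_cost = lambda x: 10.0 * x**2         # phi(x_N) (assumed)

N = 20                                        # length of the control interval
x_grid = np.linspace(-2.0, 2.0, 201)          # discretized state space
u_grid = np.linspace(-1.0, 1.0, 41)           # discretized input space

V = terminal_cost(x_grid)                     # V_N(x) on the grid
policy = np.zeros((N, x_grid.size))           # lookup table u = mu_k(x)

for k in reversed(range(N)):                  # Bellman backward recursion
    V_new = np.empty_like(V)
    for i, x in enumerate(x_grid):
        # cost-to-go for every candidate input, interpolating V_{k+1} on the grid
        x_next = f(x, u_grid)
        q = stage_cost(x, u_grid) + np.interp(x_next, x_grid, V)
        j = np.argmin(q)                      # principle of optimality
        V_new[i] = q[j]
        policy[k, i] = u_grid[j]
    V = V_new

# Using the lookup table as a feedback controller from an initial state:
x = 1.5
for k in range(N):
    u = np.interp(x, x_grid, policy[k])       # table lookup with linear interpolation
    x = f(x, u)
print("final state:", x)
```

The same structure carries over to vector-valued states and inputs; only the gridding and the interpolation of the value function become more involved.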
Last modified: Wednesday, 22 March 2017, 14:08