Optimal and Robust Control
B3M35ORR + BE3M35ORR + BE3M35ORC

Course outline
(Mathematical) optimization – modeling and analysis
- unconstrained optimization
- first-order necessary conditions (scalar and vector case): gradient, directional derivative
- second-order necessary conditions: positive semidefiniteness of Hessian
- second-order sufficient conditions: positive definiteness of Hessian
- constrained optimization
- equality-type constraints: Lagrange multipliers
- inequality-type constraints: KKT conditions
- (Lagrange) duality
- classes of optimization problems
- linear programming
- quadratic programming (linearly constrained, quadratically constrained)
- (general) nonlinear programming
- (convex) conic programming (linear conic programming, second-order cone programming, semidefinite programming)
- Formulating optimization problems in Matlab, Julia, and Python (a small Python sketch follows this block).
-
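A minimal sketch of the last item above, formulating a small quadratic program in Python; the CVXPY modeling library and the numerical data are illustrative choices, not prescribed by the course:

```python
# Formulate and solve  minimize 1/2 x'Px + q'x  subject to  Ax <= b
# (a tiny QP; the problem data are chosen only for illustration).
import cvxpy as cp
import numpy as np

P = np.array([[4.0, 1.0], [1.0, 2.0]])   # positive definite quadratic term
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

x = cp.Variable(2)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
problem = cp.Problem(objective, [A @ x <= b])
problem.solve()
print(problem.value, x.value)
```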
Numerical optimization - algorithms
- Algorithmic (automatic) differentiation
- Unconstrained optimization (derivative-based methods): descent-direction methods (gradient method, Newton and quasi-Newton methods); a gradient-descent sketch follows this block
- Constrained optimization: active set methods (projected gradient method)
-
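A minimal sketch of a descent-direction method from the list above: steepest descent with Armijo backtracking on a smooth two-dimensional test function (the function and the constants are illustrative):

```python
# Steepest descent with backtracking (Armijo) line search.
import numpy as np

def f(x):
    return (x[0] - 1.0)**2 + 10.0 * (x[1] - x[0]**2)**2

def grad_f(x):
    return np.array([
        2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0]**2),
        20.0 * (x[1] - x[0]**2),
    ])

x = np.array([-1.0, 1.0])
for _ in range(5000):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-6:          # first-order stopping test
        break
    d = -g                                # steepest-descent direction
    t = 1.0
    while f(x + t * d) > f(x) - 1e-4 * t * (g @ g):   # Armijo condition
        t *= 0.5
    x = x + t * d
print(x)   # should be close to the minimizer [1, 1]
```

Replacing the direction `d` by the Newton step (solving a linear system with the Hessian) gives the Newton variant listed above.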
Discrete-time optimal control - direct approach, model predictive control (MPC)
- Introduction to optimal control: motivation, optimization criteria (or performance indices), optimization "variables" (controller parameters or control signals).
- Discrete-time optimal control of a linear system with a quadratic performance index over a finite time horizon, formulated as a quadratic program and yielding an open-loop control sequence (see the QP sketch after this block).
- Model predictive control (MPC) aka receding horizon control as a real-time optimization-based feedback control scheme: regulation, tracking, both simultaneous and sequential formats, soft constraints, practical issues.
-
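A minimal sketch of the finite-horizon problem above in the "simultaneous" (sparse) format: states and inputs are both decision variables and the dynamics enter as equality constraints. The discrete-time double integrator, the weights, and the use of CVXPY are illustrative assumptions:

```python
# Finite-horizon LQ regulation with input constraints posed as a QP.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete-time double integrator
B = np.array([[0.5], [1.0]])
Q, R = np.eye(2), np.array([[0.1]])
N = 20
x0 = np.array([5.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]
cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value)   # open-loop optimal sequence; MPC applies only u[:, 0] and re-solves
```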
Discrete-time optimal control - indirect approach, LQ-optimal control
- conditions of optimality for a general nonlinear discrete-time system - two-point boundary value problem
- discrete-time LQ-optimal control on a finite time horizon, initial and final states fixed
- discrete-time LQ-optimal control on a finite time horizon, final state free: discrete-time (recursive) Riccati equation; a sketch of the backward recursion follows this block
- discrete-time LQ-optimal control on an infinite time horizon - LQ-optimal constant state feedback: discrete-time algebraic Riccati equation (ARE)
- discrete-time LQ-optimal tracking and other LQ extensions
-
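A minimal sketch of the backward Riccati recursion for the finite-horizon, free-final-state problem above; the model and the weights are illustrative:

```python
# Backward Riccati recursion and the time-varying LQ gains K_k.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R, S_N = np.eye(2), np.array([[0.1]]), np.eye(2)   # state, input, terminal weights
N = 20

S = S_N
gains = [None] * N
for k in range(N - 1, -1, -1):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)   # K_k
    S = Q + A.T @ S @ (A - B @ K)                       # S_k
    gains[k] = K

x = np.array([[5.0], [0.0]])          # closed-loop simulation with u_k = -K_k x_k
for k in range(N):
    x = A @ x + B @ (-gains[k] @ x)
print(x.ravel())
```

As the horizon grows, the gains at the initial stages approach the constant gain given by the discrete-time algebraic Riccati equation.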
Dynamic programming, approximate dynamic programming, reinforcement learning
- Bellman's optimality principle
- dynamic programming for problems with a discrete, finite time horizon and a discrete, finite state space (a tabular sketch follows this block)
- dynamic programming used to derive the LQ-optimal controller
- ...
-
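A minimal sketch of Bellman's backward recursion over a finite horizon with a finite state set and a finite input set; the transition table and stage costs are randomly generated purely for illustration:

```python
# Tabular dynamic programming: backward value recursion over a finite horizon.
import numpy as np

n_states, n_inputs, N = 5, 3, 10
rng = np.random.default_rng(0)
stage_cost = rng.integers(1, 10, size=(n_states, n_inputs)).astype(float)
next_state = rng.integers(0, n_states, size=(n_states, n_inputs))

J = np.zeros(n_states)                    # terminal cost-to-go
policy = np.zeros((N, n_states), dtype=int)
for k in range(N - 1, -1, -1):
    Qk = stage_cost + J[next_state]       # Q_k(x, u) = g(x, u) + J_{k+1}(f(x, u))
    policy[k] = np.argmin(Qk, axis=1)
    J = Qk.min(axis=1)
print(J)          # optimal cost-to-go from every initial state
print(policy[0])  # optimal first input for every initial state
```

Applying the same recursion symbolically to a linear system with a quadratic stage cost reproduces the Riccati-based LQ controller from the previous block.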
Continuous-time optimal control, calculus of variations, LQ-optimal control
- introduction to the calculus of variations: functional, variation of a functional, the finite-interval fixed-ends problem, the Euler-Lagrange equation as a first-order necessary condition of optimality.
- continuous-time principle of optimality: the Hamilton-Jacobi-Bellman (HJB) equation (both conditions are stated after this block)
-
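For reference, the two necessary conditions mentioned above: the Euler-Lagrange equation for a cost functional \(J(y) = \int_{t_0}^{t_1} L(t, y, \dot y)\,\mathrm{d}t\) with fixed end points, and the HJB equation for the optimal cost-to-go \(V(x, t)\) of a system \(\dot x = f(x, u)\) with stage cost \(L(x, u)\):

\[
\frac{\partial L}{\partial y} - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot y} = 0,
\qquad
-\frac{\partial V}{\partial t} = \min_{u} \left[ L(x, u) + \frac{\partial V}{\partial x}^{\mathsf{T}} f(x, u) \right].
\]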
Continuous-time optimal control with free final time and constrained inputs, time-optimal control
- calculus of variations for free final time
- minimum-time LQ-optimal control
- minimum-time optimal control with constrained controls: the transition from the calculus of variations to Pontryagin's maximum principle; bang-bang control for a double integrator and a harmonic oscillator (a double-integrator sketch follows this block)
- proximate time-optimal servomechanism (PTOS)
-
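A minimal sketch of the bang-bang result mentioned above for the double integrator \(\dot x_1 = x_2\), \(\dot x_2 = u\), \(|u| \le 1\): the time-optimal control is \(u = \pm 1\) with at most one switch, determined by the switching curve \(x_1 = -\tfrac{1}{2} x_2 |x_2|\). The simulation settings are illustrative:

```python
# Time-optimal bang-bang feedback for the double integrator (switching-curve form).
import numpy as np

def bang_bang(x1, x2):
    sigma = x1 + 0.5 * x2 * abs(x2)   # zero exactly on the switching curve
    if sigma > 0:
        return -1.0
    if sigma < 0:
        return 1.0
    return -np.sign(x2)               # on the curve: drive along it to the origin

x = np.array([4.0, 0.0])              # forward-Euler simulation, illustration only
dt = 1e-3
for _ in range(5000):
    u = bang_bang(x[0], x[1])
    x = x + dt * np.array([x[1], u])
print(x)   # ends up near the origin (the exact minimum time here is 4 s)
```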
Numerical methods for continuous-time optimal control
- Indirect approaches to numerical optimal control - solving BVP
- Shooting and multiple shooting (iterating over the unknown boundary conditions)
- Gradient method (minimization of the Hamiltonian \(H\) by iterating over the control trajectory \(u\))
- Quasi-linearisation
- Direct approaches to numerical optimal control - transcribing the optimal control problem into a nonlinear programming problem (a single-shooting sketch follows this block):
- Direct transcription
- Direct collocation
- Software for numerical optimal control: ACADO, ...
-
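A minimal sketch of a direct method from the list above (direct single shooting): the control is parametrized as a piecewise-constant sequence, the state is obtained by forward simulation, and a generic NLP solver enforces the terminal condition. The double-integrator rest-to-rest example and the use of SciPy's SLSQP solver are illustrative assumptions:

```python
# Direct single shooting for a minimum-energy rest-to-rest transfer.
import numpy as np
from scipy.optimize import minimize

N, dt = 50, 0.1
x0, xf = np.array([0.0, 0.0]), np.array([1.0, 0.0])

def simulate(u):
    x = x0.copy()
    for uk in u:                          # forward Euler through x1' = x2, x2' = u
        x = x + dt * np.array([x[1], uk])
    return x

def cost(u):
    return dt * np.sum(u**2)              # control energy

res = minimize(cost, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": lambda u: simulate(u) - xf})
print(res.success, simulate(res.x))
```

Direct transcription and collocation instead keep the state trajectory among the decision variables, which typically makes the NLP larger but sparser and better conditioned.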
LQG-optimal control, H2-optimal control, Loop Transfer Recovery (LTR)
- LQ-optimal control for stochastic systems (random initial state, stochastic disturbance)
- Optimal estimation
- LQG-optimal control (a two-Riccati-equation sketch follows this block)
- H2-optimal control
- Loop Transfer Recovery (LTR)
-
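A minimal sketch of the LQG structure above: a state-feedback gain from the control algebraic Riccati equation and a steady-state Kalman gain from the dual (filter) Riccati equation, combined through the separation principle. The double-integrator model, the weights, and the noise intensities are illustrative:

```python
# LQG design via two continuous-time algebraic Riccati equations (SciPy).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])      # LQR weights
W, V = np.eye(2), np.array([[0.1]])      # process / measurement noise intensities

P = solve_continuous_are(A, B, Q, R)     # control ARE
K = np.linalg.solve(R, B.T @ P)          # state feedback u = -K x

S = solve_continuous_are(A.T, C.T, W, V) # filter ARE (dual problem)
L = S @ C.T @ np.linalg.inv(V)           # steady-state Kalman gain

A_ctrl = A - B @ K - L @ C               # observer-based (LQG) controller dynamics
print(np.linalg.eigvals(A - B @ K))      # regulator poles
print(np.linalg.eigvals(A - L @ C))      # estimator poles
```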
Models of uncertainty, analysis of robustness
- uncertainties in real physical parameters
- uncertainty formulated in frequency domain
- unstructured frequency-domain uncertainty represented by a \(\Delta\) block and a weighting filter \(W\)
- structured frequency-domain uncertainty
- additive, multiplicative, inverse models of uncertainty
- robust stability and robust performance analysis based on the small gain theorem (the robust stability condition for multiplicative uncertainty is stated after this block)
-
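For the multiplicative uncertainty model above, with perturbed plant \(G_p = G\,(1 + W \Delta)\) and \(\|\Delta\|_\infty \le 1\), and assuming the nominal closed loop is stable, the small gain theorem gives the standard robust stability test in terms of the complementary sensitivity \(T\):

\[
\text{robust stability} \iff \|W T\|_\infty < 1 .
\]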
Classical and modern robust control design methods in frequency domain
- Loopshaping (lead, lag, lead-lag, ...)
- Quantitative Feedback Theory (QFT)
- \(\mathcal{H}_\infty\)-minimization-based control design
- standard \(\mathcal{H}_\infty\)-optimal control
- mixed-sensitivity minimization (the standard criterion is stated after this block)
- robust loopshaping (assuming coprime factor uncertainty)
- \(\mu\) synthesis (DK iterations)
-
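For reference, the mixed-sensitivity problem mentioned above is commonly posed as the \(\mathcal{H}_\infty\) minimization

\[
\min_{K} \left\| \begin{bmatrix} W_1 S \\ W_2 K S \\ W_3 T \end{bmatrix} \right\|_\infty,
\qquad S = (I + GK)^{-1}, \quad T = I - S,
\]

where the weights \(W_1\), \(W_2\), \(W_3\) shape the sensitivity, the control effort, and the complementary sensitivity, respectively.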
Analysis of limits of achievable performance
- SISO systems
- Scaling
- Integral constraints (the Bode sensitivity integral is stated after this block)
- Interpolation constraints
- Limitations due to delay
- Limitations due to disturbances
- Limitations due to saturation of controls
- MIMO systems
- Directionality of MIMO systems
- Ill-conditioning of MIMO systems
- Relative Gain Array (RGA)
- Limitations due to uncertainty
- SISO systems
-
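The integral constraint listed above is the Bode sensitivity integral: for an open-loop transfer function \(L = GK\) of relative degree at least two with unstable poles \(p_i\),

\[
\int_0^{\infty} \ln \lvert S(j\omega) \rvert \, \mathrm{d}\omega = \pi \sum_i \operatorname{Re}(p_i),
\]

so pushing \(\lvert S \rvert\) below one in some frequency band must be paid for by \(\lvert S \rvert > 1\) elsewhere (the "waterbed" effect).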
Model and controller order reduction
- Basic order reduction techniques: truncation and residualization
- Balanced state-space realization: simultaneous diagonalization of the controllability and observability Gramians
- Balanced truncation / balanced residualization (a square-root balanced-truncation sketch follows this block)
- Hankel norm minimization
- Frequency-weighted approximation and stability-guaranteeing controller-order reduction
-
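A minimal sketch of balanced truncation for a stable model using the square-root method: solve the two Lyapunov equations, balance via an SVD, and keep the states with the largest Hankel singular values. The random test model is purely illustrative:

```python
# Square-root balanced truncation of a stable continuous-time model.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                        # s = Hankel singular values
    T = Lc @ Vt.T @ np.diag(s ** -0.5)               # balancing transformation
    Ti = np.diag(s ** -0.5) @ U.T @ Lo.T             # its inverse
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], s

rng = np.random.default_rng(1)                       # random stable 6th-order model
A = rng.standard_normal((6, 6))
A = A - (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(6)
B, C = rng.standard_normal((6, 1)), rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
print(hsv)   # twice the sum of the discarded values bounds the H-infinity error
```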
Linear Matrix Inequalities (LMI) for control analysis and design, Semidefinite Programming (SDP), control synthesis for Linear Parameter-Varying (LPV) systems
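A minimal sketch of an analysis LMI posed as a semidefinite program: find \(P = P^{\mathsf{T}} \succ 0\) with \(A^{\mathsf{T}} P + P A \prec 0\), which certifies stability of \(\dot x = A x\). The choice of CVXPY and the test matrix are illustrative assumptions:

```python
# Lyapunov stability LMI solved as an SDP feasibility problem.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # a stable test matrix
n = A.shape[0]
eps = 1e-6                                  # margin enforcing the strict inequalities

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)
print(P.value)
```

The same machinery extends to synthesis LMIs, including gain-scheduled state feedback for LPV systems.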