The course provides the basics of mathematical optimization: using linear algebra for optimization (least squares, SVD), Lagrange multipliers, selected numerical algorithms (gradient, Newton, Gauss-Newton, Levenberg-Marquardt methods), linear programming, convex sets and functions, intro to convex optimization, duality.
Linear algebra. Calculus, including an introduction to multivariate calculus. Numerical algorithms and probability and statistics are recommended.
The aim of the course is to teach students to recognize optimization problems around them, formulate them mathematically, estimate their level of difficulty, and solve easier problems.
1. General problem of continuous optimization.
2. Over- and under-determined linear systems: the least-squares and least-norm methods.
3. Minimization of quadratic functions without constraints.
4. Using SVD in optimization.
5. Iterative algorithms for unconstrained local extrema (gradient, Newton, Gauss-Newton, and Levenberg-Marquardt methods).
6. Linear programming.
7. Simplex method.
8. Convex sets and polyhedra. Convex functions.
9. Intro to convex optimization.
10. Lagrange formalism, KKT conditions.
11. Lagrange duality. Duality in linear programming.
12. Examples of non-convex problems.
13. Intro to multicriteria optimization.
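As a small taste of topics 2 and 4, an overdetermined linear system can be solved in the least-squares sense via the SVD. A minimal sketch in Python with NumPy (the course homework itself uses Matlab; the data here are illustrative):

```python
import numpy as np

# Overdetermined system A x ≈ b (more equations than unknowns):
# fitting a line y = c + k*t through three points.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Least-squares solution via the thin SVD: x = V diag(1/s) U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

# The same solution from the normal equations A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x, x_normal)
```

The SVD route is numerically preferable to forming the normal equations, since it avoids squaring the condition number of A.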
In the seminars, students practice the theory by solving problems together at the blackboard, and solve optimization problems in Matlab as homework.
Online lecture notes: Tomáš Werner, Optimalizace (see the course web pages).
Optionally, selected parts from books:
Carl D. Meyer: Matrix Analysis and Applied Linear Algebra, SIAM, 2000.
Giuseppe C. Calafiore, Laurent El Ghaoui: Optimization Models, Cambridge University Press, 2014.
Stephen Boyd and Lieven Vandenberghe: Convex Optimization, Cambridge University Press, 2004.