Knowledge (I memorize and understand)

  1. Explain the main principle of descent direction methods for unconstrained optimization. In particular, give the descent direction condition.
  2. Give an overview of approaches to line search, that is, one-dimensional optimization along the chosen search direction; a simple backtracking variant appears in the sketch after this list.
  3. Explain the steepest descent (a.k.a. gradient) method, sketched after this list. Discuss its shortcomings.
  4. Explain the conditioning of a matrix and its impact on the convergence of the steepest descent algorithm. Propose a modification of the steepest descent method that scales the original matrix so that its conditioning is improved.
  5. Explain the Newton method for unconstrained minimization. Also give its interpretation as a method for root finding (applied to the gradient). Discuss its shortcomings.
  6. Discuss the solution of the set of linear equations (in matrix-vector form) that appears in the Newton method. Which matrix factorization is appropriate? See the Newton sketch after this list.
  7. Explain the key idea behind Quasi-Newton methods; a BFGS sketch is given under Skills below.
  8. Explain the key idea behind trust region methods for unconstrained optimization. What are their advantages over descent direction methods? A basic trust-region loop is sketched after this list.
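
A minimal Matlab sketch for items 1-3: steepest descent with a backtracking (Armijo) line search. The handle names f and g (objective and gradient) and the parameters tol and maxit are illustrative assumptions, not part of the course material. Note that d = -grad satisfies the descent direction condition grad'*d = -norm(grad)^2 < 0.

    % Steepest descent with backtracking (Armijo) line search.
    % f, g are handles for the objective and its gradient (assumed names).
    function x = steepest_descent(f, g, x, tol, maxit)
      for k = 1:maxit
        grad = g(x);
        if norm(grad) < tol, break; end   % gradient small enough: stop
        d = -grad;                        % steepest descent direction
        fx = f(x);
        t = 1;                            % try a full step first
        while f(x + t*d) > fx + 1e-4 * t * (grad' * d)
          t = t / 2;                      % halve the step until Armijo holds
        end
        x = x + t * d;
      end
    end

On a quadratic with a symmetric positive definite Hessian A, convergence of this method degrades as the condition number kappa(A) = lambda_max(A)/lambda_min(A) grows, which is the link to item 4: rescaling the variables so that the transformed Hessian is better conditioned speeds the method up.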
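For items 5 and 6: each Newton iteration solves the linear system H(x)*d = -g(x). Since the Hessian is symmetric, and positive definite near a strict local minimizer, a Cholesky factorization is a natural choice; a failed factorization also signals that the Hessian is not positive definite. A sketch under these assumptions (the handle names g and H are illustrative):

    % Newton's method: solve H(x)*d = -g(x) by Cholesky at each iteration.
    % g, H are assumed handles for the gradient and the Hessian.
    function x = newton_min(g, H, x, tol, maxit)
      for k = 1:maxit
        grad = g(x);
        if norm(grad) < tol, break; end
        [R, p] = chol(H(x));        % H = R'*R when H is positive definite
        if p > 0
          error('Hessian is not positive definite at this iterate');
        end
        d = -(R \ (R' \ grad));     % two triangular solves, no explicit inverse
        x = x + d;                  % full Newton step (no globalization here)
      end
    end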
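For item 8, one possible sketch of a basic trust-region loop using the Cauchy-point step on the quadratic model m(d) = f(x) + grad'*d + 0.5*d'*B*d; the handles f, g, B and the constants (initial radius, acceptance and 0.25/0.75 resizing thresholds) are illustrative choices, not the only ones. Unlike descent direction methods, the step is bounded by the trust radius Delta, which is adapted according to how well the model predicted the actual decrease.

    % Basic trust-region loop with the Cauchy-point step on the model
    % m(d) = f(x) + grad'*d + 0.5*d'*B*d; f, g, B are assumed handles.
    function x = trust_region(f, g, B, x, tol, maxit)
      Delta = 1;                                   % initial trust radius
      for k = 1:maxit
        grad = g(x);  Bk = B(x);
        if norm(grad) < tol, break; end
        % Cauchy point: minimize the model along -grad inside the region
        gBg = grad' * Bk * grad;
        tau = 1;
        if gBg > 0
          tau = min(1, norm(grad)^3 / (Delta * gBg));
        end
        d = -(tau * Delta / norm(grad)) * grad;
        pred = -(grad' * d + 0.5 * d' * Bk * d);   % predicted decrease
        rho = (f(x) - f(x + d)) / pred;            % actual vs predicted
        if rho > 0.75, Delta = 2 * Delta; end      % model good: expand radius
        if rho < 0.25, Delta = 0.25 * Delta; end   % model poor: shrink radius
        if rho > 0.1,  x = x + d; end              % accept step only if it helps
      end
    end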

Skills (I can use the knowledge to solve a problem)

  1. Write Matlab code implementing a Quasi-Newton method for minimization of a provided function; one possible BFGS sketch follows.
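
A minimal sketch of one possible solution: BFGS, which maintains an approximation Hinv of the inverse Hessian built from gradient differences, combined with a backtracking line search. The handle names f and g and the constants are assumptions, not a prescribed interface.

    % Quasi-Newton (BFGS) minimization -- a minimal sketch, assuming
    % handles f (objective), g (gradient) and a starting point x.
    function x = bfgs_min(f, g, x, tol, maxit)
      n = numel(x);
      Hinv = eye(n);                      % inverse-Hessian approximation
      for k = 1:maxit
        grad = g(x);
        if norm(grad) < tol, break; end
        d = -Hinv * grad;                 % quasi-Newton direction
        t = 1;                            % backtracking (Armijo) line search
        while f(x + t*d) > f(x) + 1e-4 * t * (grad' * d)
          t = t / 2;
        end
        s = t * d;                        % step taken
        xnew = x + s;
        y = g(xnew) - grad;               % change in the gradient
        if s' * y > 1e-12                 % curvature condition: keeps Hinv PD
          rho = 1 / (s' * y);
          V = eye(n) - rho * (s * y');
          Hinv = V * Hinv * V' + rho * (s * s');   % BFGS inverse update
        end
        x = xnew;
      end
    end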