
Nonlinear Optimisation Algorithms

Class at Faculty of Mathematics and Physics | NOPT008

Syllabus

1) Introduction to unconstrained optimization methods ... the general form of an optimization method, order of methods, basic classification of methods

2) Convergence of methods ... Q-convergence and the corresponding important classes of algorithms - linear, superlinear and quadratic; R-convergence and the relationship between the different definitions of convergence
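
For orientation, the Q-convergence rates named in this item have the following standard definitions for a sequence \(x_k \to x^*\); R-linear convergence then only requires the errors \(\|x_k - x^*\|\) to be bounded by a Q-linearly convergent sequence. (These are the usual textbook formulas, added as a reminder rather than quoted from the course text.)

```latex
% Standard Q-convergence rates for a sequence x_k -> x*
\limsup_{k\to\infty}\frac{\|x_{k+1}-x^*\|}{\|x_k-x^*\|}=r\in(0,1)
  \quad\text{(Q-linear)},
\qquad
\lim_{k\to\infty}\frac{\|x_{k+1}-x^*\|}{\|x_k-x^*\|}=0
  \quad\text{(Q-superlinear)},
\qquad
\limsup_{k\to\infty}\frac{\|x_{k+1}-x^*\|}{\|x_k-x^*\|^{2}}<\infty
  \quad\text{(Q-quadratic)}.
```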

3) Basic gradient methods ... gradient methods, step selection by line search, step-selection rules - Armijo, Goldstein, Wolfe; intervals of admissible step lengths for the Wolfe conditions
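
To make the step-selection topic concrete, the following is a minimal sketch of a backtracking line search enforcing the Armijo (sufficient decrease) condition; the function names, the parameters alpha0, rho, c1 and the quadratic test function are illustrative choices, not taken from the course material.

```python
import numpy as np

def backtracking_armijo(f, grad_f, x, p, alpha0=1.0, rho=0.5, c1=1e-4):
    """Shrink alpha until the Armijo condition
    f(x + alpha*p) <= f(x) + c1*alpha*grad_f(x)^T p holds."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ p              # directional derivative; negative for a descent direction
    while f(x + alpha * p) > fx + c1 * alpha * slope:
        alpha *= rho                   # backtrack
    return alpha

# Illustrative use on the quadratic f(x) = 0.5 * ||x||^2
f = lambda x: 0.5 * (x @ x)
grad_f = lambda x: x
x = np.array([1.0, -2.0])
p = -grad_f(x)                         # steepest-descent direction
alpha = backtracking_armijo(f, grad_f, x, p)
print(alpha, x + alpha * p)
```

The Goldstein and Wolfe rules add a second inequality (a lower bound on the decrease, or a curvature condition on the new gradient) on top of the same sufficient-decrease test.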

4) Gradient methods and Newton-type methods ... convergence of gradient and Newton-type methods, Zoutendijk's theorem and its relation to convergence, the role of strong convexity and Lipschitz continuity of the gradient.
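
Zoutendijk's theorem referred to here can be stated in its standard form (added for orientation, not quoted from the course): if f is bounded below, its gradient is Lipschitz continuous and the step lengths satisfy the Wolfe conditions, then

```latex
\sum_{k\ge 0}\cos^{2}\theta_k\,\|\nabla f(x_k)\|^{2}<\infty,
\qquad
\cos\theta_k=\frac{-\nabla f(x_k)^{\top}p_k}{\|\nabla f(x_k)\|\,\|p_k\|},
```

so any method whose directions \(p_k\) stay bounded away from orthogonality to \(-\nabla f(x_k)\) drives \(\|\nabla f(x_k)\|\) to zero.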

5) Trust-region methods ... strategies for choosing the step and direction in trust-region methods, the Cauchy point, description of the dogleg method, Sorensen's lemma.
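
A minimal sketch of the dogleg step for the trust-region subproblem is given below; it assumes a symmetric positive definite model Hessian B, and the variable names are illustrative.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Approximately minimize m(p) = g^T p + 0.5 p^T B p subject to ||p|| <= delta,
    assuming B is symmetric positive definite."""
    p_newton = -np.linalg.solve(B, g)            # full (unconstrained) Newton step
    if np.linalg.norm(p_newton) <= delta:
        return p_newton
    p_cauchy = -(g @ g) / (g @ B @ g) * g        # minimizer of the model along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)    # truncated steepest-descent (Cauchy) step
    # otherwise follow the dogleg path p_cauchy + tau*(p_newton - p_cauchy) to the boundary
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * (p_cauchy @ d), (p_cauchy @ p_cauchy) - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d

# Illustrative use
g = np.array([1.0, 2.0])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
print(dogleg_step(g, B, delta=0.5))
```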

6) Quasi-Newton methods ... the principle of quasi-Newton methods, Hessian approximation, convergence of the methods (Dennis-Moré condition), the DFP, BFGS, L-BFGS and Broyden methods.
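
To make the Hessian-approximation idea concrete, here is a sketch of the standard BFGS update of the inverse Hessian approximation; the variable names and the illustrative call are not part of the course text.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse Hessian approximation H, given the step
    s = x_{k+1} - x_k and the gradient change y = grad f(x_{k+1}) - grad f(x_k).
    Requires the curvature condition s^T y > 0 (ensured by the Wolfe conditions)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Illustrative use: a single update starting from the identity
H = np.eye(2)
s = np.array([0.1, -0.2])
y = np.array([0.3, -0.1])
print(bfgs_update(H, s, y))
```

L-BFGS applies the same update implicitly, storing only the last few pairs (s, y) instead of the full matrix H.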

7) Conjugate gradient methods ... conjugate gradient methods, choice of the search direction, convergence for specific systems, preconditioning and the PCG method, the Fletcher-Reeves and Polak-Ribière methods.
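
For reference, the two direction-update parameters named in this item are given by the standard formulas

```latex
\beta_k^{\mathrm{FR}}
  =\frac{\nabla f(x_{k+1})^{\top}\nabla f(x_{k+1})}{\nabla f(x_k)^{\top}\nabla f(x_k)},
\qquad
\beta_k^{\mathrm{PR}}
  =\frac{\nabla f(x_{k+1})^{\top}\bigl(\nabla f(x_{k+1})-\nabla f(x_k)\bigr)}{\nabla f(x_k)^{\top}\nabla f(x_k)},
```

with the new search direction \(p_{k+1} = -\nabla f(x_{k+1}) + \beta_k p_k\).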

8) Basic constrained optimization ... the basic constrained optimization problem, the KKT matrix and its use in constructing methods for constrained problems - active-set method, Newton's method.
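
The KKT matrix mentioned here arises, in one standard formulation (not quoted from the course), from an equality-constrained quadratic model with Hessian \(H\), constraint Jacobian \(A\), gradient \(g\) and constraint residual \(c\): the Newton step \(p\) and multiplier estimate \(\lambda\) solve

```latex
\begin{pmatrix} H & A^{\top} \\ A & 0 \end{pmatrix}
\begin{pmatrix} p \\ \lambda \end{pmatrix}
=
\begin{pmatrix} -g \\ -c \end{pmatrix}.
```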

9) Barrier methods ... the principle of barrier methods, interior-point methods for nonlinear problems as an extension of the interior-point methods of linear programming, and the convergence of the related methods.

10) Penalty and barrier methods ... penalty methods, barrier methods and their convergence
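
The two constructions from items 9 and 10 can be summarized by their standard model functions for the problem of minimizing \(f(x)\) subject to \(c_i(x)=0\) (penalty) or \(c_i(x)\ge 0\) (barrier); the formulas below are the usual textbook versions, added only for orientation.

```latex
Q_{\mu}(x)=f(x)+\frac{\mu}{2}\sum_i c_i(x)^2
  \quad\text{(quadratic penalty, solved for }\mu\to\infty\text{)},
\qquad
B_{\mu}(x)=f(x)-\mu\sum_i \log c_i(x)
  \quad\text{(logarithmic barrier, solved for }\mu\to 0^{+}\text{)}.
```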

Annotation

The course follows on from the course Fundamentals of Nonlinear Optimization taught in the preceding term. In this course we focus on individual algorithms for solving various types of, typically convex, optimization problems.

In addition to the basic properties of the methods, concerning their structure and convergence, the course provides an overview of methods for unconstrained as well as constrained problems. Alongside the description of the methods themselves, we will often be interested in the suitability of a method for a given class of problems and in its convergence.