Theory of unconstrained optimization (necessary and sufficient conditions, the role of convexity, classification of convergence rates), minimization in a given direction (golden section search, curve fitting, the Newton method), inexact line search (the Goldstein, Armijo, and Wolfe conditions), basic descent methods (the method of steepest descent and the Newton method), conjugate direction methods (the nonlinear conjugate gradient method), quasi-Newton methods (the quasi-Newton condition, rank-one updates, DFP, BFGS, the Broyden family), trust-region methods, least-squares problems (the Gauss-Newton and Levenberg-Marquardt methods). Theory of constrained optimization (Lagrange multipliers, necessary and sufficient conditions).
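For example, minimization in a given direction can be carried out by golden section search, which shrinks a bracket around the minimizer by the fixed factor 1/phi per iteration. A minimal Python sketch, assuming only that the objective f is unimodal on the initial bracket [a, b]:

    import math

    def golden_section_search(f, a, b, tol=1e-8):
        """Minimize a unimodal function f on [a, b] by golden section search.

        Each iteration shrinks the bracket by 1/phi ~ 0.618, so the method
        converges linearly with that rate.
        """
        invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~ 0.618
        x1 = b - invphi * (b - a)              # left interior point
        x2 = a + invphi * (b - a)              # right interior point
        f1, f2 = f(x1), f(x2)
        while b - a > tol:
            if f1 < f2:      # minimizer lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = b - invphi * (b - a)
                f1 = f(x1)
            else:            # minimizer lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + invphi * (b - a)
                f2 = f(x2)
        return 0.5 * (a + b)

    # Example: minimize (x - 2)^2 on [0, 5]; the minimizer is x = 2.
    print(golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0))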
Optimization and minimization techniques: the notion of an optimization method, global convergence, and speed of convergence.
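The difference in speed of convergence is already visible on an ill-conditioned quadratic: with exact line search, steepest descent converges linearly at a rate of roughly (kappa - 1)/(kappa + 1), while the Newton method reaches the minimizer of a quadratic in a single step. A minimal sketch, assuming NumPy and the hypothetical test matrix A = diag(1, 100):

    import numpy as np

    # Ill-conditioned quadratic f(x) = 0.5 x^T A x with condition number 100.
    A = np.diag([1.0, 100.0])

    def grad(x):
        return A @ x

    # Steepest descent with exact line search: for a quadratic the exact step
    # is alpha = g^T g / g^T A g; convergence is linear at rate ~ 99/101.
    x = np.array([1.0, 1.0])
    for k in range(5000):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        alpha = (g @ g) / (g @ A @ g)
        x = x - alpha * g
    print("steepest descent stopped after", k, "iterations")

    # The Newton method solves A p = -grad(x); for a quadratic it reaches the
    # exact minimizer x = 0 in one step.
    x = np.array([1.0, 1.0])
    x = x + np.linalg.solve(A, -grad(x))
    print("Newton after one step:", x)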
Minimization of a functional, descent techniques, the nonlinear conjugate gradient method, quasi-Newton methods, trust-region methods. Least-squares problems, the Gauss-Newton method. Theory of constrained optimization:
Lagrange multipliers, convex optimization, penalty and barrier methods, projection and dual methods. The course is suitable for students focused on industrial mathematics and numerical analysis.
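As an illustration of penalty methods, the quadratic penalty approach replaces an equality-constrained problem min f(x) subject to c(x) = 0 by a sequence of unconstrained problems min f(x) + (mu/2) c(x)^2 with increasing mu; the quantity mu c(x_mu) then estimates the Lagrange multiplier. A minimal sketch, assuming SciPy's BFGS for the inner unconstrained solves and a hypothetical test problem:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical test problem: minimize f(x) = x1 + x2 subject to
    # c(x) = x1^2 + x2^2 - 1 = 0; the exact solution is x1 = x2 = -1/sqrt(2),
    # with Lagrange multiplier 1/sqrt(2).
    def penalized(x, mu):
        c = x[0] ** 2 + x[1] ** 2 - 1.0
        return x[0] + x[1] + 0.5 * mu * c ** 2

    x = np.array([0.0, 0.0])
    for mu in [1.0, 10.0, 100.0, 1000.0]:
        # Warm-start each unconstrained solve from the previous solution.
        x = minimize(penalized, x, args=(mu,), method="BFGS").x
        c = x[0] ** 2 + x[1] ** 2 - 1.0
        print(f"mu = {mu:6.0f}  x = {x}  multiplier estimate mu*c = {mu * c:.4f}")
    print("exact minimizer component:", -1.0 / np.sqrt(2.0))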