Optimization: principles and algorithms, by Michel Bierlaire
augmentedLagrangian.m
%> \file
%> Algorithm 19.1: augmented Lagrangian algorithm. Implementation of algorithm 19.1 of \cite Bier15-book
%>
%> @note Tested with \ref run1905.m
%> @note Tested with \ref run1906.m
%>
%> @author Michel Bierlaire
%> @date Mon Mar 23 15:45:36 2015
%> @ingroup Algorithms
%> @ingroup chap19

%> Applies the augmented Lagrangian method to solve \f[\min_x f(x) \f] subject to \f[h(x)=0,\f] where \f$f:\mathbb{R}^n \to \mathbb{R}\f$ and \f$h:\mathbb{R}^n \to \mathbb{R}^m \f$.
%> @param problem the name of the Octave function defining f(x), h(x) and their derivatives. The function has two arguments: x and index. If index=0, the objective function \f$f\f$ and its derivatives are evaluated. If index=\f$i\f$, the constraint \f$h_i\f$ and its derivatives are evaluated.
%> @param x0 starting primal point (nx1)
%> @param lambda0 starting dual point (mx1)
%> @param eps the algorithm stops if \f$\|\nabla L(x_k,\lambda_k)\| \leq \varepsilon \f$ and \f$\|h(x_k)\|^2 \leq \varepsilon \f$.
%> @param maxiter maximum number of iterations (default: 100)
%> @return [solution,lambda]
%> @return solution: primal solution
%> @return lambda: dual solution
function [solution,lambda] = augmentedLagrangian(problem,x0,lambda0,eps,maxiter=100)
  addpath("../chap11") ;
  n = size(x0,1) ;
  m = size(lambda0,1) ;
  xk = x0 ;
  lambdak = lambda0 ;

  c0 = 10.0 ;
  eta0 = 0.1258925 ;
  tau = 10.0 ;
  alpha = 0.1 ;
  beta = 0.9 ;

  c = c0 ;

  eps0 = 1.0 / c0 ;
  printf("epsk = %f\n",eps0) ;
  epsk = eps0 ;
  etak = eta0 / (c0^alpha) ;

  k = 0 ;
  niter = 0 ;

  [f,g,H,fL,gL,HL,constraints] = Lc(problem,xk,lambdak,c) ;

  do
    printf("%d\t",k);
    printf("%e\t",xk);
    printf("%e\t",lambdak);
    printf("%e\t",norm(gL)) ;
    printf("%e\t",norm(constraints)) ;
    printf("%d\t",c) ;
    printf("%e\t",norm(g)) ;
    printf("%e\t",epsk) ;
    printf("%e\t",norm(constraints)) ;
    printf("%e\t",etak) ;
    printf("%d\t",niter) ;
    printf("\n") ;
    % Inner problem: minimize the augmented Lagrangian for fixed lambdak and c.
    [xk,iteres,niter] = newtonLineSearch(@(x)Lc(problem,x,lambdak,c),xk,epsk,0,maxiter) ;
    [f,g,H,fL,gL,HL,constraints] = Lc(problem,xk,lambdak,c) ;
%    printf("Penalty parameter: %f\n", c) ;
%    printf("Nbr iterations: %d\n", niter) ;
%    printf("Gradient norm: %f\n", norm(g)) ;
%    printf("Required gradient norm: %f\n",epsk) ;
%    printf("Constraint norm: %f\n",norm(constraints)) ;
%    printf("Required constraint norm: %f\n",etak) ;

    if (norm(constraints) <= etak)
      % Constraints improved enough: update the multipliers, tighten tolerances.
      lambdak += c * constraints ;
      epsk = epsk / c ;
      etak = etak / c^beta ;
    else
      % Insufficient progress on feasibility: increase the penalty parameter.
      c = c * tau ;
      epsk = eps0 / c ;
      etak = eta0 / c^alpha ;
    endif
    k = k + 1 ;
  until ((epsk <= eps && constraints'*constraints <= eps) || k >= maxiter)
  printf("%d\t",k);
  printf("%e\t",xk);
  printf("%e\t",lambdak);
  printf("%e\t",norm(g)) ;
  printf("%e\t",norm(constraints)) ;
  printf("\n") ;
  solution = xk ;
  lambda = lambdak ;
endfunction
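The outer loop alternates between approximately minimizing the augmented Lagrangian \f$L_c(x,\lambda) = f(x) + \lambda^T h(x) + \frac{c}{2}\|h(x)\|^2\f$ for fixed \f$\lambda\f$ and \f$c\f$, and applying the first-order multiplier update \f$\lambda \leftarrow \lambda + c\, h(x)\f$. A minimal Python sketch of that scheme on a toy problem (the problem, function name, and parameter values here are illustrative assumptions, not part of the book's code) is:

```python
# Illustrative augmented Lagrangian sketch (NOT the book's implementation):
# minimize f(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
import numpy as np

def augmented_lagrangian_sketch(x0, lam0, c=10.0, eps=1e-8, maxiter=50):
    x = np.array(x0, dtype=float)
    lam = float(lam0)
    for _ in range(maxiter):
        # Inner problem: minimize L_c(x) = f(x) + lam*h(x) + (c/2)*h(x)^2.
        # L_c is quadratic for this toy problem, so Newton solves it exactly.
        for _ in range(20):
            h = x[0] + x[1] - 1.0
            grad = 2.0 * x + (lam + c * h) * np.ones(2)   # gradient of L_c
            if np.linalg.norm(grad) <= eps:
                break
            hess = 2.0 * np.eye(2) + c * np.ones((2, 2))  # Hessian of L_c
            x -= np.linalg.solve(hess, grad)
        h = x[0] + x[1] - 1.0
        if h * h <= eps:                 # stop when ||h(x)||^2 is small
            break
        lam += c * h                     # first-order multiplier update
    return x, lam
```

Starting from x0 = (0, 0) and lam0 = 0, the iterates approach x = (0.5, 0.5) and lambda = -1, the unique KKT point of the toy problem. The sketch keeps the penalty parameter c fixed for simplicity; the implementation above additionally tightens the inner tolerance epsk and the feasibility threshold etak, and multiplies c by tau when the constraint norm does not decrease fast enough.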
Copyright 2015-2016 Michel Bierlaire