Optimization: principles and algorithms, by Michel Bierlaire
gaussNewton.m
%> @file
%> Algorithm 14.1: Gauss-Newton method. Implementation of Algorithm 14.1 of \cite Bier15-book
%>
%> @note Tested with \ref runGaussNewton.m
%> @note Tested with \ref runNeuralNetwork.m
%>
%> @author Michel Bierlaire
%> @date Sat Mar 21 16:51:49 2015
%> @ingroup Algorithms
%> @ingroup chap14

%> Applies the Gauss-Newton algorithm to solve \f[\min_x f(x) = \frac{1}{2} g(x)^T g(x)\f] where \f$g:\mathbb{R}^n\to\mathbb{R}^m\f$.
%> @param obj the name of the Octave function defining g(x) and its gradient matrix. It must return a vector of size m and a matrix of size n x m.
%> @param x0 the starting point
%> @param eps the algorithm stops if \f$\|\nabla g(x) g(x)\| \leq \varepsilon\f$.
%> @param maxiter maximum number of iterations (default: 100)
%> @return [solution,iteres,niter]
%> @return solution: local minimum of the function
%> @return iteres: sequence of iterates generated by the algorithm, with n+2 columns. Columns 1:n contain the current iterate, column n+1 contains the value of \f$g(x)^T g(x)\f$ (twice the objective function), and column n+2 contains the norm of the gradient. It has 1+maxiter rows, but only the first niter+1 rows are meaningful.
%> @return niter: total number of iterations
function [solution,iteres,niter] = gaussNewton(obj,x0,eps,maxiter=100)
  n = size(x0,1) ;
  % One row per iterate: columns 1:n the iterate, n+1 g'*g, n+2 the gradient norm
  iteres = zeros(1+maxiter,n+2) ;
  xk = x0 ;
  [g,gradg] = feval(obj,xk) ;
  deriv = gradg * g ;
  iteres(1,:) = [xk' g'*g norm(deriv) ] ;
  k = 0 ;
  do
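The documented iteration can be sketched in Python/NumPy for readers outside Octave. This is a minimal illustration, not the book's code: `gauss_newton`, `g_and_jac`, and the `residuals` test problem are hypothetical names, and the residuals chosen (with unique zero at (1, 1)) are purely for demonstration. The `n x m` gradient-matrix convention (column i holds the gradient of g_i) mirrors the docstring above.

```python
import numpy as np

def gauss_newton(g_and_jac, x0, eps=1e-8, maxiter=100):
    """Minimise f(x) = 0.5 * g(x)' g(x) with the Gauss-Newton iteration.

    g_and_jac(x) must return (g, G), where g has size m and G is the
    n x m gradient matrix whose column i is the gradient of g_i.
    """
    xk = np.asarray(x0, dtype=float)
    for k in range(maxiter):
        g, G = g_and_jac(xk)
        deriv = G @ g                      # gradient of f at xk
        if np.linalg.norm(deriv) <= eps:   # same stopping test as above
            break
        # Gauss-Newton step: solve (G G') d = -G g
        d = np.linalg.solve(G @ G.T, -deriv)
        xk = xk + d
    return xk, k

# Hypothetical test problem: g1 = x1 - 1, g2 = 10 (x2 - x1^2),
# a zero-residual least-squares problem with solution (1, 1).
def residuals(x):
    x1, x2 = x
    g = np.array([x1 - 1.0, 10.0 * (x2 - x1 ** 2)])
    G = np.array([[1.0, -20.0 * x1],   # column i = gradient of g_i
                  [0.0, 10.0]])
    return g, G

sol, niter = gauss_newton(residuals, np.array([0.0, 0.0]))
```

On this zero-residual problem the full Gauss-Newton step converges in a couple of iterations; on problems with large residuals or a rank-deficient gradient matrix, the unguarded step can fail, which is why practical variants add a line search or regularization.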