%> Algorithm 14.1: Gauss-Newton method. Implementation of algorithm 14.1 of \cite Bier15-book
%>
%> @note Tested with \ref runGaussNewton.m
%> @note Tested with \ref runNeuralNetwork.m
%>
%> @author Michel Bierlaire
%> @date Sat Mar 21 16:51:49 2015
%> Applies the Gauss-Newton algorithm to solve \f[\min_x f(x)= \frac{1}{2} g(x)^T g(x)\f] where \f$g:\mathbb{R}^n\to\mathbb{R}^m\f$.
%> @param obj the name of the Octave function defining g(x) and its gradient matrix. It should return a vector of size m and a matrix of size n x m.
%> @param x0 the starting point
%> @param eps the algorithm stops if \f$\|\nabla g(x) g(x)\| \leq \varepsilon \f$.
%> @param maxiter maximum number of iterations (Default: 100)
%> @return [solution,iteres,niter]
%> @return solution: local minimum of the function
%> @return iteres: sequence of iterates generated by the algorithm. It contains n+2 columns. Columns 1:n contain the value of the current iterate. Column n+1 contains the value of the objective function. Column n+2 contains the norm of the gradient. It contains 1+maxiter rows, but only the first niter+1 rows are meaningful.
%> @return niter: total number of iterations
function [solution,iteres,niter] = gaussNewton(obj,x0,eps,maxiter=100)
  iteres = zeros(1+maxiter,size(x0,1)+2) ;
  xk = x0 ;
  [g,gradg] = feval(obj,xk) ;
  deriv = gradg * g ;
  iteres(1,:) = [xk' g'*g norm(deriv)] ;
  k = 0 ;
  do
    % Gauss-Newton step: dk solves (gradg * gradg') dk = - gradg * g
    dk = -(gradg * gradg') \ (gradg * g) ;
    xk = xk + dk ;
    [g,gradg] = feval(obj,xk) ;
    deriv = gradg * g ;
    k = k + 1 ;
    iteres(k+1,:) = [xk' g'*g norm(deriv)] ;
  until (norm(deriv) <= eps || k >= maxiter)
  solution = xk ;
  niter = k ;
endfunction
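A minimal residual function for exercising the routine (a sketch: the name `exampleResidual` and the specific problem are illustrative, not part of the book's code). It encodes \f$g(x) = (x_1 - 1, 10(x_2 - x_1^2))\f$, so that \f$\frac{1}{2} g(x)^T g(x)\f$ is a Rosenbrock-type least-squares objective with its minimum at (1,1). Note that `gradg` must be the n x m matrix whose column j is the gradient of \f$g_j\f$, i.e. the transpose of the Jacobian of g.

```matlab
% Illustrative residual for gaussNewton (the name is an assumption,
% not from the original file): g: R^2 -> R^2.
function [g, gradg] = exampleResidual(x)
  g = [x(1) - 1 ; 10 * (x(2) - x(1)^2)] ;
  % gradg is n x m: column j holds the gradient of g_j
  % (the transpose of the Jacobian of g)
  gradg = [1, -20 * x(1) ; 0, 10] ;
endfunction
```

Called as, e.g., `[sol, iteres, niter] = gaussNewton('exampleResidual', [0;0], 1.0e-10)`. For this particular problem, starting from (0;0) the Gauss-Newton step reaches the exact minimizer (1;1) in two iterations, since the second step lands where g vanishes.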