Optimization: principles and algorithms, by Michel Bierlaire
gaussNewton.m File Reference

Algorithm 14.1: Gauss-Newton method. More...



function gaussNewton (in obj, in x0, in eps, in maxiter)
 Applies the Gauss-Newton algorithm to solve

\[\min_x f(x)= \frac{1}{2} g(x)^T g(x)\]

where $g:\mathbb{R}^n\to\mathbb{R}^m$. More...


Detailed Description

Algorithm 14.1: Gauss-Newton method.

Implementation of algorithm 14.1 of [1]

Tested with runGaussNewton.m
Tested with runNeuralNetwork.m
Michel Bierlaire
Sat Mar 21 16:51:49 2015

Definition in file gaussNewton.m.

Function Documentation

function gaussNewton (in obj, in x0, in eps, in maxiter)

Applies the Gauss-Newton algorithm to solve

\[\min_x f(x)= \frac{1}{2} g(x)^T g(x)\]

where $g:\mathbb{R}^n\to\mathbb{R}^m$.

Parameters
    obj: the name of the Octave function defining g(x) and its gradient matrix. It should return a vector of size m and a matrix of size n x m.
    x0: the starting point.
    eps: the algorithm stops if $\|\nabla g(x) g(x)\| \leq \varepsilon $.
    maxiter: maximum number of iterations (Default: 100).

Returns
    solution: a local minimum of the function.
    iteres: the sequence of iterates generated by the algorithm. It contains n+2 columns: columns 1:n contain the current iterate, column n+1 the value of the objective function, and column n+2 the norm of the gradient. It contains maxiter rows, but only the first niter rows are meaningful.
    niter: the total number of iterations.
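The iteration behind this routine can be sketched as follows. This is an illustrative Python/NumPy version, not the book's Octave code: it takes the residual function g and its Jacobian separately (the Octave version returns both from one function, with the gradient matrix stored as n x m, i.e., the transposed Jacobian), and each step solves the linearized least-squares subproblem min_d ||J d + g(x)||, which is equivalent to (J^T J) d = -J^T g(x). The example residuals at the end are hypothetical, chosen only to exercise the sketch.

```python
import numpy as np

def gauss_newton(g, jac, x0, eps=1e-8, maxiter=100):
    """Minimize f(x) = 0.5 * g(x)^T g(x) by Gauss-Newton.

    g   : returns the residual vector of size m
    jac : returns the m x n Jacobian of g
    """
    x = np.asarray(x0, dtype=float)
    niter = 0
    for niter in range(1, maxiter + 1):
        r = g(x)              # residuals, size m
        J = jac(x)            # Jacobian, m x n
        grad = J.T @ r        # gradient of f: nabla g(x) g(x)
        if np.linalg.norm(grad) <= eps:
            break
        # Gauss-Newton step: least-squares solution of J d = -r
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + d
    return x, niter

# Hypothetical zero-residual example (Rosenbrock in least-squares form):
g = lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])
jac = lambda x: np.array([[1.0, 0.0], [-20.0 * x[0], 10.0]])
x_star, n = gauss_newton(g, jac, np.array([-1.2, 1.0]))
# x_star is close to (1, 1), the global minimum
```

Because the residuals vanish at the solution, the full Gauss-Newton step behaves like Newton's method here and converges in a handful of iterations; on large-residual problems a line search (as in the book's globalized variants) is usually needed.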
Copyright 2015-2016 Michel Bierlaire