# Optimization: principles and algorithms

Bierlaire, M. (2015). Optimization: Principles and Algorithms. EPFL Press

### Algorithm 9.1: quadratic problems: direct solution

In [1]:
import numpy as np
from scipy import linalg

def quadraticDirect(Q, b):
    # Solve the quadratic problem by solving Qx = -b,
    # using the Cholesky factorization Q = L L'.
    L = linalg.cholesky(Q).T
    # Forward substitution: L y = -b
    y = linalg.solve_triangular(L, -b, lower=True)
    # Backward substitution: L' x = y
    solution = linalg.solve_triangular(L.T, y, lower=False)
    return solution
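
The algorithm exploits the fact that, when $Q$ is positive definite, the unconstrained minimum of $f(x)=\frac{1}{2}x^TQx+b^Tx$ is characterized by the first-order condition $\nabla f(x)=Qx+b=0$. With the Cholesky factorization $Q=LL^T$, this linear system is solved by two triangular substitutions:

$$Ly = -b, \qquad L^T x = y.$$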



Example 9.8: $Q=\left(\begin{array}{cccc} 1& 1 & 1 & 1 \\ 1 & 2 & 2 & 2 \\ 1 & 2 & 3 & 3 \\ 1 & 2 & 3 & 4\end{array}\right)$, $b=\left(\begin{array}{c}-4 \\ -7 \\ -9 \\ -10\end{array}\right)$

In [2]:
Q = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 2, 3, 3], [1, 2, 3, 4]])
b = np.array([[-4], [-7], [-9], [-10]])
s = quadraticDirect(Q, b)
print(s)

[[1.]
 [1.]
 [1.]
 [1.]]
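
As a quick sanity check (not part of the book's listing), we can verify that the computed point satisfies the optimality condition $Qx+b=0$; for Example 9.8 the minimizer is the all-ones vector:

```python
import numpy as np
from scipy import linalg

Q = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 2, 3, 3], [1, 2, 3, 4]])
b = np.array([[-4], [-7], [-9], [-10]])

# Same steps as Algorithm 9.1: factorize Q = L L', then solve Qx = -b
# with one forward and one backward triangular substitution.
L = linalg.cholesky(Q).T
y = linalg.solve_triangular(L, -b, lower=True)
s = linalg.solve_triangular(L.T, y, lower=False)

print(np.allclose(s, np.ones((4, 1))))  # True: the minimizer is all ones
print(np.allclose(Q.dot(s) + b, 0))     # True: the gradient vanishes
```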


### Algorithm 9.2: Conjugate gradient method

In [5]:
from optimizationExceptions import OptimizationLinAlgError

# Inner product of two column vectors, returned as a Python scalar
def scalarInner(a, b):
    return a.T.dot(b).item()

def conjugateGradient(Q, b, x0):
    n = len(x0)
    xk = x0
    gk = Q.dot(xk) + b
    iters = list()
    dk = -gk
    betak = 0
    for k in range(n):
        denom = scalarInner(dk, Q.dot(dk))
        if denom <= 0:
            raise OptimizationLinAlgError('The matrix must be positive definite')
        alphak = -scalarInner(dk, gk) / denom
        iters.append([xk, gk, dk, alphak, betak])
        xk = xk + alphak * dk
        gkp1 = Q.dot(xk) + b
        betak = scalarInner(gkp1, gkp1) / scalarInner(gk, gk)
        dk = -gkp1 + betak * dk
        gk = gkp1
    iters.append([xk, gk, dk, alphak, betak])
    return (xk, iters)



Run the algorithm from $x_0=\left(\begin{array}{c}5 \\ 5 \\ 5 \\ 5 \end{array}\right)$.

In [6]:
x0 = np.array([[5.0], [5.0], [5.0], [5.0]])
(x, iters) = conjugateGradient(Q, b, x0)
print(x)

[[1.]
 [1.]
 [1.]
 [1.]]


Calculate the gradient, to check that it is indeed numerically close to 0.

In [8]:
print(Q.dot(x)+b)

[[-1.63336011e-12]
 [-3.06865644e-12]
 [-4.13535872e-12]
 [-4.70556927e-12]]


Table 9.1, page 231.

In [9]:
print("k\txk\t\tGrad(xk)\t\tdk\t\talphak\t\tbetak")
for k in range(len(iters)):
    print("{}\t{:+E}\t{:+E}\t{:+E}\t{:+E}\t{:+E}".format(k, iters[k][0].item(0), iters[k][1].item(0), iters[k][2].item(0), iters[k][3], iters[k][4]))
    print("  \t{:+E}\t{:+E}\t{:+E}".format(iters[k][0].item(1), iters[k][1].item(1), iters[k][2].item(1)))
    print("  \t{:+E}\t{:+E}\t{:+E}".format(iters[k][0].item(2), iters[k][1].item(2), iters[k][2].item(2)))
    print("  \t{:+E}\t{:+E}\t{:+E}".format(iters[k][0].item(3), iters[k][1].item(3), iters[k][2].item(3)))


k	xk		Grad(xk)		dk		alphak		betak
0	+5.000000E+00	+1.600000E+01	-1.600000E+01	+1.207658E-01	+0.000000E+00
 	+5.000000E+00	+2.800000E+01	-2.800000E+01
 	+5.000000E+00	+3.600000E+01	-3.600000E+01
 	+5.000000E+00	+4.000000E+01	-4.000000E+01
1	+3.067747E+00	+1.508100E+00	-1.525788E+00	+1.029534E+00	+1.105469E-03
 	+1.618557E+00	+9.484536E-01	-9.794067E-01
 	+6.524300E-01	-2.297496E-01	+1.899527E-01
 	+1.693667E-01	-1.060383E+00	+1.016164E+00
2	+1.496896E+00	+1.706557E-01	-1.976757E-01	+2.371723E+00	+1.770889E-02
 	+6.102242E-01	-1.555851E-01	+1.382409E-01
 	+8.479928E-01	-9.204998E-02	+9.541383E-02
 	+1.215542E+00	+1.234923E-01	-1.054971E-01
3	+1.028064E+00	+5.777961E-03	-8.275692E-03	+3.391183E+00	+1.263550E-02
 	+9.380933E-01	-1.650846E-02	+1.825520E-02
 	+1.074288E+00	+2.311184E-02	-2.190624E-02
 	+9.653322E-01	-1.155592E-02	+1.022291E-02
4	+1.000000E+00	-1.633360E-12	+1.633360E-12	+3.391183E+00	+5.271931E-20
 	+1.000000E+00	-3.068656E-12	+3.068656E-12
 	+1.000000E+00	-4.135359E-12	+4.135359E-12
 	+1.000000E+00	-4.705569E-12	+4.705569E-12
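
Termination in at most $n$ iterations follows from the fact that the successive directions are mutually conjugate with respect to $Q$. The following self-contained sketch (a minimal conjugate gradient loop written for this check, not the book's code) verifies this property on Example 9.8:

```python
import numpy as np

Q = np.array([[1., 1., 1., 1.], [1., 2., 2., 2.], [1., 2., 3., 3.], [1., 2., 3., 4.]])
b = np.array([[-4.], [-7.], [-9.], [-10.]])
xk = np.full((4, 1), 5.0)

gk = Q @ xk + b
dk = -gk
directions = []
for _ in range(4):
    alphak = -(dk.T @ gk).item() / (dk.T @ Q @ dk).item()
    directions.append(dk)
    xk = xk + alphak * dk
    gkp1 = Q @ xk + b
    betak = (gkp1.T @ gkp1).item() / (gk.T @ gk).item()
    dk = -gkp1 + betak * dk
    gk = gkp1

# Distinct directions are conjugate with respect to Q: d_i' Q d_j is (numerically) 0
for i in range(4):
    for j in range(i + 1, 4):
        assert abs((directions[i].T @ Q @ directions[j]).item()) < 1e-6

print(np.allclose(xk, np.ones((4, 1))))  # True: same minimizer as in Table 9.1
```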


An error is triggered if the matrix is not positive definite. Here, $Q=\left(\begin{array}{cccc} 1& 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right)$

In [10]:
Q = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
b = np.array([[-4], [-7], [-9], [-10]])
x0 = np.array([[5.0], [5.0], [5.0], [5.0]])
(x, iters) = conjugateGradient(Q, b, x0)

---------------------------------------------------------------------------
OptimizationLinAlgError                   Traceback (most recent call last)
<ipython-input-10-8e5e73a722c2> in <module>()
      2 b = np.array([[-4], [-7], [-9], [-10]])
      3 x0 = np.array([[5.0], [5.0], [5.0], [5.0]])
      4 (x, iters) = conjugateGradient(Q, b, x0)

OptimizationLinAlgError: The matrix must be positive definite
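
This matrix is indeed not positive definite: the sign of $x^TQx$ is governed by the symmetric part $(Q+Q^T)/2$, which has a negative eigenvalue. A quick check (a sketch added for illustration, not part of the book's code):

```python
import numpy as np

Q = np.array([[1., 2., 3., 4.], [5., 6., 7., 8.], [9., 10., 11., 12.], [13., 14., 15., 16.]])

# x'Qx = x'Sx, where S is the symmetric part of Q
S = (Q + Q.T) / 2
eigenvalues = np.linalg.eigvalsh(S)

print(eigenvalues.min() < 0)  # True: Q is not positive definite
```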