class BIOGEME: examples of use of each function

This webpage is intended for programmers who need examples of how to use the functions of the class. The examples are designed to illustrate the syntax; they do not correspond to any meaningful model. For examples of actual models, visit biogeme.epfl.ch.

In [1]:
import datetime
print(datetime.datetime.now())
2019-12-29 21:15:43.050983
In [2]:
import biogeme.version as ver
print(ver.getText())
biogeme 3.2.5 [2019-12-29]
Version entirely written in Python
Home page: http://biogeme.epfl.ch
Submit questions to https://groups.google.com/d/forum/biogeme
Michel Bierlaire, Transport and Mobility Laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL)

In [3]:
import biogeme.biogeme as bio
import biogeme.database as db
import pandas as pd
import numpy as np
from biogeme.expressions import Beta, Variable, exp

Define the verbosity of Biogeme

In [4]:
import biogeme.messaging as msg
logger = msg.bioMessage()
logger.setDetailed()
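
Other verbosity levels can be selected in the same way. A minimal sketch, assuming the usual setters of bioMessage in this release (check the API of your version):

# Assumed additional setters of bioMessage:
logger.setSilent()     # suppress all messages
logger.setDetailed()   # restore the detailed level used on this page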

Definition of a database

In [5]:
df = pd.DataFrame({'Person':[1,1,1,2,2],
                   'Exclude':[0,0,1,0,1],
                   'Variable1':[1,2,3,4,5],
                   'Variable2':[10,20,30,40,50],
                   'Choice':[1,2,3,1,2],
                   'Av1':[0,1,1,1,1],
                   'Av2':[1,1,1,1,1],
                   'Av3':[0,1,1,1,1]})
myData = db.Database('test',df)

Definition of various expressions

In [6]:
Variable1 = Variable('Variable1')
Variable2 = Variable('Variable2')
beta1 = Beta('beta1', -1.0, -3, 3, 0)
beta2 = Beta('beta2', 2.0, -3, 10, 0)
likelihood = -beta1**2 * Variable1 - exp(beta2 * beta1) * Variable2 - beta2**4
simul = beta1 / Variable1 + beta2 / Variable2
dictOfExpressions = {'loglike': likelihood, 'beta1': beta1, 'simul': simul}

Creation of the BIOGEME object

In [7]:
myBiogeme = bio.BIOGEME(myData,dictOfExpressions)
myBiogeme.modelName = 'simpleExample'
print(myBiogeme)
[21:15:44] < General >   Remove 6 unused variables from the database as only 2 are used.
simpleExample: database [test]{'loglike': ((((-(beta1(-1.0) ** `2`)) * Variable1) - (exp((beta2(2.0) * beta1(-1.0))) * Variable2)) - (beta2(2.0) ** `4`)), 'beta1': beta1(-1.0), 'simul': ((beta1(-1.0) / Variable1) + (beta2(2.0) / Variable2))}

Note that, by default, Biogeme removes the unused variables from the database in order to save memory.

In [8]:
myBiogeme.database.data.columns
Out[8]:
Index(['Person', 'Exclude', 'Variable1', 'Variable2', 'Choice', 'Av1', 'Av2',
       'Av3'],
      dtype='object')
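
If the full set of columns must be kept, the removal can in principle be disabled when constructing the object. The sketch below assumes that the constructor of this version accepts a removeUnusedVariables flag; check the signature of bio.BIOGEME in your release.

# Sketch only: keep all columns of the database
# (assumes the constructor accepts the removeUnusedVariables flag).
myBiogemeFull = bio.BIOGEME(myData, dictOfExpressions, removeUnusedVariables=False)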

calculateInitLikelihood

In [9]:
myBiogeme.calculateInitLikelihood()
[21:15:44] < General >   Log likelihood (N=5):  -115.3003
Out[9]:
-115.30029248549191

calculateLikelihood

In [10]:
x = myBiogeme.betaInitValues
xplus = [v+1 for v in x]
print(xplus)
[0.0, 3.0]
In [11]:
myBiogeme.calculateLikelihood(xplus,scaled=True)
[21:15:44] < General >   Log likelihood (N=5):       -555
Out[11]:
-111.0
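
The value can be checked by hand. With xplus = [0.0, 3.0], the contribution of each row of the database reduces to -exp(0) * Variable2 - 3**4 = -(Variable2 + 81); the sum over the five rows is -555, and the scaled version divides this sum by the number of observations, giving -111. A quick check with pandas:

# Hand-check of the log likelihood at beta1 = 0, beta2 = 3.
total = (-(df['Variable2'] + 81.0)).sum()
print(total, total / len(df))   # -555.0 -111.0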

It is possible to calculate the likelihood based on only a sample of the data.

In [12]:
myBiogeme.calculateLikelihood(xplus, scaled=True, batch=0.5)
[21:15:44] < Detailed >  Use 50.0% of the data.
[21:15:44] < General >   Log likelihood (N=2):       -232
Out[12]:
-116.0
In [13]:
myBiogeme.database.data
Out[13]:
   Person  Exclude  Variable1  Variable2  Choice  Av1  Av2  Av3
1       1        0          2         20       2    1    1    1
4       2        1          5         50       2    1    1    1
In [14]:
myBiogeme.calculateLikelihood(xplus, scaled=True, batch=0.6)
[21:15:44] < Detailed >  Use 60.0% of the data.
[21:15:44] < General >   Log likelihood (N=3):       -363
Out[14]:
-121.0
In [15]:
myBiogeme.database.data
Out[15]:
   Person  Exclude  Variable1  Variable2  Choice  Av1  Av2  Av3
2       1        1          3         30       3    1    1    1
4       2        1          5         50       2    1    1    1
3       2        0          4         40       1    1    1    1

By default, each observation has the same probability of being selected in the sample. The selection probability can also be made proportional to the values of a column of the database, by setting the attribute columnForBatchSamplingWeights to the name of that column.

In [16]:
myBiogeme.columnForBatchSamplingWeights = 'Variable2'
myBiogeme.calculateLikelihood(xplus, scaled=True, batch=0.6)
[21:15:44] < Detailed >  Use 60.0% of the data.
[21:15:44] < General >   Log likelihood (N=3):       -353
Out[16]:
-117.66666666666667
In [17]:
myBiogeme.database.data
Out[17]:
   Person  Exclude  Variable1  Variable2  Choice  Av1  Av2  Av3
4       2        1          5         50       2    1    1    1
1       1        0          2         20       2    1    1    1
3       2        0          4         40       1    1    1    1

calculateLikelihoodAndDerivatives

In [18]:
f,g,h,bhhh = myBiogeme.calculateLikelihoodAndDerivatives(xplus,scaled=True,hessian=True,bhhh=True)
print(f'f={f}')
print(f'g={g}')
print(f'h={h}')
print(f'bhhh={bhhh}')
[21:15:44] < General >   Log likelihood (N=5):       -555 Gradient norm:      7e+02 Hessian norm:       1e+03 BHHH norm:       1e+05
f=-111.0
g=[ -90. -108.]
h=[[-270.  -30.]
 [ -30. -108.]]
bhhh=[[ 9900.  9720.]
 [ 9720. 11664.]]

Now the unscaled version

In [19]:
f,g,h,bhhh = myBiogeme.calculateLikelihoodAndDerivatives(xplus,scaled=False,hessian=True,bhhh=True)
print(f'f={f}')
print(f'g={g}')
print(f'h={h}')
print(f'bhhh={bhhh}')
[21:15:44] < General >   Log likelihood (N=5):       -555 Gradient norm:      7e+02 Hessian norm:       1e+03 BHHH norm:       1e+05
f=-555.0
g=[-450. -540.]
h=[[-1350.  -150.]
 [ -150.  -540.]]
bhhh=[[49500. 48600.]
 [48600. 58320.]]
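
The unscaled gradient can be checked by hand as well. Differentiating the log likelihood gives -2*beta1*Variable1 - beta2*exp(beta2*beta1)*Variable2 with respect to beta1, and -beta1*exp(beta2*beta1)*Variable2 - 4*beta2**3 with respect to beta2. At beta1 = 0 and beta2 = 3, the first term of each derivative vanishes:

# Hand-check of the unscaled gradient at beta1 = 0, beta2 = 3.
g1 = (-3.0 * df['Variable2']).sum()   # -3 * sum(Variable2) = -450.0
g2 = -4.0 * 3**3 * len(df)            # -108 per row, 5 rows = -540.0
print(g1, g2)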

Using only a sample of the data

In [20]:
f,g,h,bhhh = myBiogeme.calculateLikelihoodAndDerivatives(xplus,scaled=True,batch=0.5,hessian=True,bhhh=True)
print(f'f={f}')
print(f'g={g}')
print(f'h={h}')
print(f'bhhh={bhhh}')
[21:15:44] < Detailed >  Use 50.0% of the data.
[21:15:44] < General >   Log likelihood (N=2):       -252 Gradient norm:      3e+02 Hessian norm:       8e+02 BHHH norm:       6e+04
f=-126.0
g=[-135. -108.]
h=[[-405.  -45.]
 [ -45. -108.]]
bhhh=[[18450. 14580.]
 [14580. 11664.]]

likelihoodFiniteDifferenceHessian

In [21]:
myBiogeme.likelihoodFiniteDifferenceHessian(xplus)
[21:15:44] < General >   Log likelihood (N=5):       -555 Gradient norm:      7e+02  
[21:15:44] < General >   Log likelihood (N=5):       -555 Gradient norm:      7e+02  
[21:15:44] < General >   Log likelihood (N=5):  -555.0002 Gradient norm:      7e+02  
Out[21]:
array([[-1380.00020229,  -150.        ],
       [ -150.0000451 ,  -540.00005396]])
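
For reference, the finite-difference approximation can be compared with the analytical Hessian computed above by calculateLikelihoodAndDerivatives; the following snippet simply prints the elementwise difference.

# Compare the finite-difference Hessian with the analytical (unscaled) one.
h_fd = myBiogeme.likelihoodFiniteDifferenceHessian(xplus)
_, _, h_exact, _ = myBiogeme.calculateLikelihoodAndDerivatives(
    xplus, scaled=False, hessian=True, bhhh=True)
print(h_fd - h_exact)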

checkDerivatives

In [22]:
f,g,h,gdiff,hdiff = myBiogeme.checkDerivatives(verbose=True)
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02 Hessian norm:       3e+02 
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02 Hessian norm:       3e+02 
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02 Hessian norm:       3e+02 
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02 Hessian norm:       3e+02 
[21:15:44] < Detailed >  x		Gradient	FinDiff		Difference
[21:15:44] < Detailed >  beta1          	-1.060058E+01	-1.060058E+01	-5.427932E-06
[21:15:44] < Detailed >  beta2          	-1.396997E+02	-1.396997E+02	+2.608000E-05
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02 Hessian norm:       3e+02 
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02 Hessian norm:       3e+02 
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02 Hessian norm:       3e+02 
[21:15:44] < Detailed >  Row		Col		Hessian	FinDiff		Difference
[21:15:44] < Detailed >  beta1          	beta1          	-1.112012E+02	-1.112012E+02	-8.045522E-06
[21:15:44] < Detailed >  beta1          	beta2          	+2.030029E+01	+2.030029E+01	+7.365980E-09
[21:15:44] < Detailed >  beta2          	beta1          	+2.030029E+01	+2.030029E+01	-1.613879E-07
[21:15:44] < Detailed >  beta2          	beta2          	-2.603003E+02	-2.603003E+02	+2.229281E-05
In [23]:
print(f'f={f}')
print(f'g={g}')
print(f'h={h}')
print(f'gdiff={gdiff}')
print(f'hdiff={hdiff}')
hdiff
f=-115.30029248549191
g=[ -10.60058497 -139.69970751]
h=[[-111.20116994   20.30029249]
 [  20.30029249 -260.30029249]]
gdiff=[-5.42793187e-06  2.60800035e-05]
hdiff=[[-8.04552172e-06  7.36597983e-09]
 [-1.61387920e-07  2.22928137e-05]]
Out[23]:
array([[-8.04552172e-06,  7.36597983e-09],
       [-1.61387920e-07,  2.22928137e-05]])

estimate

During the estimation, it is possible to save intermediate results, in case the estimation must be interrupted.

In [24]:
results = myBiogeme.estimate(bootstrap=10,saveIterations=True)
[21:15:44] < General >   Log likelihood (N=5):  -115.3003
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):   -1216003 Gradient norm:      5e+06  
[21:15:44] < General >   Log likelihood (N=5):  -115.6422 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -67.11418 Gradient norm:          3  
[21:15:44] < General >   Log likelihood (N=5):  -67.07432 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):   -67.0655 Gradient norm:       0.06  
[21:15:44] < General >   Log likelihood (N=5):  -67.06549 Gradient norm:      0.001  
[21:15:44] < General >   Log likelihood (N=5):  -67.06549 Gradient norm:      3e-07  
[21:15:44] < General >   Log likelihood (N=5):  -67.06549 Gradient norm:      3e-07 Hessian norm:       2e+02 BHHH norm:       7e+01
[21:15:44] < General >   Re-estimate the model 10 times for bootstrapping
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):   -1216003 Gradient norm:      5e+06  
[21:15:44] < General >   Log likelihood (N=5):  -115.6422 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -67.11418 Gradient norm:          3  
[21:15:44] < General >   Log likelihood (N=5):  -67.07432 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):   -67.0655 Gradient norm:       0.06  
[21:15:44] < General >   Log likelihood (N=5):  -67.06549 Gradient norm:      0.001  
[21:15:44] < General >   Log likelihood (N=5):  -67.06549 Gradient norm:      3e-07  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -110.5936 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -647339.6 Gradient norm:      3e+06  
[21:15:44] < General >   Log likelihood (N=5):  -98.89788 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -60.49517 Gradient norm:      1e+01  
[21:15:44] < General >   Log likelihood (N=5):   -59.6489 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):  -59.64234 Gradient norm:       0.06  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:       0.03  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:     0.0002  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:      4e-06  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:      8e-08  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -105.8869 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -234865.9 Gradient norm:      9e+05  
[21:15:44] < General >   Log likelihood (N=5):  -81.88364 Gradient norm:      8e+01  
[21:15:44] < General >   Log likelihood (N=5):  -53.36062 Gradient norm:      2e+01  
[21:15:44] < General >   Log likelihood (N=5):  -51.99468 Gradient norm:        0.9  
[21:15:44] < General >   Log likelihood (N=5):  -51.99047 Gradient norm:        0.3  
[21:15:44] < General >   Log likelihood (N=5):  -51.98973 Gradient norm:       0.08  
[21:15:44] < General >   Log likelihood (N=5):  -51.98968 Gradient norm:     0.0005  
[21:15:44] < General >   Log likelihood (N=5):  -51.98968 Gradient norm:      9e-06  
[21:15:44] < General >   Log likelihood (N=5):  -51.98968 Gradient norm:      1e-08  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -115.3003 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):   -1216003 Gradient norm:      5e+06  
[21:15:44] < General >   Log likelihood (N=5):  -115.6422 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -67.11418 Gradient norm:          3  
[21:15:44] < General >   Log likelihood (N=5):  -67.07432 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):   -67.0655 Gradient norm:       0.06  
[21:15:44] < General >   Log likelihood (N=5):  -67.06549 Gradient norm:      0.001  
[21:15:44] < General >   Log likelihood (N=5):  -67.06549 Gradient norm:      3e-07  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -110.5936 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -647339.6 Gradient norm:      3e+06  
[21:15:44] < General >   Log likelihood (N=5):  -98.89788 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -60.49517 Gradient norm:      1e+01  
[21:15:44] < General >   Log likelihood (N=5):   -59.6489 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):  -59.64234 Gradient norm:       0.06  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:       0.03  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:     0.0002  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:      4e-06  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:      8e-08  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):   -120.007 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):   -1378082 Gradient norm:      6e+06  
[21:15:44] < General >   Log likelihood (N=5):  -131.0268 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -74.35888 Gradient norm:          4  
[21:15:44] < General >   Log likelihood (N=5):  -74.30076 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):  -74.29393 Gradient norm:       0.05  
[21:15:44] < General >   Log likelihood (N=5):  -74.29392 Gradient norm:      0.004  
[21:15:44] < General >   Log likelihood (N=5):  -74.29392 Gradient norm:      4e-07  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -110.5936 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -647339.6 Gradient norm:      3e+06  
[21:15:44] < General >   Log likelihood (N=5):  -98.89788 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -60.49517 Gradient norm:      1e+01  
[21:15:44] < General >   Log likelihood (N=5):   -59.6489 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):  -59.64234 Gradient norm:       0.06  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:       0.03  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:     0.0002  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:      4e-06  
[21:15:44] < General >   Log likelihood (N=5):  -59.64231 Gradient norm:      8e-08  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -122.3604 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):   -1459122 Gradient norm:      6e+06  
[21:15:44] < General >   Log likelihood (N=5):   -138.719 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -77.92015 Gradient norm:          4  
[21:15:44] < General >   Log likelihood (N=5):  -77.84983 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):  -77.84349 Gradient norm:       0.04  
[21:15:44] < General >   Log likelihood (N=5):  -77.84349 Gradient norm:      0.004  
[21:15:44] < General >   Log likelihood (N=5):  -77.84349 Gradient norm:      6e-07  
[21:15:44] < General >   Log likelihood (N=5):  -77.84349 Gradient norm:      3e-09  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -108.2402 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):  -391222.4 Gradient norm:      2e+06  
[21:15:44] < General >   Log likelihood (N=5):  -90.27531 Gradient norm:      9e+01  
[21:15:44] < General >   Log likelihood (N=5):  -56.94745 Gradient norm:      2e+01  
[21:15:44] < General >   Log likelihood (N=5):  -55.85266 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):  -55.84752 Gradient norm:        0.2  
[21:15:44] < General >   Log likelihood (N=5):  -55.84734 Gradient norm:       0.07  
[21:15:44] < General >   Log likelihood (N=5):  -55.84729 Gradient norm:     0.0001  
[21:15:44] < General >   Log likelihood (N=5):  -55.84729 Gradient norm:      5e-07  
[21:15:44] < General >   Minimize with tol 1e-07
[21:15:44] < General >   Log likelihood (N=5):  -124.7137 Gradient norm:      1e+02  
[21:15:44] < General >   Log likelihood (N=5):   -1540162 Gradient norm:      7e+06  
[21:15:44] < General >   Log likelihood (N=5):  -146.4113 Gradient norm:      2e+02  
[21:15:44] < General >   Log likelihood (N=5):  -81.44427 Gradient norm:          5  
[21:15:44] < General >   Log likelihood (N=5):  -81.35973 Gradient norm:          1  
[21:15:44] < General >   Log likelihood (N=5):  -81.35364 Gradient norm:       0.04  
[21:15:44] < General >   Log likelihood (N=5):  -81.35364 Gradient norm:      0.002  
[21:15:44] < General >   Log likelihood (N=5):  -81.35364 Gradient norm:      7e-08  
[21:15:44] < General >   Results saved in file simpleExample.html
[21:15:44] < General >   Results saved in file simpleExample.pickle
In [25]:
results.getEstimatedParameters()
Out[25]:
           Value   Std err      t-test  p-value  Rob. Std err  Rob. t-test  Rob. p-value  Bootstrap[10] Std err  Bootstrap t-test  Bootstrap p-value
beta1  -1.273264  0.115144  -11.057997      0.0      0.013724   -92.776669           0.0               0.011684       -108.977325                0.0
beta2   1.248769  0.084830   14.720836      0.0      0.059086    21.134794           0.0               0.050486         24.734877                0.0
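
The results object also provides other accessors. For instance, general statistics about the run can typically be retrieved as sketched below; the method name getGeneralStatistics is assumed from the biogeme.results API and should be checked against your release.

# Assumed accessor: general statistics of the estimation
# (final log likelihood, number of parameters, etc.).
stats = results.getGeneralStatistics()
for name, value in stats.items():
    print(name, value)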

The intermediate results that were saved during the estimation can be retrieved as follows.

Formula before

In [26]:
myBiogeme.loglike
Out[26]:
((((-(beta1(-1.0) ** `2`)) * Variable1) - (exp((beta2(2.0) * beta1(-1.0))) * Variable2)) - (beta2(2.0) ** `4`))

Retrieving the values

In [27]:
myBiogeme.loadSavedIteration()
myBiogeme.loglike
[21:15:44] < Detailed >  Parameter values restored from __savedIterations.txt
Out[27]:
((((-(beta1(-1.257397883628598) ** `2`)) * Variable1) - (exp((beta2(1.316511819976571) * beta1(-1.257397883628598))) * Variable2)) - (beta2(1.316511819976571) ** `4`))

A file name can be given. If the file does not exist, the statement is ignored.

In [28]:
myBiogeme.loadSavedIteration(filename='fileThatDoesNotExist.txt')
[21:15:44] < Warning >   Cannot read file fileThatDoesNotExist.txt. Statement is ignored.

simulate

In [29]:
# Simulate with the default values for the parameters
simulationWithDefaultBetas = myBiogeme.simulate()
simulationWithDefaultBetas
Out[29]:
      loglike      beta1      simul
0   -6.495239  -1.257398  -1.125747
1   -9.986485  -1.257398  -0.562873
2  -13.477731  -1.257398  -0.375249
3  -16.968976  -1.257398  -0.281437
4  -20.460222  -1.257398  -0.225149
In [30]:
# Simulate with the estimated values for the parameters
print(results.getBetaValues())
simulationWithEstimatedBetas = myBiogeme.simulate(results.getBetaValues())
simulationWithEstimatedBetas
{'beta1': -1.2732639841254711, 'beta2': 1.248768808907056}
Out[30]:
      loglike      beta1      simul
0   -6.092234  -1.273264  -1.148387
1   -9.752666  -1.273264  -0.574194
2  -13.413098  -1.273264  -0.382796
3  -17.073530  -1.273264  -0.287097
4  -20.733962  -1.273264  -0.229677
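
As a consistency check, summing the simulated loglike column at the estimated parameters recovers the final log likelihood reported at the end of the estimation (about -67.06549).

# The per-row log likelihoods add up to the final log likelihood of the model.
print(simulationWithEstimatedBetas['loglike'].sum())   # approximately -67.06549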

confidenceIntervals

In [31]:
drawsFromBetas = results.getBetasForSensitivityAnalysis(myBiogeme.freeBetaNames)
left, right = myBiogeme.confidenceIntervals(drawsFromBetas)
left
Out[31]:
      loglike      beta1      simul
0   -6.443845  -1.291087  -1.173883
1   -9.953791  -1.291087  -0.586941
2  -13.519165  -1.291087  -0.391294
3  -17.362747  -1.291087  -0.293471
4  -21.231691  -1.291087  -0.234777
In [32]:
right
Out[32]:
      loglike      beta1      simul
0   -5.755916  -1.259329  -1.128453
1   -9.624859  -1.259329  -0.564227
2  -13.413220  -1.259329  -0.376151
3  -16.973950  -1.259329  -0.282113
4  -20.483627  -1.259329  -0.225691
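
Since left and right are pandas data frames with the same columns as the simulation, they can be manipulated as usual; for instance, the width of the simulated confidence interval for each quantity is simply their difference.

# Width of the simulated confidence interval for each quantity and each row.
intervalWidth = right - left
print(intervalWidth)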