Germán Rodríguez
Multilevel Models Princeton University

Random-Effects Logit Models

Stata can fit logit models for panel data with xtlogit and mixed-effects logit models with melogit. I will illustrate both using data from Lillard and Panis (2000) on hospital deliveries of births grouped by mother. First we read the data:

. infile hosp loginc distance dropout college mother ///
>     using http://data.princeton.edu/pop510/hospital.dat, clear
(1,060 observations read)

To fit a model with a woman-level random effect we can use xtlogit with

. xtlogit hosp loginc distance dropout college, i(mother) re

Fitting comparison model:

Iteration 0:   log likelihood = -644.95401  
Iteration 1:   log likelihood = -541.44886  
Iteration 2:   log likelihood =  -537.4711  
Iteration 3:   log likelihood = -537.45771  
Iteration 4:   log likelihood = -537.45771  

Fitting full model:

tau =  0.0     log likelihood = -537.45771
tau =  0.1     log likelihood = -534.03315
tau =  0.2     log likelihood = -530.99872
tau =  0.3     log likelihood = -528.53741
tau =  0.4     log likelihood = -526.88308
tau =  0.5     log likelihood = -526.38822
tau =  0.6     log likelihood = -527.67382

Iteration 0:   log likelihood =  -526.3876  
Iteration 1:   log likelihood = -522.68442  
Iteration 2:   log likelihood = -522.65042  
Iteration 3:   log likelihood = -522.65042  

Random-effects logistic regression              Number of obs     =      1,060
Group variable: mother                          Number of groups  =        501

Random effects u_i ~ Gaussian                   Obs per group:
                                                              min =          1
                                                              avg =        2.1
                                                              max =         10

Integration method: mvaghermite                 Integration pts.  =         12

                                                Wald chi2(4)      =     110.06
Log likelihood  = -522.65042                    Prob > chi2       =     0.0000

─────────────┬────────────────────────────────────────────────────────────────
        hosp │      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
─────────────┼────────────────────────────────────────────────────────────────
      loginc │   .5622009   .0727497     7.73   0.000     .4196141    .7047876
    distance │  -.0765915   .0323473    -2.37   0.018    -.1399911    -.013192
     dropout │  -1.997753   .2556249    -7.82   0.000    -2.498769   -1.496737
     college │    1.03363   .3884851     2.66   0.008     .2722135    1.795047
       _cons │   -3.36984   .4794505    -7.03   0.000    -4.309546   -2.430134
─────────────┼────────────────────────────────────────────────────────────────
    /lnsig2u │   .4372018   .3161192                     -.1823805    1.056784
─────────────┼────────────────────────────────────────────────────────────────
     sigma_u │   1.244335   .1966791                       .912844    1.696203
         rho │   .3200274   .0687907                      .2020988    .4665343
─────────────┴────────────────────────────────────────────────────────────────
LR test of rho=0: chibar2(01) = 29.61                  Prob >= chibar2 = 0.000

. estimates store xt

The run is very fast. By default xtlogit uses mean-variance adaptive Gauss-Hermite quadrature with 12 integration points.
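The integral being approximated is, for each mother, the expectation of her likelihood contribution over the distribution of the random effect. Here is a minimal sketch in Python/NumPy of the non-adaptive core of Gauss-Hermite quadrature (the adaptive variant recenters and rescales the nodes around each group's posterior mode and curvature; the function name is illustrative, not Stata's):

```python
import numpy as np

# Nodes and weights for the physicists' weight function exp(-x^2)
nodes, weights = np.polynomial.hermite.hermgauss(12)

def gh_expectation(f, mu=0.0, sigma=1.0):
    """Approximate E[f(U)] for U ~ N(mu, sigma^2) with 12-point
    Gauss-Hermite quadrature: change of variable u = mu + sqrt(2)*sigma*x
    turns the normal expectation into a sum against exp(-x^2)."""
    u = mu + np.sqrt(2.0) * sigma * nodes
    return np.sum(weights * f(u)) / np.sqrt(np.pi)

# Sanity check: E[U^2] = 1 when U ~ N(0, 1)
gh_expectation(lambda u: u**2)
```

In the model above, f would be the product of Bernoulli-logit likelihood terms for one mother's births evaluated at random effect u, and the sum over nodes replaces the intractable integral.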

The same model can be fit using melogit, which by default uses only 7 integration points. We can change this option to get a better correspondence:

. melogit hosp loginc distance dropout college || mother:, intpoints(12)

Fitting fixed-effects model:

Iteration 0:   log likelihood = -539.11554  
Iteration 1:   log likelihood = -537.46251  
Iteration 2:   log likelihood = -537.45771  
Iteration 3:   log likelihood = -537.45771  

Refining starting values:

Grid node 0:   log likelihood =  -526.3876

Fitting full model:

Iteration 0:   log likelihood =  -526.3876  
Iteration 1:   log likelihood = -522.74579  
Iteration 2:   log likelihood = -522.65053  
Iteration 3:   log likelihood = -522.65042  
Iteration 4:   log likelihood = -522.65042  

Mixed-effects logistic regression               Number of obs     =      1,060
Group variable:          mother                 Number of groups  =        501

                                                Obs per group:
                                                              min =          1
                                                              avg =        2.1
                                                              max =         10

Integration method: mvaghermite                 Integration pts.  =         12

                                                Wald chi2(4)      =     110.06
Log likelihood = -522.65042                     Prob > chi2       =     0.0000
─────────────┬────────────────────────────────────────────────────────────────
        hosp │      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
─────────────┼────────────────────────────────────────────────────────────────
      loginc │   .5622029   .0727497     7.73   0.000      .419616    .7047898
    distance │  -.0765916   .0323474    -2.37   0.018    -.1399914   -.0131919
     dropout │  -1.997762   .2556281    -7.82   0.000    -2.498784    -1.49674
     college │   1.033635   .3884878     2.66   0.008     .2722127    1.795057
       _cons │  -3.369853   .4794516    -7.03   0.000    -4.309561   -2.430145
─────────────┼────────────────────────────────────────────────────────────────
mother       │
   var(_cons)│   1.548407   .4894986                      .8332867    2.877238
─────────────┴────────────────────────────────────────────────────────────────
LR test vs. logistic model: chibar2(01) = 29.61       Prob >= chibar2 = 0.0000

. estimates store me
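A note on the LR test reported at the foot of both outputs: because the null hypothesis places the variance on the boundary of its parameter space, the statistic follows a 50:50 mixture of a point mass at zero and a chi-squared with one degree of freedom, which Stata labels chibar2(01). The p-value is thus half the usual chi2(1) tail probability, which a quick standard-library computation confirms is effectively zero here:

```python
import math

def chibar2_01_pvalue(lr_stat):
    """P-value for a 50:50 mixture of chi2(0) and chi2(1): half the
    chi2(1) upper tail. For chi2(1), P(X > x) = erfc(sqrt(x/2))."""
    return 0.5 * math.erfc(math.sqrt(lr_stat / 2.0))

p = chibar2_01_pvalue(29.61)  # far below any conventional level
```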

We can compare the estimates:

. estimates table xt me

─────────────┬──────────────────────────
    Variable │     xt           me      
─────────────┼──────────────────────────
hosp         │
      loginc │  .56220088    .56220292  
    distance │ -.07659154   -.07659161  
     dropout │ -1.9977529   -1.9977619  
     college │  1.0336304    1.0336349  
       _cons │   -3.36984    -3.369853  
─────────────┼──────────────────────────
lnsig2u      │
       _cons │   .4372018               
─────────────┼──────────────────────────
        var( │
_cons[mot~r])│
       _cons │               1.5484071  
─────────────┴──────────────────────────

Note that xtlogit reports the log of the variance (and the standard deviation sigma_u), whereas melogit reports the variance itself, but the results are equivalent. The other estimates are all very close.
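The equivalence is easy to verify: exponentiating /lnsig2u recovers melogit's var(_cons), its square root is xtlogit's sigma_u, and rho is the intraclass correlation of the latent propensities, sigma_u^2/(sigma_u^2 + pi^2/3), where pi^2/3 is the variance of the standard logistic distribution. A quick check in Python:

```python
import math

lnsig2u = 0.4372018                 # /lnsig2u from xtlogit
sigma2_u = math.exp(lnsig2u)        # = 1.5484, melogit's var(_cons)
sigma_u = math.sqrt(sigma2_u)       # = 1.2443, xtlogit's sigma_u
# intraclass correlation on the latent scale; pi^2/3 is the
# variance of the standard logistic error term
rho = sigma2_u / (sigma2_u + math.pi ** 2 / 3)  # = 0.3200
```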

You will find on this website analyses of the same data using Bayesian methods; the last page includes a comparison of estimates with all four methods.