QA Case Studies – GLMM Nodes

[03-Jan-2013]

General Comment

Numerical estimation of the marginal likelihood (mlik) and the marginal posterior distributions of model parameters is considerably more difficult in mixed models than in models containing only fixed effects. Generally speaking, the internal abn code for nodes with a random effect term is very much slower than INLA. Besides implementing the conventional Laplace approximation (in contrast to INLA's spline-based nested approximations), the internal code also attempts to minimize the error in the mlik estimate by comparing mlik values computed using a 3-point finite difference approximation of the Hessian with those from a 5-point approximation, with adaptive step sizes used for both the first and second derivative estimates. This takes considerable time but should provide highly reliable estimates, as we demonstrate below.
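As a loose illustration of the idea behind this error check (not the actual C implementation, which works on the log-posterior and chooses its step sizes adaptively), a 3-point and a 5-point central-difference estimate of a second derivative can be compared in R as follows; the test function and step size here are arbitrary:

## Illustration only: compare 3-point and 5-point central-difference
## estimates of a second derivative; a large discrepancy between the
## two would flag an unreliable curvature (and hence mlik) estimate.
second.deriv.3pt <- function(f, x, h) (f(x + h) - 2 * f(x) + f(x - h)) / h^2
second.deriv.5pt <- function(f, x, h) {
  (-f(x + 2 * h) + 16 * f(x + h) - 30 * f(x) + 16 * f(x - h) - f(x - 2 * h)) / (12 * h^2)
}

f <- function(x) dnorm(x, mean = 1, sd = 2, log = TRUE)  # arbitrary smooth test function
d3 <- second.deriv.3pt(f, x = 0.5, h = 1e-3)
d5 <- second.deriv.5pt(f, x = 0.5, h = 1e-3)
abs(d3 - d5)   # size of the disagreement between the two approximations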


QA Study One – glmm nodes

This case study uses the data set ex3.dag.data, which is provided with abn and comprises 13 binary variables and one grouping factor. We consider an analysis in which each node is parameterized as a GLMM with at most four parents. This gives a total of 10322 node-parent combinations for which we compare the mlik values from the internal abn code with those from INLA, and also compare the posterior parameter modes against glmer(). All the results, data and R code files for conducting this case study can be found here.
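As an illustration of how the INLA reference values for a single node-parent combination can be obtained, a minimal sketch for the first comparison shown further below (binary node b10 with parents b5, b6, b7 and b9) might look as follows. The grouping factor is assumed to be called group in ex3.dag.data and the binary variables are assumed to be coded 0/1 (factor columns may need converting first); the corresponding internal abn values come from the package's own node-fitting routine.

library(INLA)

## binary node b10 with parents b5, b6, b7, b9 and a group-level
## random intercept (variable names as in ex3.dag.data)
m.inla <- inla(b10 ~ b5 + b6 + b7 + b9 + f(group, model = "iid"),
               family = "binomial", data = ex3.dag.data)

m.inla$mlik            # INLA's estimate of the (log) marginal likelihood
m.inla$summary.fixed   # posterior summaries for the fixed effects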

Figure one shows the mlik comparison between the internal code and INLA. These are clearly in very close agreement, with a median (absolute) relative difference of 0.03%. Figure two plots the estimated local error in mlik against the difference from INLA; there is no apparent relationship.
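The relative differences summarized in figure one amount to a calculation along these lines (a sketch; mlik.abn and mlik.inla stand for the vectors of mlik estimates from the internal code and INLA, with placeholder values used here only to keep the snippet self-contained):

## placeholder values only; in the study these vectors hold the 10322
## mlik estimates from the internal code and from INLA respectively
mlik.abn  <- c(-5123.4, -4871.2, -6010.9)
mlik.inla <- c(-5121.9, -4872.6, -6012.1)

rel.diff <- 100 * abs((mlik.abn - mlik.inla) / mlik.inla)  # absolute relative difference in %
median(rel.diff)   # the median of these is the figure quoted above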

Fig1. Comparison of relative error between internal code and INLA for nodes with a group level random effect. Very close agreement.


Fig2. Relative difference between the internal code and INLA against the estimated local error in mlik from the internal code. No obvious relationship.

While there is very good agreement with INLA, for completeness we also compare the parameter modes from the node-parent combinations with the 10 largest differences against those from glmer(). Even where the mliks agree, neither approach is a gold standard, so it is of interest to check that the modes are in close agreement with glmer().
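The glmer() values in the output below (fixed effect modes plus a group-level precision) can be extracted roughly as follows (a sketch of the bad=1 case; the grouping factor is assumed to be called group, and the precision is taken as the reciprocal of the fitted random effect variance):

library(lme4)

## glmer() reference fit for the bad=1 case: node b10 with parents b5, b6, b7, b9
m <- glmer(b10 ~ b5 + b6 + b7 + b9 + (1 | group),
           family = binomial, data = ex3.dag.data)

fixef(m)                                  # modes of the fixed effects
grp.var <- as.numeric(VarCorr(m)$group)   # random intercept variance
1 / grp.var                               # group.precision, as reported below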


################ bad= 1 #################

# 1. glmer()
(Intercept) b51 b61 b71 b91
-0.2083273 -0.9339085 -0.9862183 1.3676042 1.5988177
group.precision
0.3848078

# 2. C
b10|(Intercept) b10|b5 b10|b6 b10|b7
-0.2083676 -0.9339019 -0.9860415 1.3675654
b10|b9 b10|group.precision
1.5987542 0.3848120

# 2b. glmer()-C
(Intercept) b51 b61 b71 b91
4.026806e-05 -6.589227e-06 -1.768181e-04 3.879316e-05 6.346297e-05
group.precision
-4.179180e-06

# 3. INLA
b10|(Intercept) b10|b5 b10|b6 b10|b7
-0.2077327 -0.9315665 -0.9842367 1.3630560
b10|b9 b10|group.precision
1.5834060 0.3742098

# 3b. glmer()-INLA
(Intercept) b51 b61 b71 b91
-0.0005945819 -0.0023419719 -0.0019815301 0.0045481802 0.0154117585
group.precision
0.0105979758

###########################################
################ bad= 2 #################

# 1. glmer()
(Intercept) b11 b41 b111 b121
1.2866872 -1.3739533 -0.8317203 0.8618255 -1.0473774
group.precision
0.5606638

# 2. C
b2|(Intercept) b2|b1 b2|b4 b2|b11
1.2864484 -1.3738181 -0.8315907 0.8617350
b2|b12 b2|group.precision
-1.0471387 0.5607362

# 2b. glmer()-C
(Intercept) b11 b41 b111 b121
2.388325e-04 -1.352618e-04 -1.296479e-04 9.044146e-05 -2.386690e-04
group.precision
-7.241301e-05

# 3. INLA
b2|(Intercept) b2|b1 b2|b4 b2|b11
1.2690736 -1.3589508 -0.8298628 0.8595190
b2|b12 b2|group.precision
-1.0436519 0.5309725

# 3b. glmer()-INLA
(Intercept) b11 b41 b111 b121
0.017613598 -0.015002523 -0.001857540 0.002306461 -0.003725471
group.precision
0.029691362

###########################################
....
################ bad= 10 #################

# 1. glmer()
(Intercept) b11 b91 b111 b121
0.6543242 -1.4801453 0.4819907 0.9101779 -0.9593435
group.precision
0.5263712

# 2. C
b2|(Intercept) b2|b1 b2|b9 b2|b11
0.6543192 -1.4799880 0.4819095 0.9101288
b2|b12 b2|group.precision
-0.9592635 0.5264493

# 2b. glmer()-C
(Intercept) b11 b91 b111 b121
5.006709e-06 -1.573484e-04 8.121527e-05 4.912216e-05 -7.991665e-05
group.precision
-7.808053e-05

# 3. INLA
b2|(Intercept) b2|b1 b2|b9 b2|b11
0.6502900 -1.4656947 0.4796138 0.9079449
b2|b12 b2|group.precision
-0.9558823 0.5016822

# 3b. glmer()-INLA
(Intercept) b11 b91 b111 b121
0.004034190 -0.014450596 0.002376931 0.002232982 -0.003461195
group.precision
0.024689004

###########################################

The modes are in close agreement: while those from INLA are generally further from glmer() than those from the internal code, they are all very similar.

Finally, we compare the marginal posterior densities for the node-parent combination which had the largest relative difference between the internal code and INLA. Figure three shows that the densities are all very similar.
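The overlays in figure three can be produced roughly as follows (a sketch; abn.marg stands for a two-column matrix of (value, density) points for the same parameter from the internal abn code, and the node-parent combination shown is purely illustrative):

library(INLA)

## INLA fit for the node-parent combination being examined (illustrative
## names; binary variables assumed coded 0/1)
fit.inla <- inla(b10 ~ b5 + b6 + b7 + b9 + f(group, model = "iid"),
                 family = "binomial", data = ex3.dag.data)

inla.marg <- fit.inla$marginals.fixed[["(Intercept)"]]   # matrix with columns x, y
plot(inla.marg, type = "l", col = "black",
     xlab = "(Intercept)", ylab = "density")   # INLA estimate (black)
lines(abn.marg, col = "red")   # abn.marg: the equivalent matrix from the internal code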

Fig3. Comparison of posterior marginal densities for a binary node with four binary covariates and a single grouping (random effect) variable. Black is INLA's estimate and red the internal abn code.


QA Study Two – glmm nodes

This case study uses the data set ex4.dag.data, which is provided with abn and comprises 10 binary variables and one grouping factor. We consider an analysis in which each node is parameterized as a GLMM with at most four parents. This gives a total of 2560 node-parent combinations for which we compare the mlik values from the internal abn code with those from INLA, and also compare the posterior parameter modes against glmer(). All the R code files for conducting this case study can be found here.

Figures one and two show a comparison between the internal code and INLA. As with case study one above, there is very close agreement.

Fig1. Comparison of relative difference in mliks, median (absolute) difference is 0.03 %. Very close agreement in all 2560 comparisons.


Fig2. Comparison of relative difference between internal code and INLA against estimated local error in mlik. No evidence of any relationship.

Following the same pattern as in case study one, we also check the modes for the ten comparisons with the largest relative differences.

################ bad= 1 #################

# 1. glmer()
(Intercept) b3TRUE b4TRUE b6TRUE b8TRUE
2.3887176 0.7717457 -0.1310729 0.5312686 0.4255655
group.precision
0.3167966

# 2. C
b1|(Intercept) b1|b3 b1|b4 b1|b6
2.3878647 0.7720948 -0.1312674 0.5313179
b1|b8 b1|group.precision
0.4263292 0.3167946

# 2b. glmer()-C
(Intercept) b3TRUE b4TRUE b6TRUE b8TRUE
8.529629e-04 -3.490485e-04 1.944823e-04 -4.930406e-05 -7.637861e-04
group.precision
1.968264e-06

# 3. INLA
b1|(Intercept) b1|b3 b1|b4 b1|b6
2.3261943 0.7734201 -0.1428762 0.5303982
b1|b8 b1|group.precision
0.4042026 0.3217394

# 3b. glmer()-INLA
(Intercept) b3TRUE b4TRUE b6TRUE b8TRUE
0.0625233350 -0.0016743583 0.0118033205 0.0008703193 0.0213628831
group.precision
-0.0049428701

###########################################
################ bad= 2 #################

# 1. glmer()
(Intercept) b3TRUE b6TRUE b8TRUE b9TRUE
2.3398875 0.7571624 0.5133657 0.3967382 0.2667972
group.precision
0.3186642

# 2. C
b1|(Intercept) b1|b3 b1|b6 b1|b8
2.3397833 0.7571171 0.5133585 0.3971162
b1|b9 b1|group.precision
0.2667321 0.3186604

# 2b. glmer()-C
(Intercept) b3TRUE b6TRUE b8TRUE b9TRUE
1.042575e-04 4.529968e-05 7.164506e-06 -3.779888e-04 6.505373e-05
group.precision
3.746524e-06

# 3. INLA
b1|(Intercept) b1|b3 b1|b6 b1|b8
2.2772207 0.7592452 0.5119995 0.3743772
b1|b9 b1|group.precision
0.2550953 0.3251166

# 3b. glmer()-INLA
(Intercept) b3TRUE b6TRUE b8TRUE b9TRUE
0.062666865 -0.002082823 0.001366183 0.022361028 0.011701870
group.precision
-0.006452439

###########################################
....
################ bad= 10 #################

# 1. glmer()
(Intercept) b3TRUE b6TRUE b8TRUE b10TRUE
2.3703846 0.7535883 0.5141969 0.3663253 0.2461768
group.precision
0.3178437

# 2. C
b1|(Intercept) b1|b3 b1|b6 b1|b8
2.3705722 0.7535848 0.5140362 0.3660633
b1|b10 b1|group.precision
0.2465629 0.3178620

# 2b. glmer()-C
(Intercept) b3TRUE b6TRUE b8TRUE b10TRUE
-1.876168e-04 3.525675e-06 1.606354e-04 2.619973e-04 -3.860850e-04
group.precision
-1.824810e-05

# 3. INLA
b1|(Intercept) b1|b3 b1|b6 b1|b8
2.3084068 0.7554059 0.5124465 0.3422167
b1|b10 b1|group.precision
0.2275780 0.3230519

# 3b. glmer()-INLA
(Intercept) b3TRUE b6TRUE b8TRUE b10TRUE
0.061977731 -0.001817624 0.001750355 0.024108636 0.018598780
group.precision
-0.005208164

###########################################

In each case we find that the modes from the internal code, INLA and glmer() are in very close agreement, although the internal code is somewhat closer to glmer(). Finally, we compare the marginal posterior densities for the node-parent combination with the largest relative difference between the internal code and INLA. Figure three shows that the densities are all very similar.

Fig3. Comparison of marginal posterior densities between internal code and INLA. Very close agreement, although not identical.


QA Study Three – glmm nodes

This case study uses the data set ex5.dag.data, which is provided with abn and comprises 8 continuous variables, each modelled as Gaussian, and one grouping factor. We consider an analysis in which each node is parameterized as a GLMM with at most four parents. This gives a total of 792 node-parent combinations for which we compare the mlik values from the internal abn code with those from INLA, and also compare the posterior parameter modes against glmer(). All the results, data and R code files for conducting this case study can be found here.

This case study differs from the above two studies in that there are now 48 node-parent combinations in which the internal code throws an error (returning R’s NA missing value for the mlik in each case). In addition, the comparison with INLA now results in far larger relative differences in mlik values compared to the previous two studies with GLMM nodes. Figure one shows the relative differences and figure two the estimated local error in mlik from the internal code. Figure three shows a comparison of the actual mlik estimates in the cases where the internal code and INLA differed by more than 0.25%.

Fig1. Comparison of the (absolute) relative difference between the internal code and INLA. The horizontal line is at 0.25%, a little above the maximum approximate error observed in case study two with GLMM nodes. There are clearly much larger discrepancies here.


Fig2. Estimated local error in mlik against relative difference between internal code and INLA. Despite the larger differences with INLA the local error suggests the internal estimates are reasonably robust.

Fig3. Comparison of mlik values between the internal code and INLA for all 234 node-parent combinations which differed by more than 0.25%. The line shown is y=x and there are clearly fairly large differences between the estimates from each approach.

To summarize, compared with the previous two GLMM case studies (both of which have binary nodes) there is a far larger discrepancy between INLA and the internal code with this data set. As neither is a gold standard estimate, we now compare the posterior modes from each approach with those from glmer() (or equivalently lmer(), since the family here is Gaussian), where we consider lmer() to be the gold standard in terms of approximate parameter modes. There are a total of 234 node-parent comparisons which differ by more than 0.25% between the internal code and INLA, and we examine the modes against lmer() for each of these; a subset of this output is given below:


################ bad= 1 #################

# 1. glmer()
(Intercept) group.precision residual.precision
0.04696366 7.50243829 1.13755290

# 2. C
g1|(Intercept) g1|group.precision g1|precision
0.04677903 8.50140683 1.13755827

# 2b. glmer()-C
(Intercept) group.precision residual.precision
1.846249e-04 -9.989685e-01 -5.376578e-06

# 3. INLA
g1|(Intercept) g1|group.precision g1|precision
0.0006158085 0.0119924728 0.9950390718

# 3b. glmer()-INLA
(Intercept) group.precision residual.precision
0.04634785 7.49044582 0.14251382

###########################################
################ bad= 2 #################

# 1. glmer()
(Intercept) g3 group.precision residual.precision
0.05045563 -0.04287693 6.52613718 1.13931550

# 2. C
g1|(Intercept) g1|g3 g1|group.precision g1|precision
0.04992646 -0.03880092 7.51539161 1.14162350

# 2b. glmer()-C
(Intercept) g3 group.precision residual.precision
0.0005291644 -0.0040760109 -0.9892544259 -0.0023079958

# 3. INLA
g1|(Intercept) g1|g3 g1|group.precision g1|precision
0.0003158667 0.1338808903 0.0228107222 1.0112658362

# 3b. glmer()-INLA
(Intercept) g3 group.precision residual.precision
0.05013976 -0.17675782 6.50332646 0.12804966

###########################################
...
################ bad= 234 #################

# 1. glmer()
(Intercept) g4 g5 g6
-0.05752297 0.05904066 -0.10722607 0.32266629
g7 group.precision residual.precision
-0.03362285 6.84931507 1.23770035

# 2. C
g8|(Intercept) g8|g4 g8|g5 g8|g6
-0.05636448 0.05909492 -0.10871338 0.31925136
g8|g7 g8|group.precision g8|precision
-0.03444493 7.91465688 1.24896274

# 2b. glmer()-C
(Intercept) g4 g5 g6
-1.158488e-03 -5.425953e-05 1.487311e-03 3.414929e-03
g7 group.precision residual.precision
8.220844e-04 -1.065342e+00 -1.126239e-02

# 3. INLA
g8|(Intercept) g8|g4 g8|g5 g8|g6
-0.001015261 0.059252274 -0.197926250 0.187808979
g8|g7 g8|group.precision g8|precision
-0.058010075 0.013210750 1.060274214

# 3b. glmer()-INLA
(Intercept) g4 g5 g6
-0.0565077069 -0.0002116155 0.0907001775 0.1348573147
g7 group.precision residual.precision
0.0243872291 6.8361043190 0.1774261383

###########################################

In every one of the 234 comparisons the modes estimated from the INLA output are completely different from those from lmer(), whereas the internal code is in every case a close match to the lmer() modes. This suggests that the values from INLA are not reliable here.
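For reference, the lmer() values in these comparisons (modes plus group and residual precisions) can be obtained roughly as follows (a sketch using the bad=234 model; the grouping factor is assumed to be called group in ex5.dag.data, and the precisions are simply the reciprocals of the fitted variances):

library(lme4)

## Gaussian node g8 with parents g4, g5, g6, g7 (as in the bad=234 case)
m <- lmer(g8 ~ g4 + g5 + g6 + g7 + (1 | group), data = ex5.dag.data)

fixef(m)                                   # fixed effect modes
grp.var <- as.numeric(VarCorr(m)$group)    # group-level variance
res.sd  <- attr(VarCorr(m), "sc")          # residual standard deviation
c(group.precision = 1 / grp.var,           # as reported above
  residual.precision = 1 / res.sd^2)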

A second issue is present with this data set: the internal code returns 48 mlik values as NA. Two things are of interest here: i) is the NA value reasonable, i.e. can lmer() fit the model at all?; and ii) what does INLA do here? Figure four shows that in every case where the internal code cannot fit the model and returns NA, INLA instead gives a spuriously positive mlik value vastly better than those for "valid" models.

Fig4. mlik values from INLA; those in yellow and red are the values which the internal code reports as NA (missing).

The output snippet below shows that in every one of the 48 cases where the internal code "fails", lmer() also reports an error, in contrast to INLA's highly inflated and misleading mlik values.

########################################################
Error in mer_finalize(ans) : Calculated PWRSS for a LMM is negative
################ missing= 1 #################

# 3. INLA
g2|(Intercept) g2|g3 g2|g4 g2|group.precision
-2.648608e-06 8.981587e-01 -7.014133e-01 7.368817e+04
g2|precision
4.710170e+05

$ mlik= 1888.338

###########################################
########################################################
Error in mer_finalize(ans) : Calculated PWRSS for a LMM is negative
################ missing= 2 #################

# 3. INLA
g2|(Intercept) g2|g1 g2|g3 g2|g4
-2.649968e-06 -5.636391e-07 8.981587e-01 -7.014133e-01
g2|group.precision g2|precision
7.374590e+04 4.708745e+05

$ mlik= 1876.489

###########################################
....
########################################################
Error in mer_finalize(ans) : Calculated PWRSS for a LMM is negative
################ missing= 48 #################

# 3. INLA
g4|(Intercept) g4|g2 g4|g3 g4|g7
-2.650460e-06 -1.425695e+00 1.280500e+00 -5.647343e-07
g4|g8 g4|group.precision g4|precision
-5.632820e-07 7.378532e+04 4.707246e+05

$ mlik= 1864.993

###########################################
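For completeness, the kind of check used to confirm that lmer() also fails on these models can be sketched as follows (the missing=1 model is shown; the tryCatch() wrapper and the group variable name are assumptions for illustration only, not the actual internal implementation, which is in C):

library(lme4)

## return NULL when lmer() cannot fit the model, so the node score can
## be recorded as NA, mirroring the NA mlik reported by the internal code
safe.lmer <- function(formula, data) {
  tryCatch(lmer(formula, data = data),
           error = function(e) { message("lmer failed: ", conditionMessage(e)); NULL })
}

fit <- safe.lmer(g2 ~ g3 + g4 + (1 | group), data = ex5.dag.data)
if (is.null(fit)) mlik <- NA   # propagate the failure as the internal code does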

In this case study the internal code seems to perform well. INLA appears to struggle with many of the node-parent combinations, which is a good illustration of why an alternative to INLA seems necessary in order to ensure reliable and robust mlik values for subsequent structure discovery searches.


QA Study Four – glmm nodes

This case study uses the data set ex6.dag.data, which is provided with abn and comprises 7 variables (one Poisson, four Gaussian and two binary) plus one grouping factor. We consider an analysis in which each node is parameterized as a GLMM with at most three parents. This gives a total of 294 node-parent combinations for which we compare the mlik values from the internal abn code with those from INLA, and also compare the posterior parameter modes against glmer(). All the results, data and R code files for conducting this case study can be found here.

Figures one and two show the relative difference between the internal code and INLA and the estimated local error in the mlik values. Many of the mlik values are very close, with a median (absolute) relative difference of around 0.006%, although it is clear that quite a number of the mlik values differ by considerably more than that. There are 48 node-parent combinations with a difference of more than 0.25%, and these are investigated further below.

Fig1. Relative difference in mliks between internal code and INLA. Median (absolute) relative difference is 0.006%. 48 node-parent comparisons have a relative difference in excess of 0.25%.


Fig2. Relative difference between internal code and INLA against approximate local error in mlik.

As with the other case studies we use glmer() as the gold standard against which to compare the modes from the internal code and INLA.

################ bad= 1 #################

# 1. glmer()
(Intercept) g2 b2no group.precision
0.3731573 0.5005805 -0.3725462 13.7258939
residual.precision
1.5833282

# 2. C
g1|(Intercept) g1|g2 g1|b2 g1|group.precision
0.3734518 0.5010345 -0.3736847 15.1134356
g1|precision
1.5871722

# 2b. glmer()-C
(Intercept) g2 b2no group.precision
-0.0002945187 -0.0004539956 0.0011384602 -1.3875416939
residual.precision
-0.0038440001

# 3. INLA
g1|(Intercept) g1|g2 g1|b2 g1|group.precision
0.371865404 0.507477896 -0.383664192 0.004242917
g1|precision
1.448831144

# 3b. glmer()-INLA
(Intercept) g2 b2no group.precision
0.001291923 -0.006897374 0.011117985 13.721650982
residual.precision
0.134497043

###########################################
################ bad= 2 #################

# 1. glmer()
(Intercept) p1 g2 b2no
0.3735870930 -0.0009740356 0.4966137404 -0.3657885004
group.precision residual.precision
13.6345663526 1.5825764661

# 2. C
g1|(Intercept) g1|p1 g1|g2 g1|b2
0.3739064967 -0.0009685062 0.4971737246 -0.3671259819
g1|group.precision g1|precision
15.2623303040 1.5883509984

# 2b. glmer()-C
(Intercept) p1 g2 b2no
-3.194037e-04 -5.529345e-06 -5.599842e-04 1.337481e-03
group.precision residual.precision
-1.627764e+00 -5.774532e-03

# 3. INLA
g1|(Intercept) g1|p1 g1|g2 g1|b2
0.3670454445 -0.0008727633 0.5085936258 -0.3798340438
g1|group.precision g1|precision
0.0035272743 1.4547554930

# 3b. glmer()-INLA
(Intercept) p1 g2 b2no
0.0065416485 -0.0001012723 -0.0119798854 0.0140455434
group.precision residual.precision
13.6310390783 0.1278209731

###########################################
....
################ bad= 48 #################

# 1. glmer()
(Intercept) g2 g3 g4 group.precision
8.15087496 -0.32545776 0.44682748 -1.13666443 0.01597648

# 2. C
b2|(Intercept) b2|g2 b2|g3 b2|g4
8.10292305 -0.32537730 0.44685912 -1.13661702
b2|group.precision
0.01621116

# 2b. glmer()-C
(Intercept) g2 g3 g4 group.precision
4.795191e-02 -8.045411e-05 -3.163456e-05 -4.741090e-05 -2.346731e-04

# 3. INLA
b2|(Intercept) b2|g2 b2|g3 b2|g4
4.2794890 -0.3166347 0.4400366 -1.1246914
b2|group.precision
0.5905595

# 3b. glmer()-INLA
(Intercept) g2 g3 g4 group.precision
3.871385915 -0.008823091 0.006790914 -0.011973063 -0.574583037

###########################################

In every one of the 48 cases with the largest differences, INLA's estimate of the mode of the group-level precision is vastly different from that provided by either glmer() or the internal code (which are both very similar). In most cases INLA's precision estimate is vastly smaller than that from glmer(), but in some cases it is much larger (e.g. the last case in the above output snippet). Figure three shows a comparison of the estimated posterior density for the group-level precision in one of the 48 cases given above, specifically for the model g1 ~ g2 + b2 + (1 | group) (in INLA, inla(g1~g2+b2+f(group,model="iid")….)). These estimates are clearly completely different; the INLA estimate is completely inconsistent with glmer() and therefore not reliable here. The other cases above are similar.
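For this particular model the group-level precision can be checked directly against glmer()/lmer() along these lines (a sketch; variable names follow ex6.dag.data, the grouping factor is assumed to be called group, and the group precision is assumed to appear among the hyperparameters in the INLA summary):

library(lme4)
library(INLA)

## Gaussian node g1 with covariates g2 and b2 plus a group random intercept
m.lmer <- lmer(g1 ~ g2 + b2 + (1 | group), data = ex6.dag.data)
1 / as.numeric(VarCorr(m.lmer)$group)   # group precision from lmer()

m.inla <- inla(g1 ~ g2 + b2 + f(group, model = "iid"),
               family = "gaussian", data = ex6.dag.data)
m.inla$summary.hyperpar                 # includes INLA's group precision mode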

Fig3. Comparison of the marginal posterior density for Gaussian node g1 ~ g2 + b2 + (1 | group), with group being an iid random effect. The black line is INLA's estimate and the red line the internal code. These are clearly different, and the mode from the internal code matches the mode estimate from glmer().

As with case study three, the internal code appears robust, if much slower than INLA, while INLA appears reliable for many node-parent combinations but does give incorrect results for a considerable minority of cases.


QA Study Five – glmm nodes

This case study is similar to case study four for the GLM nodes (see here) and uses the data set Epil, which is provided as part of the INLA library, but here we model the repeated observations of the (assumed Poisson count) variable "y" using the grouping variable "Ind". As in the second part of case study four (GLM nodes), we consider the new response y = 100*y. In the GLM node-parent combinations this caused INLA to produce incorrect, very large mlik values. Repeating the analysis with the GLMM node-parent combinations causes INLA to crash for all four parent combinations for node "y". All results, data and R code files for this case study can be found here, along with the crash output from the INLA calls (see file out2.inla.txt).
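The rescaled response and the glmer() reference fit for the largest parent combination look roughly like this (a sketch; the Epil variable names Trt, Age and Ind are as used in the output below, and the group precision is taken as the reciprocal of the fitted random intercept variance):

library(INLA)   # provides the Epil data set
library(lme4)

data(Epil)
Epil$y <- 100 * Epil$y   # the rescaled response considered in this study

## glmer() reference fit for node y with parents Trt and Age and a
## random intercept for each individual (Ind)
m <- glmer(y ~ Trt + Age + (1 | Ind), family = poisson, data = Epil)
fixef(m)                          # fixed effect modes
1 / as.numeric(VarCorr(m)$Ind)    # group.precision, as reported below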

To check that the internal code does a reasonable job here we again compare its output against modes using glmer() for all four cases where INLA crashed.

################ node-parent= 1 #################

# 1. glmer()
(Intercept) group.precision
6.1452448 0.6680473

# 2. C
y|(Intercept) y|group.precision
6.1392521 0.6649523

#################################################
################ node-parent= 2 #################

# 1. glmer()
(Intercept) Trt1 group.precision
6.3706617 -0.4288202 0.6888950

# 2. C
y|(Intercept) y|Trt y|group.precision
6.3699960 -0.4274550 0.6887558

###########################################
################ node-parent= 3 ###########

# 1. glmer()
(Intercept) Age group.precision
6.1454883 -0.1297835 0.6741270

# 2. C
y|(Intercept) y|Age y|group.precision
6.1475772 -0.1260702 0.6715139

###########################################
################ node-parent= 4 ###########

# 1. glmer()
(Intercept) Trt1 Age group.precision
6.3867339 -0.4594058 -0.1530896 0.6980803

# 2. C
y|(Intercept) y|Trt y|Age y|group.precision
6.3866019 -0.4694131 -0.1620847 0.6926234

###########################################

There is a very close match between the modes for the internal code and glmer(). Figure one also shows the estimated marginal posterior densities.

Fig1. Marginal posterior distributions for parameters in y=1+Trt+Age+(1|Ind).

This case study suggests the internal code is reliable.


QA Study Six – glmm nodes

In this case study we consider just a single GLMM node (model) using the data set ex7.dag.data, which is again provided with abn. We examine the model b1 ~ b2 + (1|group), in INLA inla(b1~b2+f(group,model="iid")….). This is a large data set with over 10K observations, and the variables b1 and b2 are measured across over 4K groups. All results, data and R code files for this case study can be found here. As with all other case studies we compare the results from the internal code, INLA and glmer() for these data.

Below we compare the modes given from glmer() with those from INLA and the internal code.

#######################################################################
## 1. glmer()
Generalized linear mixed model fit by the Laplace approximation
Formula: b1 ~ b2 + (1 | group)
Random effects:
Groups Name Variance
group (Intercept) 1.619 # => group.precision=(1/1.619) = 0.618
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.73022 0.04557 -59.91 < 2e-16 ***
b2TRUE -0.43468 0.16591 -2.62 0.00879 **

#######################################################################
## 2. C (internal code)
b1|(Intercept) b1|b2 b1|group.precision
-2.7276459 -0.4367608 0.6207554

#######################################################################
## 3. INLA
b1|(Intercept) b1|b2 b1|group.precision
-2.3692217 -0.3728149 1.7348308

The intercept and slope estimates from INLA are quite different from those of glmer() and the internal code, but the most striking difference is in the group precision estimate, which differs by almost a factor of three. Figure one shows the marginal posterior density produced by INLA compared with that from the internal code.

Fig1. Comparison of the marginal posterior density from INLA (black) and the internal code (red). Obviously these are very different.

It appears that INLA here considerably underestimates the group level variance.
