[R-sig-ME] generalized mixed linear models, glmmPQL and GLMER give very different results that both do not fit the data well...
Douglas Bates
bates at stat.wisc.edu
Sat Mar 19 10:01:37 CET 2011
On Fri, Mar 18, 2011 at 9:54 PM, David Duffy <davidD at qimr.edu.au> wrote:
> On Fri, 18 Mar 2011, Franssens, Samuel wrote:
>
>> Hi,
>>
>> I have the following type of data: 86 subjects in three independent groups
>> (high power vs. low power vs. control). Each subject solves 8 reasoning
>> problems of two kinds: conflict problems and noconflict problems. I measure
>> accuracy in solving the reasoning problems. To summarize: binary response,
>> one within-subject variable (TYPE), one between-subjects variable (POWER).
>>
>> I wanted to fit the following model: for problem i, person j:
>> logodds ( Y_ij ) = b_0j + b_1j TYPE_ij
>> with b_0j = b_00 + b_01 POWER_j + u_0j
>> and b_1j = b_10 + b_11 POWER_j
>>
>> I think it makes sense, but I'm not sure.
>> Here are the observed cell means:
>>          conflict   noconflict
>> control  0.6896552  0.9568966
>> high     0.6935484  0.9677419
>> low      0.8846154  0.9903846
>>
>> glmer gives me:
>> Formula: accuracy ~ f_power * f_type + (1 | subject)
>> Data: syllogisms
>> Random effects:
>> Groups   Name        Variance Std.Dev.
>> subject  (Intercept) 4.9968   2.2353
>> Number of obs: 688, groups: subject, 86
>>
>> Fixed effects:
>>                             Estimate Std. Error z value Pr(>|z|)
>> (Intercept)                  1.50745    0.50507   2.985  0.00284 **
>> f_powerhp                    0.13083    0.70719   0.185  0.85323
>> f_powerlow                   2.04121    0.85308   2.393  0.01672 *
>> f_typenoconflict             3.28715    0.64673   5.083 3.72e-07 ***
>> f_powerhp:f_typenoconflict   0.21680    0.93165   0.233  0.81599
>> f_powerlow:f_typenoconflict -0.01199    1.45807  -0.008  0.99344
>> ---
>>
>> Strange thing is that when you convert the estimates to probabilities,
>> they are quite far off. For control, conflict (the intercept), the estimate
>> from glmer is 1.5 -> 81% and from glmmPQL 1.14 -> 75%, whereas the
>> observed proportion is 68%.
>>
>> Am I doing something wrong?
>
> You are forgetting that your model includes a random intercept for each
> subject.
To follow up on David's comment a bit: you have a very large estimated
standard deviation for the subject random effects. An estimated
standard deviation of 2.24 in the glmer fit is huge. On the scale of
the linear predictor the random effects would swamp the contribution
of the fixed effects: two subjects' random effects could easily
differ by 8 or more (+/- 2 s.d. on either side of the mean).
Transformed to the probability scale, that takes you from virtually
impossible to virtually certain. To get an estimated standard
deviation that large, you would need subjects who were exposed to the
same experimental conditions yet got completely different results
(all wrong vs. all right). The estimated fixed effect for type is
also very large, considering that this is a binary covariate.
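To see what a standard deviation of that size means on the probability
scale, here is a quick back-of-the-envelope check (plugging in the
intercept and the subject s.d. from the fit quoted above):

  b0  <- 1.50745   # fixed-effect intercept (logit scale)
  sdu <- 2.2353    # estimated s.d. of the subject random effects
  plogis(b0 + c(-2, 2) * sdu)
  ## roughly 0.049 and 0.997: a subject 2 s.d. below the mean almost
  ## never gets a conflict problem right; one 2 s.d. above almost always does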
I would suggest plotting the data: first group the subjects according
to power, then by subject, then by type. You should see a lot of
variability in responses across subjects within power.
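Something along these lines would do it (a sketch using lattice,
assuming the data frame and variable names from your glmer call):

  library(lattice)
  ## per-subject proportion correct for each problem type
  props <- aggregate(accuracy ~ subject + f_type + f_power,
                     data = syllogisms, FUN = mean)
  ## one panel per power group, one point per subject and type
  dotplot(reorder(factor(subject), accuracy) ~ accuracy | f_power,
          data = props, groups = f_type, auto.key = TRUE,
          xlab = "Proportion correct")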
Regarding the difference between the glmer and the glmmPQL results,
bear in mind that glmmPQL is based on successive approximation of the
GLMM by an LMM. This case is rather extreme and the approximation is
likely to break down. In fact, for the glmer fit, I would recommend
using nAGQ=7 or nAGQ=9 to see if that changes the estimates
substantially. The difference between the default evaluation of the
deviance by the Laplace approximation and the more accurate adaptive
Gauss-Hermite quadrature may be important in this case.
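That is, something like this (a sketch reusing the call from your
message; nAGQ sets the number of adaptive quadrature points):

  library(lme4)
  ## same model, but evaluate the deviance by adaptive Gauss-Hermite
  ## quadrature instead of the default Laplace approximation (nAGQ = 1)
  fit9 <- glmer(accuracy ~ f_power * f_type + (1 | subject),
                data = syllogisms, family = binomial, nAGQ = 9)
  summary(fit9)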
I just saw your other message about the properties of the random
effects. The values returned are the conditional modes of the random
effects given the observed data. The mean 0 and standard deviation of
2.3 apply to the marginal or unconditional distribution of the random
effects, not the conditional distribution.
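You can extract and summarize the conditional modes directly; a short
sketch, with 'fit' standing for the fitted glmer model:

  ## conditional modes of the subject random effects, given the data
  re <- ranef(fit)$subject[["(Intercept)"]]
  c(mean = mean(re), sd = sd(re))
  ## these need not reproduce the mean 0 and s.d. 2.24 of the marginal
  ## (unconditional) distribution assumed by the model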