[R-sig-ME] Advice for reporting results of glmm from lme4

Kay Cecil Cichini Kay.Cichini at uibk.ac.at
Wed Aug 31 15:12:19 CEST 2011


Hi Colin,

you can use AIC to compare candidate models. If you compare two
models - say, one with and one without the interaction - and dropping
the interaction leads to an information loss (higher AIC), this would
mean that the interaction matters to the model. You could also test
this with a likelihood-ratio test (LRT).
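
For illustration, a rough, untested sketch of what I mean, built around the
model formula from your glmer() call below; the family = poisson and the
object names m0/m1 are only my assumptions, so adjust them to your data:

library(lme4)

## full model with the watershed x riparia interaction
m1 <- glmer(E ~ wsh * rip + (1 | stream) + (1 | stream:rip) + (1 | obs),
            data = ept, family = poisson)

## reduced model without the interaction
m0 <- update(m1, . ~ . - wsh:rip)

AIC(m0, m1)    # higher AIC for m0 = information loss from dropping the interaction
anova(m0, m1)  # likelihood-ratio test of the interaction term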

This can be regarded as equivalent to an ANOVA telling you whether a given
factor or interaction is significant.

If you're interested in comparisons of particular combinations of factor
levels, you can do that with glht() from the multcomp package.
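
Again only a sketch, assuming the fitted model m1 from the snippet above and
that wsh is the watershed factor whose levels you want to compare:

library(multcomp)

## all pairwise comparisons among watershed levels (Tukey-type contrasts);
## with an interaction in the model, mcp() will warn that the default
## contrasts may be inappropriate - interpret the results with that in mind
ph <- glht(m1, linfct = mcp(wsh = "Tukey"))
summary(ph)   # simultaneous (adjusted) p-values
confint(ph)   # simultaneous confidence intervals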

Yours,
Kay

PS: The AIC values and LRT tests were not explicitly used and, as you said,
not incorporated in the results, which is why table 5 is only attached in the
appendix. It was generated as described in Zuur et al. and might serve you as
a template.



Quoting Colin Wahl <biowahl at gmail.com>:

> Thanks for the advice,
>
> In your publication, it is unclear to me how you tested main effects and
> interactions using AIC. It appears to me at first glance that your main
> (gap) effect and interaction (gap-stage interaction) effect were tested
> using pairwise tests - perhaps with glht() from the multcomp package? I
> noticed table 5 in the appendix using AIC and LRT to test the significance
> of each factor in different models, but I don't see how it is incorporated
> into your results.
>
> Also, after thumbing through Zuur (2009), I don't see AIC used for testing
> the significance of main and interaction effects, but rather for validating
> models.
>
> best,
> Colin
>
> On Mon, Aug 29, 2011 at 1:05 AM, Kay Cecil Cichini
> <Kay.Cichini at uibk.ac.at> wrote:
>
>> Hi Colin,
>>
>> I faced much the same problems recently -
>> after some exhaustive searching I finally arrived at the solution used in
>> the attached publication.
>>
>> For outputs showing the significance of main effects and the interaction,
>> use AIC as described in Zuur et al. (2009) - Mixed Effects Models and
>> Extensions in Ecology with R.
>>
>> For comparisons of different factor-level combinations you could use glht()
>> in package multcomp.
>>
>> One last thing: I wonder whether the parameterization of your random
>> effects is set up properly - from my understanding (1 | stream) would
>> suffice, but it is likely that I didn't grasp the whole survey design.
>>
>> Best wishes,
>> Kay Cichini
>>
>>
>>
>>
>>  Quoting Colin Wahl <biowahl at gmail.com>:
>>
>>  Hello,
>>> I am currently writing my master's thesis and would like some advice on
>>> how
>>> to report my glmm results. I am testing how stream macroinvertebrate
>>> distributions vary between watersheds defined by different types of land
>>> use, and between stream reaches with and without riparian corridors. I am
>>> considering using the following glmer output to report my results (the
>>> actual glmer output is included at the end of this post).
>>>
>>> Treatment               Estimate   St. error   z value   p value (>|z|)
>>> Cultivated (intercept)     1.35      0.49       -8.694    <0.001 ***
>>> Developed                  0.18      0.76       -2.705     0.007 **
>>> Forested                  28.2       0.6339      5.297    <0.001 ***
>>> Grassland                 28.9       0.7486      4.531    <0.001 ***
>>> Riparia:cultivated         1.55      0.6323      0.225     0.822
>>> Riparia:developed          0.29      0.9682      0.383     0.701
>>> Riparia:forested          16.6       0.8087     -1.071     0.284
>>> Riparia:grassland          1.9       0.9601     -3.284     0.001 **
>>> I am concerned about two things: the confidence of these results, and how
>>> to report them.
>>>
>>> These results (treatment estimates, errors and p values [suspect, I know])
>>> are very much in agreement with very distinct trends in the data. In
>>> previous posts I have been directed toward various approaches using MCMC,
>>> bootstrapping, or profiling to get more accurate estimates of 95%
>>> confidence intervals and to accurately determine significant differences.
>>> I have struggled with attempting these approaches but have not been
>>> rewarded with much success (no local faculty are familiar enough with
>>> these types of analyses to provide support or assistance). In meetings
>>> with my committee we've decided that these results are sufficient, given
>>> the scope of my project, how well they fit distinct trends, how strong the
>>> significant differences (though likely biased) are, and how fresh these
>>> advanced approaches are.
>>>
>>> This type of output is alien (and understandably discomforting) to everyone
>>> on my committee, and it seems likely it will be to most ecologists and/or
>>> reviewers, who in my experience expect the omnipotent ANOVA table with main
>>> effects and interactions. While I am comfortable interpreting and explaining
>>> these results, reporting them is a different story.
>>>
>>> My questions are:
>>> How should glmer/lmer results be reported and submitted?
>>> How presentable would you consider these results, and how dangerous is it to
>>> assume these p values reflect real differences in the data?
>>> What improvements would you expect for submission to reviewers, considering
>>> this is coming from an institution whose faculty is unfamiliar with these
>>> non-traditional approaches (with which general consensus is somewhat lacking)?
>>>
>>> I would very much like to do this right, but I need to be finished with this
>>> project in 3 months and do not have the time to commit (or, likely, also the
>>> requisite experience) to sufficiently teach myself MCMC, profiling, or even
>>> the matrix-based framework lme4 uses.
>>>
>>> As always, thank you to all the busy people out there who make time to
>>> help,
>>> Colin Wahl
>>> Masters Student
>>> Western Washington University
>>> Bellingham, WA
>>>
>>>
>>>
>>>
>>>
>>> glmer output (estimates not back-transformed):
>>>
>>> Generalized linear mixed model fit by the Laplace approximation
>>> Formula: E ~ wsh * rip + (1 | stream) + (1 | stream:rip) + (1 | obs)
>>>   Data: ept
>>>   AIC   BIC logLik deviance
>>>  284.4 309.5 -131.2    262.4
>>> Random effects:
>>>  Groups     Name        Variance Std.Dev.
>>>  obs        (Intercept) 0.30186  0.54942
>>>  stream:rip (Intercept) 0.40229  0.63427
>>>  stream     (Intercept) 0.12788  0.35760
>>> Number of obs: 72, groups: obs, 72; stream:rip, 24; stream, 12
>>>
>>> Fixed effects:
>>>            Estimate Std. Error z value Pr(>|z|)
>>> (Intercept)  -4.2906     0.4935  -8.694  < 2e-16 ***
>>> wshd         -2.0557     0.7601  -2.705 0.00684 **
>>> wshf          3.3575     0.6339   5.297 1.18e-07 ***
>>> wshg          3.3923     0.7486   4.531 5.86e-06 ***
>>> ripN          0.1425     0.6323   0.225  0.82165
>>> wshd:ripN     0.3708     0.9682   0.383  0.70170
>>> wshf:ripN    -0.8665     0.8087  -1.071  0.28400
>>> wshg:ripN    -3.1530     0.9601  -3.284  0.00102 **
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> Correlation of Fixed Effects:
>>>          (Intr) wshd   wshf   wshg   ripN   wshd:N wshf:N
>>> wshd      -0.649
>>> wshf      -0.779  0.505
>>> wshg      -0.659  0.428  0.513
>>> ripN      -0.644  0.418  0.501  0.424
>>> wshd:ripN  0.421 -0.672 -0.327 -0.277 -0.653
>>> wshf:ripN  0.503 -0.327 -0.638 -0.332 -0.782  0.511
>>> wshg:ripN  0.424 -0.275 -0.330 -0.632 -0.659  0.430  0.515
>>>
>>>
>>>
>>>
>>
>
>
> --
> CW
>



