[R-sig-ME] Advice on comparing non-nested random slope models

landon hurley ljrhurley at gmail.com
Mon Mar 27 16:36:35 CEST 2017


On 3/27/17 10:16 AM, Craig DeMars wrote:
> Thanks. Unfortunately, it doesn't look like the Vuong test has been
> implemented for mixed models yet, at least not in R.
> 

Craig, if you haven't already, you might want to follow up on [0] to see
whether there has been any progress since that thread.

[0] https://stat.ethz.ch/pipermail/r-sig-mixed-models/2015q4/024104.html
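
For non-mixed fits, the test is already available: the nonnest2 package
implements Vuong's (1989) tests for several model classes, including glm,
though not (as of this writing) merMod objects; casewise log-likelihoods
are the missing piece for glmer fits. A minimal sketch, using hypothetical
fixed-effects-only analogues of Craig's two models below (illustration
only, not a recommendation to drop the random effects):

library(nonnest2)

## Hypothetical glm() analogues of Models 1 and 2 below; nonnest2 can
## extract casewise log-likelihoods from glm fits but not from glmer fits.
g1 <- glm((Calves/Cows) ~ spr.indvi.ab + green.rate.ab + trend,
          family = binomial, data = bou.dat, weights = Cows)
g2 <- glm((Calves/Cows) ~ win.bb + tot.sn.ybb + trend,
          family = binomial, data = bou.dat, weights = Cows)

vuongtest(g1, g2)  # distinguishability test + non-nested LR test
icci(g1, g2)       # confidence intervals on the AIC/BIC differences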



> On Mon, Mar 27, 2017 at 4:36 AM, Poe, John <jdpo223 at g.uky.edu> wrote:
> 
>> You might try a Vuong test. It's a likelihood-ratio test that allows for
>> non-nested models.
>>
>>
>> On Mar 26, 2017 8:30 PM, "Paul Buerkner" <paul.buerkner at gmail.com> wrote:
>>
>> Hi Craig,
>>
>> In short, significance does not tell you anything about model fit. You may
>> find a model to have the best fit without any particular predictor being
>> significant in that model. Similarly, average "effect sizes" are not a
>> good indicator of model fit.
>>
>> Information criteria are, in my opinion, the right way to go. For an
>> improved version of the AIC, I recommend going Bayesian and computing the
>> so-called LOO (leave-one-out cross-validation) or the WAIC (widely
>> applicable information criterion), as implemented in the R package loo. For
>> Bayesian GLMM fitting (and convenient LOO computation), you could use the
>> R packages brms or rstanarm.
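>>
>> A minimal sketch of that workflow with brms (untested; assuming Calves
>> is the success count, since brms writes a binomial outcome as
>> successes | trials(n), and using default priors for illustration only):
>>
>> library(brms)
>>
>> b1 <- brm(Calves | trials(Cows) ~ spr.indvi.ab + green.rate.ab + trend +
>>             (1 | Year) + (spr.indvi.ab + green.rate.ab + trend | Herd),
>>           family = binomial, data = bou.dat)
>> b2 <- brm(Calves | trials(Cows) ~ win.bb + tot.sn.ybb + trend +
>>             (1 | Year) + (win.bb + tot.sn.ybb | Herd),
>>           family = binomial, data = bou.dat)
>>
>> ## pointwise out-of-sample fit; smaller LOOIC/WAIC is better
>> LOO(b1, b2)
>> WAIC(b1, b2)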
>>
>> Best,
>> Paul
>>
>> 2017-03-26 22:15 GMT+02:00 Craig DeMars <cdemars at ualberta.ca>:
>>
>>> Hello,
>>>
>>> This is a bit of a follow-up to a question last week on selecting
>>> among GLMM models. Is there a recommended strategy for comparing
>>> non-nested random-slope models? I have seen a similar question posted
>>> here:
>>> http://stats.stackexchange.com/questions/116935/comparing-non-nested-models-with-aic
>>> but it doesn't seem to answer the problem - and maybe there is no
>>> "answer". Zuur et al. (2010) discuss model selection, but only in a
>>> nested framework. Bolker et al. (2009) suggest AIC can be used in
>>> GLMMs but caution against boundary issues, and they don't specifically
>>> mention any issues with comparing different random-effects structures
>>> (as Zuur does).
>>>
>>> The context of my question comes from an analysis where we have 5 a
>>> priori hypotheses describing different climate effects on juvenile
>>> recruitment in an ungulate species. The data set has 21 populations
>>> (or herds) with repeated annual measurements of recruitment, and the
>>> climate variables are measured at the herd scale. To generate SEs that
>>> reflect herd as the sampling unit, explanatory variables are specified
>>> as random slopes within herd (as recommended by Schielzeth &
>>> Forstmeier 2009); Year is also specified as a random intercept.
>>> Because there are only 21 herds, the models are fairly simple, with
>>> only 2-3 explanatory variables (3 may be pushing it...). I can't post
>>> the data, but it isn't really relevant to the question (I think).
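>>>
>>> For concreteness, the two model summaries pasted below come from
>>> glmer() calls of this form:
>>>
>>> library(lme4)
>>>
>>> m1 <- glmer((Calves/Cows) ~ spr.indvi.ab + green.rate.ab + trend +
>>>               (1 | Year) + (spr.indvi.ab + green.rate.ab + trend | Herd),
>>>             family = binomial, data = bou.dat, weights = Cows)
>>> m2 <- glmer((Calves/Cows) ~ win.bb + tot.sn.ybb + trend +
>>>               (1 | Year) + (win.bb + tot.sn.ybb | Herd),
>>>             family = binomial, data = bou.dat, weights = Cows)
>>>
>>> AIC(m1, m2)  # the 2210.7 vs. 2479.5 comparison discussed below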
>>>
>>> Initially, we looked at AIC to compare models. At the bottom of this
>>> email, I have pasted the output from two models, each representing a
>>> separate hypothesis, to illustrate "the problem". The first model
>>> yields an AIC value of 2210.7; the second yields an AIC of 2479.5.
>>> Using AIC, Model 1 would be the "best" model. However, examining the
>>> parameter estimates within each model makes me think twice about
>>> declaring Model 1 (or the hypothesis it represents) the most
>>> parsimonious explanation for the data. In Model 1, two of the three
>>> fixed-effect estimates have small effect sizes and all estimates are
>>> "non-significant" (if one considers p-values...). In Model 2, two of
>>> the three fixed-effect estimates have larger effect sizes and would be
>>> considered "significant". Is this an example of the difficulty of
>>> using AIC to compare non-nested mixed models, or am I missing
>>> something in my interpretation? I haven't come across this type of
>>> result when model selecting among GLMs.
>>>
>>> Any suggestions on how best to compare competing hypotheses represented by
>>> non-nested GLMMs? Should one just compare relative effect sizes of
>>> parameter estimates among models?
>>> Any help would be appreciated.
>>>
>>> Thanks,
>>> Craig
>>>
>>> Model 1:
>>> Generalized linear mixed model fit by maximum likelihood (Laplace
>>> Approximation) ['glmerMod']
>>>  Family: binomial  ( logit )
>>> Formula: (Calves/Cows) ~ spr.indvi.ab + green.rate.ab + trend +
>>>     (1 | Year) + (spr.indvi.ab + green.rate.ab + trend | Herd)
>>>    Data: bou.dat
>>> Weights: Cows
>>>
>>>      AIC      BIC   logLik deviance df.resid
>>>   2210.7   2265.0  -1090.3   2180.7      262
>>>
>>> Scaled residuals:
>>>     Min      1Q  Median      3Q     Max
>>> -3.8700 -1.0800 -0.1057  1.0405  6.8353
>>>
>>> Random effects:
>>>  Groups Name          Variance Std.Dev. Corr
>>>  Year   (Intercept)   0.10517  0.3243
>>>  Herd   (Intercept)   0.29832  0.5462
>>>         spr.indvi.ab  0.04331  0.2081    0.38
>>>         green.rate.ab 0.03741  0.1934    0.68  0.62
>>>         trend         0.62661  0.7916   -0.59  0.20 -0.46
>>> Number of obs: 277, groups:  Year, 22; Herd, 21
>>>
>>> Fixed effects:
>>>               Estimate Std. Error z value Pr(>|z|)
>>> (Intercept)   -1.62160    0.15798 -10.265   <2e-16 ***
>>> spr.indvi.ab   0.04019    0.09793   0.410    0.682
>>> green.rate.ab  0.04704    0.05555   0.847    0.397
>>> trend         -0.29676    0.23092  -1.285    0.199
>>> ---
>>> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>>>
>>> Correlation of Fixed Effects:
>>>             (Intr) spr.n. grn.r.
>>> spr.indvi.b -0.113
>>> green.rat.b  0.347  0.438
>>> trend       -0.606  0.349 -0.200
>>>
>>> Model 2:
>>> Generalized linear mixed model fit by maximum likelihood (Laplace
>>> Approximation) ['glmerMod']
>>>  Family: binomial  ( logit )
>>> Formula: (Calves/Cows) ~ win.bb + tot.sn.ybb + trend +
>>>     (1 | Year) + (win.bb + tot.sn.ybb | Herd)
>>>    Data: bou.dat
>>> Weights: Cows
>>>
>>>      AIC      BIC   logLik deviance df.resid
>>>   2479.5   2519.4  -1228.8   2457.5      266
>>>
>>> Scaled residuals:
>>>     Min      1Q  Median      3Q     Max
>>> -4.5720 -1.1801 -0.1364  1.3704  8.3271
>>>
>>> Random effects:
>>>  Groups Name        Variance Std.Dev. Corr
>>>  Year   (Intercept) 0.10694  0.3270
>>>  Herd   (Intercept) 0.13496  0.3674
>>>         win.bb      0.05351  0.2313   -0.13
>>>         tot.sn.ybb  0.06200  0.2490    0.23  0.34
>>> Number of obs: 277, groups:  Year, 22; Herd, 21
>>>
>>> Fixed effects:
>>>              Estimate Std. Error z value Pr(>|z|)
>>> (Intercept) -1.851656   0.127702 -14.500  < 2e-16 ***
>>> win.bb      -0.364019   0.101386  -3.590  0.00033 ***
>>> tot.sn.ybb   0.275271   0.118111   2.331  0.01977 *
>>> trend       -0.007568   0.115706  -0.065  0.94785
>>> ---
>>> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>>>
>>> Correlation of Fixed Effects:
>>>            (Intr) win.bb tt.sn.
>>> win.bb      0.048
>>> tot.sn.ybb  0.269  0.083
>>> trend      -0.242 -0.269 -0.131
>>> --
>>> Craig DeMars, Ph.D.
>>> Postdoctoral Fellow
>>> Department of Biological Sciences
>>> University of Alberta
>>> Phone: 780-221-3971


-- 
Violence is the last refuge of the incompetent.


