[R-meta] rma.glmm model selection

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Sep 13 18:20:40 CEST 2021


No idea - I don't know the 'performance' package. I tried to run your example, but once I get to the last two lines, I just get a bunch of errors:

> compare_performance(model2,model3,model4, model5,model6, rank = TRUE)                                               
 ----------- FAILURE REPORT --------------                                                                            
 --- failure: length > 1 in coercion to logical ---
 --- srcref --- 
: 
 --- package (from environment) --- 
insight
 --- call from context ---                                                                                            
.format_basic_table(final, header, sep, caption = caption, subtitle = subtitle, 
    footer = footer, align = align, empty_line = empty_line, 
    indent_groups = indent_groups, indent_rows = indent_rows)
 --- call from argument --- 
!is.null(caption) && caption != ""
 --- R stacktrace ---

[...]

The same errors occur when running test_bf(model2, model3, model4, model5, model6).

No idea what the issue is.
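
If it helps, the '--- call from argument ---' line points at the likely culprit: !is.null(caption) && caption != "" throws exactly this error when 'caption' is a character vector of length > 1, since && insists on length-1 logicals (newer R versions always error here; older ones only with the _R_CHECK_LENGTH_1_LOGIC2_ check turned on). A minimal sketch of the failure mode, with a made-up caption value just for illustration:

caption <- c("Comparison of Model Performance Indices", "blue")
!is.null(caption) && caption != ""
## Error in !is.null(caption) && caption != "" :
##   length > 1 in coercion to logical

So the problem appears to sit inside 'insight' (a dependency of 'performance'), not in your models.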

Best,
Wolfgang

>-----Original Message-----
>From: Philippe Tadger [mailto:philippetadger at gmail.com]
>Sent: Monday, 13 September, 2021 16:44
>To: Viechtbauer, Wolfgang (SP); r-sig-meta-analysis at r-project.org
>Subject: Re: [R-meta] rma.glmm model selection
>
>Dear Wolfgang,
>I am trying to use other measures to check model fit (apart from the likelihood,
>AIC, and BIC), such as the RMSE (and also the Bayes factor). I implemented this
>as follows:
>thedata1 <- data.frame(
>  study   = c(1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7),
>  treat   = c(0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1),
>  n       = c(81, 156, 43, 89, 38, 44, 80, 77, 170, 159, 49, 47, 148, 82),
>  event   = c(12, 1, 3, 0, 6, 1, 27, 13, 5, 2, 6, 4, 0, 6),
>  control = c(1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0),
>  treat12 = c(-0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5)
>)
>
>library(lme4) # for glmer()
>
># model 2: fixed study effects + random treatment effect (0/1 coding)
>model2 <- glmer(cbind(event, n - event) ~ factor(study) + factor(treat) +
>                  (treat - 1 | study),
>                data = thedata1, family = binomial(link = "logit"))
># model 3: random study intercepts instead of fixed study effects
>model3 <- glmer(cbind(event, n - event) ~ (1 | study) + factor(treat) +
>                  (treat - 1 | study),
>                data = thedata1, family = binomial(link = "logit"))
># model 4: as model 2, but with -0.5/0.5 coding for the random treatment effect
>model4 <- glmer(cbind(event, n - event) ~ factor(study) + factor(treat) +
>                  (treat12 - 1 | study),
>                data = thedata1, family = binomial(link = "logit"))
># model 5: as model 3, but with -0.5/0.5 coding for the random treatment effect
>model5 <- glmer(cbind(event, n - event) ~ (1 | study) + factor(treat) +
>                  (treat12 - 1 | study),
>                data = thedata1, family = binomial(link = "logit"))
># model 6: correlated random effects for the control and treatment conditions
>model6 <- glmer(cbind(event, n - event) ~ factor(treat) +
>                  (control + treat - 1 | study),
>                data = thedata1, family = binomial(link = "logit"))
>library("performance")
>compare_performance(model2,model3,model4, model5,model6, rank = TRUE)
>test_bf(model2,model3,model4, model5,model6)
>What do you think? Could any of these measures be of value for comparing the
>models?
>
>On 12/09/2021 23:36, Viechtbauer, Wolfgang (SP) wrote:
>I would say that if the log likelihoods are not meaningfully comparable, then
>neither are the AIC or BIC values.
>
>Speaking of these different models, in the 'devel' version of metafor, rma.glmm()
>now allows for even more flexibility:
>
>https://wviechtb.github.io/metafor/reference/rma.glmm.html
>
>See especially the 'Note' section. One can now control the coding of the group
>variable and, for "UM.RS", one can now allow for correlation between the random
>study effects and the random group effects (just as a side note, this makes our
>lives even more complicated, since we have to choose among an even wider
>collection of potential modeling options).
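>
>As a quick sketch of what this looks like (the 'coding' and 'cor' arguments are
>the ones documented on that help page, and dat.nielweise2007 is the example
>dataset used there):
>
>library(metafor) # 'devel' version, as described above
>res <- rma.glmm(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i,
>                data = dat.nielweise2007, model = "UM.RS",
>                coding = 1/2, # coding of the group variable
>                cor = TRUE)   # correlated random study and group effects
>res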
>
>Best,
>Wolfgang
>
>-----Original Message-----
>From: Philippe Tadger [mailto:philippetadger at gmail.com]
>Sent: Sunday, 12 September, 2021 22:54
>To: Viechtbauer, Wolfgang (SP); r-sig-meta-analysis at r-project.org
>Subject: Re: [R-meta] rma.glmm model selection
>
>Hello Wolfgang, colleagues
>
>Thanks for the answer!
>
>I've been reading the article (Jackson et al., 2017) where you explain these
>models and provide code for SAS/Stata/R.
>I ran the SAS code with the sham data and collected all goodness-of-fit
>statistics for models 2-6:
>
>         AIC    BIC    -2 x Likelihood  Model Name                           Effects  Var-Cov Matrix  Other Name
>Model 4  80.74  80.25  62.74            modified Simmonds and Higgins model  Fixed                    UM.FS
>Model 2  81.53  81.05  63.53            Simmonds and Higgins model           Fixed
>Model 5  91.19  90.97  83.19            modified Simmonds and Higgins model  Random   CV              UM.RS
>Model 6  93.18  92.91  83.18            Van Houwelingen bivariate model      Random   UN              CM.AL
>Model 3  93.76  93.54  85.76            Simmonds and Higgins model           Random   CV
>
>I understand that the likelihoods cannot be compared if the distributions differ
>or the models are not nested (as with models 6 & 7, i.e., CM.AL and CM.EL),
>which, as you point out, is the situation here.
>Do you consider the AIC values also not "meaningful" for choosing between models?
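>
>For completeness, the analogous statistics for the glmer() fits shown earlier in
>this thread can be collected directly in R; AIC(), BIC(), and logLik() are
>standard generics that work on 'merMod' objects:
>
>fits <- list(model2 = model2, model3 = model3, model4 = model4,
>             model5 = model5, model6 = model6)
>data.frame(AIC  = sapply(fits, AIC),
>           BIC  = sapply(fits, BIC),
>           m2LL = sapply(fits, function(m) -2 * as.numeric(logLik(m))))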
>
>Sorry if there are any typos in the self-made table (I tried to unify all the
>names and features of each model).
>
>Thanks in advance for your valuable time.
>
>On 12/09/2021 15:00, Viechtbauer, Wolfgang (SP) wrote:
>Hi Philippe,
>
>Good question. I doubt that a direct comparison of the likelihoods of these models
>(or information criteria) is meaningful though. For example, UM.FS and UM.RS
>differ both in terms of their fixed and random effects (since the whole point is
>to use either fixed or random study effects). CM.AL and CM.EL differ even more
>fundamentally, as they use different distributions.
>
>The sentence that you quote is true, but the devil is, as always, in the details.
>Here, I was thinking more in terms of: We have some specific model where we can
>swap in or out certain random effects. Once you start swapping in and out fixed
>and random effects at the same time and even switching distributions, then things
>get a lot more tricky.
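>
>To make the "swap in or out certain random effects" case concrete, here is a
>sketch with two genuinely nested glmer() fits (using the 'thedata1' data shown
>earlier on this page; the reduced model constrains the random study-intercept
>variance to zero):
>
>full    <- glmer(cbind(event, n - event) ~ factor(treat) +
>                   (1 | study) + (treat - 1 | study),
>                 data = thedata1, family = binomial(link = "logit"))
>reduced <- glmer(cbind(event, n - event) ~ factor(treat) +
>                   (treat - 1 | study),
>                 data = thedata1, family = binomial(link = "logit"))
>anova(full, reduced) # LRT; conservative, as the variance sits on the boundary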
>
>So, I really don't have any good suggestions at the moment.
>
>Best,
>Wolfgang
>
>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of Philippe Tadger
>Sent: Saturday, 11 September, 2021 12:42
>To: r-sig-meta-analysis at r-project.org
>Subject: [R-meta] rma.glmm model selection
>
>Dear Meta community
>
>I would like to ask for guidance on how to do model selection with
>metafor::rma.glmm. I am thinking specifically of the comparison between the
>UM.FS, UM.RS, CM.AL, and CM.EL methods. Is it possible to conduct a
>goodness-of-fit comparison or use alternative selection methods for the 4 models?
>
>I saw the model selection approach described here
>https://www.metafor-project.org/doku.php/tips:model_selection_with_glmulti_and_mumin
>for predictors, which mentions that a similar approach is possible with
>rma.glmm. Also, a specific phrase caught my attention: "one can also
>consider model selection with respect to the random effects structure."
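>
>For concreteness, a sketch of how the four models could be fitted and their fit
>statistics collected side by side (dat.nielweise2007 is the example dataset from
>the rma.glmm() help page, and fitstats() is metafor's extractor for the
>log-likelihood, deviance, AIC, BIC, and AICc), bearing in mind the caveats about
>comparability discussed above:
>
>library(metafor)
>fits <- lapply(c("UM.FS", "UM.RS", "CM.AL", "CM.EL"), function(m)
>  rma.glmm(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i,
>           data = dat.nielweise2007, model = m))
>do.call(cbind, lapply(fits, fitstats))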
>
>Thanks in advance for your help and time
>
>--
>
>Kind regards/Saludos cordiales
>*Philippe Tadger*
>ORCID <https://orcid.org/0000-0002-1453-4105>, ResearchGate
><https://www.researchgate.net/profile/Philippe-Tadger>
>Phone/WhatsApp: +32498774742

