[R-meta] Moderator analysis test of residual heterogeneity confusion
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Wed Sep 18 22:16:59 CEST 2019
The first model has 8 model coefficients with k = 456. The second model has 58 model coefficients with k = 389. So, the datasets used in those two analyses are not the same (probably because of missing values on some of the moderators included in the second model). So, it's a bit difficult to compare the results. This aside, including 58 model coefficients with k = 389 is not a good idea. This is likely to lead to overfitting.
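To make the two models comparable, one way (a minimal sketch, with hypothetical variable names -- 'dat', 'yi', 'vi', and the 'mod*' columns stand in for whatever your dataset actually uses) is to first restrict the data to rows that are complete on all moderators and then fit both models on that same subset:

```r
library(metafor)

# hypothetical moderator names; replace with the moderators in your data
mods <- c("mod1", "mod2", "mod3")

# keep only rows with no missing values on any of the moderators
dat_complete <- dat[complete.cases(dat[, mods]), ]

# fit both models on the same k rows so the results are comparable
res_single <- rma.mv(yi, vi, mods = ~ mod1,
                     random = ~ 1 | study, data = dat_complete)
res_full   <- rma.mv(yi, vi, mods = ~ mod1 + mod2 + mod3,
                     random = ~ 1 | study, data = dat_complete)
```

That way, any difference between the two sets of results reflects the moderators themselves and not a change in the underlying data.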
Also, I notice that the random effect you added is called 'study', while k is much larger than the number of studies. This was actually just discussed on this mailing list, but to repeat: You really should also include an observation-level random effect in the model.
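A minimal sketch of what that looks like with rma.mv(), assuming an effect-size-level identifier (here called 'esid') is created first:

```r
library(metafor)

# add a unique identifier for each estimate (the observation level)
dat$esid <- 1:nrow(dat)

# random intercepts for studies *and* for estimates within studies
res <- rma.mv(yi, vi, mods = ~ mod1,
              random = ~ 1 | study/esid, data = dat)
```

With only '~ 1 | study' in the model, heterogeneity among estimates within the same study is not accounted for at all.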
Best,
Wolfgang
-----Original Message-----
From: Mia Daucourt [mailto:miadaucourt using gmail.com]
Sent: Wednesday, 18 September, 2019 19:35
To: Viechtbauer, Wolfgang (SP)
Cc: r-sig-meta-analysis using r-project.org
Subject: Re: [R-meta] Moderator analysis test of residual heterogeneity confusion
Oops, let me try that again...
I am using the metafor package to fit a multilevel correlated-effects model. For the moderator analyses, I am running them one at a time, to see how much heterogeneity each accounts for, and then I ran a model with all moderators to see how much variance is left to be explained when they are combined.
I have an odd situation where there is no significant residual variance with just an individual moderator in the model, but for a set of moderators (that includes that moderator) there is significant residual variance. How can this be?
Maybe this output can help...
Single moderator results:
Multivariate Meta-Analysis Model (k = 456; method: REML)
logLik Deviance AIC BIC AICc
112.1356 -224.2713 -206.2713 -169.3281 -205.8603
Variance Components:
estim sqrt nlvls fixed factor
sigma^2 0.0136 0.1166 51 no study
Test for Residual Heterogeneity:
QE(df = 448) = 409.9810, p-val = 0.9007
Test of Moderators (coefficients 1:8):
F(df1 = 8, df2 = 448) = 6.2947, p-val < .0001
All mods results:
Multivariate Meta-Analysis Model (k = 389; method: REML)
logLik Deviance AIC BIC AICc
-36.0635 72.1270 186.1270 403.1911 210.1707
Variance Components:
estim sqrt nlvls fixed factor
sigma^2 0.0330 0.1818 43 no study
Test for Residual Heterogeneity:
QE(df = 333) = 1028.1159, p-val < .0001
Test of Moderators (coefficients 2:56):
F(df1 = 55, df2 = 333) = 4.0802, p-val < .0001
Thank you for your help!
My best,
Mia
On Sep 18, 2019, at 12:50 PM, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
Dear Mia,
Your screenshots did not come through properly. Note that this is a text-only mailing list, so please post output, not screenshots. Also, please post in plain text -- not rich text format or HTML.
Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Mia Daucourt
Sent: Wednesday, 18 September, 2019 18:24
To: r-sig-meta-analysis using r-project.org
Subject: [R-meta] Moderator analysis test of residual heterogeneity confusion
Good afternoon,
I am using the metafor package to fit a multilevel correlated-effects model. For the moderator analyses, I am running them one at a time, to see how much heterogeneity each accounts for, and then I ran a model with all moderators to see how much variance is left to be explained when they are combined.
I have an odd situation where there is no significant residual variance with just an individual moderator in the model, but for a set of moderators (that includes that moderator) there is significant residual variance. How can this be?
Maybe these screenshots can help...
Single moderator results:
All mods model results:
Thank you for your help!
My best,
Mia