[R-meta] Moderator analysis test of residual heterogeneity confusion

Mia Daucourt miadaucourt at gmail.com
Thu Sep 19 03:27:50 CEST 2019


Thank you for the advice! I have added the observation-level random effect, and I will no longer run the overfitted model; instead I will stick to one moderator at a time.

How would I interpret the results of the test for residual heterogeneity for the single moderator below? 

(P.S. This is the only moderator for which the variance component is not significant at the observation level and is significant only at the study level.)
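
(To check that more formally, my understanding is that one option is to refit the model below with the observation-level variance fixed to zero and compare the two fits with a likelihood-ratio test -- a rough sketch, where the name Model2_noObs is made up:)

Model2_noObs <- rma.mv(yi, vi, mods = ~ factor(hne_component) - 1,
                       random = ~ 1 | study/count, sigma2 = c(NA, 0),
                       data = Zcalc, test = "t")
anova(Model2_HMEcomp, Model2_noObs)  # LRT for the observation-level variance component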

Code:

## single-moderator model with study- and observation-level random effects
Model2_HMEcomp <- rma.mv(yi, vi, mods = ~ factor(hne_component) - 1,
                         random = ~ 1 | study/count, data = Zcalc, test = "t")
summary(Model2_HMEcomp)

Partial output:

Multivariate Meta-Analysis Model (k = 456; method: REML)

   logLik   Deviance        AIC        BIC       AICc 
 112.1356  -224.2713  -204.2713  -163.2233  -203.7678   

Variance Components:

            estim    sqrt  nlvls  fixed       factor 
sigma^2.1  0.0136  0.1166     51     no        study 
sigma^2.2  0.0000  0.0000    456     no  study/count 

Test for Residual Heterogeneity:
QE(df = 448) = 409.9810, p-val = 0.9007

Test of Moderators (coefficients 1:8):
F(df1 = 8, df2 = 448) = 6.2947, p-val < .0001
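
(As a related check on how much heterogeneity this moderator accounts for, I understand one can compare the summed variance components against an intercept-only model fit to the same data -- a rough sketch, where the name Model0 is made up:)

Model0 <- rma.mv(yi, vi, random = ~ 1 | study/count, data = Zcalc, test = "t")
# proportional reduction in total heterogeneity (a pseudo R^2)
(sum(Model0$sigma2) - sum(Model2_HMEcomp$sigma2)) / sum(Model0$sigma2)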

Thanks so much for your guidance!

My best,

Mia





> On Sep 18, 2019, at 4:53 PM, James Pustejovsky <jepusto at gmail.com> wrote:
> 
> Mia, 
> 
> To add a little bit to Wolfgang's feedback: Even if you fit the two models on the same set of observations (say, assuming that you had complete data for all of the moderators), my understanding is that adding more moderators will not necessarily explain more of the residual heterogeneity. It is true that, in regular old linear regression models, adding more predictors will always increase R-squared, but once we are in the land of multi-level models, there is no such guarantee (unfortunately!). 
> 
> To help diagnose what is going on in situations like this, it is useful to have an understanding of whether the moderators vary at the within-study level or only at the between-study level.
> 
> James
> 
> On Wed, Sep 18, 2019 at 3:17 PM Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
> The first model has 8 model coefficients with k = 456. The second model has 58 model coefficients with k = 389. So, the datasets used in those two analyses are not the same (probably because of missing values on some of the moderators included in the second model). So, it's a bit difficult to compare the results. This aside, including 58 model coefficients with k = 389 is not a good idea. This is likely to lead to overfitting. 
> 
> Also, I notice that the random effect you added is called 'study', while k is much larger than the number of studies. This was actually just discussed on this mailing list, but to repeat: You really should also include an observation-level random effect in the model.
> 
> Best,
> Wolfgang
> 
> -----Original Message-----
> From: Mia Daucourt [mailto:miadaucourt at gmail.com] 
> Sent: Wednesday, 18 September, 2019 19:35
> To: Viechtbauer, Wolfgang (SP)
> Cc: r-sig-meta-analysis at r-project.org
> Subject: Re: [R-meta] Moderator analysis test of residual heterogeneity confusion
> 
> Oops, let me try that again...
> 
> I am using the metafor package to run a multilevel correlated effects model. For the moderator analyses, I am running them one at a time, to see how much heterogeneity each accounts for, and then I ran a model with all of the moderators to see how much variance is left to be explained when they are combined. 
> 
> I have an odd situation where there is no significant residual variance with just an individual moderator in the model, but then for a set of moderators (that includes that moderator) there is significant residual variance. How can this be?
> 
> Maybe this output can help...
> 
> Single moderator results:
> 
> Multivariate Meta-Analysis Model (k = 456; method: REML)
> 
>    logLik   Deviance        AIC        BIC       AICc 
>  112.1356  -224.2713  -206.2713  -169.3281  -205.8603   
> 
> Variance Components:
> 
>             estim    sqrt  nlvls  fixed  factor 
> sigma^2    0.0136  0.1166     51     no   study 
> 
> Test for Residual Heterogeneity:
> QE(df = 448) = 409.9810, p-val = 0.9007
> 
> Test of Moderators (coefficients 1:8):
> F(df1 = 8, df2 = 448) = 6.2947, p-val < .0001
> 
> All mods results:
> 
> Multivariate Meta-Analysis Model (k = 389; method: REML)
> 
>   logLik  Deviance       AIC       BIC      AICc 
> -36.0635   72.1270  186.1270  403.1911  210.1707   
> 
> Variance Components:
> 
>             estim    sqrt  nlvls  fixed  factor 
> sigma^2    0.0330  0.1818     43     no   study 
> 
> Test for Residual Heterogeneity:
> QE(df = 333) = 1028.1159, p-val < .0001
> 
> Test of Moderators (coefficients 2:56):
> F(df1 = 55, df2 = 333) = 4.0802, p-val < .0001
> 
> Thank you for your help!
> 
> My best,
> 
> Mia
> 
> On Sep 18, 2019, at 12:50 PM, Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
> 
> Dear Mia,
> 
> Your screenshots did not come through properly. Note that this is a text-only mailing list, so please post output, not screenshots. Also, please post in plain text -- not rich text format or HTML.
> 
> Best,
> Wolfgang
> 
> -----Original Message-----
> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Mia Daucourt
> Sent: Wednesday, 18 September, 2019 18:24
> To: r-sig-meta-analysis at r-project.org
> Subject: [R-meta] Moderator analysis test of residual heterogeneity confusion
> 
> Good afternoon,
> 
> I am using the metafor package to run a multilevel correlated effects model. For the moderator analyses, I am running them one at a time, to see how much heterogeneity each accounts for, and then I ran a model with all of the moderators to see how much variance is left to be explained when they are combined.  
> 
> I have an odd situation where there is no significant residual variance with just an individual moderator in the model, but then for a set of moderators (that includes that moderator) there is significant residual variance. How can this be?
> 
> Maybe these screenshots can help...
> Single moderator results:
> 
> All mods model results:
> 
> Thank you for your help!
> 
> My best,
> 
> Mia
> 
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis

