[R-meta] Random and mixed effects models with the Metafor rma.mv function

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Sun Jan 30 16:23:09 CET 2022


Dear Edwin,

See below for my responses.

Best,
Wolfgang

>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of Edwin Lebrija Trejos
>Sent: Sunday, 30 January, 2022 15:49
>To: r-sig-meta-analysis at r-project.org
>Subject: [R-meta] Random and mixed effects models with the Metafor rma.mv
>function
>
>Dear Community,
>I am looking for expert opinions on meta-analytic models and their
>implementation in the "metafor" package by Wolfgang Viechtbauer.
>
>I am checking a meta-analysis of experiments on plant species with significant
>implications for the reviewed topic, and I am wondering about the adequacy of
>the analyses behind some key conclusions in the study. The data of the
>meta-analysis consist of hundreds of observations of plant species responses
>taken from tens of experimental studies conducted on different species from
>different terrestrial plant communities and using different methodological
>approaches. Considerable heterogeneity in responses exists, as expected and
>common in ecological studies. Below I detail three points on which I would
>appreciate feedback:
>
>1) To evaluate the "generality and magnitude" of experimental effects, the
>authors of the meta-analysis start by fitting a basic 'mean' ("random effects")
>model that does not correct for any dependency in the data, using the metafor
>rma.mv function and the syntax: res <- rma.mv(yi, vi, data=dat), where yi are
>the observed effect sizes, or outcomes, and vi the corresponding sampling
>variances. The results of this model show a significant mean/overall effect size,
>as expected by theory...
>
>Am I correct that this model, as fitted with the rma.mv function, is a fixed
>effects model and not the random effects model that the authors intended to fit?

Correct.
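
Without any random effects specified, rma.mv() fits what is essentially a fixed-effects (or more precisely, an equal-effects) model. A minimal sketch for illustration (assuming 'dat' contains the yi and vi columns as described):

library(metafor)

# no 'random' argument -> fixed/equal-effects model
res.fe <- rma.mv(yi, vi, data=dat)

# the same model fitted with rma.uni()
res.fe.uni <- rma(yi, vi, method="FE", data=dat)

Both calls yield the same pooled estimate and standard error.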

>My understanding is that when using the rma.mv function (instead of the rma.uni
>function), a random effects model should include a random term of the form:
>random = ~ 1 | Outcome.ID, where Outcome.ID is a unique identifier for each
>reported experimental species response (or row in the dataset). Please correct
>me if this is wrong.

Correct.
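
To make this concrete, a sketch (Outcome.ID here is just a unique row identifier added for this purpose):

dat$Outcome.ID <- 1:nrow(dat)

# random-effects model via rma.mv(): one random effect per outcome
res.re <- rma.mv(yi, vi, random = ~ 1 | Outcome.ID, data=dat)

# the equivalent random-effects model via rma.uni() (both use REML by default)
res.re.uni <- rma(yi, vi, data=dat)

The estimated variance component for Outcome.ID in the first model corresponds to tau^2 in the second.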

>2) The authors emphasize that accounting for non-independence among outcomes is
>necessary. The focus is on the dependency of outcomes from experiments conducted
>on the same plant species, i.e., on a 'taxonomic' dependency of responses.
>Therefore, a "mean, corrected" model is fitted by adding a random 'Species'
>intercept to the model, i.e., res.corr <- rma.mv(yi, vi, random =
>list(~ 1 | Species), data=dat). This model, as opposed to the "mean, uncorrected"
>model (described above), returns a weak and non-significant effect and is
>markedly favored by the Akaike information criterion (AIC) when compared to the
>"mean, uncorrected" model (thousands of AIC units difference). These results lead
>to a key conclusion that, when controlling for taxonomic non-independence in the
>data, there are no significant, widespread effects, contrary to theory and to
>what is generally accepted by peers.
>
>I also wonder about the formulation of such a corrected model:
>- Should the "corrected" model also include a random Outcome.ID term? I.e.,
>rma.mv(yi, vi, random = list(~ 1 | Species, ~ 1 | Outcome.ID), data=dat)?

In general, yes. See:

https://www.metafor-project.org/doku.php/analyses:konstantopoulos2011#a_common_mistake_in_the_three-level_model
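
In code, a sketch of the model with both random effects (using the variable names from your example):

res.corr <- rma.mv(yi, vi,
                   random = list(~ 1 | Species, ~ 1 | Outcome.ID),
                   data=dat)

Leaving out the ~ 1 | Outcome.ID term forces all of the heterogeneity into the Species level, which is the mistake described on the page above.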

>- Moreover, agreeing that it is important to control for dependence among
>outcomes, I wonder whether additionally controlling for the dependence of
>outcomes within studies is also warranted, since each published study used in
>the meta-analysis reports experimental outcomes for several species tested in
>the same study. Is the following metafor model syntax appropriate to correct for
>such within-study dependency: rma.mv(yi, vi, random = list(~ 1 | Species,
>~ 1 | Study.ID/Outcome.ID), data=dat), where Study.ID is a variable that
>identifies each published study?

Yes. Whether this is fully sufficient to account for within-study dependence depends on whether the sampling errors are independent or not. This has been discussed many times on this mailing list. But adding study as a random effect is generally something I would do.
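
For example, a sketch (with robust() added as one possible safeguard when the independence of the sampling errors is in doubt, clustering by study):

res.full <- rma.mv(yi, vi,
                   random = list(~ 1 | Species, ~ 1 | Study.ID/Outcome.ID),
                   data=dat)

# cluster-robust (sandwich-type) inference
robust(res.full, cluster=dat$Study.ID)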

>For clarity, here is a dummy sample of the analysis data table:
>Outcome.ID  Study.ID  Species    yi         vi
>1           Study_1   Species A  -1.72417   0.06701
>2           Study_1   Species A  -1.99694   0.047748
>3           Study_2   Species B   0.15911   0.012989
>4           Study_2   Species C  -1.26529   0.115533
>5           Study_3   Species B   0.383786  0.004959
>6           Study_3   Species D  -0.07703   0.005961
>...
>
>3) Follow-up analyses are conducted to explore the sources of heterogeneity in
>the data. These analyses are conducted by splitting the data into different
>categories corresponding to the types of experimental methods employed, plant
>life stages, growth forms, climatic zones, and so on. For each data subset, a
>"mean, corrected" model (i.e., the 'res.corr' model above) is fitted. I believe
>this is what is called "subgroup" analysis in some of the meta-analysis
>literature that I have found. Other models fitted to further explore
>heterogeneity involved the inclusion of continuous variables using the 'mods'
>argument of the rma.mv function.
>
>My question here is: isn't it better to explore the sources of heterogeneity in
>the data by taking advantage of the mixed model approach implemented by the
>rma.mv function and include both categorical and continuous variables in the
>same model? Or is there an advantage to performing "subgroup" analysis?

See:

https://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates

Generally, my preference is to use meta-regression models instead of subgrouping.
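
For instance, a sketch of such a meta-regression ('method' and 'latitude' are hypothetical stand-ins for a categorical and a continuous moderator):

res.mr <- rma.mv(yi, vi,
                 mods = ~ method + latitude,
                 random = list(~ 1 | Species, ~ 1 | Study.ID/Outcome.ID),
                 data=dat)

One genuine advantage of subgrouping is that it allows the variance components to differ across subgroups; a single model like the one above assumes they are the same in all groups.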

>Given my modest (and not too fresh) experience with meta-analysis and the
>metafor package, and given the significant impact of meta-analyses on knowledge
>progress, I'd be very grateful if anyone could provide feedback and help me
>verify, to the extent possible, the correctness of my observations. Applying the
>alternative models that I mention above to the dataset used in the meta-analysis
>returns both quantitatively and qualitatively different results, which I find
>problematic.
>
>Thanks,
>Edwin


