[R-meta] comparing rma to lm

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer using maastrichtuniversity.nl
Thu Sep 13 09:51:58 CEST 2018


To add to this:

Based on your mail, Tom, that you had sent to me earlier, I think you fitted a fixed-effects model with rma(), that is, you used rma(..., method="FE"). This implies that only sampling error (i.e., the uncertainty of the estimated effects) is incorporated into the model. You really should be fitting a random-effects model (e.g., using the default method="REML") and comparing the results. A random-effects model also incorporates into the model the variability due to between-study differences in the underlying true effects. The lm() approach implicitly incorporates the latter (while ignoring that the sampling errors are heteroscedastic).
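For concreteness, a minimal sketch in R (with made-up data and hypothetical variable names yi, vi, and temp) of fitting both models with metafor and comparing them:

library(metafor)

## toy data: per-site additive-effect estimates (yi), their sampling
## variances (vi), and one climate moderator (temp); all names hypothetical
set.seed(42)
dat <- data.frame(temp = 1:10,
                  vi   = runif(10, 0.01, 0.2))
dat$yi <- 0.1 * dat$temp + rnorm(10, sd = sqrt(dat$vi + 0.05))

## fixed-effects model: only sampling error enters the weights/SEs
res_fe <- rma(yi, vi, mods = ~ temp, data = dat, method = "FE")

## random-effects model (REML is the default): additionally estimates the
## between-study heterogeneity tau^2, so SEs are typically larger when tau^2 > 0
res_re <- rma(yi, vi, mods = ~ temp, data = dat)

summary(res_fe)
summary(res_re)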

Best,
Wolfgang

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of James Pustejovsky
Sent: Thursday, 13 September, 2018 2:21
To: tjuenger using austin.utexas.edu
Cc: r-sig-meta-analysis using r-project.org
Subject: Re: [R-meta] comparing rma to lm

Tom,

I would offer two potential explanations for why your results differ in
using rma as opposed to lm---one good and the other potentially problematic.
The good possibility is that meta-analytic models might be giving you
improved precision. Meta-regression is just weighted least squares
regression, where the weights are chosen to optimize the use of information
from larger and smaller studies/samples. If the studies in your analysis
vary widely in size/precision, then maybe meta-regression is making more
efficient use of the data, and thus leading to smaller SEs (and more
statistically significant results). To see whether this is the case:
compare the SEs from rma to the SEs from lm (I would suggest using robust
SEs from the sandwich package for the latter).
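As a rough sketch (reusing the made-up data and hypothetical variable names yi, vi, and temp from the sketch earlier in this thread), the comparison could look like:

library(sandwich)  # vcovHC()
library(lmtest)    # coeftest()

res_rma <- rma(yi, vi, mods = ~ temp, data = dat)   # inverse-variance weighted meta-regression
res_lm  <- lm(yi ~ temp, data = dat)                # unweighted OLS

## model-based SEs from the meta-regression vs. heteroscedasticity-robust SEs for lm
res_rma$se
coeftest(res_lm, vcov. = vcovHC(res_lm, type = "HC3"))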

The bad possibility is that the default methods for calculating hypothesis
tests and confidence intervals in rma are based on large-sample
approximations, whereas the defaults with lm use methods (t-tests rather
than z-tests) that are more accurate when the number of studies is small.
If this is what makes the difference, then the extra-significant results
from rma could be spurious. Using the Knapp-Hartung correction (test =
"knha") will improve the small-sample calibration of the meta-analysis
tests. You could try turning that on to see if it makes a difference.
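For example (again with the made-up data from above):

## test = "knha" replaces the default z-tests with t-tests (and the omnibus
## moderator test with an F-test), which are better calibrated when the
## number of studies is small
res_knha <- rma(yi, vi, mods = ~ temp, data = dat, test = "knha")
summary(res_knha)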

Hook 'em,
James

On Wed, Sep 12, 2018 at 6:33 PM Juenger, Thomas E <
tjuenger using austin.utexas.edu> wrote:

> Hi:
>
> I study plant quantitative genetics.  We use statistical analyses to map
> genes controlling plant traits.  I'm emailing to ask a few simple questions
> about meta-analyses using metafor in R.
>
> My research program centers on how the effect of inheriting alternative
> alleles at a genetic locus is altered by environmental variation.  We call
> this gene-by-environment interaction (GxE).  We can study this phenomenon
> by growing our mapping populations in different environmental contexts -
> this can be different greenhouses, growth chambers, treatment applications
> or field sites in nature.  Ultimately we end up with an effect estimate
> (the mean difference between individuals carrying alternative alleles - we
> call this the "additive effect" in quantitative genetics) and standard
> error for each locus affecting a trait in each environmental context.  Our
> mapping approach often involves mixed models to test how the effect of
> alleles changes by condition.  However, we generally do not know the
> mechanistic driver or cause of the GxE and we imagine it can differ among
> the many loci influencing a particular trait of interest.
>
> Our most recent experiment grows a mapping population at 10 different
> field locations.  We'd like to look for drivers of the GxE using a
> regression approach.  We started running simple lm models to ask how
> various climate factors affected the additive effect across the 10
> experiments - things like latitude, temperature, rainfall.   A friend
> mentioned that it could be interesting to think about this as a
> meta-analysis problem, and include our error estimates when looking for
> covariates/moderators that drive the GxE.  The suggestion seems a good one
> given we have excellent data about the uncertainty of the effects.  We've
> just started looking at some basic analyses using rma.
>
> My initial thought is that we would see fewer significant results when
> taking into account the uncertainty in the additive effect estimates.
> However, we actually see the opposite.  In every case our rma models have
> more significant covariates/moderators than simple linear models.  I'm
> surprised by this and am trying to understand why this might be so.
>
> Any thoughts or ideas - I have a feeling I'm missing something simple...
> Tom


