[R-meta] Fixed vs Random Effects

Célia Sofia Moreira celiasofiamoreira at gmail.com
Thu Apr 12 00:59:58 CEST 2018


Dear all,

I need your help in deciding between fixed- and random-effects models. I
know that most of you review for top, well-respected journals that publish
meta-analyses, and so I will take your opinions very seriously. The
question is the following:

My "favourite" papers recommend the use of random effects when you want to
make inferences about the average effect to the entire population of
studies from which the included studies are assumed to be a random
selection (including "studies that have been conducted, that could have
been conducted, or that may be conducted in the future"). Others (Cochrane)
recommend the use of random-effect when samples/experiments/designs/...
have different features. All of them say that the choice should not be
decided on the basis of presence/absence of heterogeneity, and the
researcher should decide on the type of inference desired before examining
the data.

The papers included in 'my' meta-analysis have very different samples and
experimental features, as do the majority of studies in the social
sciences. Moreover, I consider it advantageous to make inferences about
the entire population of studies, rather than only about the set of
studies included in 'my' meta-analysis; it is a wider approach. Therefore,
I decided to fit random-effects models. In most cases, the results showed
only little heterogeneity (and thus the fixed-effect results are similar).
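
For concreteness, the comparison I ran looks roughly like the following
minimal sketch, with made-up effect sizes (yi) and sampling variances (vi)
standing in for 'our' data, using metafor's rma():

library(metafor)

# hypothetical effect sizes and sampling variances (not 'our' data)
dat <- data.frame(yi = c(0.20, 0.35, 0.12, 0.28, 0.05),
                  vi = c(0.040, 0.055, 0.050, 0.035, 0.070))

fe <- rma(yi, vi, data = dat, method = "FE")    # fixed-effect (common-effect) model
re <- rma(yi, vi, data = dat, method = "REML")  # random-effects model (REML estimate of tau^2)

summary(fe)
summary(re)   # compare the pooled estimates, their CIs, and Q / I^2 / tau^2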

Now, a co-author disagrees with my point of view and says that the
meta-analysis should be performed using fixed-effects models, his main
reasons being:
1) "larger studies should have more weight" (sample sizes range from 25 to
65),
2) "choosing a random-effects model introduces an error in each study",
3) "fixed effects provide narrower CI intervals and, as such, more precise
results" (the sketch after this list contrasts the weights and CIs of the
two models).

He also gave me a reference to an article published in the same journal to
which we are planning to submit 'our' meta-analysis, in which fixed-effect
models were preferred. The authors used the following argument:

"Studies on the effect of medications were combined using a fixed-effect
model (Borenstein et al., 2010). We expected the final model to include
only a small number of studies and estimation of random-effects models with
few studies has been shown to be unreliable (Guolo and Varin, 2017).
However, random-effects models were carried out in a sensitivity analysis."

I have confirmed that the results from the random- and fixed-effects
models are similar in most cases (differences usually <= .01; the
fixed-effect CIs are narrower, but the significance does not change), and
even where the difference is larger (= .04) there is no "small-study
effect" (i.e., small studies are not consistently more positive, or more
negative).
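
A check of that kind can be done along these lines (again a minimal sketch
with made-up data; regtest() and funnel() are metafor functions):

library(metafor)

# hypothetical effect sizes and sampling variances (not 'our' data)
dat <- data.frame(yi = c(0.20, 0.35, 0.12, 0.28, 0.05),
                  vi = c(0.040, 0.055, 0.050, 0.035, 0.070))

re <- rma(yi, vi, data = dat, method = "REML")

regtest(re)   # Egger-type regression test for funnel plot asymmetry
funnel(re)    # visual check: do small (high-variance) studies cluster on one side?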

What is your opinion of his arguments and of the argument used in that
paper (i.e., that estimation of fixed-effect models is more reliable than
estimation of random-effects models when there are only a few studies)?

Kind regards,
celia
