[R-meta] Fixed vs Random Effects

James Pustejovsky jepusto at gmail.com
Fri Apr 13 18:13:37 CEST 2018


Celia,

I very much agree with Michael's comments. If you are interested in making
inferences to the entire population, then a random effects model is an
absolute requirement. However, your choice of inference model does not
necessarily constrain how you weight the studies in the meta-analysis. As
Michael noted, the Henmi & Copas method uses FE weighting but estimates
standard errors and CIs based on random effects assumptions. The metafor
package makes it possible to do other variations on this as well (by
supplying user-defined weights via the weights argument, in addition to vi
or sei)--for instance, you could use equal weight for each study but still
draw inferences to the population of studies under a random effects model.
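
For instance, a minimal sketch (using metafor's built-in dat.bcg data
purely for illustration; the object names are mine):

  library(metafor)

  # log risk ratios and sampling variances from the BCG vaccine trials
  dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
                ci = cpos, di = cneg, data = dat.bcg)

  # usual random-effects model (inverse-variance RE weights)
  res_re <- rma(yi, vi, data = dat)

  # equal weight for every study, but inference still under the RE model
  res_eq <- rma(yi, vi, weights = rep(1, nrow(dat)), data = dat)

  # Henmi & Copas: FE weighting with a CI that allows for heterogeneity
  # (hc() expects a random-effects model fitted with method = "DL")
  res_hc <- hc(rma(yi, vi, data = dat, method = "DL"))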

As you noted, the random effects model does indeed take into account sample
size (or, more generally, the precision of the ES estimates) in the
weighting of studies. It's just that sample size is given relatively less
emphasis than with FE weights: the RE weight for study i is 1 / (v_i +
tau^2) rather than the FE weight 1 / v_i, so larger between-study variance
pushes the weights closer to equal. Under the assumptions of the RE model,
these RE weights are more efficient than the fixed-effect weights. However,
the RE model is based on the assumption that one has an unbiased sample of
studies from the population--if that were not the case, then the FE weights
might actually be more efficient. My understanding is that this is the main
rationale for the Henmi & Copas method--that if there is publication bias,
then FE weighting might give a better estimate of the overall population
average effect. But it sounds like you have investigated small-study
effects in your data already, and it doesn't seem to be a concern (though
perhaps your power to detect small-study effects might be pretty limited).
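
To see the difference concretely, metafor's weights() method reports each
study's weight (as a percentage) under a fitted model, so the two weighting
schemes can be put side by side--a sketch, reusing the dat object from
above:

  # FE weights are proportional to 1/vi; RE weights to 1/(vi + tau^2)
  w_fe <- weights(rma(yi, vi, data = dat, method = "FE"))
  w_re <- weights(rma(yi, vi, data = dat))
  round(cbind(FE = w_fe, RE = w_re), 1)
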
In any case, there are other methods for more directly investigating
publication bias--including the selection models implemented in the weightr
package (https://cran.r-project.org/web/packages/weightr/index.html) and
other packages (see the section of the Meta-analysis Task View titled
"unobserved studies": https://cran.r-project.org/web/views/MetaAnalysis.html).
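
For reference, weightr's core function is weightfunct(), which fits the
Vevea & Hedges style selection model; a minimal sketch (the steps argument
gives the p-value cutpoints that define the selection intervals):

  library(weightr)

  # selection model on the same effect sizes and sampling variances,
  # with a single one-tailed p-value cutpoint at .025
  weightfunct(effect = dat$yi, v = dat$vi, steps = c(.025, 1))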

All that said, I don't think it would be unreasonable to report the results
of both the RE and the FE models, as long as you're clear about the
inferences that can be drawn from each model. Like Michael, I would
probably go with reporting the RE results in the main text and the FE
results as sensitivity/supplementary materials.
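
If you do report both, something along these lines keeps the two sets of
results together for the supplement (a sketch, reusing the objects fitted
above):

  # pooled estimate, CI bounds, and p-value under each model
  fits <- list(RE = res_re, FE = rma(yi, vi, data = dat, method = "FE"))
  t(sapply(fits, function(f)
    c(est = f$b[1], ci.lb = f$ci.lb, ci.ub = f$ci.ub, pval = f$pval)))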

James


On Fri, Apr 13, 2018 at 7:00 AM, Célia Sofia Moreira
<celiasofiamoreira at gmail.com> wrote:

> Dear Professor Michael Dewey,
>
> Thank you very much for your helpful comments. I really appreciate your
> attention. I just didn't understand the meaning of "Point 1 is indeed
> true". We know that fixed-effects models give more weight to larger
> studies. But do you think this is correct? That is, do you think the
> distribution of weights under fixed-effects modelling is better/fairer
> than under random-effects modelling?
>
> Also, don't the weights in random-effects modelling also take sample size
> into account [although not as strongly (proportionally) as in
> fixed-effects modelling]?
>
> Kind regards,
>  celia
>
> 2018-04-12 13:48 GMT+01:00 Michael Dewey <lists at dewey.myzen.co.uk>:
>
> > Comments inline. Perhaps even more necessary than usual to stress that
> > these are personal views and others will differ.
> >
> > On 11/04/2018 23:59, Célia Sofia Moreira wrote:
> >
> >> Dear all,
> >>
> >> I need your help deciding between fixed- and random-effects models. I
> >> know that most of you are reviewers for top and respectable journals on
> >> meta-analysis, and so I will take your opinion very seriously. The
> >> question is the following:
> >>
> >> My "favourite" papers recommend the use of random effects when you want
> >> to make inferences about the average effect in the entire population of
> >> studies from which the included studies are assumed to be a random
> >> selection (including "studies that have been conducted, that could have
> >> been conducted, or that may be conducted in the future"). Others
> >> (Cochrane) recommend the use of random effects when
> >> samples/experiments/designs/... have different features. All of them
> >> say that the choice should not be decided on the basis of
> >> presence/absence of heterogeneity, and the researcher should decide on
> >> the type of inference desired before examining the data.
> >>
> >
> > That summary is what I believe too, and I think the last sentence is
> > especially important.
> >
> >
> >> Papers included in 'my' meta-analysis have very different
> >> samples/experimental features, as do the majority of studies in the
> >> social sciences. Moreover, I consider it advantageous to make
> >> inferences to the entire population, instead of making inferences only
> >> to the set of studies included in 'my' meta-analysis; it is a wider
> >> approach. Therefore, I decided to fit random-effects models. In most
> >> cases, the results showed only small heterogeneity (and thus the
> >> results for fixed effects are similar).
> >>
> >> Now, a co-author disagrees with my point of view and says that the
> >> meta-analysis should be performed using fixed-effects models because
> >> (his main reasons):
> >> 1) "larger studies should have more weight" (sample sizes range from
> >> 25 to 65),
> >> 2) "choosing a random-effects model introduces an error in each study",
> >> 3) "fixed effects provide narrower CI intervals and, as such, more
> >> precise results".
> >>
> >
> > Point 1 is indeed true. I do not understand point 2. Point 3 is also
> > true but misses the point as to whether that narrowness is appropriate
> > or not. If you want to choose a third option there is the method by
> > Henmi and Copas, which is available in metafor. This was designed for
> > situations of small-study bias and basically gives you the fixed-effect
> > summary (which deals with point 1) while making the CI wider.
> >
> >
> >> He also gave me a reference to an article published in the same
> >> journal to which we are planning to submit 'our' meta-analysis, in
> >> which fixed-effects were preferred. The authors used the following
> >> argument:
> >>
> >> "Studies on the effect of medications were combined using a fixed-effect
> >> model (Borenstein et al., 2010). We expected the final model to include
> >> only a small number of studies and estimation of random-effects models
> >> with
> >> few studies has been shown to be unreliable (Guolo and Varin, 2017).
> >> However, random-effects models were carried out in a sensitivity
> >> analysis."
> >>
> >
> > It is true that the estimate of tau^2 is quite imprecise but I would have
> > thought it more logical to do the analyses the other way round (random
> > primary, fixed sensitivity).
> >
> > There is also the issue of whether, in the face of extreme
> > heterogeneity, it makes sense to give any summary estimate at all. I
> > recently reviewed an article where they used random effects to combine
> > two estimates, one from each sex. Apart from the issue of whether you
> > can generalise to a population of other sexes, there were some pairs of
> > estimates which clearly looked different and where combining them
> > obscured rather than illuminated.
> >
> >
> >> I have confirmed that results from random- and fixed-effects models
> >> are similar in most cases (the difference is usually <= .01; the CI is
> >> narrower but the significance does not change), and even when the
> >> difference is larger (= .04) there is no "small-studies effect" (i.e.,
> >> small studies are not consistently more positive or negative).
> >>
> >> What is your opinion on his arguments and on the argument used in that
> >> paper (i.e., that estimation of fixed-effects models is more reliable
> >> than estimation of random-effects models when there are only a few
> >> studies)?
> >>
> >> Kind regards,
> >> celia
> >>
> > --
> > Michael
> > http://www.dewey.myzen.co.uk/home.html
> >
>
>



