[R-meta] effect size estimates regardless of direction

Daniel Noble UNSW daniel.noble at unsw.edu.au
Thu Jun 14 01:16:30 CEST 2018

Hi Dave,

> Nice paper in Biological Reviews, Daniel. The types of analyses I am
> doing are very similar, so that will be a good reference.
> In general, I am still confused about how to obtain information other than
> effect magnitudes from the analyze-then-transform method, because the
> models used for the approach use the raw means as the response.  So, other
> information,  such as variance estimation and the importance of various
> moderators in explaining variance, can not be obtained from the model
> outputs.

Yep, there are still some limitations to what can be done with this
approach at the moment. I am a little confused with this point though as
the approach is not a "modelling" approach *per se*. It simply transforms
means that have been estimated from a model that assumes a normal
probability distribution to what they would be if they were to come from a
folded normal distribution. Arguably, if you hypothesized that certain
categorical moderators are important in explaining heterogeneity, then you
would want to estimate the overall mean estimate for each level regardless
of how much variance they explain. But I do see your point: one may be
interested in understanding the variance explained assuming a folded normal.
I'm not aware of an easy solution to this at present.
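For concreteness, here is a minimal base-R sketch of that transformation (the function name mu.fnorm follows the Rejoinder paper; the example numbers are made up):

```r
# Mean of the folded normal, i.e. E|X| where X ~ N(mu, sigma^2).
# This mirrors the mu.fnorm function in Morrissey's (2016) Rejoinder:
# applied to a model-estimated mean (mu) and total SD (sigma), it gives
# the expected effect *magnitude* under that model.
mu.fnorm <- function(mu, sigma) {
  sigma * sqrt(2 / pi) * exp(-mu^2 / (2 * sigma^2)) +
    mu * (1 - 2 * pnorm(-mu / sigma))
}

# Example: a meta-analytic mean of 0 with total SD 1 still implies an
# expected magnitude of sqrt(2/pi) ~= 0.80, not 0.
mu.fnorm(0, 1)
```

Note the transform only needs the estimated mean and the total SD, which is why it sits on top of an ordinary (normal) model rather than being a model itself.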

> Wolfgang, with your suggestion to apply a folded normal distribution to
> the means and obtaining profile likelihood CI's, I presume you were
> envisioning the absolute effect sizes being used as the response (yi) in
> the models? That would be more representative of Morrissey's
> "transform-then-analyse" method, then? Can a folded normal distribution be
> specified in rma.mv models?
> And to follow up again on previous points:
> Regarding question 1:  To deal with estimation of variance for moderator
> levels, I am not sure how to explicitly model the variance.  Doing a subset
> analysis sounds straightforward enough, but I am keen to explore the other
> option as well.   Any tips, Daniel?

I'm not sure how to do this in metafor off-hand, but with MCMCglmm it's
fairly easy by modifying the "rcov" argument. If you look at the code for
the paper I sent, it should show how to do this (see links below).
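Roughly, the kind of modification I mean looks like this (a sketch only: dat, yi, vi, moderator and study are placeholder names, not the objects from the paper's actual code, and priors are omitted):

```r
library(MCMCglmm)

# 'dat' is a hypothetical data frame of effect sizes (yi), their known
# sampling variances (vi), a categorical moderator, and a study ID.
# rcov = ~idh(moderator):units estimates a separate residual variance
# for each level of the moderator, rather than one homogeneous variance.
m <- MCMCglmm(yi ~ moderator - 1,
              random  = ~study,
              rcov    = ~idh(moderator):units,
              mev     = dat$vi,   # known sampling variances
              data    = dat,
              verbose = FALSE)
```

In practice you will also want explicit priors on these variance components; the defaults are often not appropriate for an idh() residual structure.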

> Regarding question 2:  I am only dealing with categorical moderators.  I
> was looking at the importance of various moderators in explaining effect
> sizes, using anova type analyses described here
> <http://www.metafor-project.org/doku.php/analyses:berkey1995>.  The
> absolute effects among different moderator levels is what I am truly
> interested in, so perhaps this analysis isn't necessary.

To me, it isn't. But I suppose it depends on your specific question. I
would just estimate the effect sizes and the credible intervals.
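To make that posterior step concrete in base R alone: the "posterior" draws below are simulated stand-ins for real MCMC output (e.g. a column of m$Sol from MCMCglmm for the mean, plus the square root of the summed variance components for the total SD):

```r
# Mean of the folded normal (as in Morrissey's 2016 Rejoinder).
mu.fnorm <- function(mu, sigma) {
  sigma * sqrt(2 / pi) * exp(-mu^2 / (2 * sigma^2)) +
    mu * (1 - 2 * pnorm(-mu / sigma))
}

set.seed(42)
# Simulated stand-ins for posterior draws of the meta-analytic mean and
# the total SD; with a fitted model you would use the real chains here.
post.mu    <- rnorm(1000, mean = 0.3, sd = 0.1)
post.sigma <- sqrt(rchisq(1000, df = 50) / 50)

# Apply the transform draw by draw, then summarise the transformed chain.
post.mag <- mu.fnorm(post.mu, post.sigma)
estimate <- mean(post.mag)
ci <- quantile(post.mag, c(0.025, 0.975))  # 95% credible interval
```

Because the whole chain is transformed before summarising, the credible interval comes out of the same machinery as the point estimate, with no extra distributional assumptions.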

> 3. I noticed that you used the MCMCglmm package to run your Bayesian
> models.  I have not used that package but am happy to have a look.
> I tried to find the code for Noble et al. 2018, to no avail.  If it is
> open access, let me know where I can find it. That would serve as a nice
> guide to navigate that package.

Very sorry, I should have given the DOI and link in my last email. It
should be in the paper itself, but you can access the code here:
DOI 10.17605/OSF.IO/ZBGSS, or the link https://osf.io/zbgss/

Hope this helps,


> On Wed, Jun 13, 2018 at 12:33 AM, Daniel Noble <daniel.wa.noble using gmail.com>
> wrote:
>> Hi Dave,
>> Some thoughts from me below, but I am sure Wolfgang and others can chime
>> in with better input.
>> Thanks to you both for the helpful information, and sorry for the delay
>>> in responding. To remind everyone, I wrote a couple weeks ago seeking
>>> advice on how to estimate the mean magnitude of effect sizes (i.e. the
>>> absolute value of effects without considering direction), rather than
>>> estimating true means as most models are intended for.
>>> Daniel suggested a Bayesian approach: the analyze-then-transform
>>> method proposed by Morrissey (2016). This approach does seem to be just
>>> what I need for estimating mean effect magnitudes without generating
>>> upward biases.
>>> A few follow up questions:
>>> 1. I am using multi-level mixed models to estimate mean effect sizes
>>> (using rma.mv in metafor).  Any reason why the function for the mean of
>>> a folded distribution (the mu.fnorm function in the Rejoinder paper) could
>>> not be applied to these more complex models?
>> No, this won't be a problem. One can apply the folded normal to estimate
>> the mean magnitude of effects for various levels of a categorical variable
>> after accounting for study, phylogeny and species (etc) in a multi-level
>> context (I did this recently, see Noble et al. 2018. Biological Reviews,
>> 93, 72–97; code for applying folded normal etc. is available for the paper,
>> if that is at all helpful). I think the thing to be careful about is the
>> estimation of variance for each level of the categorical moderator. The
>> folded normal will be sensitive to total variance, and so, assuming
>> homogeneous variance in each level of a categorical moderator may not be a
>> realistic assumption and will likely lead to some odd estimates at times.
>> You can: 1) explicitly model heterogeneous variance in each level of the
>> categorical moderator or 2) simply do a subset analysis to model each level
>> of a categorical moderator separately (i.e., separate models) and then apply
>> the folded normal to the overall mean estimate of the model with the
>> subsetted data in each group / level.
>>> 2. I am also testing the influence of various moderators on effect sizes
>>> using likelihood ratio tests (seeing whether dropping certain factors
>>> reduces goodness of fit). I can not think of how the analyze-then-transform
>>> method could be applicable here.  Have you ever done these types of
>>> analyses with magnitudes?
>> I'm not entirely clear on the question here. Do you mean categorical and
>> continuous moderators? You would be correct, applying the folded normal for
>> continuous moderators is pretty tricky at times. Shinichi and I are trying
>> to sort out exactly what this means at the moment – it's kind of a mind
>> bender thinking about this problem (at least for me). Presently, as far as
>> I understand it, you can only really do this with different levels of
>> categorical predictors. Although I may be wrong, so others should feel free to
>> chime in to correct me!
>>> 3. do you have recommendations for estimating confidence intervals about
>>> the mean magnitudes?
>> This was why I suggested to use a Bayesian approach as it becomes very
>> easy to estimate credible intervals on these estimates as you can apply the
>> folded normal function to the entire posterior distribution. Although this
>> can probably also be done with a bootstrapping method using metafor.
>> Wolfgang will probably have some good suggestions here on what would work
>> with metafor.
>>> On Mon, May 21, 2018 at 10:39 PM, Daniel Noble <
>>> daniel.wa.noble using gmail.com> wrote:
>>>> Hi Dave and Wolfgang,
>>>> If you don't mind going Bayesian, you can try the "analyse-then-transform"
>>>> option. This is done by estimating the overall mean and applying the
>>>> folded-normal transformation to it. Check out Mike's two papers.
>>>> Morrissey, M.B. (2016). Meta-analysis of magnitudes, differences and
>>>> variation in evolutionary parameters. Journal of Evolutionary Biology
>>>> 29, 1882–1904.
>>>> Morrissey, M.B. (2016). Rejoinder: further considerations for
>>>> meta-analysis of transformed quantities such as absolute values. Journal
>>>> of Evolutionary Biology 29, 1922–1931.
>>>> The second one has some R code that can help.
>>>> Cheers,
>>>> Dan
>>>> –––
>>>> Dr. Daniel Noble | ARC DECRA Fellow
>>>> Level 5 West, Biological Sciences Building (E26)
>>>> Ecology & Evolution Research Centre (E&ERC)
>>>> School of Biological, Earth and Environmental Sciences (BEES)
>>>> *The University of New South Wales*
>>>> Sydney, NSW 2052
>>>> T : +61 430 290 053
>>>> E : daniel.noble using unsw.edu.au
>>>> W: www.nobledan.com
>>>> Github: https://github.com/daniel1noble
>>>> On Tue, May 22, 2018 at 7:03 AM, Viechtbauer, Wolfgang (SP) <
>>>> wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>>>>> Hi Dave,
>>>>> You cannot just take absolute values and proceed with standard
>>>>> methods. As you noted, by taking absolute values, you end up with folded
>>>>> normal distributions. My approach would be to use ML estimation where the
>>>>> absolute values have folded normal distributions and then compute a profile
>>>>> likelihood confidence interval for the mean parameter, since I suspect a
>>>>> Wald-type CI would perform poorly.
>>>>> Best,
>>>>> Wolfgang
>>>>> -----Original Message-----
>>>>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bo
>>>>> unces using r-project.org] On Behalf Of Dave Daversa
>>>>> Sent: Monday, 21 May, 2018 13:47
>>>>> To: r-sig-meta-analysis using r-project.org
>>>>> Subject: [R-meta] effect size estimates regardless of direction
>>>>> ATTACHMENT(S) REMOVED: forest.plot.example.pdf |
>>>>> dummy.forest.plot.code.R
>>>>> Hi all,
>>>>> My question regards how to estimate overall magnitudes of effect sizes
>>>>> from compiled studies regardless of the direction.  I have attached a
>>>>> figure to illustrate, which I developed using made-up data and the attached
>>>>> code.
>>>>> In the figure five studies have significantly positive effect sizes,
>>>>> while 5 have significantly negative effect sizes.  Each have equal
>>>>> variances.  So, the overall estimated mean effect size from a random
>>>>> effects model is 0.   However, what if we simply want to estimate the mean
>>>>> effect size regardless of direction (i.e. the average magnitude of
>>>>> effects)?  In this example, that value would be 9.58 (CI: 6.48, 12.67),
>>>>> correct?
>>>>> I have heard that taking absolute values of effect sizes generates an
>>>>> upward bias in estimates of the standardized mean difference.  Also, this
>>>>> would create a folded normal distribution, which would violate assumptions
>>>>> of the model and would require an alternative method of estimating
>>>>> confidence intervals.  What would be your approach to setting up a model
>>>>> for answering the question of how much the overall magnitude of responses
>>>>> is?
>>>>> I suspect this question has come up in this email group in the past.
>>>>> If so, my apologies for the redundancy, and please send me any reference
>>>>> that may be helpful.
>>>>> Dave Daversa
>>>>> _______________________________________________
>>>>> R-sig-meta-analysis mailing list
>>>>> R-sig-meta-analysis using r-project.org
>>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>>> --
>>> ****************************************************************
>>> *David Daversa, PhD*
>>> *Postdoctoral Researcher*
>>> *Institute for Integrative Biology, University of
>>> Liverpoolddaversa using gmail.com <ddaversa using gmail.com>D.Daversa using liv.ac.uk
>>> <ddaversa using wustl.edu>*https://www.liverpool.ac.uk/in
>>> tegrative-biology/staff/david-daversa/
>>> <http://www.zoo.cam.ac.uk/zoostaff/manica/drdaversa.htm>
> --
> ****************************************************************
> *David Daversa, PhD*
> *Postdoctoral Researcher*
> *Institute for Integrative Biology, University of
> Liverpoolddaversa using gmail.com <ddaversa using gmail.com>D.Daversa using liv.ac.uk
> <ddaversa using wustl.edu>*https://www.liverpool.ac.uk/
> integrative-biology/staff/david-daversa/
> <http://www.zoo.cam.ac.uk/zoostaff/manica/drdaversa.htm>

