[R-meta] Some questions: effect sizes, heterogeneity and Egger's test
Michael Dewey
Thu Sep 16 12:44:21 CEST 2021
For my money, interpreting asymmetry depends on what the scientists
involved regard as plausible mechanisms. Actually, that sentence is true
for almost any replacement for the word asymmetry. It may be that in
some fields small studies do produce effects nearer the null, even if we
find it hard to think of examples.
Michael
On 15/09/2021 15:03, James Pustejovsky wrote:
> Please keep the mailing list cc'd.
>
> Regarding reporting the limit estimate, let me first comment in general.
> Typically, I think it's a good idea to report it (along with its confidence
> interval). Stanley & Doucouliagos have proposed interpreting this as an
> estimate of the population mean effect size after adjusting for small-study
> effects, so it does have some methodological precedent, although it is only
> a very rough bias adjustment. Beyond that, it's a helpful way to
> characterize the degree of asymmetry in a funnel plot: the greater the
> asymmetry, the more the limit estimate will differ from the unadjusted
> estimate.
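This relationship between asymmetry and the limit estimate can be sketched in base R with hypothetical simulated data (not the poster's analysis): regressing effect sizes on their standard errors via weighted least squares gives a slope that captures funnel-plot asymmetry and an intercept that plays the role of the limit estimate.

```r
# Hypothetical simulated meta-analysis: true mean effect 0.3, no bias.
set.seed(1)
k   <- 25
sei <- runif(k, 0.05, 0.50)        # standard errors of the k studies
yi  <- 0.3 + rnorm(k, sd = sei)    # observed effect sizes

# Egger-type weighted regression of effect size on standard error.
fit <- lm(yi ~ sei, weights = 1 / sei^2)

coef(fit)["(Intercept)"]           # limit estimate: predicted effect as sei -> 0
summary(fit)$coefficients["sei", c("Estimate", "Pr(>|t|)")]  # asymmetry slope
```

The farther the slope is from zero, the more the intercept (limit estimate) will differ from an ordinary weighted mean of the effects.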
>
> Now to consider the specifics of your results: If your effects are coded so
> that positive values are consistent with the theoretically expected effect
> (is that the case?), then these results show that the pattern of asymmetry
> is not statistically distinguishable from the null. Furthermore, the direction
> of asymmetry is the opposite of what one would expect under most theories
> of small-study effects or publication bias. If the asymmetry is driven by
> publication bias, one would expect smaller studies to have larger effects
> than bigger studies. But the estimate from Egger's test indicates the
> opposite. And if I'm interpreting things correctly, then this means that
> your limit estimate should be _larger_ than your unadjusted estimate. Given
> this, it's perhaps less important to report the limit estimate. I would be
> very curious to hear others' perspectives on this, too.
>
> James
>
>
> On Tue, Sep 14, 2021 at 3:54 AM Teresa Luther <Teresa.Luther using gmx.de> wrote:
>
>> Hi James,
>>
>> Thanks a lot for the detailed reply and for providing the helpful article
>> on heterogeneity - I will have a look at it, and should any questions
>> remain, I will write here again.
>> Concerning the regression intercept: Yes, my output indeed displays the
>> Limit estimate.
>>
>> This is the output I get from Egger’s test:
>> Test for Funnel Plot Asymmetry: z = -0.8936, p = 0.3716
>> Limit Estimate (as sei -> 0): b = 1.0497 (CI: -1.1704, 3.2699)
>>
>> I am not quite sure how informative the intercept 1.050 actually is, as I
>> have not seen it reported in other meta-analyses. Do you think it might
>> also be sufficient to just report the z and p values?
>>
>> Thank you so much for your help!!
>>
>> Best,
>> Teresa
>>
>> Am 13.09.2021 um 21:41 schrieb James Pustejovsky <jepusto using gmail.com>:
>>
>> Hi Teresa,
>>
>> Responses below.
>>
>> James
>>
>> On Mon, Sep 13, 2021 at 9:54 AM Teresa Luther <Teresa.Luther using gmx.de>
>> wrote:
>>
>>> Dear All,
>>>
>>> I conducted several meta-analyses. I used the R package "metafor" as well
>>> as the free open-source software OpenMetaAnalyst for conducting the
>>> analyses.
>>>
>>> Now, I am writing up the results and face some uncertainty with regard to
>>> the statistics.
>>>
>>> 1) I have a p-value of .000 for several effect sizes (Hedges' g). Also
>>> for Higgins' I^2 (heterogeneity) I sometimes get this value for p.
>>> I would tend to write p < .001 and possibly footnote the .000. Is this
>>> possible or what would you suggest in such a case? Simply report it as p <
>>> .001?
>>>
>>
>> Yes, reporting as p < .001 is sensible.
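As a side note, base R can produce this reporting style automatically via `format.pval()`, whose `eps` argument sets the threshold below which p-values print as an inequality rather than as .000:

```r
# Print tiny p-values as "<0.001" rather than "0.000".
p <- c(0.000004, 0.0312)
format.pval(p, eps = 0.001, digits = 2)
```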
>>
>>
>>>
>>> 2) As a measure of heterogeneity, I interpret Higgins' I^2. I interpret
>>> values above 25% as low heterogeneity (following Higgins et al., 2003).
>>> In some of the analyses, I get a value of 0%. I would now have to write
>>> that there is no heterogeneity. However, I consider this value almost
>>> impossible, since there is always a certain variance between the studies.
>>> Since I had assumed heterogeneity between the studies, I had also performed
>>> the calculations on the basis of a random-effects model.
>>> I am also not sure whether values of 25, 50 and 75 % have to be
>>> considered as cut-off values or whether values in between can also be
>>> interpreted as for example "small to medium" for I^2=45 %.
>>> This question also arises for me with Hedges' g.
>>>
>>
>> I would recommend reporting and focusing your interpretation on the
>> estimate of tau, the between-study heterogeneity parameter, rather than on
>> I^2. The interpretation of I^2 depends on the distribution of sample sizes
>> in your primary studies, so it doesn't make much sense at all to use
>> decontextualized benchmarks based on I^2. For a more detailed explanation
>> of this reasoning and some additional suggestions about how to interpret
>> heterogeneity, check out the following article:
>>
>> Borenstein, M., Higgins, J. P., Hedges, L. V., & Rothstein, H. R. (2017).
>> Basics of meta-analysis: I^2 is not an absolute measure of heterogeneity.
>> Research Synthesis Methods, 8(1), 5-18.
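Regarding the estimate of exactly 0%: one reason this can happen is that the usual moment-based estimator of tau^2 is truncated at zero, which forces I^2 to 0% whenever the observed spread is no larger than what sampling error alone would produce. A self-contained base-R sketch of the DerSimonian-Laird estimator (not metafor's default, which is REML, but the truncation behaves the same way) makes this visible:

```r
# DerSimonian-Laird estimator of tau^2, with the zero truncation that
# yields I^2 = 0% whenever Q falls below its degrees of freedom.
dl_tau2 <- function(yi, vi) {
  wi    <- 1 / vi
  theta <- sum(wi * yi) / sum(wi)        # fixed-effect pooled estimate
  Q     <- sum(wi * (yi - theta)^2)      # Cochran's Q statistic
  C     <- sum(wi) - sum(wi^2) / sum(wi)
  max(0, (Q - (length(yi) - 1)) / C)     # truncated at zero
}

# Homogeneous toy data: observed spread is smaller than sampling error
# alone, so the estimate is truncated to exactly zero.
yi <- c(0.30, 0.31, 0.29, 0.30)
vi <- c(0.04, 0.05, 0.04, 0.06)
dl_tau2(yi, vi)  # 0
```

So a 0% value does not claim the studies are literally identical; it only says the data provide no evidence of variance beyond sampling error.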
>>
>>
>>>
>>> 3) To investigate a possible publication bias, I ran Egger's regression
>>> test in R for some of the analyses. I would like to report the regression
>>> intercept as well and assume that the intercept is the “b“ (provided
>>> together with CI) I get in the output. Would it be correct to report this
>>> value as beta^ with index 1? In the literature I found that most
>>> meta-analyses report only the p-value for Egger's test; however, the
>>> recommendation is made to report the intercept as well. Now I would like to
>>> report this regression intercept correctly.
>>>
>> I don't think there is a standard symbol or notation for the intercept
>> from Egger's regression test. The more important thing is to give an
>> accurate description of the coefficient, as the estimated average effect
>> size in a study with zero sampling error, based on a model that includes
>> the standard error as a linear predictor. In recent versions of metafor,
>> the regtest() function reports this estimate and describes it as the "Limit
>> Estimate (as sei -> 0)".
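For reference, a minimal metafor call that produces this kind of output, using the package's bundled BCG vaccine dataset as a stand-in for the actual data:

```r
library(metafor)

# Fit a random-effects model to the bundled BCG vaccine dataset.
dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
              ci = cpos, di = cneg, data = dat.bcg)
res <- rma(yi, vi, data = dat)

# Egger-type regression test; recent metafor versions print the
# "Limit Estimate (as sei -> 0)" line along with z and p.
regtest(res, predictor = "sei")
```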
>>
>>
>>
>
>
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis using r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>
--
Michael
http://www.dewey.myzen.co.uk/home.html