[R-meta] Meta-analysis of meta-analyses
Chris Theres
Thu Feb 3 15:35:56 CET 2022
Dear all,
Let me add one more consideration to this intriguing topic.
Another argument for generally favoring Gerta’s suggestion of a
first-order meta-analysis of the union set of studies is reporting standards.
Inadequate reporting standards can already be a problem in first-order
meta-analysis. I’d guess most of us have at some point contacted authors
for additional or clarifying information. This issue can become even more
complex in second-order meta-analysis (SOMA) for several reasons.
First, authors may have applied different artifact corrections in their
meta-analyses. Take the case of a simple attenuation correction: some
meta-analyses may have corrected for measurement error at the study level,
some may have used an artifact-distribution approach, and some may not have
corrected at all. Ideally, every included meta-analysis would then report
all coding decisions, the original (coded) effect sizes as well as the
corrected ones or, even better, share its data and code. However, this is
frequently not the case, making it hard to replicate the analysis without
further information. At the level of a SOMA, this can leave you in limbo as
to what you are actually synthesizing. Second, if primary studies yield
several effect sizes for the same relationship, authors may decide more or
less arbitrarily which effect to include, or they may compute a (possibly
weighted) average of the dependent effect sizes. Again, if these decisions
are not clearly conveyed, it can be hard to replicate the results. Last,
there is always the possibility of transcription or computational errors,
not only in the SOMA itself but also in every first-order meta-analysis it
builds on.
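To make the first point concrete, here is a minimal R sketch (the studies,
correlations, and reliabilities are entirely made up) of three common
handling choices applied to the same coded correlations: no correction, a
study-level attenuation correction in the spirit of Hunter and Schmidt's
individual corrections, and a simple artifact-distribution correction.

# Hypothetical coded data: observed correlations and reliabilities
dat <- data.frame(
  study = paste0("S", 1:4),
  ri  = c(0.30, 0.25, 0.40, 0.35),  # observed correlations
  ni  = c(120, 80, 200, 150),       # sample sizes
  rxx = c(0.85, 0.70, 0.90, 0.80),  # reliability of X
  ryy = c(0.80, 0.75, 0.88, 0.82)   # reliability of Y
)

# (a) No correction: effect sizes enter the analysis as coded
dat$r_none <- dat$ri

# (b) Study-level attenuation correction: divide each r by the square
#     root of the product of the two reliabilities
dat$r_study <- dat$ri / sqrt(dat$rxx * dat$ryy)

# (c) Artifact-distribution correction: apply the mean attenuation
#     factor to every study (common when reliabilities are reported
#     only sporadically)
A <- mean(sqrt(dat$rxx * dat$ryy))
dat$r_artdist <- dat$ri / A

round(dat[, c("r_none", "r_study", "r_artdist")], 3)

The same four coded correlations yield three different sets of effect
sizes. Unless a first-order meta-analysis states which of these (and which
reliabilities) it used, a SOMA cannot tell what it is actually pooling.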
In sum, unless all included meta-analyses clearly convey their coding
decisions, their transformation procedures, and the (possibly corrected)
effect sizes they included, a SOMA might not yield a clearer picture of the
phenomenon but instead introduce additional, “unnecessary” uncertainty into
the analysis.
The relevance of these points will certainly vary across research fields,
and the issues may be more prevalent in correlational research, yet I
thought they were worth mentioning.
Best,
Christian
--
Dr. Christian Theres
Saarland University
Chair of Management Information Systems
Campus C3.1
66123 Saarbruecken
GERMANY
----------------------------------------------------------------------
Message: 1
Date: Wed, 2 Feb 2022 07:54:15 -0600
From: James Pustejovsky <jepusto using gmail.com>
To: "Dr. Gerta Rücker" <ruecker using imbi.uni-freiburg.de>
Cc: "Viechtbauer, Wolfgang (SP)" <wolfgang.viechtbauer using maastrichtuniversity.nl>,
    Gladys Barragan-Jason <gladou86 using gmail.com>,
    R meta <r-sig-meta-analysis using r-project.org>
Subject: Re: [R-meta] Meta-analysis of meta-analyses
I wonder about exactly this question for most of the second-order
meta-analyses that I have seen. Beyond issues of statistical dependence,
SOMAs tend to further conceal heterogeneity of effects. Averaging together
averages makes it that much harder to understand the extent to which effect
sizes vary across studies.
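A small simulated illustration with metafor (entirely made-up numbers):

library(metafor)
set.seed(42)

# Ten hypothetical studies, split across two first-order meta-analyses,
# with substantial true heterogeneity in the effect sizes
k  <- 10
yi <- rnorm(k, mean = 0.4, sd = 0.3)
vi <- rep(0.02, k)
ma <- rep(c("MA1", "MA2"), each = k / 2)

# First-order analysis of all individual studies: heterogeneity is visible
res_first <- rma(yi, vi)
res_first$tau2

# "Second-order" analysis: pool within each meta-analysis first, then
# meta-analyze the two averages
pooled <- lapply(split(data.frame(yi, vi), ma), function(d) rma(d$yi, d$vi))
yi2 <- sapply(pooled, coef)
vi2 <- sapply(pooled, function(x) x$se^2)
res_second <- rma(yi2, vi2)
res_second$tau2  # reflects only between-MA variation; the within-MA
                 # spread has been averaged away

The tau^2 from the second analysis says very little about how much the
underlying effect sizes actually vary across studies.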
> On Feb 2, 2022, at 4:37 AM, Dr. Gerta Rücker <ruecker using imbi.uni-freiburg.de> wrote:
>
> But you could easily identify the overlapping studies, couldn't you? Why
> not simply do a first-order meta-analysis of the union set of all studies
> found?
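In code, that union-set approach could look something like the following
sketch (hypothetical per-study tables, keyed by a study label):

library(metafor)

# Hypothetical per-study tables extracted from two published meta-analyses,
# keyed by a study label (e.g., "Author (Year)" or a DOI)
ma1 <- data.frame(study = c("A2010", "B2012", "C2015"),
                  yi = c(0.20, 0.35, 0.28), vi = c(0.02, 0.03, 0.01))
ma2 <- data.frame(study = c("B2012", "C2015", "D2018"),
                  yi = c(0.34, 0.28, 0.15), vi = c(0.03, 0.01, 0.02))

# Union set: combine the tables and drop the overlapping studies
# (note that B2012 was coded slightly differently in the two tables,
#  exactly the kind of discrepancy that complicates synthesis; here we
#  simply keep the first occurrence)
combined  <- rbind(ma1, ma2)
union_set <- combined[!duplicated(combined$study), ]

# One first-order random-effects meta-analysis of all unique studies
rma(yi, vi, data = union_set)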