[R-meta] multilevel models and bias assessment

Lukasz Stasielowicz lukaszstasielowicz at uni-osnabrueck.de
Fri Dec 31 14:10:56 CET 2021


Dear Catia,

one could conduct a modified Egger's regression test by accounting for 
the dependency (e.g., StudyID/EffectID and the corresponding 
variance-covariance matrix) and using a precision estimate as the 
moderator variable, e.g.


library(metafor)  # rma.mv() is provided by the metafor package

model <- rma.mv(Effects, Vmatrix, mods = ~ Precision,
                random = ~ 1 | StudyID/EffectID, data = data)
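As a self-contained illustration (not based on your data), the same idea can be tried on the multilevel example dataset that ships with metafor. Note that treating the sampling errors as independent (a diagonal V, i.e., just the vi column) is an assumption made here for the sketch:

```r
# Hedged sketch: Egger-type asymmetry test in a three-level model,
# using the multilevel dataset bundled with metafor.
# Assumption: sampling errors are independent (diagonal V via vi).
library(metafor)

dat <- dat.konstantopoulos2011
dat$sei <- sqrt(dat$vi)  # standard error as the precision moderator

egger <- rma.mv(yi, vi, mods = ~ sei,
                random = ~ 1 | district/school, data = dat)
summary(egger)  # the test of the 'sei' slope is the asymmetry test
```

The coefficient (and p-value) for `sei` in the output plays the role of the Egger test statistic, now with the nesting of effects within clusters taken into account.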


Please note that the choice of the precision estimate depends on the 
effect size (e.g., r, d): "A complication with Egger's regression is 
that for certain effect size metrics, the standard error is naturally 
correlated with the effect size estimate even in the absence of 
selective reporting or other sources of asymmetry. Different variants 
of Egger's regression have been developed to reduce the correlation by 
using alternative measures of precision, specifically for log odds 
ratios (Macaskill, Walter, & Irwig, 2001; Moreno et al., 2009; Peters 
et al., 2006), raw proportions (Hunter et al., 2014), hazard ratios 
(Debray, Moons, & Riley, 2018), and standardized mean differences 
(Pustejovsky & Rodgers, 2019)."
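For standardized mean differences, for example, the idea in Pustejovsky & Rodgers (2019) is to replace the standard error with a precision measure that does not depend on the effect estimate itself. A hedged sketch (n1i and n2i are assumed to be the group sample size columns in your data):

```r
# Hedged sketch: Egger-type moderator for standardized mean differences.
# Following Pustejovsky & Rodgers (2019), use a precision measure that
# is free of the effect estimate: sqrt(1/n1 + 1/n2) rather than the
# usual SE of d (which contains a d^2 term).
# Assumption: n1i and n2i are the group sample sizes in your dataset.
data$sei_mod <- sqrt(1/data$n1i + 1/data$n2i)

model <- rma.mv(Effects, Vmatrix, mods = ~ sei_mod,
                random = ~ 1 | StudyID/EffectID, data = data)
```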

If you are using correlation coefficients, then see this reply from Wolfgang:
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2020-May/002086.html

For recommendations regarding other effect sizes, you can consult the 
references in the cited article:
Rodgers, M. A., & Pustejovsky, J. E. (2021). Evaluating meta-analytic 
methods to detect selective reporting in the presence of dependent 
effect sizes. Psychological Methods, 26(2), 141–160. 
https://doi.org/10.1037/met0000300


Best wishes,
Lukasz
-- 
Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Seminarstraße 20
49074 Osnabrück (Germany)

On 28.12.2021 at 12:00, r-sig-meta-analysis-request using r-project.org wrote:
> Date: Mon, 27 Dec 2021 19:11:44 +0000
> From: Cátia Ferreira De Oliveira <cmfo500 using york.ac.uk>
> To: R meta <r-sig-meta-analysis using r-project.org>
> Subject: [R-meta] multilevel models and bias assessment
> 
> Dear all,
> 
> I hope you are well.
> Given that we should not conduct Egger's regression on effect sizes with
> dependency, would it be more adequate to aggregate all effect sizes coming
> from the same study or is it OK to just combine effect sizes when they come
> from the same participants? I know the latter ignores the nested nature of
> the data, so I just wanted to check whether it is adequate to do so just
> for bias assessment.
> 
> Best wishes,
> 
> Catia
>


