[R-meta] Egger Sandwich Test & Correlated Hierarchical Effects (Plus) Model
James Pustejovsky
jepusto at gmail.com
Sun Feb 26 22:05:35 CET 2023
Hi Sicong,
Responses inline below.
James
On Thu, Feb 23, 2023 at 10:49 AM Sicong Liu via R-sig-meta-analysis <
r-sig-meta-analysis at r-project.org> wrote:
> Dear All,
>
> I would like to ask two questions, one about the correlated hierarchical
> effects (CHE) model and the other about the Egger Sandwich test used with CHE.
>
> * I am meta-analyzing a dataset consisting of ~400 effect sizes
> (converted from logOR to d) from 98 independent group comparisons, which is
> from 88 clinical trials. I have more independent group comparisons than
> trials because a few trials have separate reports for subgroups of their
> samples. For example, a trial may separately report outcomes from
> adolescents and adults in the sample. I am planning to use the correlated
> hierarchical effect (CHE) approach and wonder whether I should use the CHE
> (i.e., two levels such as ‘independent sample / ES’) or the CHE+ (three
> levels such as ‘trial / independent sample / ES’). Based on some
> preliminary results, it seems that the 2nd-level variance component turns
> out to be 0 in CHE+ models. In such a case, I wonder if I should use CHE
> instead of CHE+? If I can use CHE, a related question is: is it better to
> use ‘independent sample / ES’ or ‘trial / ES’?
>
It sounds like, when you fit the CHE+ model with random effects for trial /
sample / ES, the sample-level variance is estimated as zero. That suggests
you could drop the sample-level variance component with no loss of fit. If
the data include only a few trials with multiple independent samples, then
there is very little information with which to estimate the middle-level
variance component, which is further justification for using the simpler
working model with trial / ES.
One note about this: For purposes of specifying the variance-covariance
matrix for sampling errors (i.e., the V matrix in metafor), you should
still use the independent sample as the clustering variable (i.e., assuming
independent sampling errors for ES from different samples), even if you
drop the sample-level random effects from the model.
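As a concrete sketch (hypothetical column names: trial, sample, es_id, d for the effect size, and Vd for its sampling variance; r = 0.6 is an assumed working correlation, not a recommendation), the comparison of the two working models might look like:

```r
library(metafor)
library(clubSandwich)

# Working V matrix: sampling errors correlated within independent samples
# (assumed correlation r = 0.6), independent across samples. Note that the
# clustering variable here is sample, not trial, even if sample-level
# random effects are dropped from the model.
V <- impute_covariance_matrix(vi = dat$Vd, cluster = dat$sample, r = 0.6)

# CHE+ working model: random effects for trial / sample / ES.
che_plus <- rma.mv(yi = d, V = V, random = ~ 1 | trial / sample / es_id,
                   data = dat, sparse = TRUE)

# Simplified CHE working model with the sample-level component dropped.
che <- rma.mv(yi = d, V = V, random = ~ 1 | trial / es_id,
              data = dat, sparse = TRUE)

# Compare fits (same fixed effects, both fit by REML by default).
anova(che_plus, che)

# Cluster-robust inference on the simpler model, clustering at the trial level.
coef_test(che, vcov = "CR2", cluster = dat$trial)
```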
> * I am also planning to run the Egger Sandwich test with the above
> meta-regression models. Based on the example from Rodgers & Pustejovsky
> (2021; see reference below), is it better to use the standard-error version
> or the variance version of the precision term (i.e., the modified variance
> term named W in Pustejovsky & Rodgers, 2019)?
This is a tricky question and I am not aware of clear guidance or research
on which one to use when. My sense is that the relative power of the two
tests depends on the distribution of the standard errors from different
studies, so there might not be a general rule about which to use. In light
of all this, perhaps it would be best to just report results from both
versions?
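For instance, both versions can be run as CHE meta-regressions on the precision term, followed by cluster-robust t-tests of the slope. (Column names d, Vd, sample, trial, es_id and the working correlation r = 0.6 are hypothetical; for standardized mean differences, Pustejovsky & Rodgers (2019) propose a modified variance term, so Vd here stands in for whichever precision measure you compute.)

```r
library(metafor)
library(clubSandwich)

# Standard-error version of the precision term; dat$Vd is the variance version.
dat$sda <- sqrt(dat$Vd)

# Working V matrix, clustered by independent sample.
V <- impute_covariance_matrix(vi = dat$Vd, cluster = dat$sample, r = 0.6)

# Egger Sandwich regressions: effect size on each precision term.
egger_se  <- rma.mv(yi = d, V = V, mods = ~ sda,
                    random = ~ 1 | trial / es_id, data = dat, sparse = TRUE)
egger_var <- rma.mv(yi = d, V = V, mods = ~ Vd,
                    random = ~ 1 | trial / es_id, data = dat, sparse = TRUE)

# Cluster-robust t-tests of the slope on each precision term.
coef_test(egger_se,  vcov = "CR2", cluster = dat$trial)
coef_test(egger_var, vcov = "CR2", cluster = dat$trial)
```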
> In addition, Rodgers & Pustejovsky (2021) seem to recommend using a
> one-sided t-test for detecting selective reporting with the Egger Sandwich
> test. Does that make it wrong to use a two-sided t-test on the precision
> term in the Egger Sandwich test?
>
A one-sided test makes more sense because only certain patterns of asymmetry
in the funnel plot are consistent with selective reporting (publication
bias) based on statistical significance. If ES are coded so that average
effects would be expected to be greater than zero, then selective reporting
will tend to induce a positive association between ES imprecision (as
measured by the SE or sampling variance) and ES magnitude. In other words,
the observed studies with larger SEs will tend to have larger effect size
estimates. A positive slope from the Egger Sandwich test is indicative of
such an association. On the other hand, a negative slope from the Egger
Sandwich test would be quite difficult to interpret in terms of selective
reporting. Thus, using a one-sided test of the null beta <= 0 versus the
alternative beta > 0 is more clearly connected with the interpretation of
funnel plot asymmetry as an indicator of potential selective reporting.
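By way of illustration, the one-sided p-value can be derived from the cluster-robust t-statistic and its Satterthwaite degrees of freedom. (Here `egger_fit` is a hypothetical Egger Sandwich meta-regression with the precision term as its second coefficient; the degrees-of-freedom column name in the `coef_test` result may differ across clubSandwich versions.)

```r
library(clubSandwich)

# Two-sided robust t-test of the model coefficients.
res <- coef_test(egger_fit, vcov = "CR2", cluster = dat$trial)

# One-sided p-value for H0: beta <= 0 vs. H1: beta > 0 on the precision
# slope: upper tail of the t distribution with Satterthwaite df.
t_slope  <- res$tstat[2]
df_slope <- res$df[2]   # column may be named df_Satt in newer versions
p_one_sided <- pt(t_slope, df = df_slope, lower.tail = FALSE)
```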
> * Rodgers, M. A., & Pustejovsky, J. E. (2021). Evaluating
> meta-analytic methods to detect selective reporting in the presence of
> dependent effect sizes. Psychological Methods, 26(2), 141.
> * Pustejovsky, J. E., & Rodgers, M. A. (2019). Testing for funnel
> plot asymmetry of standardized mean differences. Research Synthesis
> Methods, 10(1), 57-71.
>
> Thank you very much!
>
> Best regards,
> Sicong (Zone)
>
> ------------------------------------------
> Sicong (Zone) Liu, Ph.D.
> Research Associate
> University of Pennsylvania
>
> 3620 Walnut Street,
> Philadelphia, PA 19104-6220
> Email: zone at upenn.edu
> ------------------------------------------
>
>
>
> _______________________________________________
> R-sig-meta-analysis mailing list -- R-sig-meta-analysis at r-project.org
> To manage your subscription to this mailing list, go to:
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>