[R-meta] Should we adjust for (standardized) baseline in meta-regression like ANCOVA?

James Pustejovsky jepusto at gmail.com
Sun Mar 3 19:24:57 CET 2024


Hi Zac,

This is an interesting question, and I think the intuition of trying to
improve precision (and potentially reduce bias) by controlling for group
differences is right on. However, I think there are some potential issues
with what you've sketched.

First, the "standardized mean pre-test" in your meta-regression model is
not a scale-free quantity, so its magnitude will depend on how the
continuous outcome scale is defined.
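
To make this concrete, here's a toy example (all numbers invented):
adding a constant to the outcome scale, as when two instruments differ
only in where they put zero, leaves the standardized mean change alone
but moves the standardized pre-test mean around arbitrarily.

pre_mean <- 50; post_mean <- 55; pre_sd <- 10

(post_mean - pre_mean) / pre_sd    # standardized change: 0.5
pre_mean / pre_sd                  # "standardized" pre-test mean: 5

shift <- 100                       # same instrument, re-zeroed scale
((post_mean + shift) - (pre_mean + shift)) / pre_sd  # still 0.5
(pre_mean + shift) / pre_sd        # now 15 -- not scale-free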

Second, even if all of the outcomes were on a common scale (in which case,
standardization would be unnecessary), there's an important difference
between typical applications of ANCOVA and the meta-regression you've
described. The former is estimated with individual-level observations and
so will account for individual differences between groups to the extent
that the pre-test is predictive of the post-test _at the individual level_.
The meta-regression analogue would estimate association between effect size
and pre-test score _at the level of the sample_ (i.e., using between-sample
covariation). There's no guarantee that the between-sample association will
be the same as (or as strong as) the individual-level association, so the
meta-regression won't necessarily work to improve precision or reduce bias
as it does with typical ANCOVA.
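
To see how the two associations can come apart, here's a small
self-contained simulation (all parameter values invented): the
individual-level slope of post-test on pre-test is fixed at 0.8, but the
sample-level mean pre-test carries its own, different association with
the outcome, so the between-sample regression recovers a much weaker
slope.

set.seed(1)
K <- 40; n <- 50                       # 40 samples of 50 people each
study_mean <- rnorm(K, 50, 5)          # sample-level mean pre-test
dat <- do.call(rbind, lapply(1:K, function(k) {
  pre  <- rnorm(n, study_mean[k], 10)
  # individual-level slope is 0.8; the sample-level mean pre-test has
  # its own (negative) association with the post-test:
  post <- 10 + 0.8 * pre - 0.5 * study_mean[k] + rnorm(n, 0, 5)
  data.frame(study = k, pre, post)
}))

coef(lm(post ~ pre, data = dat))["pre"]   # pooled individual-level slope
agg <- aggregate(cbind(pre, post) ~ study, data = dat, FUN = mean)
coef(lm(post ~ pre, data = agg))["pre"]   # between-sample slope is weaker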

Third, a more general point about arm-specific standardization. I've
always thought that this is a weird thing to do
because presumably all the arms of a given study use the same outcome
measurement scale. If that is indeed the case, and if the baseline
population SDs are the same in each arm (as you would expect in a
randomized trial), then standardizing by arm will just introduce additional
noise into the contrasts among effect sizes across studies. This could be
reduced by pooling the baseline SDs across arms and using this pooled SD as
the denominator for all the arm-specific change scores in a given study.
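
In case it's useful, a minimal sketch of that pooling (the function and
variable names are mine, not from any package): weight each arm's
baseline variance by its degrees of freedom, as in a standard pooled-SD
calculation.

pooled_pre_sd <- function(sds, ns) {
  sqrt(sum((ns - 1) * sds^2) / sum(ns - 1))
}

# e.g., a three-arm study:
pooled_pre_sd(sds = c(9.8, 10.4, 10.1), ns = c(25, 24, 26))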

You mentioned using pre-test adjusted means to calculate effect size
estimates and that this wasn't a possibility because your database only has
raw summary statistics reported. Another possibility would be to use
difference-in-differences (i.e., differences in change scores) to calculate
effect sizes compared to a control arm. Say that group 0 is the control
condition and groups d = 1, 2, ... are the treatment arms (presumably more
than one if you're looking at dose-response relationships?). The
diff-in-diff effect size would then be
[(M_{post,d} - M_{pre,d}) - (M_{post,0} - M_{pre,0})] / SD_pre,
where SD_pre could be pooled across all available groups in the design. To
get the sampling variance of this ES estimator, you do need to know the
correlation between pre-test and post-test scores in each group, or the SD
of the change scores for each group, or the SE of the mean change score, or
a t statistic or p-value from a paired-sample t test. But you'll need the
same information to calculate sampling variances for the arm-specific
effect sizes, so perhaps this isn't an obstacle. Compared to the
arm-specific SMD, this diff-in-diff effect size at least does something to
account for baseline differences, and it will have better precision (and
potentially reduced bias, if there are systematic group differences) so
long as the pre-test and post-test are strongly correlated within each
group (which you'd hope would be the case, if the primary studies all use
change-score analysis or repeated-measures ANOVA).
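
For what it's worth, here's one way to piece that together with metafor's
escalc() (the data frame and its column names are hypothetical): compute
the arm-specific standardized mean changes with measure = "SMCR",
standardizing every arm by the same pooled pre-test SD, and then take the
treatment-minus-control contrast. Treating the arms as independent, the
contrast's sampling variance is just the sum of the two arm-specific
variances.

library(metafor)

# hypothetical summary data for the two arms of one study
arms <- data.frame(
  arm       = c("control", "dose1"),
  m_pre     = c(50.1, 49.8),
  m_post    = c(51.0, 55.2),
  sd_pre    = c(10.2, 9.7),
  n         = c(25, 26),
  r_prepost = c(0.7, 0.7)   # pre-post correlation in each arm
)

# pooled baseline SD across arms (as suggested above)
sd_pool <- with(arms, sqrt(sum((n - 1) * sd_pre^2) / sum(n - 1)))

# arm-specific standardized mean changes, all standardized by sd_pool
es <- escalc(measure = "SMCR",
             m1i = arms$m_post, m2i = arms$m_pre,
             sd1i = rep(sd_pool, nrow(arms)),
             ni = arms$n, ri = arms$r_prepost)

# diff-in-diff: treatment change minus control change; with independent
# arms the variances add (this ignores the slight dependence induced by
# sharing the pooled SD)
dd_yi <- es$yi[2] - es$yi[1]
dd_vi <- es$vi[2] + es$vi[1]
c(yi = dd_yi, vi = dd_vi)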

James


On Sat, Mar 2, 2024 at 7:17 PM Zac Robinson via R-sig-meta-analysis <
r-sig-meta-analysis@r-project.org> wrote:

> Dear All,
>
>
> I am working through a conceptual issue that I can't seem to find any
> resources on. Specifically, I am performing a multilevel dose-response
> meta-regression using the 'metafor' package. My model includes effect sizes
> calculated as a standardized mean change within each study arm and a
> continuous moderator. So in R syntax:
>
>
> ((Post Mean - Pre Mean) / Pre SD) ~ Moderator
>
>
> Whenever I write this out, I can’t help but think of similarities to a
> typical “change score ANCOVA” many run for an RCT on a continuous outcome,
> where the baseline mean is included as a covariate to improve precision and
> account for things like regression to the mean:
>
>
> (Post Mean - Pre Mean) ~ Treatment + Pre Mean
>
>
> To me, it seems like the same rationale would apply for meta-regression,
> just that the pre-score would need to be standardized. It seems like you
> are basically dividing each side of the equation by the Pre SD.
>
>
> ((Post Mean - Pre Mean) / Pre SD) ~ Moderator + (Pre Mean / Pre SD)
>
>
> Am I totally off base with this? It seems to make sense to me, but I am
> also very open to the possibility that I’m missing something and the
> approach I am proposing could be introducing mathematical coupling that may
> be misleading. It could also be something that is taken care of by my
> random effects (i.e., list(~Moderator|study, ~1|arm, ~1|es)) - although
> that still seems like it wouldn't totally remedy the issue.
>
>
> Also, I am aware that it is sometimes recommended to extract the adjusted
> means from the ANCOVA of each individual study to get around this issue -
> but in my case I only have access to the raw (unadjusted) means. Moreover,
> this issue seems a bit more straightforward if I was able to keep my effect
> sizes in raw units, but because I am including effects on multiple scales,
> effects need to be standardized.
>
>
> Thank you in advance!
>
>
> Zac