[R-meta] Moderator analysis with missing values (Methods and interpretations)
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Fri Jul 6 15:11:08 CEST 2018
Hi Tommy,
1) This is a tricky (and common) issue. I suspect this is one of the reasons why moderators are still often tested one at a time (to 'maximize' the number of studies included in the analysis of each moderator). But this makes it impossible to sort out the unique contributions of correlated moderators, so it isn't ideal. One could consider imputation techniques, although these aren't common practice in the meta-analysis context. So, as a more pragmatic approach, why not do both? If a moderator is found to be relevant when tested individually and also when the other moderators are included, then this should give us more confidence in the finding.
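For concreteness, here is a minimal sketch of both analyses with metafor, assuming a data frame 'dat' with effect sizes 'yi', sampling variances 'vi', and two hypothetical moderators 'mod1' and 'mod2':

library(metafor)

# moderator tested on its own (uses all studies where mod1 is observed)
res_single <- rma(yi, vi, mods = ~ mod1, data = dat)

# moderators tested jointly (uses only studies with complete moderator data)
res_joint <- rma(yi, vi, mods = ~ mod1 + mod2, data = dat)

summary(res_single)
summary(res_joint)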
2) Possible, sure. Is it useful? Maybe. Consider the following scatterplot of the effect sizes against some moderator (ignore the *'s for now):
| * .. .
| *.. . .
| . *. .
| . .*.
| .. *
| *
+------*--------
Now suppose all studies where the moderator is below the * (on the x-axis) are missing. This shouldn't bias the estimate of the slope for the moderator, but studies where the moderator is known will, on average, have a higher effect size than studies where the moderator is unknown. So what conclusion should we draw if we find such a difference?
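If you want to examine this empirically anyway, one possibility (again just a sketch, using the hypothetical 'dat' from above) is to test whether missingness itself is related to the effect sizes:

# indicator for whether the moderator is missing in a study
dat$mod1miss <- is.na(dat$mod1)

# do studies with and without moderator data differ in their average effect?
res_miss <- rma(yi, vi, mods = ~ mod1miss, data = dat)
summary(res_miss)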
3) Again, how about both? Make a side-by-side table of the results.
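For example (still assuming the hypothetical 'dat'), one could fit both models to the same complete-case subset and put the coefficients next to each other:

# complete-case subset (the 27 studies in your case)
datc <- dat[complete.cases(dat[, c("mod1", "mod2")]), ]

res_joint  <- rma(yi, vi, mods = ~ mod1 + mod2, data = datc)
res_single <- rma(yi, vi, mods = ~ mod1, data = datc)

round(data.frame(joint  = coef(res_joint)["mod1"],
                 single = coef(res_single)["mod1"]), 4)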
4) Yes (on average).
5) Yes. If you see a coefficient for "Yes", then "No" is the reference level. So the coefficient for "Yes" tells you how much lower/higher the effect is on average for "Yes" compared to "No".
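To see this in metafor (with a hypothetical yes/no moderator called 'group'):

# make "no" the reference level explicitly
dat$group <- factor(dat$group, levels = c("no", "yes"))
res_cat <- rma(yi, vi, mods = ~ group, data = dat)
summary(res_cat)

# the 'groupyes' coefficient is the average difference in effect size
# between "yes" and "no" studies; the intercept is the average for "no"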
Best,
Wolfgang
>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces@r-
>project.org] On Behalf Of Tommy van Steen
>Sent: Friday, 06 July, 2018 14:37
>To: r-sig-meta-analysis@r-project.org
>Subject: [R-meta] Moderator analysis with missing values (Methods and
>interpretations)
>
>Hi all,
>
>I’m running a meta-analysis using Cohen’s d with the metafor package for R.
>I’m doubting my method/interpretation of results at various stages. As I
>want to make sure I’m doing it right, rather than doing what is
>convenient, I hope you could provide me with some advice regarding the
>following questions:
>
>1. Heterogeneity is high in my data, and I want to add a list of
>moderators to test their influence. However, many of these moderators
>have missing values because not all studies have measured these
>variables. If I run a model that includes all moderators, the number of
>comparisons drops from 51 to 27. I’d prefer to include all moderators at
>once, but is this the right thing to do, or should I test each moderator
>separately?
>2. Following 1: if I can run the model as a whole, is it possible and
>useful to somehow compare the overall effect size of the studies with no
>missing moderator data with that of the studies that are excluded from
>the model because of these missing data points?
>3. Some moderators that are significant when all moderators are included
>at once are not significant when tested individually on the same subset
>of 27 studies. Which of the two statistics (as part of the larger model,
>or the individual moderator) should I report?
>
>And two questions about interpretation:
>4. I added publication year as a moderator and the estimate is 0.0360.
>Am I interpreting this result correctly when I say that every increase of
>1 in the moderator year increases the effect size by 0.0360?
>5. I also added a dichotomous moderator with options yes/no. In the
>moderator list, this moderator is listed with the ‘yes’ option, with an
>estimate of 0.5739. Does this mean the effect size is 0.5739 higher than
>when the moderator value is ‘no’?
>
>Thank you in advance for your thoughts and advice.
>
>Best wishes,
>Tommy