[R-meta] Dependent Measure Modelling Question

James Pustejovsky jepusto using gmail.com
Wed Mar 13 16:21:55 CET 2019


To your first question: yes, it is possible to use Wald_test() to run "robust"
ANOVAs comparing factor-level combinations. The interface works
similarly to anova(), but the constraints have to be provided in the form
of a matrix. Here is an example based on Wolfgang's tutorial:

library(metafor)      # rma(), dat.raudenbush1985
library(clubSandwich) # Wald_test()
library(car)          # linearHypothesis()

dat <- dat.raudenbush1985
dat$weeks <- cut(dat$weeks, breaks=c(0,1,10,100),
                 labels=c("none","some","high"), right=FALSE)
dat$tester <- relevel(factor(dat$tester), ref="blind")
res.i2 <- rma(yi, vi, mods = ~ weeks:tester - 1, data=dat)

# ANOVA with model-based variances

# some vs. high, tester = blind
anova(res.i2, L=c(0,1,-1,0,0,0))
linearHypothesis(res.i2, c("weekssome:testerblind - weekshigh:testerblind = 0"))

# some vs. high, tester = aware
anova(res.i2, L=c(0,0,0,0,1,-1))
linearHypothesis(res.i2, c("weekssome:testeraware - weekshigh:testeraware = 0"))
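If hard-coding positions in the L vector feels error-prone, a small sketch
(using the fitted res.i2 from above) builds the same contrast by matching
coefficient names instead of positions:

b_names <- rownames(res.i2$b)  # coefficient names, in model order
L_vec <- as.numeric(b_names == "weekssome:testerblind") -
  as.numeric(b_names == "weekshigh:testerblind")
anova(res.i2, L = L_vec)  # same test as L=c(0,1,-1,0,0,0)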

# Wald tests with RVE

# some vs. high, test = blind
Wald_test(res.i2, constraints = matrix(c(0,1,-1,0,0,0), nrow = 1),
          vcov = "CR2", cluster = dat$author)

# some vs. high, test = aware
Wald_test(res.i2, constraints = matrix(c(0,0,0,0,1,-1), nrow = 1),
          vcov = "CR2", cluster = dat$author)
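And if you want an omnibus test of several contrasts at once, you can stack
rows in the constraint matrix. A sketch, again assuming the fitted res.i2
from above:

# joint 2-df test: some vs. high, within both tester levels
C_mat <- rbind(c(0,1,-1,0,0,0),   # tester = blind
               c(0,0,0,0,1,-1))   # tester = aware
Wald_test(res.i2, constraints = C_mat,
          vcov = "CR2", cluster = dat$author)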

To your second question about models that allow for differing levels of
heterogeneity, this tutorial from the metafor site discusses it a bit:

For your model, I think the syntax might be something along the lines of
the following:

StimulibyEmotion <-
  rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
         random = list(~ 1 | studyID, ~ Emotion | outcome, ~ 1 | effectID),
         struct = "UN",
         tdist = TRUE, data = dat)

This model allows for varying levels of outcome-level heterogeneity,
depending on the emotion being assessed. The struct = "UN" argument
controls the assumption made about how the random effects for each emotion
co-vary within levels of an outcome. Just for the sake of illustration, I've
assumed that the between-study heterogeneity is constant (~ 1 | studyID)
and the effect-level heterogeneity is also constant (~ 1 | effectID). I'm
not at all sure that this is the best (or even really an appropriate)
model. To get a sense of that, I think we'd need to know more about the
structure of your data, what's nested in what, and the distinction between
outcome and effectID.
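One way to get a sense of whether the emotion-specific heterogeneity is
warranted would be a likelihood-ratio test against the simpler nested model.
A sketch, assuming the variable names from your email (StimuliType, Emotion,
studyID, outcome, effectID) and the StimulibyEmotion fit above:

# simpler nested model: one outcome-level variance component for all emotions
StimulibyEmotion_simple <-
  rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
         random = ~ 1 | studyID/outcome/effectID,
         tdist = TRUE, data = dat)

# LRT comparing the two variance structures (same fixed effects in both
# models, both fit with REML, so the comparison is legitimate)
anova(StimulibyEmotion, StimulibyEmotion_simple)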


On Mon, Mar 11, 2019 at 11:03 PM Grace Hayes <grace.hayes3 using myacu.edu.au> wrote:

> Dear James,
> Thank you for your response to my previous query. Yes, the effect size
> estimates are statistically dependent. Therefore, as per your
> recommendation, I have read over a few tutorials that cover multivariate
> meta-analysis and robust variance estimation. Specifically, the one that
> you wrote about using clubSandwich to run coefficient tests followed by
> Wald tests. This article was most helpful! I have a follow-up question
> regarding the use of the Wald test, which I have outlined below.
> My three potential moderators are: task_design (two levels), Emotion (6
> levels) and StimuliType (5 levels). To test the moderating effect of each
> of these variables I ran the following:
> allModerator <- rma.mv(yi, vi, mods = ~ task_design + Emotion +
> StimuliType, random = ~ 1 | studyID/outcome/effectID, tdist = TRUE, data
> = dat)
> coef_test(allModerator, vcov = "CR2")
> Wald_test(allModerator, constraints = 2, vcov = "CR2")
> Wald_test(allModerator, constraints = 3:7, vcov = "CR2")
> Wald_test(allModerator, constraints = 8:11, vcov = "CR2")
> The constraints for each Wald test match the coefficients related to each
> moderator, so I believe these tested for the significance of each moderator
> while adjusting for the other two moderating variables. However, I was also
> interested in variance across the estimated average effect produced by each
> stimuli format for each emotion. I followed the below guide by Wolfgang
> Viechtbauer, that showed how to parameterize the model to provide the
> estimated average effect for each factor level combinations.
> http://www.metafor-project.org/doku.php/tips:multiple_factors_interactions
> My model was:
> StimulibyEmotion <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion -1, random = ~ 1 |  studyID/outcome/effectID, tdist = TRUE, data=dat)
> coef_test(StimulibyEmotion, vcov = "CR2")
> Wolfgang then uses ANOVAs to test factor-level combinations against each
> other. Can I use the Wald test to do this with my robust variance estimations?
> Also, would it be possible for you to please elaborate on what you meant
> by "a model that allows for different heterogeneity levels for each
> emotion", or provide a link to an article demonstrating this? As a
> first-time user of R and metafor, I wasn't sure how to go about this.
> Many thanks,
> Grace
> ------------------------------
> *From:* James Pustejovsky <jepusto using gmail.com>
> *Sent:* Tuesday, 12 February 2019 1:37 PM
> *To:* Grace Hayes
> *Cc:* r-sig-meta-analysis using r-project.org
> *Subject:* Re: [R-meta] Dependent Measure Modelling Question
> Grace,
> It sounds like the data that you're describing has two factors, emotion
> type and task type, and that both are within-study factors (in other words,
> a given study might report results for multiple emotion types and/or
> multiple task types). Are the emotion types and task types also measured
> within-participant, such that a given participant in a study gets assessed
> with multiple task types, on multiple emotion types, or both? If so, then
> one challenge in analyzing this data structure is that the effect size
> estimates will be statistically dependent. There are several ways to handle
> this (multivariate meta-analysis, robust variance estimation), which we've
> discussed in many previous posts on the listserv.
> Other than this issue, it sounds to me like it would be possible
> to analyze both factors---emotion type and task type---together in one big
> model. The major advantage of doing so is that the joint model would let
> you examine differences in emotion types *while controlling for task
> types*, as well as examining differences in task types *while controlling
> for emotion types*. Controlling for the other factor (and maybe other
> covariates that are associated with effect size magnitude) should provide
> clearer, more interpretable results for differences on a given factor.
> There is also evidence that using a multivariate meta-analysis model can
> potentially mitigate outcome reporting bias to some extent (see Kirkham,
> Riley, & Williamson, 2012; Hwang & DeSantis, 2018).
> A further advantage of using one big model is that it would let you adjust
> for other potential moderators that might have similar associations for
> each emotion type and each task type. If you conduct separate analyses for
> each emotion type (for example), you would have to analyze these moderators
> separately, so you'd end up with 6 sets of moderator analyses instead of
> just one.
> The main challenge in the "one big meta-analysis model" approach is that
> it requires careful checking of the model's assumptions. For example, you
> would need to assess whether the between-study heterogeneity is similar
> across the six emotion types and, if not, fit a model that allows for
> different heterogeneity levels for each emotion.
> James
> Hwang, H., & DeSantis, S. M. (2018). Multivariate network meta‐analysis to
> mitigate the effects of outcome reporting bias. *Statistics in Medicine*.
> Kirkham, J. J., Riley, R. D., & Williamson, P. R. (2012). A multivariate
> meta‐analysis approach for reducing the impact of outcome reporting bias in
> systematic reviews. *Statistics in Medicine*, *31*(20), 2179-2195.
> On Mon, Feb 11, 2019 at 3:16 AM Grace Hayes <grace.hayes3 using myacu.edu.au>
> wrote:
> Hi all,
> I have a question regarding a meta-analysis of multiple dependent outcomes
> that I would like to conduct using metafor.
> For this meta-analysis of emotion recognition in ageing, I'm interested in
> age-effects (young adults vs. older adults) on four different emotion
> recognition tasks (Task A, Task B, Task C, Task D). Studies in this area
> typically compare older adults' performance to younger adults' performance
> on more than one of these emotion recognition tasks.
> For each task there are also multiple outcomes. Each task produces an
> accuracy age-effect for each emotion type included (e.g., anger, sadness,
> fear). Up to 6 different emotions are included (Emotion 1, Emotion 2,
> Emotion 3, Emotion 4, Emotion 5, Emotion 6). I therefore have some studies
> with, for example, 6 different age-effects from 3 different emotions tasks;
> a total of 18 dependent outcomes.
> Ideally I would like to investigate age-effects for each of the six
> emotion types separately (with Tasks A, B, C, and D combined), and
> age-effects for each task type separately (with Emotions 1-6 combined). I
> would then like to compare the effects for each emotion type (Emotions 1-6
> separately) produced by each task (Tasks A, B, C, and D separately).
> My question is, can I have a model that analyses emotion type and task
> type all together? Is this possible and statistically appropriate? Will it
> tell me the age-effects produced for each emotion by each task, or will it
> only tell me if task type and emotion type are significant moderators?
> I am also interested to know if I can add additional moderators such as
> number of emotions included in the task and year of publication?
> One concern that has been brought to my attention is overfitting from too
> many factors. Another is that the output would be difficult to interpret,
> and thus it has been recommended that I perhaps run separate analyses for
> each task.
> Any advice would be much appreciated.
> Sincerely,
> Grace Hayes
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis using r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
