[R-meta] Dependent Measure Modelling Question

James Pustejovsky jepusto at gmail.com
Fri Mar 22 16:20:23 CET 2019


Grace,

I see. This is quite a complex data structure, and I do not think there is
a single right answer for what random effects specification should be
used.  Without a single definitive model specification, I think the thing
to do would be to explore a range of models and compare their fit. Others
on the listserv might have better suggestions about how to conduct and
report this sort of model-building exercise. I'll offer a few highly
speculative suggestions. Your initial specification,

A:    random = ~ 1 |  studyID/outcome/effectID

seems quite reasonable as a starting point. Other specifications that you
might explore would allow the between-study heterogeneity to vary depending
on the emotion, task, or combination of emotion and task. If you had a
large number of studies, all of which reported every combination of emotion
and task, a very general specification would be

B:    random = list(~ outcome |  studyID, ~ 1 | effectID), struct = "UN"
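
Spelled out as full rma.mv() calls (sketches only, with placeholder object
names, reusing the yi, vi, moderator, and data names from your code further
down the thread), A and B would look something like:

fitA <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
               random = ~ 1 | studyID/outcome/effectID,
               tdist = TRUE, data = dat)

fitB <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
               random = list(~ outcome | studyID, ~ 1 | effectID),
               struct = "UN", tdist = TRUE, data = dat)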

But this model might be hard to fit when studies each use only a few
combinations of emotions and tasks. You could try allowing the
between-study heterogeneity to vary by emotion but not by task:

C:    random = list(~ emotion |  studyID, ~ 1 | effectID), struct = "UN"


Or vice versa:

D:    random = list(~ task |  studyID, ~ 1 | effectID), struct = "UN"

For (C), you could also include random effects per task nested within
studyID, but you'd need to create a taskID variable that takes on different
values for every study. Similarly for (D), you could also include random
effects per emotion nested within studyID by creating an emotionID variable
that takes on different values for every study.
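
To make that concrete, here is a sketch (taskID and emotionID are
hypothetical names, and I'm assuming the task and emotion columns in your
data are StimuliType and Emotion):

# ID variables that take on different values for every study
dat$taskID    <- interaction(dat$studyID, dat$StimuliType, drop = TRUE)
dat$emotionID <- interaction(dat$studyID, dat$Emotion, drop = TRUE)

# (C), extended with task effects nested within studies:
fitC <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
               random = list(~ Emotion | studyID, ~ 1 | taskID,
                             ~ 1 | effectID),
               struct = "UN", tdist = TRUE, data = dat)

Since all of these models have the same fixed effects, you could compare
their REML fits with, for example, fitstats(fitA, fitB, fitC).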

James





On Thu, Mar 21, 2019 at 11:53 PM Grace Hayes <grace.hayes3 at myacu.edu.au>
wrote:

> Hi James,
>
> Yes that is correct, I have some studies with multiple ES estimates for
> the same combination of task and emotion.
>
> Grace
>
>
> *Grace Hayes*
>
> Psychologist | Doctor of Philosophy (PhD) Candidate
>
> Cognition and Emotion Research Centre
>
> School of Behavioural and Health Sciences, Faculty of Health Sciences
>
> Australian Catholic University
>
>
>
>
>
> Level 3, Mary Glowrey Building,
>
> 115 Victoria Parade, Fitzroy, VIC 3065
> *T:* +61 3 9230 8131
> *E:* grace.hayes3 at myacu.edu.au
> *W:* http://ccaer.acu.edu.au/
>
>
>
>
> ------------------------------
> *From:* James Pustejovsky <jepusto at gmail.com>
> *Sent:* Friday, 22 March 2019 1:37 PM
> *To:* Grace Hayes
> *Cc:* r-sig-meta-analysis at r-project.org
> *Subject:* Re: [R-meta] Dependent Measure Modelling Question
>
> Grace,
>
> Sorry for the delay getting back to you. Your response is helpful in
> clarifying the structure of your data, but I'm still not sure I follow why
> you need the unique effectID in the model. Are there some studies where you
> have multiple ES estimates for the same combination of emotion and task
> (e.g., two measures of dynamic anger)?
>
> James
>
> On Wed, Mar 13, 2019 at 10:37 PM Grace Hayes <grace.hayes3 at myacu.edu.au>
> wrote:
>
> Thanks again James,
>
> In terms of the structure of my data: emotion outcomes ('Emotion') are
> nested in tasks ('StimuliType'), which are nested in studies ('studyID').
>
> The variable 'outcome' is one that I created as a combination of the
> 'Emotion' and 'StimuliType' factors (i.e., DynamicAnger, StaticAnger,
> StaticDisgust), whereas the variable 'effectID' contains a unique
> identifier for each effect.
>
> I created these variables and defined the random effects as, random = ~ 1
> |  studyID/outcome/effectID, to account for the fact that some studies
> produced effects with the same factor combination (i.e., the same emotion
> from two tasks of the same stimuli type). Therefore, effects with the same
> factor combination ('outcome') but different studyID were independent, but
> effects with the same factor combination ('outcome') and the same studyID
> were dependent.
>
> Perhaps then, to apply the inner|outer formula to my data I would need to
> instead use Emotion|effectID?
>
> Cheers,
> Grace
>
>
> ------------------------------
> *From:* James Pustejovsky <jepusto at gmail.com>
> *Sent:* Thursday, 14 March 2019 2:21 AM
> *To:* Grace Hayes
> *Cc:* r-sig-meta-analysis at r-project.org
> *Subject:* Re: [R-meta] Dependent Measure Modelling Question
>
> Grace,
>
> To your first question: yes, it is possible to use Wald_test() to do
> "robust" ANOVAs comparing factor-level combinations. The interface works
> similarly to anova(), but the constraints have to be provided in the form
> of a matrix. Here is an example based on Wolfgang's tutorial:
>
> library(metafor)
> library(car)   # provides linearHypothesis()
>
> dat <- dat.raudenbush1985
> dat$weeks <- cut(dat$weeks, breaks=c(0,1,10,100),
>                  labels=c("none","some","high"), right=FALSE)
> dat$tester <- relevel(factor(dat$tester), ref="blind")
> res.i2 <- rma(yi, vi, mods = ~ weeks:tester - 1, data=dat)
>
> # ANOVA-style contrasts with model-based variances
> anova(res.i2, L=c(0,1,-1,0,0,0))
> linearHypothesis(res.i2,
>                  c("weekssome:testerblind - weekshigh:testerblind = 0"))
> anova(res.i2, L=c(0,0,0,0,1,-1))
> linearHypothesis(res.i2,
>                  c("weekssome:testeraware - weekshigh:testeraware = 0"))
>
> # The same contrasts as Wald tests with RVE
> library(clubSandwich)
>
> # some vs. high, tester = blind
> Wald_test(res.i2, constraints = matrix(c(0,1,-1,0,0,0), nrow = 1),
>           vcov = "CR2", cluster = dat$author)
>
> # some vs. high, tester = aware
> Wald_test(res.i2, constraints = matrix(c(0,0,0,0,1,-1), nrow = 1),
>           vcov = "CR2", cluster = dat$author)
>
> To your second question about models that allow for differing levels of
> heterogeneity, this tutorial from the metafor site discusses it a bit:
>
> http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates?s[]=inner&s[]=outer
>
> For your model, I think the syntax might be something along the lines of
> the following:
>
> StimulibyEmotion <-
>   rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
>          random = list(~ 1 | studyID, ~ Emotion | outcome, ~ 1 | effectID),
>          struct = "UN",
>          tdist = TRUE, data = dat)
>
>
> This model allows for varying levels of outcome-level heterogeneity,
> depending on the emotion being assessed. The struct = "UN" argument
> controls the assumption made about how the random effects for each emotion
> co-vary within levels of an outcome. Just for sake of illustration, I've
> assumed that the between-study heterogeneity is constant (~ 1 | studyID)
> and the effect-level heterogeneity is also constant (~ 1 | effectID). I'm
> not at all sure that this is the best (or even really an appropriate)
> model. To get a sense of that, I think we'd need to know more about the
> structure of your data, what's nested in what, and the distinction between
> outcome and effectID.
>
> Cheers,
> James
>
> On Mon, Mar 11, 2019 at 11:03 PM Grace Hayes <grace.hayes3 at myacu.edu.au>
> wrote:
>
> Dear James,
>
> Thank you for your response to my previous query. Yes, the effect size
> estimates are statistically dependent. Therefore, as per your
> recommendation, I have read over a few tutorials that cover multivariate
> meta-analysis and robust variance estimation, specifically the one that
> you wrote about using clubSandwich to run coefficient tests followed by
> Wald tests. This article was most helpful! I have a follow-up question
> regarding the use of the Wald test, which I have outlined below.
>
> My three potential moderators are: task_design (two levels), Emotion (6
> levels) and StimuliType (5 levels). To test the moderating effect of each
> of these variables I ran the following:
>
> allModerator <- rma.mv(yi, vi,
>                        mods = ~ task_design + Emotion + StimuliType,
>                        random = ~ 1 | studyID/outcome/effectID,
>                        tdist = TRUE, data = dat)
>
> coef_test(allModerator, vcov = "CR2")
>
> #NUMBER OF EMOTIONS
>
> Wald_test(allModerator, constraints = 2, vcov = "CR2")
>
> #EMOTIONTYPE
>
> Wald_test(allModerator, constraints = 3:7, vcov = "CR2")
>
> #STIMULITYPE
>
> Wald_test(allModerator, constraints = 8:11, vcov = "CR2")
>
> The constraints for each Wald test match the coefficients related to each
> moderator, so I believe these tested for the significance of each moderator
> while adjusting for the other two moderating variables. However, I was also
> interested in differences across the estimated average effects produced by
> each stimulus format for each emotion. I followed the guide below by
> Wolfgang Viechtbauer, which shows how to parameterize the model to provide
> the estimated average effect for each factor-level combination.
>
> http://www.metafor-project.org/doku.php/tips:multiple_factors_interactions
>
> My model was:
>
> StimulibyEmotion <- rma.mv(yi, vi, mods = ~ StimuliType:Emotion - 1,
>                            random = ~ 1 | studyID/outcome/effectID,
>                            tdist = TRUE, data = dat)
>
> coef_test(StimulibyEmotion, vcov = "CR2")
>
> Wolfgang then uses anova() to test factor-level combinations against each
> other. Can I use Wald_test() to do this with my robust variance estimates?
>
> Also, would it be possible for you to please elaborate on what you meant
> by "a model that allows for different heterogeneity levels for each
> emotion", or provide a link to an article demonstrating this? As a
> first-time user of R and metafor, I wasn't sure how to go about this.
>
> Many thanks,
>
> Grace
>
>
> ------------------------------
> *From:* James Pustejovsky <jepusto at gmail.com>
> *Sent:* Tuesday, 12 February 2019 1:37 PM
> *To:* Grace Hayes
> *Cc:* r-sig-meta-analysis at r-project.org
> *Subject:* Re: [R-meta] Dependent Measure Modelling Question
>
> Grace,
>
> It sounds like the data that you're describing has two factors, emotion
> type and task type, and that both are within-study factors (in other words,
> a given study might report results for multiple emotion types and/or
> multiple task types). Are the emotion types and task types also measured
> within-participant, such that a given participant in a study gets assessed
> with multiple task types, on multiple emotion types, or both? If so, then
> one challenge in analyzing this data structure is that the effect size
> estimates will be statistically dependent. There are several ways to handle
> this (multivariate meta-analysis, robust variance estimation), which we've
> discussed in many previous posts on the listserv.
>
> Other than this issue, it sounds to me like it would be possible to
> analyze both factors---emotion type and task type---together in one big
> model. The major advantage of doing so is that the joint model would let
> you examine differences in emotion types *while controlling for task
> types*, as well as examining differences in task types *while controlling
> for emotion types*. Controlling for the other factor (and maybe other
> covariates that are associated with effect size magnitude) should provide
> clearer, more interpretable results for differences on a given factor.
> There is also evidence that using a multivariate meta-analysis model can
> potentially mitigate outcome reporting bias to some extent (see Kirkham,
> Riley, & Williamson, 2012; Hwang & DeSantis, 2018).
>
> A further advantage of using one big model is that it would let you adjust
> for other potential moderators that might have similar associations for
> each emotion type and each task type. If you conduct separate analyses for
> each emotion type (for example), you would have to analyze these moderators
> separately, so you'd end up with 6 sets of moderator analyses instead of
> just one.
>
> The main challenge in the "one big meta-analysis model" approach is that
> it requires careful checking of the model's assumptions. For example, you
> would need to assess whether the between-study heterogeneity is similar
> across the six emotion types and, if not, fit a model that allows for
> different heterogeneity levels for each emotion.
>
> James
>
>
> Hwang, H., & DeSantis, S. M. (2018). Multivariate network meta-analysis
> to mitigate the effects of outcome reporting bias. *Statistics in Medicine*.
>
> Kirkham, J. J., Riley, R. D., & Williamson, P. R. (2012). A multivariate
> meta-analysis approach for reducing the impact of outcome reporting bias
> in systematic reviews. *Statistics in Medicine*, *31*(20), 2179-2195.
>
> On Mon, Feb 11, 2019 at 3:16 AM Grace Hayes <grace.hayes3 at myacu.edu.au>
> wrote:
>
> Hi all,
>
>
> I have a question regarding a meta-analysis of multiple dependent outcomes
> that I would like to conduct using metafor.
>
>
> For this meta-analysis of emotion recognition in ageing, I'm interested in
> age-effects (young adults vs. older adults) on four different emotion
> recognition tasks (Task A, Task B, Task C, Task D). Studies in this area
> typically compare older adults' performance to younger adults' performance
> on more than one of these emotion recognition tasks.
>
>
> For each task there are also multiple outcomes. Each task produces an
> accuracy age-effect for each emotion type included (e.g., anger, sadness,
> fear). Up to 6 different emotions are included (Emotion 1, Emotion 2,
> Emotion 3, Emotion 4, Emotion 5, Emotion 6). I therefore have some studies
> with, for example, 6 different age-effects from each of 3 different tasks,
> for a total of 18 dependent outcomes.
>
>
> Ideally I would like to investigate age-effects for each of the six
> emotion types separately (with Tasks A, B, C and D combined), and
> age-effects for each task type separately (with Emotions 1-6 combined). I
> would then like to compare the effects for each emotion type (Emotions 1-6
> separately) produced by each task (Tasks A, B, C, D separately).
>
>
> My question is, can I have a model that analyses emotion type and task
> type all together? Is this possible and statistically appropriate? Will it
> tell me the age-effects produced for each emotion by each task, or will it
> only tell me if task type and emotion type are significant moderators?
>
>
> I am also interested to know if I can add additional moderators such as
> number of emotions included in the task and year of publication?
>
>
> One concern that has been brought to my attention is overfitting from too
> many factors. Another is that the output would be difficult to interpret,
> and thus it has been recommended that I perhaps run separate analyses for
> each task.
>
>
> Any advice would be much appreciated.
>
>
> Sincerely,
>
> Grace Hayes
>