[R-meta] Moderator Analysis

Jake Downs jake.downs at aggiemail.usu.edu
Fri Feb 12 16:30:24 CET 2021


Michael-

Yes, I believe that gave me exactly what I wanted.

Using 'mods = ~ f.c -1' produced the following output:

Test of Moderators (coefficients 1:2):
QM(df = 2) = 17.29, p-val < .01

Model Results:

                  estimate    se  zval  pval  ci.lb  ci.ub
f.cComprehension      0.65  0.16  4.09  <.01   0.34   0.97  ***
f.cFluency            0.48  0.16  2.92  <.01   0.16   0.80   **

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

How do I explain what this did as opposed to a 'regular' moderator analysis?
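
For reference, here is my (untested) reading of the two specifications side
by side, so I can check that I am describing the difference correctly (the
object names below are just placeholders):

library(metafor)

# 'Regular' moderator model: the intercept is the pooled effect for the
# reference level (comprehension), and the f.cFluency coefficient is the
# *difference* between fluency and comprehension; QM tests that difference.
fit.diff <- rma.mv(yi = tx.cg.yi, V = tx.cg.vi,
                   mods = ~ f.c,
                   random = ~ 1 | study.number/effect.number,
                   data = tx.cg, method = "REML")

# Intercept removed: each coefficient is the pooled effect for that level
# directly (one for comprehension, one for fluency), and QM (df = 2) tests
# whether both pooled effects are zero, not whether the levels differ.
fit.levels <- rma.mv(yi = tx.cg.yi, V = tx.cg.vi,
                     mods = ~ f.c - 1,
                     random = ~ 1 | study.number/effect.number,
                     data = tx.cg, method = "REML")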

Thanks,
Jake

On Wed, Feb 10, 2021 at 4:17 AM Michael Dewey <lists at dewey.myzen.co.uk>
wrote:

> Dear Jake
>
> Untested, but I wonder if using mods = ~ f.c - 1 would give you what you
> desire.
>
> Michael
>
> On 10/02/2021 04:37, Jake Downs wrote:
> > Hello R Friends,
> > I am a doc student newish to R and meta-analysis. It's a lot to wrap my
> > brain around, but I'm eager to learn.
> >
> > I am conducting a 3-level meta-analysis using rma.mv on student reading
> > outcomes for various types of related practices. Level one is the effect
> > sizes, level two models variation among effect sizes within studies, and
> > level three models variation between studies.
> >
> > The meta-analysis is multivariate, so level one outcomes are coded as
> > either "fluency" or "comprehension." I have run the analysis for all
> > effects (g = 0.58; code below), but I am also very interested in producing
> > a 'fluency' effect size and a 'comprehension' effect size. I would like
> > assistance figuring out the best way to do that.
> > 3 Level Fit:
> > rq1.fit1 <- tx.cg %>%
> >    rma.mv(
> >      yi = tx.cg.yi,  #fit one, 3 level meta-analysis
> >      V = tx.cg.vi,
> >      random = ~ 1 | study.number/effect.number,
> >      level=95,
> >      digits=2,
> >      data = .,
> >      method = "REML"
> >    )
> > summary(rq1.fit1)
> >
> > Option 1:
> > Moderator analysis. I ran a moderator analysis using this code:
> > rq2.f.c <- tx.cg %>%
> >    metafor::rma.mv(
> >      yi = tx.cg.yi,
> >      V = tx.cg.vi,
> >      random = ~ 1 | study.number/effect.number,
> >      level=95,
> >      digits=2,
> >      data = .,
> >      method = "REML",
> >      mods = ~ f.c)
> > summary(rq2.f.c)
> >
> > The QM test of moderators is approaching statistical significance (p =
> > 0.08); however, the intercept (the reference group, comprehension) is
> > statistically significant. Does that mean that only comprehension
> > moderates outcomes? (And that only a comprehension 'effect size' would be
> > valid?)
> >
> > Option 2: Single Variable Moderator analysis?
> > To calculate an effect size for fluency and comprehension, is there a way
> > to run a single variable as a moderator? For example, rather than running
> > fluency and comprehension in a combined moderator analysis, run
> > comprehension only in one moderator analysis and fluency only in another?
> > Is this a viable method?
> >
> > Option 3: Subset Independent Meta-analysis
> > I don't think this option is viable, but I could use the subset argument
> > in metafor to run one analysis using ONLY comprehension and another using
> > ONLY fluency. Each analysis would throw away half my data, however, which
> > I think would limit the validity of my findings.
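> >
> > For concreteness, here is an untested sketch of what I mean with subset
> > (I am guessing at the exact level labels of f.c):
> >
> > rq.comp <- metafor::rma.mv(
> >   yi = tx.cg.yi, V = tx.cg.vi,
> >   random = ~ 1 | study.number/effect.number,
> >   data = tx.cg, method = "REML",
> >   subset = (f.c == "Comprehension"))   # comprehension effects only
> > rq.flu <- metafor::rma.mv(
> >   yi = tx.cg.yi, V = tx.cg.vi,
> >   random = ~ 1 | study.number/effect.number,
> >   data = tx.cg, method = "REML",
> >   subset = (f.c == "Fluency"))         # fluency effects only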
> >
> > In short: I just want to be able to say that the effect size for fluency
> > was X and the effect size for comprehension was Y. What is the best way
> > to do that?
> >
> >
> > Thanks very much for your help.
> >
> > Jake
> >
>
> --
> Michael
> http://www.dewey.myzen.co.uk/home.html
>
