[R-meta] Using control-only and treatment-only studies in 'metafor' -- can you calculate effect sizes with NAs in your table?
James Pustejovsky
jepusto at gmail.com
Tue Oct 30 22:24:05 CET 2018
I agree with Gerta's suggestion to perform the meta-analysis on the AUC
estimates and their standard errors. In addition to her suggestion of
conducting separate analyses on the two groups of studies, another
possibility would be to run a meta-analysis that uses study type
(procedure) as a moderator. The advantage of the latter approach is that
you could add further covariates to the model in order to control for other
differences between the two types of studies. You would need to have the
data arranged as follows:
Study   AUCi   SEi    Procedure   X1   X2
1       .93    .037   A           12   Q
2       .77    .066   A           16   R
3       .76    .083   B            7   Q
4       .84    .028   B           14   Q
etc.
And then you could run a meta-regression such as:
library(metafor)
rma(AUCi ~ Procedure + X1 + X2, sei = SEi, data = Veronica_data)
One thing to consider is that (as you noted) the AUC is bounded between 0
and 1 and the standard error of an AUC estimate depends strongly on the
mean level. Ryu and Agresti (2008) suggested applying a logistic
transformation to the AUC values (with corresponding transformation of the
SEs) before conducting the meta-analysis/meta-regression. As far as I know,
the approach has not been studied further (perhaps others know of further
references?), but it strikes me as worth considering--especially if the
AUCs are pretty large and/or the studies have small sample sizes.
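A minimal sketch of what that could look like (this is not from the original message: the delta-method conversion of the standard errors is my assumption about how to carry the SEs onto the logit scale, and the column names follow the example layout above):

library(metafor)

# logit-transform the AUC estimates
Veronica_data$yi  <- log(Veronica_data$AUCi / (1 - Veronica_data$AUCi))

# approximate SEs on the logit scale via the delta method:
# SE(logit(AUC)) ~= SE(AUC) / (AUC * (1 - AUC))
Veronica_data$sei <- Veronica_data$SEi / (Veronica_data$AUCi * (1 - Veronica_data$AUCi))

# meta-regression on the logit scale
res <- rma(yi ~ Procedure + X1 + X2, sei = sei, data = Veronica_data)

# back-transform fitted values to the 0-1 AUC scale
predict(res, transf = transf.ilogit)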
James
Ryu, E., & Agresti, A. (2008). Modeling and inference for an ordinal effect
size measure. *Statistics in Medicine*, *27*(10), 1703-1717.
On Tue, Oct 30, 2018 at 4:04 PM Gerta Ruecker <Ruecker at imbi.uni-freiburg.de>
wrote:
> Dear Veronica,
>
> At least for me, it is still not clear what your data look like. As I
> understand, you have two groups of studies, treatment and control, and for
> each study you have an ROC. What I don't understand is (i) what the meaning
> of the n_i is and (ii) whether the sd_i are really standard deviations
> (this doesn't make sense to me) or rather standard errors (this would make
> sense).
>
> If you have ROCs and their standard *errors* for each study, you may
> compute two average ROCs (one for each group of studies) using a generic
> meta-analysis function (for example, metagen() from R package meta):
>
> meta1 <- metagen(ROC1_i, se1_i)
> meta2 <- metagen(ROC2_i, se2_i)
>
> Note that you can compare these two averages to each other, but for the
> interpretation you have to bear in mind that this is an uncontrolled
> comparison, as you have only uncontrolled studies.
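>
> (A minimal sketch, not part of the original message: the two group averages
> and a test of their difference can also be obtained in a single call by
> passing a group label to metagen(). Here 'dat', 'ROC_i', 'se_i', and 'group'
> are assumed column names, and the 'subgroup' argument is called 'byvar' in
> older versions of the meta package.)
>
> library(meta)
> # one row per study: AUC estimate, its standard error, and the study group
> m <- metagen(TE = ROC_i, seTE = se_i, data = dat, subgroup = group)
> m  # prints the pooled estimate per group and the test for subgroup differences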
>
> Best,
> Gerta
>
> ----------------- Original Message -----------------
> From: Veronica Frans [verofrans at gmail.com]
> To: wolfgang.viechtbauer at maastrichtuniversity.nl
> Cc: r-sig-meta-analysis at r-project.org
> Date: Tue, 30 Oct 2018 15:38:20 -0400
> -------------------------------------------------
>
>
> > Hello, Wolfgang,
> >
> > Thanks for your reply.
> >
> > The 'mean accuracy score' I am referring to here is the Area Under the
> > Receiver Operating Characteristic Curve (a plot of the true positive and
> > false positive rates of a model prediction), and it is calculated by
> > testing a trained model's ability to accurately determine the presence or
> > absence of an occurrence in a given spatial grid. A value of 0.5 indicates
> > prediction no better than random, and values closer to 1 indicate higher
> > predictive accuracy. It is one of the standard measures in species
> > distribution modeling in ecology, and I am using this score to test models
> > that follow a newer procedure (the treatment) versus models that don't
> > (the control). The goal is to see if the treatment has a greater effect on
> > accuracy than the control.
> >
> > I am obviously new to meta-analyses, so any suggestions are definitely
> > appreciated. Thanks again for your help!
> >
> > Veronica
> >
> > On Tue, Oct 30, 2018 at 2:40 PM Viechtbauer, Wolfgang (SP)
> > <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
> >
> >> Dear Veronica,
> >>
> >> Measure 'SMD' is for computing the standardized mean difference between
> >> two groups. It does not seem applicable to your data.
> >>
> >> Can you describe in a bit more detail what these "mean accuracy scores"
> >> are? How are they computed within an individual study?
> >>
> >> Best,
> >> Wolfgang
> >>
> >> -----Original Message-----
> >> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org]
> >> On Behalf Of Veronica Frans
> >> Sent: Tuesday, 30 October, 2018 18:29
> >> To: r-sig-meta-analysis at r-project.org
> >> Subject: [R-meta] Using control-only and treatment-only studies in
> >> 'metafor' -- can you calculate effect sizes with NAs in your table?
> >>
> >> Dear forum,
> >>
> >> I would like to use the 'metafor' package for my meta-analysis. I am
> >> comparing the results (mean accuracy score from 0 to 1) of articles that
> >> use one procedure (the 'treatment'; group 1) versus those that don't
> >> (the 'control'; group 2). However, all of my studies present results for
> >> only the treatment or only the control, but never both.
> >>
> >> To run the escalc() function (measure="SMD"), is it possible to have
> >> studies with NA's in m1i, sd1i, and n1i, and vice versa (i.e., NA's in
> >> m2i, sd2i, and n2i)?
> >>
> >> Unfortunately, when I use my table in R, the escalc() function gives me
> >> NA's for yi and vi.
> >>
> >> Here's an example of the code I used:
> >>
> >> mod.means <-data.frame(
> >> study = c("UID6","UID7","UID11","UID13","UID17","UID18"),
> >> n1i = c(1,1,16,NA,NA,21), #number in treatment
> >> n2i = c(NA,NA,NA,2,2,NA), #number in control
> >> m1i = c(.931,.81,.977,NA,NA,.878), #treatment means
> >> m2i = c(NA,NA,NA,.865,.69,NA), #control means
> >> sd1i = c(0,0,.012,NA,NA,.0386), #treatment sd
> >> sd2i = c(NA,NA,NA,.05,.03,NA), #control sd
> >> scale = c(3,4,1,1,3,2) #potential moderator
> >> )
> >>
> >> all.meta <- escalc(measure = "SMD",
> >> m1i = m1i, m2i=m2i, #means
> >> sd1i=sd1i, sd2i = sd2i, #standard deviation
> >> n1i=n1i, n2i = n2i, #numbers
> >> data = mod.means)
> >>
> >> all.meta #show table
> >>
> >> Perhaps I should format my table in a different way, or consider a
> >> meta-analysis approach other than "SMD"?
> >>
> >> Any advice on this is greatly appreciated. Thank you for your time!
> >>
> >> Sincerely,
> >>
> >> Veronica Frans
> >>
> >
>
>
> --
> Gerta Ruecker
> Institute of Medical Biometry and Statistics,
> Faculty of Medicine and Medical Center - University of Freiburg
> Postal Address: Stefan-Meier-Str. 26, 79104 Freiburg
> Phone: +49/761/203-6673
> Mail: Ruecker at imbi.uni-freiburg.de
> Homepage: http://www.imbi.uni-freiburg.de
>