[R-meta] Follow-up - Dependencies in data: multiple studies with overlapping sample sizes
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Nov 8 16:36:04 CET 2018
I would use:
W <- solve(V)            # weight matrix = inverse of the full (non-diagonal) V
X <- model.matrix(res1)  # fixed-effects design matrix of the fitted model
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * res1$sigma2 / (res1$sigma2 + (res1$k-res1$p)/sum(diag(P)))
W should be the inverse of V. For a diagonal V matrix, this is the same as 1/vi, but your V matrix is not diagonal. Also, the model has only a single variance component, so there is no need to take sums in the last step.
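As a concrete check of the diagonal-case point (toy variances, made up for illustration, not the actual data):

```r
# Toy check (made-up variances): for a diagonal V, solve(V) and
# diag(1/vi) give the same weight matrix W; with nonzero
# off-diagonal covariances the two no longer coincide.
vi <- c(0.12, 0.26, 0.13)
V_diag <- diag(vi)
all.equal(solve(V_diag), diag(1/vi))          # TRUE

V_full <- V_diag
V_full[1, 2] <- V_full[2, 1] <- 0.001         # add a covariance term
isTRUE(all.equal(solve(V_full), diag(1/vi)))  # FALSE
```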
Best,
Wolfgang
-----Original Message-----
From: Lasse Bang [mailto:banlas at ous-hf.no]
Sent: Friday, 02 November, 2018 11:44
To: Viechtbauer, Wolfgang (SP); 'r-sig-meta-analysis at r-project.org'
Subject: Follow-up - Dependencies in data: multiple studies with overlapping sample sizes
Dear Wolfgang,
Thank you so much for your comments!
I hope I can bother you with one last question:
For the model you proposed: res1 <- rma.mv(yi, V, data=dat1, random = ~ 1 | study_id)
How would I calculate I^2 for this model? I have been looking at http://www.metafor-project.org/doku.php/tips:i2_multilevel_multivariate
but am unsure which particular formula to use in my case.
I feel this formula is closest to what I am looking for:
W <- diag(1/dat1$vi)
X <- model.matrix(res1)
P <- W - W %*% X %*% solve(t(X) %*% W %*% X) %*% t(X) %*% W
100 * sum(res1$sigma2) / (sum(res1$sigma2) + (res1$k-res1$p)/sum(diag(P)))
This produces an I^2 of 63.67. Is this formula appropriate in my case?
Best,
-Lasse
Lasse Bang, Ph.D
Postdoctoral Researcher
Regional Department for Eating Disorders (RASP)
Oslo University Hospital, Ullevål HF
Oslo, Norway
E-mail: Lasse.Bang at ous-hf.no / I.Lasse.Bang at gmail.com
Phone: +47 23 02 73 71 / +47 41 42 97 04
LIKE US ON FACEBOOK: «FORSKERGRUPPE FOR SPISEFORSTYRRELSER OUS»
NO SENSITIVE CONTENT
-----Original Message-----
From: Viechtbauer, Wolfgang (SP) [mailto:wolfgang.viechtbauer at maastrichtuniversity.nl]
Sent: 1 November 2018 16:56
To: Lasse Bang; 'r-sig-meta-analysis at r-project.org'
Subject: RE: Follow-up - Dependencies in data: multiple studies with overlapping sample sizes
Dear Lasse,
1) Ignore the method="FE" part (I've removed this from the website).
2) Ideally, one would want to fit a model with random effects for samples and studies, so:
res1 <- rma.mv(yi, V, data=dat1, random = list(~ 1 | study_id, ~ 1 | sample_id))
However, the dataset is too small for such a model to be sensible. Instead, I would go with random effects for studies here:
res1 <- rma.mv(yi, V, data=dat1, random = ~ 1 | study_id)
Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Lasse Bang
Sent: Thursday, 01 November, 2018 16:15
To: 'r-sig-meta-analysis at r-project.org'
Subject: [R-meta] Follow-up - Dependencies in data: multiple studies with overlapping sample sizes
Dear Guerta, Wolfgang, and others.
Thank you so much for your input on this subject! After considering your comments, I believe a model that incorporates the covariances between the log odds ratios (Wolfgang's suggestion) could be the way to go.
As suggested, I reproduced the code from http://www.metafor-project.org/doku.php/analyses:gleser2009#dichotomous_response_variable using my own sample of 6 studies, where the first three studies share a common control group and the remaining studies include unique samples (i.e., sample_id = 1, 1, 1, 2, 3, 4). I have included my code and results at the bottom of this message.
I have some follow-up questions:
1) I could not find any documentation on the method="FE" argument, and noticed that running the model with the default REML setting produced identical results. Is there a reason to specify "FE" in my case, or was this perhaps specific to the example on the metafor webpage (in which the model included a moderator)?
2) If I were to extend my model to a random-effects model, I gather I must include a random argument in the model specification? Would this be something like:
res1 <- rma.mv(yi, V, data=dat1, random = ~ 1 | study_id)
(where each study has its own unique study_id)? Or perhaps random = ~ 1 | sample_id (where studies sharing the same control group have the same value), so that studies using the same control group receive the same random effect?
All input highly appreciated!
Kind regards,
-Lasse
CODE:
dat1 <- escalc(measure="OR", ai=casepos, bi=caseneg, ci=contpos, di=contneg, data=dat1)
calc.v <- function(x) {
  # fill the block with the covariance shared by estimates from the same sample
  v <- matrix(x$pci[1]*(1-x$pci[1])/x$n1i[1], nrow=nrow(x), ncol=nrow(x))
  # replace the diagonal with the sampling variances of the individual estimates
  diag(v) <- x$vi
  v
}
V <- bldiag(lapply(split(dat1, dat1$sample_id), calc.v))
res1 <- rma.mv(yi, V, data=dat1, method="FE")
Which resulted in the V matrix:
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.123747512 0.001229731 0.001229731 0.000000 0.000000 0.000000
[2,] 0.001229731 0.255589260 0.001229731 0.000000 0.000000 0.000000
[3,] 0.001229731 0.001229731 0.127694435 0.000000 0.000000 0.000000
[4,] 0.000000000 0.000000000 0.000000000 0.109777 0.000000 0.000000
[5,] 0.000000000 0.000000000 0.000000000 0.000000 0.105036 0.000000
[6,] 0.000000000 0.000000000 0.000000000 0.000000 0.000000 0.486039
And model results:
Multivariate Meta-Analysis Model (k = 6; method: FE)
Variance Components: none
Test for Heterogeneity:
Q(df = 5) = 13.8894, p-val = 0.0163
Model Results:
estimate se zval pval ci.lb ci.ub
0.4761 0.1577 3.0189 0.0025 0.1670 0.7852 **
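As a side note, the block-diagonal structure that split() + calc.v() + bldiag() produce can be reproduced on a tiny mock data frame (hypothetical numbers, not the data above; metafor is assumed to be installed for bldiag()): rows sharing a sample_id form one block with a common off-diagonal covariance, while independent samples stay uncorrelated.

```r
library(metafor)  # for bldiag()

# Hypothetical data: estimates 1 and 2 share a control group (sample_id = 1),
# estimate 3 comes from an independent sample (sample_id = 2).
dat <- data.frame(sample_id = c(1, 1, 2),
                  vi  = c(0.12, 0.26, 0.11),  # sampling variances of the log ORs
                  pci = c(0.5, 0.5, 0.4),     # control-group proportion
                  n1i = c(204, 204, 100))     # control-group size

calc.v <- function(x) {
  v <- matrix(x$pci[1] * (1 - x$pci[1]) / x$n1i[1],
              nrow = nrow(x), ncol = nrow(x))
  diag(v) <- x$vi
  v
}

V <- bldiag(lapply(split(dat, dat$sample_id), calc.v))
V[1, 2]  # shared covariance: 0.5 * 0.5 / 204
V[1, 3]  # 0: samples 1 and 2 are independent
```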
-----Original Message-----
From: Viechtbauer, Wolfgang (SP) [mailto:wolfgang.viechtbauer at maastrichtuniversity.nl]
Sent: 25 October 2018 22:08
To: Lasse Bang; 'r-sig-meta-analysis at r-project.org'
Subject: RE: Dependencies in data: multiple studies with overlapping sample sizes
Hi Lasse,
Indeed, when different groups are contrasted with a common group, then the estimates are no longer independent (due to 'reuse' of the information from the common group). Gleser & Olkin (2009) call this the 'multiple-treatment study' case. Code to compute the covariance between the log odds ratios can be found here:
http://www.metafor-project.org/doku.php/analyses:gleser2009#dichotomous_response_variable
A model that incorporates these covariances can then be fitted. So, in this scenario, there is no need to use cluster-robust methods. I am not sure the latter would be appropriate for this number of studies anyway, even when using the small-sample corrections.
Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Lasse Bang
Sent: Thursday, 25 October, 2018 10:04
To: 'r-sig-meta-analysis at r-project.org'
Subject: [R-meta] Dependencies in data: multiple studies with overlapping sample sizes
Dear experts,
After comments from reviewers, we are considering performing meta-analyses based on a systematic search which included studies measuring the association between bullying (exposure) and eating disorders (outcome). All studies are case-control studies, and the effect sizes are odds-ratios.
Based on the included studies, there are three possible meta-analyses, one for each type of bullying the participants experienced (generic teasing, generic bullying, or appearance-related teasing; each study typically explored more than one type of bullying and so reports multiple effect sizes). If performed, these meta-analyses would be based on small numbers of studies (k = 6, 7, and 11).
One concern I have is that three of the studies use identical healthy control samples. Study A compared patients with anorexia nervosa to healthy controls, study B compared patients with bulimia nervosa to healthy controls, and study C compared patients with binge-eating disorder to healthy controls. The cases differ across studies (n = 52-102), but the healthy controls are the same (n = 204), so there is some degree of dependency between the data from these studies. These three studies are also among those with the largest total n, and all three report all three types of bullying mentioned earlier (so if three separate meta-analyses were performed, all three studies would be included in each).
I'm wondering how to handle this in a meta-analysis. I know such dependencies can be handled using robust variance estimators (the robumeta package), but will this work with the number of studies I am dealing with (k = 6-11)? I know a small-sample correction is available when fitting a meta-regression model in robumeta, but I'm wondering if this is really feasible for the number of studies that I have.
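For reference, a robust-variance model with the small-sample correction mentioned above might be sketched as follows (toy data frame and column names are hypothetical, made up for illustration; robumeta's robu() function is assumed):

```r
library(robumeta)

# Hypothetical data: one row per effect size, with yi (log odds ratio),
# vi (its sampling variance), and study (clustering id, so that estimates
# sharing a control group fall in the same cluster).
dat <- data.frame(yi    = c(0.4, 0.6, 0.3, 0.5),
                  vi    = c(0.12, 0.26, 0.11, 0.49),
                  study = c(1, 1, 2, 3))

# Intercept-only model with cluster-robust standard errors;
# small = TRUE applies the small-sample correction.
res <- robu(yi ~ 1, data = dat, studynum = study,
            var.eff.size = vi, small = TRUE)
print(res)
```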
All input appreciated!
Kind regards,
-Lasse Bang
More information about the R-sig-meta-analysis mailing list