[R-meta] R-sig-meta-analysis Digest, Vol 35, Issue 7
Tarun Khanna
khanna at hertie-school.org
Tue Apr 21 14:35:19 CEST 2020
Dear Wolfgang,
I have a follow-up question to Crystal's. Does bootstrapping also work with regressions that have moderator variables?
I used the link that you provided, http://www.metafor-project.org/doku.php/tips:bootstrapping_with_ma, ran a regression with "alloc" as a moderator variable, and then tried bootstrapping, but the std. errors were calculated as zero. This is the output that I got:
Bootstrap Statistics :
         original  bias  std. error
t1*   -0.51792376     0           0
t2*   -0.44786464     0           0
t3*    0.08899601     0           0
t4*    0.19465035     0           0
t5*   -0.19465035     0           0
t6*   -0.19465035     0           0
t7*   -0.19465035     0           0
t8*    0.26607085     0           0
t9*    0.19465035     0           0
t10*  -0.19465035     0           0
t11*   0.19465035     0           0
t12*   0.31363498     0           0
> boot.ci(res.boot, index=1:2) #CI for average effect
Error in sort.int(x, na.last = na.last, decreasing = decreasing, ...) :
index 0 outside bounds
In addition: Warning messages:
1: In sqrt(tv[, 2L]) : NaNs produced
2: In norm.inter(z, (1 + c(conf, -conf))/2) :
extreme order statistics used as endpoints
I also tried doing it with my own data set and got the following error:
Error in t.star[r, ] <- res[[r]] :
number of items to replace is not a multiple of replacement length
In addition: There were 50 or more warnings (use warnings() to see the first 50)
If bootstrapping does indeed work with moderator variables, what might be wrong?
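For reference, my setup looks roughly like the following (adapted from the example on the linked page; the data set and moderator here are the BCG example from metafor, not my actual data). I understand that zero bias and zero std. error usually mean every bootstrap replicate returned the same values, and that the second error can occur when a resample drops a level of the moderator, so I tried guarding the return value:

```r
library(metafor)
library(boot)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

boot.func <- function(dat, indices) {
   # 'indices' must actually be used to subset the data; if it is ignored,
   # every replicate is identical and boot() reports zero bias / std. error
   sub <- dat[indices,]
   res <- try(rma(yi, vi, mods = ~ alloc, data=sub), silent=TRUE)
   # a resample can drop a level of 'alloc', making coef() shorter than
   # expected, which triggers "number of items to replace is not a
   # multiple of replacement length" -- so always return a fixed length
   out <- rep(NA_real_, 6)
   if (!inherits(res, "try-error") && length(coef(res)) == 3)
      out <- c(coef(res), diag(vcov(res)))
   out
}

set.seed(1234)
res.boot <- boot(dat, boot.func, R=1000)
boot.ci(res.boot, index=c(1,4)) # CI for the intercept
```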
Best
Tarun
Tarun Khanna
PhD Researcher
Hertie School
Friedrichstraße 180
10117 Berlin ∙ Germany
khanna using hertie-school.org ∙ www.hertie-school.org
________________________________
From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> on behalf of r-sig-meta-analysis-request using r-project.org <r-sig-meta-analysis-request using r-project.org>
Sent: 14 April 2020 20:38:07
To: r-sig-meta-analysis using r-project.org
Subject: R-sig-meta-analysis Digest, Vol 35, Issue 7
Send R-sig-meta-analysis mailing list submissions to
r-sig-meta-analysis using r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
or, via email, send a message with subject or body 'help' to
r-sig-meta-analysis-request using r-project.org
You can reach the person managing the list at
r-sig-meta-analysis-owner using r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-sig-meta-analysis digest..."
Today's Topics:
1. Re: Dear Wolfgang (Viechtbauer, Wolfgang (SP))
2. Re: Bootstrapping confidence intervals in metafor
(Viechtbauer, Wolfgang (SP))
3. Re: How does the rma.mv function handle multiple inferences
within a study-level (Viechtbauer, Wolfgang (SP))
4. Re: Dear Wolfgang (Ju Lee)
----------------------------------------------------------------------
Message: 1
Date: Tue, 14 Apr 2020 14:04:27 +0000
From: "Viechtbauer, Wolfgang (SP)"
<wolfgang.viechtbauer using maastrichtuniversity.nl>
To: Ju Lee <juhyung2 using stanford.edu>,
"r-sig-meta-analysis using r-project.org"
<r-sig-meta-analysis using r-project.org>
Subject: Re: [R-meta] Dear Wolfgang
Message-ID: <e1a7d1d4ba4240ec996aa0e8c33d4131 using UM-MAIL3214.unimaas.nl>
Content-Type: text/plain; charset="us-ascii"
Dear Ju,
In principle, this might be of interest to you:
http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
However, a standardized mean difference is given by (m1-m2)/sd, while a (log) response ratio is log(m1/m2). I see no sensible way of converting the former to the latter.
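In essence, the comparison described on that page is a Wald-type test of the difference between the two (independent) estimates. A minimal sketch, with purely illustrative numbers:

```r
# Wald-type test comparing two independent grand mean estimates
# (the values below are made up for illustration)
b1  <- 0.35; se1 <- 0.08   # grand mean lnRR, meta-analysis 1
b2  <- 0.12; se2 <- 0.10   # grand mean lnRR, meta-analysis 2

diff <- b1 - b2
se   <- sqrt(se1^2 + se2^2)          # SE of the difference
zval <- diff / se
pval <- 2 * pnorm(abs(zval), lower.tail = FALSE)
round(c(diff = diff, se = se, zval = zval, pval = pval), 4)
```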
Best,
Wolfgang
>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
>On Behalf Of Ju Lee
>Sent: Monday, 13 April, 2020 22:47
>To: r-sig-meta-analysis using r-project.org
>Subject: [R-meta] Dear Wolfgang
>
>Dear Wolfgang,
>
>I hope you are doing well.
>
>My research group is currently working on a project where they are trying to
>compare effect sizes generated from their current mixed-effect meta-analysis
>with effect sizes (based on similar response variables) calculated in other
>meta-analysis publications.
>
>We are currently using log response ratio and are trying to make some
>statement or analysis to compare our grand mean effect sizes with other
>studies. In more detail, we are examining how herbivorous animals control
>plant growth in degraded environments. Now, there is already a meta-analysis
>out there that has examined this (in a comparable manner) in natural
>environments, as opposed to our study.
>
>My colleagues want to know if there is a way to make some type of comparison
>(e.g., whether responses are stronger in degraded vs. natural environments)
>between two effect sizes from these different studies using statistical
>approaches.
>So far, what they have from the other meta-analysis publication is the grand
>mean Hedges' d and its variance, which they transformed to lnRR and variance
>in hopes of comparing with our lnRR effect sizes.
>
>My view is that this is not possible unless we can get their actual raw
>dataset and run a whole new model combining it with our original raw dataset.
>But I wanted to ask you and the community whether there are alternative
>approaches to compare mean effect sizes among different meta-analyses
>which are assumed to have used similar approaches in study selection and
>modeling (another issue being the different random-effects structures
>used in different meta-analyses, which may not be very apparent from the
>method descriptions).
>
>Thank you for reading and I hope to hear from you!
>Best,
>JU
------------------------------
Message: 2
Date: Tue, 14 Apr 2020 15:02:43 +0000
From: "Viechtbauer, Wolfgang (SP)"
<wolfgang.viechtbauer using maastrichtuniversity.nl>
To: Crystal La Rue <cj.larue using uq.edu.au>,
"r-sig-meta-analysis using r-project.org"
<r-sig-meta-analysis using r-project.org>
Subject: Re: [R-meta] Bootstrapping confidence intervals in metafor
Message-ID: <163f141ecce0469f90ae034ec739bc7a using UM-MAIL3214.unimaas.nl>
Content-Type: text/plain; charset="us-ascii"
Dear Crystal,
This is relevant:
http://www.metafor-project.org/doku.php/tips:bootstrapping_with_ma
You just have to change boot.func() so that the appropriate model is fitted (it sounds like you are using rma.mv()) and change what is returned (i.e., c(coef(res), vcov(res)) is probably all you need, unless you also want to create CIs for the variance components of the model). Here is a simple example of a non-parametric bootstrap with a three-level model:
library(metafor)
library(boot)
dat <- dat.konstantopoulos2011
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data=dat)
res
boot.func <- function(dat, indices) {
   sub <- dat[indices,]
   res <- try(rma.mv(yi, vi, random = ~ 1 | district/school, data=sub), silent=TRUE)
   if (is.element("try-error", class(res))) NA else c(coef(res), vcov(res))
}
set.seed(1234)
res.boot <- boot(dat, boot.func, R=1000)
boot.ci(res.boot, index=1:2)
An interesting consideration is whether one should really do a stratified bootstrap here. This can be done with:
set.seed(1234)
res.boot <- boot(dat, boot.func, R=1000, strata=dat$district)
boot.ci(res.boot, index=1:2)
Not sure which is more appropriate here.
Best,
Wolfgang
>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
>On Behalf Of Crystal La Rue
>Sent: Friday, 03 April, 2020 10:24
>To: r-sig-meta-analysis using r-project.org
>Subject: [R-meta] Bootstrapping confidence intervals in metafor
>
>Dear Wolfgang,
>
>I am conducting a three-level random-effects meta-analysis using metafor in
>R. I use Fisher's r-to-z transformed correlation coefficients and I have
>been advised to generate bootstrapped confidence intervals to capture a more
>accurate population standard error. I'm still quite new to R and am having
>trouble working out how to do this. Can you point me in the right direction?
>
>Many thanks,
>Crystal
------------------------------
Message: 3
Date: Tue, 14 Apr 2020 15:36:49 +0000
From: "Viechtbauer, Wolfgang (SP)"
<wolfgang.viechtbauer using maastrichtuniversity.nl>
To: Divya Ravichandar <divya using secondgenome.com>
Cc: "r-sig-meta-analysis using r-project.org"
<r-sig-meta-analysis using r-project.org>
Subject: Re: [R-meta] How does the rma.mv function handle multiple
inferences within a study-level
Message-ID: <8b4bb8b5c33e44f6b149b6bdb10b8ba1 using UM-MAIL3214.unimaas.nl>
Content-Type: text/plain; charset="utf-8"
To see what is going on, take a look at the actual weight matrix:
case <- data.frame(Study=c("a","b","c","c"), ES=c(-1.5,-3,1.5,3), SE=c(.2,.4,.2,.4))
case
res <- rma.mv(ES, SE^2, random = ~ 1 | Study, data=case)
res
weights(res, type="matrix")
The model estimate is computed with:
b = (X'WX)^(-1) X'Wy
Leaving aside the (X'WX)^(-1) part (which is just a scalar here), X'Wy is then:

            [w11              ] [y1]             [w11*y1         ]
  [1 1 1 1] [    w22          ] [y2] = [1 1 1 1] [w22*y2         ]
            [        w33  w34 ] [y3]             [w33*y3 + w34*y4]
            [        w34  w44 ] [y4]             [w34*y3 + w44*y4]

  = w11*y1 + w22*y2 + w33*y3 + w34*y4 + w34*y3 + w44*y4
  = w11*y1 + w22*y2 + (w33+w34)*y3 + (w34+w44)*y4
As you can see above, w34 is negative and almost as large as w33 and w44 (which is indeed a result of sigma^2 being so large here). So, the weights attached to y1-y4 are:
w11 = 0.166950
w22 = 0.163671
w33+w34 = 0.133739
w34+w44 = 0.033435
So, the two estimates coming from study 'c' end up receiving together about as much weight (0.133739 + 0.033435 = 0.167174) as the two individual estimates from studies 'a' and 'b'.
In case 2, things are indeed simple and the weights are simply 1/(SE^2 + sigma^2) for each estimate.
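Continuing from the case/res objects fitted above, the weights and the model estimate can be reproduced by hand (a sketch; the per-estimate weights are the row sums of the weight matrix):

```r
# the weight matrix is the inverse of the marginal var-cov matrix
V <- vcov(res, type="obs")
W <- solve(V)        # should match weights(res, type="matrix")
rowSums(W)           # w11, w22, w33+w34, w34+w44 from above

# the model estimate then follows from b = (X'WX)^(-1) X'Wy
X <- cbind(rep(1, nrow(case)))   # model matrix: just a column of 1s
y <- case$ES
c(solve(t(X) %*% W %*% X) %*% t(X) %*% W %*% y)  # should match coef(res)
```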
Best,
Wolfgang
>-----Original Message-----
>From: Divya Ravichandar [mailto:divya using secondgenome.com]
>Sent: Wednesday, 01 April, 2020 23:06
>To: Viechtbauer, Wolfgang (SP)
>Cc: r-sig-meta-analysis using r-project.org
>Subject: Re: [R-meta] How does the rma.mv function handle multiple
>inferences within a study-level
>
>I apologize for my haste in the previous email. All sigma^2 in the above
>matrix are indeed the same (There were some rounding offs I overlooked).
>So would a take away here be that since there are two inferences under one
>Study, these are additionally weighted by the sigma^2? As opposed to case2
>shown below where with 1 inference per Bin-level, a meta-analysis at the Bin
>level would simply down weight each inference by Sigma^2+SE^2 only
>
>Case 2
>case <- data.frame(Study=c("a","b","c","c"), Bin=c("a","b","c","d"),ES=c(-
>1.5,-3,1.5,3), SE=c(.2,.4,.2,.4))
>res <- rma.mv(ES, SE^2, random = ~ 1 | Bin, data=case)
>
>On Wed, Apr 1, 2020 at 2:00 PM Divya Ravichandar <divya using secondgenome.com>
>wrote:
>Thank you for your explanation of how the weight matrix is computed.
>
>A follow-up question on the 'sigma^2'-only terms in the variance matrix
>[terms in matrix positions (3,4) & (4,3)].
>
>I assume (based on running the example above) the sigma^2 here is different
>from the sigma^2 used along the diagonal. Is this correct? If yes, is a
>sigma^2 estimated based on just the values corresponding to study c in the
>example?
>
>Thank you
>
>On Wed, Apr 1, 2020 at 1:21 PM Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>Dear Divya,
>
>The model you are using implies the following structure for the marginal
>var-cov matrix of the estimates:
>
>[SE_1^2 + sigma^2                                                        ]
>[                  SE_2^2 + sigma^2                                      ]
>[                                    SE_3^2 + sigma^2   sigma^2          ]
>[                                    sigma^2            SE_4^2 + sigma^2 ]
>
>The weight matrix is the inverse thereof. See:
>
>library(metafor)
>
>case <- data.frame(Study=c("a","b","c","c"), ES=c(-1.5,-3,1.5,3),
>SE=c(.2,.4,.2,.4))
>res <- rma.mv(ES, SE^2, random = ~ 1 | Study, data=case)
>res
>
>vcov(res, type="obs")
>weights(res, type="matrix")
>
>The model estimate is then given by b = (X'WX)^(-1) X'Wy, where X is just a
>column vector of 1s, W is the weight matrix above, and y is a column vector
>with the 4 effect sizes.
>
>Best,
>Wolfgang
>
>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
>On Behalf Of Divya Ravichandar
>Sent: Wednesday, 01 April, 2020 21:59
>To: r-sig-meta-analysis using r-project.org
>Subject: [R-meta] How does the rma.mv function handle multiple inferences
>within a study-level
>
>My use case is presented in the dataframe below. Studies a,b and c are to
>be integrated in a meta-analysis using: rma.mv(ES, SE^2, random = ~ 1 |
>Study, data=case)
>
>In this case, studies a & b have one inference each but because of my study
>design two inferences exist for study c. I am curious as to how the 2
>inferences under study c are weighted in the meta-analysis calculation as
>compared to the inference for studies a &b.
>
>case <- data.frame(Study=c("a","b","c","c"),
>                   Effect_size=c(-1.5,-3,1.5,3),
>                   Standard_error=c(.2,.4,.2,.4))
>
>Thanks
>--
>*Divya Ravichandar*
>Scientist
>Second Genome
------------------------------
Message: 4
Date: Tue, 14 Apr 2020 16:53:57 +0000
From: Ju Lee <juhyung2 using stanford.edu>
To: "Viechtbauer, Wolfgang (SP)"
<wolfgang.viechtbauer using maastrichtuniversity.nl>,
"r-sig-meta-analysis using r-project.org"
<r-sig-meta-analysis using r-project.org>
Subject: Re: [R-meta] Dear Wolfgang
Message-ID:
<BYAPR02MB5559631244B96112473E9F7FF7DA0 using BYAPR02MB5559.namprd02.prod.outlook.com>
Content-Type: text/plain; charset="utf-8"
Dear Wolfgang,
Thanks for your insights.
I am reaching out to my colleagues to see how they made this transformation.
In the meantime, based on the information you sent: is it possible to compare two different meta-analyses if they use the same effect size, say lnRR? And, if I understood correctly, can this Wald-type test be performed with only the grand mean effect sizes and their standard errors, without the sample sizes or tau values?
How would this approach apply to publications that seemingly used similar mixed-effects models, when there is no guarantee that the random-effects structures are standardized between the two?
Thank you very much!
Best,
JU
________________________________
From: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer using maastrichtuniversity.nl>
Sent: Tuesday, April 14, 2020 7:04 AM
To: Ju Lee <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org <r-sig-meta-analysis using r-project.org>
Subject: RE: Dear Wolfgang
Dear Ju,
In principle, this might be of interest to you:
http://www.metafor-project.org/doku.php/tips:comp_two_independent_estimates
However, a standardized mean difference is given by (m1-m2)/sd, while a (log) response ratio is log(m1/m2). I see no sensible way of converting the former to the latter.
Best,
Wolfgang
------------------------------
Subject: Digest Footer
_______________________________________________
R-sig-meta-analysis mailing list
R-sig-meta-analysis using r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
------------------------------
End of R-sig-meta-analysis Digest, Vol 35, Issue 7
**************************************************