[R-meta] Meta-analysis in R when there are no sampling variances

Resham Thapa reshambt1 at gmail.com
Fri Jun 30 19:39:25 CEST 2017


Dear Dr. Wolfgang,

Thank you for your quick response.

In my dataset, I have the sample sizes (replications) and means for both the
treatment and control groups. From these, I am calculating the log response
ratio (lrr) as the effect size. Yes, most of the articles in my search do not
report the SD or CV. I found that many researchers in my field prefer to
weight effect sizes using Wi = (Nt*Nc)/(Nt+Nc), where Nt and Nc are the
numbers of replicates for the treatment and control groups. Here is a link to
one such paper:
https://www.researchgate.net/publication/315379724_Increased_plant_uptake_of_native_soil_nitrogen_following_fertilizer_addition_-_not_a_priming_effect
("Increased plant uptake of native soil nitrogen following fertilizer
addition – not a priming effect?").
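
For concreteness, here is a minimal sketch of how these quantities could be
computed (the column names mean_t, mean_c, n_t, and n_c are placeholders for
the ones in my attached data, which is read in as 'dat' below):

# log response ratio and Adams et al. (1997) weights; column names are
# placeholders for those in the actual data set
dat$lrr <- log(dat$mean_t / dat$mean_c)
dat$wi  <- (dat$n_t * dat$n_c) / (dat$n_t + dat$n_c)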

My understanding is:
1. We can replace 'vi' with 'wi' and conduct the meta-analysis. Also, I think
multiple effect sizes from the same study and multiple studies from the same
site will be more correlated with each other. To account for this, I need to
perform a multilevel meta-analysis with a random statement as outlined on
your website. So my code looks like:

library(metafor)

dat <- read.csv("C:/Users/resham.thapa/Desktop/data_eg.csv")
dat

# Weighted effect size with multiple, nested location/study/year random effects

res <- rma.mv(lrr, wi,
              random = ~ 1 | site_id/study_id/site_year/com_control,
              data=dat, method="REML")

res


Is this correct?


2. Then, I need to perform non-parametric bootstrapping by slightly modifying
the code from your website. However, I always get the warnings shown below
while running it. Any suggestions?


library(boot)

boot.func <- function(dat, indices) {

   # refit the model to the resampled rows; return NA if the fit fails
   res <- try(rma.mv(lrr, wi,
                     random = ~ 1 | site_id/study_id/site_year/com_control,
                     data=dat, method="REML", subset=indices),
              silent=TRUE)

   if (is.element("try-error", class(res))) {
      NA
   } else {
      # intercept estimate, its sampling variance, and the variance components
      c(coef(res), vcov(res), res$sigma2)
   }

}



res.boot <- boot(dat, boot.func, R=4999)

## There were 50 or more warnings (use warnings() to see the first 50)

warnings()

# 'V' appears to be not positive definite.

boot.ci(res.boot, index=1:2)

I have attached a subset of my data to illustrate the nature of my dataset
more clearly. Thank you in advance.


Regards,
Resham


On Fri, Jun 30, 2017 at 11:54 AM, Viechtbauer Wolfgang (SP) <
wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:

> Dear Resham,
>
> I approved your post as a non-member, but please sign up for the list:
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>
> What kind of outcome/effect size measure are you dealing with? I will
> venture a guess and assume you are dealing with (log-transformed) ratios of
> means. The actual sampling variance is then computed with:
>
> vi = sd1i^2/(n1i*m1i^2) + sd2i^2/(n2i*m2i^2)
>
> where m1i and m2i are the means of the first and second group, sd1i and
> sd2i are the SDs, and n1i and n2i are the group sizes. And the problem is
> that the SDs are not known (but you do know the means -- as otherwise you
> would not be able to compute the outcomes in the first place -- and you do
> know the group sizes).
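>
> For reference, a minimal sketch of how this would normally be computed with
> escalc() if the SDs were available (measure="ROM" gives the log ratio of
> means; the column names m1i, sd1i, n1i, m2i, sd2i, n2i are placeholders for
> whatever the data set actually uses):
>
> library(metafor)
> # log ratio of means and its sampling variance per the formula above;
> # column names are placeholders
> dat <- escalc(measure="ROM", m1i=m1i, sd1i=sd1i, n1i=n1i,
>               m2i=m2i, sd2i=sd2i, n2i=n2i, data=dat)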
>
> Here are some thoughts:
>
> Note that the sampling variance can also be written as:
>
> vi = cv1i^2 / n1i + cv2i^2 / n2i
>
> where cv1i and cv2i are the coefficient of variation for the first and
> second group. Maybe you could make an educated guess how large the CVs are
> and use that as a rough approximation to the actual sampling variances.
> Some people have used:
>
> vi = 1 / n1i + 1 / n2i
>
> which actually just implies that we assume that the CVs = 1 within each
> group.
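>
> A rough sketch of this approximation (the CV value of 1 is just the guess
> described above, and the column names n1i and n2i are placeholders):
>
> cv.guess <- 1   # assumed within-group coefficient of variation
> dat$vi <- cv.guess^2 / dat$n1i + cv.guess^2 / dat$n2i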
>
> Then fit the model as usual (e.g., rma(yi, vi) where yi are the
> log-transformed ratios of means and vi the approximate variances). Then you
> could follow this up with bootstrapping. Code for this can be found here:
> http://www.metafor-project.org/doku.php/tips:bootstrapping_with_ma
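>
> A minimal sketch of that model-fitting step (assuming 'dat' now contains the
> log-transformed ratios of means in 'yi' and the approximate variances in
> 'vi'); the bootstrapping itself would then follow the code at the link:
>
> res <- rma(yi, vi, data=dat)
> res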
>
> And/or you could consider using robust estimates of the standard errors
> for the fixed effects. That can be done using the robust() function (see
> help(robust) for details).
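>
> For example, a minimal sketch (the 'study' variable used for clustering here
> is an assumption about what the data set contains):
>
> # cluster-robust standard errors and tests for the fixed effects
> robust(res, cluster=dat$study)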
>
> Best,
> Wolfgang
>
> --
> Wolfgang Viechtbauer, Ph.D., Statistician | Department of Psychiatry and
> Neuropsychology | Maastricht University | P.O. Box 616 (VIJV1) | 6200 MD
> Maastricht, The Netherlands | +31 (43) 388-4170 | http://www.wvbauer.com
>
> -----Original Message-----
> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-
> bounces at r-project.org] On Behalf Of Thapa, Resham - ARS
> Sent: Friday, June 30, 2017 17:10
> To: r-sig-meta-analysis at r-project.org
> Subject: [R-meta] Meta-analysis in R when there are no sampling variances
>
> Dear All,
>
> I am conducting a meta-analysis in environmental research. Most of the
> literature in my search did not report sufficient information to calculate
> sampling variances. I have also seen meta-analyses conducted in my area of
> research with similar issues; they performed a non-parametric bootstrapping
> procedure to calculate bootstrapped CIs for the weighted effect size. In
> these papers, the authors used the MetaWin software to weight individual
> effect sizes following Adams et al. (1997), Resampling tests for
> meta-analysis of ecological data, Ecology 78(5):1277-1283, where the weight
> (wi) is given as:
>
> Wi = (Nt*Nc)/(Nt+Nc), where Nt and Nc are the numbers of replicates for the
> treatment and control groups.
>
> I want to use R to perform a meta-analysis following a similar approach.
> Any help and suggestions will be highly appreciated.
>
> Thanks,
> Resham
>
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: data_eg.csv
Type: text/csv
Size: 5667 bytes
Desc: not available
URL: <https://stat.ethz.ch/pipermail/r-sig-meta-analysis/attachments/20170630/308c1e24/attachment-0001.csv>

