[R-meta] Meta-analysis dichotomous outcome/quantitative predictor and calculation of r2 or equivalent
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer sending from maastrichtuniversity.nl
Wed Aug 15 14:33:30 CEST 2018
1) meta <- rma.uni(yi, sei) is not correct: the second positional argument of rma.uni() is 'vi', the sampling variances, so the standard errors would silently be treated as variances. It should be:
meta <- rma.uni(yi, sei=sei)
(assuming 'sei' is the name of the variable that contains the standard errors).
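As a rough sketch, the full two-stage workflow might look as follows (the data frame and variable names -- 'dat', 'study', 'outcome', 'predictor', 'x1' -- are placeholders; here simulated data stand in for the stacked raw data from the 12 studies):

```r
## Two-stage approach: fit the same logistic regression in each study,
## collect the coefficient and standard error for the predictor, then
## pool the log odds ratios with a random-effects model.
library(metafor)

## simulated stand-in for the stacked raw data from 12 studies
set.seed(1234)
dat <- data.frame(study = rep(1:12, each = 50),
                  predictor = rnorm(600), x1 = rnorm(600))
dat$outcome <- rbinom(600, 1, plogis(0.4 * dat$predictor + 0.2 * dat$x1))

## stage 1: per-study logistic regressions
est <- t(sapply(split(dat, dat$study), function(d) {
  fit <- glm(outcome ~ predictor + x1, data = d, family = binomial)
  c(yi  = coef(summary(fit))["predictor", "Estimate"],
    sei = coef(summary(fit))["predictor", "Std. Error"])
}))
est <- data.frame(est)

## stage 2: random-effects meta-analysis of the coefficients
meta <- rma.uni(yi, sei = sei, data = est)
summary(meta)
```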
2) If you have the raw data, you can do an 'IPD' meta-analysis. Just combine the data from the 12 studies into one dataset. Then fit a multilevel (mixed-effects) model to these data that takes the clustering of observations within studies into consideration.
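A minimal sketch of the IPD approach, assuming the stacked data frame described above and using lme4::glmer (one possible choice of software; variable names are placeholders):

```r
## IPD approach: one mixed-effects logistic regression on the combined
## data, with a random intercept per study to account for clustering.
## A random slope for the predictor could be added as
## (1 + predictor | study) if the data support it.
library(lme4)

## simulated stand-in for the stacked raw data from 12 studies
set.seed(1234)
dat <- data.frame(study = factor(rep(1:12, each = 50)),
                  predictor = rnorm(600), x1 = rnorm(600))
dat$outcome <- rbinom(600, 1, plogis(0.4 * dat$predictor + 0.2 * dat$x1))

ipd <- glmer(outcome ~ predictor + x1 + (1 | study),
             data = dat, family = binomial)
summary(ipd)
```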
For a discussion/illustration of the IPD vs 2-stage (computing coefficients per study and then combining) approach, see:
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Michael Dewey
Sent: Wednesday, 15 August, 2018 12:29
To: Creese, Byron; r-sig-meta-analysis at r-project.org
Subject: Re: [R-meta] Meta-analysis dichotomous outcome/quantitative predictor and calculation of r2 or equivalent
Comments in line
On 15/08/2018 10:27, Creese, Byron wrote:
> Hello all, I am planning to conduct a meta-analysis of some data using metafor but have a couple of questions before I start...
> I have data from 12 studies and I have the raw data so I can conduct exactly the same primary analysis on all studies.
> My outcome variable is dichotomous and my predictor is quantitative. I am also controlling for a number of other variables which are common across all datasets.
> My primary analysis will be to run a logistic regression on each study.
> Because my predictor is quantitative I do not have ai (treatment positive), bi (treatment neg), ci (control positive) and di (control negative) so I think my best option would be a random-effects meta-analysis of the regression coefficients and their standard error, as follows:
> meta <- rma.uni(yi, sei)
> My first question was to check if that sounds reasonable.
Yes, that sounds reasonable to me.
> Secondly, if I were running this analysis in just one study I would assess the improvement in model fit associated with the predictor by following this calculation using the NagelkerkeR2 function in fmsb package:
> NagelkerkeR2(model)$R2 - NagelkerkeR2(model.null)$R2
> Is there an equivalent figure for meta-analysis or would it be appropriate to meta-analyse the r2 from the above calculated in each study?
I am not sure why you would want to do that. You have the estimates and
their standard errors, so per study you have an estimate of the
improvement in model fit by looking at the confidence interval, and
overall you have the same from the summary estimate and its confidence
interval. If you try to summarise the R^2, what happens if you have
identical standard errors but coefficients differing only in sign,
leading to the same R^2?
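A tiny numeric check of this point (simulated data; reversing the sign of the predictor flips the coefficient but leaves the deviance, and hence any R^2-type measure derived from it, unchanged, so pooling R^2 discards the direction of the effect):

```r
## Two logistic fits that differ only in the sign of the predictor:
## same fit (deviance), opposite coefficients, hence the same R^2.
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(1.5 * x))
f1 <- glm(y ~ x,     family = binomial)
f2 <- glm(y ~ I(-x), family = binomial)
c(coef(f1)[2], coef(f2)[2])    # equal magnitude, opposite sign
c(deviance(f1), deviance(f2))  # identical fit
```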
> Many thanks,