[R-meta] Dependent variable in Meta Analysis

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Jun 4 15:10:04 CEST 2020

Assuming that the coefficients are commensurable, you can just meta-analyze them directly. The squared standard errors of the coefficients are then the sampling variances.
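For concreteness, here is a minimal base-R sketch of the inverse-variance pooling this implies; the coefficients and standard errors below are made-up placeholders, and in practice one would use rma() from the metafor package:

```r
# Hypothetical regression coefficients and standard errors from four studies
b  <- c(-0.08, -0.12, -0.05, -0.10)
se <- c(0.03, 0.05, 0.02, 0.04)

vi <- se^2     # squared standard errors = sampling variances
wi <- 1 / vi   # inverse-variance weights

pooled    <- sum(wi * b) / sum(wi)  # fixed-effect pooled estimate
pooled_se <- sqrt(1 / sum(wi))      # standard error of the pooled estimate

# Random-effects equivalent with metafor:
# library(metafor); rma(yi = b, vi = se^2)
```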

By commensurable, I mean that the coefficients measure the same thing and can be compared directly. For example, suppose the regression model y = b0 + b1 x + e has been examined in multiple studies. Since b1 reflects how many units y changes (on average) for a one-unit increase in x, the coefficient b1 is only comparable across studies if y has been measured in the same units across studies and likewise for x. (Alternatively, if a known linear transformation converts the x from one study into the x from another, and the same for y, then one can adjust b1 to make it commensurable across studies.)

In certain models, one can relax the requirement that the units must be the same. For example, if the model is ln(y) = b0 + b1 x + e, then the units of y can actually differ across studies if they are multiplicative transformations of each other. If the model is ln(y) = b0 + b1 ln(x) + e, then x can also differ across studies in terms of a multiplicative transformation.
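A quick simulated check of the first of these invariances (an illustrative sketch, not from the original thread): multiplying y by a constant, e.g. converting kWh to Wh, shifts only the intercept of ln(y) = b0 + b1 x + e and leaves b1 untouched:

```r
set.seed(42)
x <- runif(50, 1, 10)
y <- exp(0.5 + 0.3 * x + rnorm(50, sd = 0.1))  # simulated log-linear data

fit_orig     <- lm(log(y) ~ x)         # y in original units
fit_rescaled <- lm(log(1000 * y) ~ x)  # y multiplied by a constant

# Slopes agree; only the intercepts differ, by log(1000)
all.equal(unname(coef(fit_orig)["x"]), unname(coef(fit_rescaled)["x"]))
```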

I think the latter gets close to (or is?) what economists do to estimate 'elasticities', and that may in fact be what you are dealing with.

Another complexity comes into play when there are other x's in the model. Strictly speaking, all models should include the same set of predictors, as otherwise the coefficient of interest is 'adjusted for' different sets of covariates, which again makes it incommensurable. As a rough way to deal with different sets of covariates across studies, one could fit a meta-regression model (with the coefficient of interest as the outcome) that uses dummy variables to indicate which covariates each study's original regression model included.
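As a sketch of that dummy-variable idea (all values hypothetical; with metafor one would pass the dummies via the mods argument of rma()), a fixed-effect weighted-regression stand-in looks like:

```r
# Hypothetical coefficients, standard errors, and covariate-adjustment dummies
b  <- c(-0.08, -0.12, -0.05, -0.10, -0.07)
se <- c(0.03, 0.05, 0.02, 0.04, 0.03)

adj_income  <- c(1, 0, 1, 1, 0)  # 1 = original model adjusted for income
adj_weather <- c(0, 0, 1, 0, 1)  # 1 = original model adjusted for weather

# Rough fixed-effect meta-regression via weighted least squares;
# with metafor: rma(yi = b, vi = se^2, mods = ~ adj_income + adj_weather)
fit <- lm(b ~ adj_income + adj_weather, weights = 1 / se^2)
coef(fit)
```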


>-----Original Message-----
>From: Tarun Khanna [mailto:khanna at hertie-school.org]
>Sent: Thursday, 04 June, 2020 14:16
>To: Viechtbauer, Wolfgang (SP); r-sig-meta-analysis at r-project.org
>Subject: Re: Dependent variable in Meta Analysis
>Thank you for your reply Wolfgang.
>The "beta coefficients" that I refer to are not standardized regression
>coefficients but the relevant regression coefficients in the original
>studies. Would it be correct to directly meta-analyze the coefficients even
>when they are not standardized? How do we take into account the standard
>error of the coefficients? I have seen meta-analyses in the literature that
>use the transformation beta coefficient / sqrt(sample size), but I don't see
>how that takes into account the associated standard error.
>I have instead been calculating r coefficients using the t values of the
>relevant coefficients and the sample size using the following formula.
>r = ( t^2 / (t^2 + sample size) )^1/2
>I have been using the r to Fisher's Z transformation that you
>mentioned. Unfortunately, as you mentioned, most of the studies
>employ multivariate analysis, and so the transformation is not accurate.
>What would be the correct way to handle this?
>Tarun Khanna
>PhD Researcher
>Hertie School
>Friedrichstraße 180
>10117 Berlin ∙ Germany
>khanna at hertie-school.org ∙ www.hertie-school.org
>From: Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer at maastrichtuniversity.nl>
>Sent: 04 June 2020 13:56:59
>To: Tarun Khanna; r-sig-meta-analysis at r-project.org
>Subject: RE: Dependent variable in Meta Analysis
>Dear Tarun,
>What exactly do you mean by 'beta coefficient'? A standardized regression
>coefficient? In the (very unlikely) case that the model includes no other
>predictors and is just a standard regression model, then the standardized
>regression coefficient for that single predictor is actually identical to
>the correlation beteen the predictor and the outcome and converting this
>correlation via Fisher's r-to-z transformation is fine (and then 1/(n-3) can
>be used as the corresponding sampling variance). However, if there are other
>predictors in the model, then the standardized regression coefficient is not
>a simple correlation and while one can still apply Fisher's r-to-z
>transformation to the coefficient, it will not have a variance of 1/(n-3)
>and assuming so would be wrong.
>Why don't you just meta-analyze the 'beta coefficients' directly? If these
>coefficients reflect percentage change, it sounds like they are 'unitless'
>and comparable across studies. Then you get the pooled estimate of the
>percentage change directly from the model.
>>-----Original Message-----
>>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-
>>On Behalf Of Tarun Khanna
>>Sent: Thursday, 04 June, 2020 13:41
>>To: r-sig-meta-analysis at r-project.org
>>Subject: [R-meta] Dependent variable in Meta Analysis
>>Dear All,
>>I am conducting a meta analysis of reduction in energy consumption in
>>households that have been exposed to certain behavioural interventions in
>>trials. The beta coefficients in the regressions in the original studies
>>can usually be interpreted as percentage change in electricity consumption.
>>To do the meta analysis I am converting these beta coefficients to Fisher's
>>Z. My problem is that Fisher's Z is not as easy to interpret as percentage
>>change in energy consumption.
>>Question 1: Is it possible to do the meta-analysis using the beta
>>coefficients coming from the original studies so that the results remain
>>easy to interpret?
>>Question 2: Is it sensible to convert the final Fisher's Z estimates back
>>to the scale of the dependent variable used in the original studies?
>>Sorry if this question sounds too basic.
>>Tarun Khanna
>>PhD Researcher
>>Hertie School
>>Friedrichstraße 180
>>10117 Berlin ∙ Germany
>>khanna at hertie-school.org ∙ www.hertie-school.org

More information about the R-sig-meta-analysis mailing list