[R-meta] A question regarding the 'metafor package' : Standardized regression coefficients as outcome measures
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Sat Dec 14 14:30:05 CET 2019
Thanks for the clarification. Some further thoughts based on that:
Falconer's formula uses ICCs, so I assume rMZ and rDZ are also ICCs. The large-sample sampling variance of an ICC (for the case where we have k=2 observations in each class) is essentially the same as for a Pearson product-moment correlation, so approximately (1-rho^2)^2 / (n-1), which we can estimate with (1-r^2)^2 / (n-1). So, the sampling variance of h^2 is:
Var[h^2] = 4 * ((1-r_MZ^2)^2 / (n_MZ-1) + (1-r_DZ^2)^2 / (n_DZ-1)),
where n_MZ and n_DZ are the number of monozygotic and dizygotic twin pairs. So, you could meta-analyze the h^2 values using the above as the sampling variance.
If you prefer to meta-analyze h (i.e., sqrt(h^2)) values, then the sampling variance of that could be estimated with
Var[h] = Var[h^2] / (4 * h^2).
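As a minimal sketch in R of the two formulas above, with hypothetical ICCs and pair counts (all values below are made up for illustration):

```r
# Hypothetical inputs for one study (illustrative values, not real data)
r_mz <- 0.70; n_mz <- 120   # ICC and number of MZ twin pairs
r_dz <- 0.40; n_dz <- 150   # ICC and number of DZ twin pairs

# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ)
h2 <- 2 * (r_mz - r_dz)

# Large-sample sampling variance of h^2
var_h2 <- 4 * ((1 - r_mz^2)^2 / (n_mz - 1) + (1 - r_dz^2)^2 / (n_dz - 1))

# Delta-method sampling variance of h = sqrt(h^2)
var_h <- var_h2 / (4 * h2)
```

Computed per study, the h^2 (or sqrt(h^2)) values and their sampling variances could then be passed to metafor's rma() as yi and vi.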
Alternatively, one could apply Fisher's r-to-z transformation to the ICC values, which is also the variance-stabilizing transformation for ICCs (for the k=2 case; in other cases, the equation for the variance-stabilizing transformation is slightly different). So, let z_MZ = 1/2 ln((1 + r_MZ) / (1 - r_MZ)) and z_DZ analogously. Fisher found that the sampling variance of such a z-transformed ICC for k=2 is well approximated by 1/(n-3/2), so we have:
Var[z_MZ] = 1/(n_MZ - 3/2)
Var[z_DZ] = 1/(n_DZ - 3/2)
and so the sampling variance of 2*(z_MZ - z_DZ) is:
Var[2*(z_MZ - z_DZ)] = 4 * (1/(n_MZ - 3/2) + 1/(n_DZ - 3/2)).
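In R, this works out as follows, again with made-up values; note that base R's atanh() is exactly the r-to-z transformation defined above:

```r
# Hypothetical ICCs and pair counts (illustrative only)
r_mz <- 0.70; n_mz <- 120
r_dz <- 0.40; n_dz <- 150

# Fisher's r-to-z: atanh(r) equals 1/2 * log((1 + r) / (1 - r))
z_mz <- atanh(r_mz)
z_dz <- atanh(r_dz)

yi <- 2 * (z_mz - z_dz)                       # outcome on the z scale
vi <- 4 * (1/(n_mz - 3/2) + 1/(n_dz - 3/2))   # its sampling variance

# Wald-type test of yi = 0 (equivalent to testing h^2 = 0)
zval <- yi / sqrt(vi)
pval <- 2 * pnorm(abs(zval), lower.tail = FALSE)
```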
The advantage of this approach is that the sampling variance estimates are much more accurate, and the variance-stabilizing transformation also has normalizing properties. However, 2*(z_MZ - z_DZ) is not h^2, and one cannot simply apply the reverse transformation to 2*(z_MZ - z_DZ) to obtain h^2. So, while this approach might be preferable from a statistical point of view, it doesn't actually allow you to estimate the average h^2 (or h). However, for testing whether the average h^2 is 0, this would be the better approach (since testing 2*(z_MZ - z_DZ) = 0 is the same as testing h^2 = 0).
An interesting alternative would be to use a bivariate meta-analysis model for z_MZ and z_DZ (and since the two groups - monozygotic and dizygotic twin pairs - are independent, the sampling errors for the two values within each study are also independent). Based on this, you would obtain estimates of the average z_MZ and the average z_DZ, which can both be back-transformed to ICCs (applying the usual back-transformation), and then one can compute h^2 (or h) from the back-transformed values. This approach is essentially the same as the bivariate approach of van Houwelingen et al. (2002), although they applied it to the meta-analysis of log odds ratios. See:
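To illustrate just the back-transformation step of this bivariate approach: suppose the bivariate model (which could be fit with metafor's rma.mv(), with the study-level z_MZ and z_DZ values as outcomes) produced the hypothetical pooled averages below on the z scale; the values are made up for illustration.

```r
# Hypothetical pooled averages from a bivariate model (illustrative values)
zbar_mz <- 0.85
zbar_dz <- 0.42

# Back-transform from the z scale to the ICC scale (tanh inverts r-to-z)
rbar_mz <- tanh(zbar_mz)
rbar_dz <- tanh(zbar_dz)

# Falconer's formula applied to the back-transformed averages
h2 <- 2 * (rbar_mz - rbar_dz)
h  <- sqrt(h2)
```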
From: Lior Abramson [mailto:labramson87 using gmail.com]
Sent: Thursday, 12 December, 2019 15:34
To: Viechtbauer, Wolfgang (SP)
Cc: r-sig-meta-analysis using r-project.org
Subject: Re: [R-meta] A question regarding the 'metafor package' : Standardized regression coefficients as outcome measures
Thank you very much for the response.
You are right, I am looking at ACE-type models. However, the way I extracted the genetic coefficients (h) was simply by looking at the correlations of MZ and DZ twins in each study, and then using Falconer's formula to extract the coefficient from these correlations (i.e., the square root of 2*(rMZ-rDZ)). This was the only way to ensure that the summary effects do not depend on researchers' degrees of freedom and statistical decisions, since many of the studies reported h after model fitting.
Is there a way to extract the sampling variance in this case? Alternatively, do you think it is reasonable to perform Fisher Z transformation on the coefficients and then use the sampling variance formula that considers only the N of the total sample?
Thank you again,
On Wed, Dec 11, 2019 at 11:57 PM Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
Sounds like you are running ACE-type models. In any case, I would just meta-analyze the coefficients directly, assuming you can extract a standard error for the coefficient from whatever software you are using to fit those models. Just square the standard error and you have the sampling variance. Then feed the coefficients and corresponding sampling variances to rma().
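For instance, with made-up coefficients and standard errors standing in for values extracted from fitted models:

```r
# Hypothetical h estimates and their standard errors from three studies
b  <- c(0.81, 0.74, 0.88)
se <- c(0.05, 0.07, 0.04)

# Sampling variance = squared standard error
vi <- se^2

# These could then be fed to metafor: rma(yi = b, vi = vi)
```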
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Lior Abramson
Sent: Wednesday, 11 December, 2019 17:11
To: r-sig-meta-analysis using r-project.org
Subject: [R-meta] A question regarding the 'metafor package' : Standardized regression coefficients as outcome measures
Dear list members,
I am conducting a meta-analysis on the heritability of a trait as manifested in twin studies. Specifically, in twin studies, it is possible to derive the standardized regression coefficient of genes on a given trait (the genetic component is a latent variable that cannot be directly observed). Thus, my outcome measure is a standardized regression coefficient. More specifically, it is a partial standardized regression coefficient since, in all the studies, there are exactly three variables that can affect the trait (genes, shared environment, and non-shared environment).
My question is: Is it possible to use partial standardized regression coefficient as an outcome measure in the 'metafor' package? If so, how can I do it? Is it reasonable to treat it like a correlation in terms of the syntax (i.e., to write measure="ZCOR" / measure ="COR")?
Thank you very much for your time and help,