[R-meta] Calculating variances and z transformation for tetrachoric, biserial correlations?

James Pustejovsky jepusto at gmail.com
Mon Jul 3 04:59:45 CEST 2017


The delta method is a standard technique from mathematical statistics.
There's nothing special about applying it to effect size estimates. My
go-to references are Casella & Berger (2002, p. 243) and Severini (2005, pp.
400-401), but any graduate or upper-level undergraduate text on
mathematical statistics will cover it.
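
For instance, here is a minimal sketch in R of the delta method applied to
Fisher's r-to-z transformation (the numbers are just placeholders; this
simply reproduces the Var(z) = Var(r) / (1 - r^2)^2 formula used further
down the thread):

r  <- 0.42                        # a correlation estimate of any flavor
vr <- 0.03                        # its sampling variance (placeholder)
# z = atanh(r) and d/dr atanh(r) = 1/(1 - r^2), so by the delta method:
vz <- (1 / (1 - r^2))^2 * vr
vz
all.equal(vz, vr / (1 - r^2)^2)   # identical to the formula below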

To follow up on Wolfgang's earlier question about the utility of using
Fisher's z transformation for non-Pearson correlations: I have not looked
into whether the variance of, say, the tetrachoric correlation is more
stable on the z scale than on the r scale. In Pustejovsky (2014), I argued
that it would be reasonable to use the Fisher z transformation if the
predominant share of the effect size estimates were "regular" Pearson
correlations. The beneficial, variance-stabilizing property of the
transformation would then apply to the majority of the estimates, even if
it did not hold for all of them.
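
For what it's worth, here is a rough simulation sketch of how one could
check this (assuming the polycor and mvtnorm packages; both variables are
dichotomized at zero, and the question is whether var(atanh(r_tet)) is more
nearly constant across true correlations than var(r_tet)):

library(polycor)
library(mvtnorm)

set.seed(1234)
sim_var <- function(rho, n = 100, reps = 1000) {
  est <- replicate(reps, {
    xy <- rmvnorm(n, sigma = matrix(c(1, rho, rho, 1), 2, 2))
    x <- as.numeric(xy[, 1] > 0)   # dichotomize both latent variables
    y <- as.numeric(xy[, 2] > 0)
    polychor(x, y)                 # tetrachoric correlation for the 2x2 table
  })
  c(var_r = var(est), var_z = var(atanh(est)))
}

round(sapply(c(0.1, 0.3, 0.5, 0.7), sim_var), 4)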

Casella, G., & Berger, R. L. (2002). Statistical inference (2nd ed.).
Pacific Grove, CA: Duxbury.
Severini, T. A. (2005). Elements of distribution theory. Cambridge,
England: Cambridge University Press.

On Sun, Jul 2, 2017 at 6:21 PM, Mark White <markhwhiteii at gmail.com> wrote:

> Ah, that is brilliant! Thank you for the reproducible example, as well. I
> wasn't sure if he meant the delta method could be applied to all flavors
> of r, or just the "regular" r we usually compute with `cor()`. So we can
> apply it to all flavors (RTET, RBIS, POLY, etc.).
>
> And forgive my ignorance, but where could I find a reference for the
> delta method? Looking around, it seems to be a very general rule, but is
> there somewhere that discusses using the delta method with regard to
> transforming variances of effect sizes in particular? It seems like it is
> just taking the numerator of var(r) from the large-sample approximation
> and also putting it back in the denominator?
>
> On Sun, Jul 2, 2017 at 5:27 PM, Viechtbauer Wolfgang (SP) <
> wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
>
> > As James mentioned, just use:
> >
> > Var(z) = Var(r) / (1 - r^2)^2
> >
> > to compute the sampling variance of a Fisher's r-to-z transformed
> > coefficient. So, for example:
> >
> > dat1 <- escalc(measure="COR", ri=0.42, ni=23, add.measure=TRUE)
> > dat2 <- escalc(measure="RBIS", m1i=2.5, m2i=2.0, sd1i=1.1, sd2i=0.9,
> > n1i=20, n2i=20, add.measure=TRUE)
> > dat3 <- escalc(measure="RTET", ai=10, bi=4, ci=6, di=12, add.measure=TRUE)
> > dat <- rbind(dat1, dat2, dat3)
> > dat
> >
> > dat$vi <- dat$vi / (1 - dat$yi^2)^2
> > dat$yi <- transf.rtoz(dat$yi)
> > dat
> >
> > You can also do this with the polyserial coefficient.
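> >
> > For example, assuming you already have a polyserial estimate and its
> > standard error (e.g., from polycor::polyserial() with std.err=TRUE; the
> > numbers here are just placeholders taken from the example further down,
> > and metafor is assumed to be loaded for transf.rtoz()):
> >
> > r.poly  <- 0.2127                    # polyserial correlation estimate
> > se.poly <- 0.195                     # its standard error
> > zi <- transf.rtoz(r.poly)            # Fisher's r-to-z transformed estimate
> > vz <- se.poly^2 / (1 - r.poly^2)^2   # its variance via the delta method
> > cbind(zi, vz)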
> >
> > Note that for standard correlations, this results in using 1/(n-1) for
> > the variance (e.g., 1/22 in this example). To use the slightly more
> > accurate 1/(n-3):
> >
> > dat$vi[1] <- 1/(23-3)
> > dat
> >
> > To compare:
> >
> > escalc(measure="ZCOR", ri=0.42, ni=23, add.measure=TRUE)
> >
> > Best,
> > Wolfgang
> >
> > >-----Original Message-----
> > >From: Mark White [mailto:markhwhiteii at gmail.com]
> > >Sent: Monday, July 03, 2017 00:02
> > >To: Viechtbauer Wolfgang (SP)
> > >Cc: r-sig-meta-analysis at r-project.org
> > >Subject: Re: [R-meta] Calculating variances and z transformation for
> > >tetrachoric, biserial correlations?
> > >
> > >Thanks for your prompt and detailed responses!
> > >
> > >All of the effect sizes I culled that were from 2x2 tables, Ms and SDs,
> > >or t- and F-statistics were artificially dichotomized (either both or
> > >one variable, respectively). So they are, in fact, coming from a truly
> > >continuous distribution, so I believe that they can all be compared to
> > >one another.
> > >
> > >So it seems like:
> > >
> > >1. The 217 "regular" correlations can be converted from r to z, and
> > >then I can use the 1/(N-3) variance for those.
> > >
> > >2. The 10 effect sizes where only one variable was dichotomized can be
> > >converted to d (via Ms and SDs, or ts and Fs), which can then be
> > >converted to r_{eg} and then to z, via James's 2014 paper. I can also
> > >use his calculations for the variance of z from r_{eg}.
> > >
> > >(I would be doing this instead of `metafor::escalc`, because even
> > >though I could directly convert r_{bis} to z using the normal Fisher's
> > >r-to-z transformation, there is no way to go from var(RBIS) to var(Z),
> > >and using 1/(N-3) is not appropriate.)
> > >
> > >3. The issue is the 12 effect sizes from 2x2 contingency tables, since
> > >even though I could convert directly from r_{tet} to z using Fisher's
> > >transformation, there is no way to go from var(RTET) to var(Z), and
> > >using 1/(N-3) is not appropriate. I suppose I could go from an odds
> > >ratio to d to r_{eg} to z, using James's 2014 paper?
> > >
> > >4. The other issue is that, even though I could get from r_{poly} to
> > >z, I could not get from var(r_{poly}) to var(z), and again using
> > >1/(N-3) is not appropriate.
> > >
> > >How much would it harm the meta-analysis if 217 of my 240 effect sizes
> > >had the correct variance of 1/(N-3), but the other 23 effects
> > >(transformed from r_{bis}, r_{poly}, and r_{tet} to z) had their
> > >variances estimated incorrectly as 1/(N-3)? It seems like, although I
> > >can get comparable effect sizes now, I cannot transform their
> > >variances appropriately.
> > >
> > >Thanks,
> > >Mark
> > >
> > >On Sun, Jul 2, 2017 at 4:30 PM, Viechtbauer Wolfgang (SP)
> > ><wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
> > >Let me address the computations first (that's the easy part).
> > >
> > >Tetrachoric correlation: For tetrachoric correlations, escalc() computes
> > >the MLE (requires an iterative routine -- optim() is used for that). The
> > >sampling variance is estimated based on the inverse of the Hessian
> > >evaluated at the MLE. There is no closed form solution for that.
> > >
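> > >For illustration, here is a rough sketch of that idea (it fixes the
> > >thresholds at the values implied by the table margins and maximizes the
> > >likelihood over rho only, which is not necessarily the exact
> > >parameterization used internally, so the numbers may not match escalc()
> > >to the last digit; requires the mvtnorm package):
> > >
> > >library(mvtnorm)
> > >
> > >ai <- 10; bi <- 4; ci <- 6; di <- 12   # example 2x2 cell counts
> > >n <- ai + bi + ci + di
> > >tau1 <- qnorm((ci + di) / n)           # threshold for the row variable
> > >tau2 <- qnorm((bi + di) / n)           # threshold for the column variable
> > >
> > >negll <- function(rho) {
> > >   S <- matrix(c(1, rho, rho, 1), 2, 2)
> > >   p.nn <- pmvnorm(upper = c(tau1, tau2), corr = S)[1]  # cell di
> > >   p.yn <- pnorm(tau2) - p.nn                           # cell bi
> > >   p.ny <- pnorm(tau1) - p.nn                           # cell ci
> > >   p.yy <- 1 - p.nn - p.yn - p.ny                       # cell ai
> > >   -(ai*log(p.yy) + bi*log(p.yn) + ci*log(p.ny) + di*log(p.nn))
> > >}
> > >
> > >fit <- optim(0, negll, method="L-BFGS-B", lower=-0.99, upper=0.99,
> > >             hessian=TRUE)
> > >fit$par          # ML estimate of the tetrachoric correlation
> > >1 / fit$hessian  # approximate sampling variance (inverse of the Hessian)
> > >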
> > >Biserial correlation (from *t*- or *F*-statistic): You can use a trick
> > >here if you still want to use escalc(). If you know t (or t = sqrt(F)),
> > >then just use escalc(measure="RBIS", m1i=t*sqrt(2)/sqrt(n), m2i=0,
> > >sd1i=1, sd2i=1, n1i=n, n2i=n), where n is the size of the groups (not
> > >the total sample size). For example, using the example from Jacobs &
> > >Viechtbauer (2017):
> > >
> > >escalc(measure="RBIS", m1i=1.68*sqrt(2)/sqrt(10), m2i=0, sd1i=1,
> > >sd2i=1, n1i=10, n2i=10)
> > >
> > >yields yi = 0.4614 and vi = 0.0570, exactly as in the example. You
> > >used equation (13) to compute the sampling variances, which is the
> > >approximate equation. escalc() uses the 'exact' one (equation 12).
> > >That way, you are also consistent with what you get for the case of
> > >"Biserial correlation (from *M* and *SD*)".
> > >
> > >Biserial correlation (from *M* and *SD*): As mentioned above, escalc()
> > >uses equation (12) from Jacobs & Viechtbauer (2017) to compute/estimate
> > >the sampling variance.
> > >
> > >Square-root of eta-squared: You cannot use the large-sample variance
> > >of a regular correlation coefficient for this. The right thing to do
> > >is to compute a polyserial correlation coefficient here (the extension
> > >of the biserial to more than two groups). You can do this using the
> > >polycor package. Technically, the polyserial() function from that
> > >package requires you to input the raw data, which you don't have. If
> > >you have the means and SDs, you can just simulate raw data with
> > >exactly those means and SDs and use that as input to polyserial(). The
> > >means and SDs are sufficient statistics here, so you should always get
> > >the same result regardless of what specific values are simulated. Here
> > >is an example:
> > >
> > >x1 <- scale(rnorm(10)) * 2.4 + 10.4
> > >x2 <- scale(rnorm(10)) * 2.8 + 11.2
> > >x3 <- scale(rnorm(10)) * 2.1 + 11.5
> > >
> > >x <- c(x1, x2, x3)
> > >y <- rep(1:3,each=10)
> > >
> > >polyserial(x, y, ML=TRUE, std.err=TRUE, control=list(reltol=1e12))
> > >
> > >If you run this over and over, you will (should) always get the same
> > >polyserial correlation coefficient of 0.2127. The standard error is
> > >~0.195, but it changes very slightly from run to run due to minor
> > >numerical differences in the optimization routine. Note that I increased
> > >the convergence tolerance a bit so that those numerical issues do not
> > >also affect the estimate itself. But these minor differences are
> > >essentially inconsequential anyway.
> > >
> > >If you do not have the means and SDs, then, well, I don't know what
> > >to do off the top of my head. But again, don't treat the converted
> > >value as if it were a correlation coefficient. It is not.
> > >
> > >Now for your question about what/how to combine:
> > >
> > >The various coefficients (Pearson product-moment correlation
> > >coefficients, biserial correlations, polyserial correlations, and
> > >tetrachoric correlations) are directly comparable, at least in
> > >principle (assuming that the underlying assumptions hold -- e.g.,
> > >bivariate normality for the observed/latent variables). I just saw
> > >that James also posted an answer and he raises an important issue
> > >about the theoretical comparability of the various coefficients, esp.
> > >when they arise from different sampling designs. I very much agree
> > >that this needs to be considered. You could take a pragmatic /
> > >empirical approach, though, by coding the type of coefficient / design
> > >from which the coefficient arose and examining empirically whether
> > >there are any systematic differences between the types (i.e., via a
> > >meta-regression analysis).
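> > >
> > >A rough sketch of such a check (assuming a variable 'type' has been
> > >coded for each estimate, and that yi and vi are the z-transformed
> > >estimates and their corresponding sampling variances):
> > >
> > >library(metafor)
> > ># dat: one row per estimate, with columns yi, vi, and type
> > ># (e.g., "pearson", "biserial", "tetrachoric", "polyserial")
> > >res <- rma(yi, vi, mods = ~ type, data = dat)
> > >res  # the QM test indicates whether the types differ systematically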
> > >
> > >As James also points out, you can use Fisher's r-to-z transformation
> > >on all of these coefficients, but to be absolutely clear: only for
> > >Pearson product-moment correlation coefficients is the variance then
> > >approximately 1/(n-3). I have seen many cases where people converted
> > >all kinds of statistics to 'correlations', then applied Fisher's
> > >r-to-z transformation, and then used 1/(n-3) as the variance, which is
> > >just flat-out wrong in most cases. Various books on meta-analysis even
> > >make such faulty suggestions.
> > >
> > >Also, Fisher's r-to-z transformation will *only* be a
> > >variance-stabilizing transformation for Pearson product-moment
> > >correlation coefficients (e.g., the actual variance-stabilizing
> > >transformation for biserial correlation coefficients is given by
> > >equation 17 in Jacobs & Viechtbauer, 2017 -- and even that is just an
> > >approximation, since it is based on Soper's approximate formula). If
> > >you apply Fisher's r-to-z transformation to other types of
> > >coefficients, you have to use the right sampling variance (see James'
> > >mail). Also note: you cannot mix different transformations (i.e., use
> > >Fisher's r-to-z transformation for all).
> > >
> > >Whether applying Fisher's r-to-z transformation to coefficients other
> > >than 'regular' correlation coefficients is actually advantageous is
> > >debatable. Again, you do not get the nice variance-stabilizing
> > >properties here (the transformation may still have some normalizing
> > >properties). If I remember correctly, James examined this in his 2014
> > >paper, at least for biserial correlations (James, please correct me if
> > >I misremember).
> > >
> > >Best,
> > >Wolfgang
> >
>
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>
