# [R-meta] Fitting three-level meta-analytic models in R: A step-by-step tutorial

Viechtbauer Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Aug 17 21:45:05 CEST 2017

```
Thanks for pointing out this article. Hadn't come across it.

I just skimmed through it. The authors are illustrating the use of a three-level model for analyzing estimates with correlated sampling errors. These posts are relevant:

https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000100.html
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000102.html
https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2017-August/000103.html

As I mention there, I do not consider this approach fully sufficient.

Otherwise, the article seems to provide a useful tutorial on the three-level model in general, though there are a few minor mistakes.

For example, tdist=TRUE (or test="t") does not mean that the Knapp and Hartung method is used. The Knapp and Hartung method involves an adjustment to the var-cov matrix of the fixed effects in addition to using t/F distributions for computing p-values. I am not aware of an extension of the Knapp and Hartung method to models like the ones that can be fitted with rma.mv(). Setting tdist=TRUE (or test="t") simply means that p-values are computed based on t/F distributions; no adjustment to the var-cov matrix is made. So this is not really the Knapp and Hartung method.
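# As a hedged illustration of what test="t" actually does (using the
# dat.konstantopoulos2011 dataset that ships with metafor; this is not the
# tutorial's own data):
library(metafor)
dat <- dat.konstantopoulos2011
res <- rma.mv(yi, vi, random = ~ 1 | district/school, data = dat, test = "t")
summary(res)  # fixed-effect p-values are now based on a t-distribution, but
              # the var-cov matrix of the fixed effects is left unadjusted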

Also, the p-values for LRTs of variance components provided by anova() are not two-sided. When testing via a LRT whether a variance component is 0, the problem is that the value under the null hypothesis falls on the boundary of the parameter space, in which case the (asymptotic) distribution of the test statistic under the null is no longer chi-squared with df=1. See, for example:

Self SG, Liang KY. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions. Journal of the American Statistical Association 1987; 82(398):605-610.

Stram DO, Lee JW. Variance components testing in the longitudinal mixed effects model. Biometrics 1994; 50(4):1171-1177.

In some cases, the (asymptotic) distribution is a 50:50 mixture of a degenerate random variable with all of its probability mass concentrated at 0 and a chi-squared random variable with 1 degree of freedom. And in that case, one can get the 'right' p-value by dividing the p-value based on a chi-square distribution (with df=1) by 2. But this has nothing to do with one- vs. two-sided.
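# To illustrate in base R, with a hypothetical LRT statistic of 2.56 (df = 1):
lrt <- 2.56
p.naive <- pchisq(lrt, df = 1, lower.tail = FALSE)  # treats the null as chi^2(1)
p.mix   <- p.naive / 2   # 50:50 mixture of a point mass at 0 and chi^2(1)
round(c(p.naive, p.mix), 4)  # 0.1096 0.0548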

But okay, maybe I am just nitpicking here.

Let me make one final comment about the use of three-level models for analyzing estimates with correlated sampling errors (i.e., multivariate data). I think it is great that people are starting to use all of the available data instead of doing wasteful things like picking one estimate per study, doing dodgy things like first averaging multiple estimates from the same study (which is often done in an incorrect way, since doing it correctly would also require knowing the covariances), or doing outright wrong things like ignoring the dependency. And in many cases, using the three-level model for analyzing multivariate data is probably going to give you something that is good enough for government work (although that doesn't put the bar very high these days, but let's not get political here).

But in my opinion it does not take much to analyze the data in an even better way, even if we do not know the covariances: first approximate the covariances, then use a multilevel/multivariate model, and then use cluster-robust methods. In principle, one could skip the first step (approximating the covariances), but then one is likely to lose some efficiency. By using a decent 'working' var-cov matrix (V plus the random effects structure) to begin with, we gain efficiency, and hence the fixed effects should be estimated more accurately. Note that cluster-robust methods do not affect the estimates of the fixed effects, so if you have poor estimates of the fixed effects to begin with, cluster-robust methods will not fix that.

Also, the three-level model is equivalent to assuming a compound symmetric structure for the var-cov matrix of the true effects (see: http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011). If one has enough data, one can try more complex structures (like heteroscedastic CS or even unstructured) that may provide better approximations to reality.
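# A sketch of that three-step workflow with metafor. The variable names
# (yi, vi, study, esid) and the data frame 'dat' are hypothetical, and
# rho = 0.6 is an assumed value for the within-study correlation of the
# sampling errors, not something estimated from real data:
library(metafor)
# step 1: approximate the covariances, i.e., build a 'working' V matrix
#         assuming a common correlation among estimates from the same study
V <- vcalc(vi, cluster = study, obs = esid, rho = 0.6, data = dat)
# step 2: fit a multilevel/multivariate model using this working V
res <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)
# step 3: apply cluster-robust inference; note that this leaves the
#         fixed-effect estimates themselves unchanged
robust(res, cluster = dat$study)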

Sorry for the rant.

Best,
Wolfgang

--
Wolfgang Viechtbauer, Ph.D., Statistician | Department of Psychiatry and
Neuropsychology | Maastricht University | P.O. Box 616 (VIJV1) | 6200 MD
Maastricht, The Netherlands | +31 (43) 388-4170 | http://www.wvbauer.com

-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Patrizio Tressoldi - patrizio.tressoldi at unipd.it
Sent: Wednesday, August 16, 2017 16:27
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Fitting three-level meta-analytic models in R: A step-by-step tutorial

What do you think about this proposal?

Assink, Mark ... Wibbelink, Carlijn J. M. (2016). Fitting three-level
meta-analytic models in R: A step-by-step tutorial. The Quantitative
Methods for Psychology, 12(3).

Full text open access available at:
http://www.tqmp.org/RegularArticles/vol12-3/p154/p154.pdf

--
Patrizio E. Tressoldi Ph.D.
Dipartimento di Psicologia Generale
Università di Padova
via Venezia 8
35131 Padova - ITALY
http://www.patriziotressoldi.it
Science of Consciousness Research Group
http://dpg.unipd.it/en/soc

Make war history
support http://www.emergency.it

```
