[R-meta] Awkward results while conducting a Meta Analytic Reliability Generalization study by metafor package
Viechtbauer Wolfgang (SP)
wolfgang.viechtbauer at maastrichtuniversity.nl
Thu Oct 26 10:49:57 CEST 2017
Dear Davut,
The sampling variance (and hence standard error) of measure "AHW" is a function of the value of alpha itself. Therefore, it is not all that surprising that there might be correlation between the (transformed) alpha values and the sampling variances / standard errors.
This is a general problem that comes up with all kinds of measures: The sampling variances are, by construction, often correlated with the estimates, which can easily lead to spurious relationships, suggesting potential publication bias. Therefore, some have suggested not to look for an association between the estimates and their sampling variances / standard errors, but between the estimates and the sample sizes (or some transformation thereof).
For measure "ABT", the sampling variances are not a function of the alpha values, so this issue does not apply here. Therefore, I might be inclined to trust the results from the regression/rank correlation tests for this measure more than the ones you got with measure "AHW".
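To make this concrete, here is a base-R sketch comparing the two sampling-variance formulas (reproduced as I understand them from help(escalc); please verify the exact expressions there). The "AHW" variance involves alpha itself, while the "ABT" variance does not:

```r
# Hypothetical alpha estimates, sample size, and number of items:
alpha <- c(0.70, 0.80, 0.90)
n <- 100   # sample size
m <- 10    # number of items

# Bonett (2002) transformation ("ABT"): yi = -log(1 - alpha)
# Its sampling variance does NOT involve alpha:
vi_abt <- 2 * m / ((m - 1) * (n - 2))

# Hakstian & Whalen (1976) transformation ("AHW"): yi = 1 - (1 - alpha)^(1/3)
# Its sampling variance DOES involve alpha:
vi_ahw <- 18 * m * (n - 1) * (1 - alpha)^(2/3) /
          ((m - 1) * (9 * n - 11)^2)

vi_abt  # a single constant, the same for all three alphas
vi_ahw  # shrinks as alpha grows, inducing an estimate-variance correlation
```

This is why, with "AHW", large alphas automatically come with small sampling variances, which a regression/rank correlation test can pick up as asymmetry.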
For regtest(), you can also choose to use a different 'predictor' (including the sample size, the inverse sample size, the square root transformed sample size, and the inverse of the square root transformed sample size). See help(regtest).
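For instance (a sketch using simulated toy data in place of the original dataset, which is not available here):

```r
library(metafor)

# Simulated stand-in for the original data (hypothetical values):
set.seed(123)
dat <- data.frame(ai = runif(20, 0.6, 0.95),      # alpha estimates
                  mi = 10,                         # items per scale
                  ni = round(runif(20, 50, 500)))  # sample sizes
dat <- escalc(measure = "AHW", ai = ai, mi = mi, ni = ni, data = dat)
res <- rma(yi, vi, data = dat)

regtest(res, predictor = "sei")      # default: standard error
regtest(res, predictor = "ni")       # sample size
regtest(res, predictor = "ninv")     # inverse sample size
regtest(res, predictor = "sqrtninv") # inverse square root of sample size
```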
I assume the funnel plot is based on measure "ABT". By default, the plot is drawn based on the transformed values and those are not constrained to be <= 1.
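As a sketch (again with simulated data standing in for the original model object), the axis can be back-transformed to the original alpha scale, where values are bounded by 1. Bonett's transformation is yi = -log(1 - alpha), so its inverse is alpha = 1 - exp(-yi); metafor provides this as transf.iabt:

```r
library(metafor)

# Hypothetical stand-in for the MetaOBO_ABT model from the original post:
set.seed(123)
dat <- data.frame(ai = runif(20, 0.6, 0.95), mi = 10,
                  ni = round(runif(20, 50, 500)))
dat <- escalc(measure = "ABT", ai = ai, mi = mi, ni = ni, data = dat)
res <- rma(yi, vi, data = dat)

# Default: x-axis on the transformed (-log(1 - alpha)) scale,
# which is unbounded above, so values > 1 are expected:
funnel(res)

# Back-transform the x-axis labels to the original alpha scale:
funnel(res, atransf = transf.iabt)
```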
Best,
Wolfgang
-----Original Message-----
From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Davut CANLI
Sent: Thursday, 26 October, 2017 9:05
To: r-sig-meta-analysis at r-project.org
Subject: [R-meta] Awkward results while conducting a Meta Analytic Reliability Generalization study by metafor package
Dear all,
There are two common transformation methods (Bonett's transformation and the Hakstian and Whalen transformation) for conducting an RG study when the reliability coefficient at issue is Cronbach's alpha. The problem occurs when I use the regtest or ranktest functions of metafor. Using one method (say Bonett's transformation, "ABT"), the output of regtest (or ranktest) suggests that there is no evidence for publication bias, with a p-value greater than 0.05. On the other hand, the use of the other method ("AHW") suggests significant evidence (p < .0001) of possible publication bias (which I in fact believe to be correct).
Another thing is the output of the funnel command. Even though there is no coefficient alpha value greater than one in my data set, I see a value that appears to be greater than one in the funnel plot output.
Are these some kind of bugs in the functions, or do I still have some misunderstanding of the subject?
############################## Bonett's Transformation #####################
> regtest(MetaOBO_ABT)
Regression Test for Funnel Plot Asymmetry
model: mixed-effects meta-regression model
predictor: standard error
test for funnel plot asymmetry: z = -0.5277, p = 0.5977
> ranktest(MetaOBO_ABT)
Rank Correlation Test for Funnel Plot Asymmetry
Kendall's tau = -0.0108, p = 0.8664
Warning message:
In cor.test.default(yi.star, vi, method = "kendall", exact = TRUE) :
Cannot compute exact p-value with ties
############################## Using Haks. & Wha. Transformation #####################
> regtest(MetaOBO_AHW)
Regression Test for Funnel Plot Asymmetry
model: mixed-effects meta-regression model
predictor: standard error
test for funnel plot asymmetry: z = -4.1625, p < .0001
> ranktest(MetaOBO_AHW)
Rank Correlation Test for Funnel Plot Asymmetry
Kendall's tau = -0.1980, p = 0.0020
############################### Funnel Plot ##############################################
In attached.
Thanks all in advance.
Davut CANLI
--
Ordu University
Faculty of Arts and Sciences
Department of Mathematics