[R-meta] Meta-analysis of single group attitude scores
Tommy van Steen
tommyvansteen at yahoo.com
Wed Jan 17 18:16:13 CET 2018
Thanks for explaining this, it’s much clearer to me now.
Dr Tommy van Steen
Research Associate in Psychology
University of Bath
> On 16 Jan 2018, at 17:34, Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
> Hi Tommy,
> The sampling variance of a 'd' value (as in the present context, but this also applies to 'd' values in the more usual context where two independent groups are being compared) is only very indirectly a function of the SD of the measurements. In fact, the SD only affects the 'd' value itself, which is part of the computation of the sampling variance, but it does not enter the equation for the sampling variance in any other form. This may not be very intuitive, but that's just how it is.
> Generally though, the effect of the 'd' value on the sampling variance is relatively minor compared to how strongly it is affected by the sample size. For example, if n had been 200 for the first study, then we would get a sampling variance of 0.0075, which is much smaller than that of the second study. Now that indeed makes a lot of intuitive sense.
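[Editor's note: a small R sketch, not part of the original thread, illustrating the point above: the large-sample sampling variance of d, 1/n + d^2/(2*n), is driven mainly by n, with d playing a relatively minor role. The helper name vi_d is my own.]

```r
# Large-sample sampling variance of a 'd' value: 1/n + d^2/(2*n).
vi_d <- function(d, n) 1/n + d^2 / (2 * n)

vi_d(d = 1,   n = 100)  # 0.015
vi_d(d = 0.5, n = 100)  # 0.01125
vi_d(d = 1,   n = 200)  # 0.0075 -- doubling n cuts the variance in half
```

Note how halving d (with n fixed) changes the variance only modestly, while doubling n halves it exactly.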
> -----Original Message-----
> From: Tommy van Steen [mailto:tommyvansteen at yahoo.com]
> Sent: Tuesday, 16 January, 2018 16:27
> To: Viechtbauer Wolfgang (SP)
> Cc: Michael Dewey; r-sig-meta-analysis at r-project.org
> Subject: Re: [R-meta] Meta-analysis of single group attitude scores
> Dear Michael and Wolfgang,
> Thank you both very much for your helpful comments.
> I have been testing the two methods suggested by Wolfgang on some fictional data to see where the differences lie, and I don’t fully understand the resulting sampling variances.
> Fictional data of 2 studies:
> Both measure attitudes on a 7-point scale (neutral point = 4, possible range = 6), each with n = 100.
> Both studies have a mean score of 5.
> Study 1: SD = 1
> Study 2: SD = 2
> With the first idea outlined by Wolfgang, the sampling variances are 0.015 (Study 1) and 0.01125 (Study 2).
> With the second idea, the sampling variances are 0.000278 (Study 1) and 0.00111 (Study 2).
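[Editor's note: the fictional-data calculations above can be reproduced directly in R; this is my own sketch, with variable names (mean_s, sds, etc.) chosen for illustration.]

```r
# Fictional data: both studies have mean = 5 on a 7-point scale
# (neutral point = 4, possible range = 6), n = 100; SDs of 1 and 2.
mean_s  <- 5
neutral <- 4
range_s <- 6
n       <- 100
sds     <- c(1, 2)

# Idea 1: d = (mean - neutral) / SD, vi = 1/n + d^2/(2*n)
d1  <- (mean_s - neutral) / sds
vi1 <- 1/n + d1^2 / (2 * n)        # 0.01500, 0.01125

# Idea 2: d = (mean - neutral) / range, vi = SD^2 / (n * range^2)
vi2 <- sds^2 / (n * range_s^2)     # 0.000278, 0.001111
```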
> So using the second idea, Study 2 has a larger sampling variance and is therefore treated as less precise, so inverse-variance weighting gives it a lower weight. This seems to make sense, as the SD of Study 2 is higher than that of Study 1.
> However, using the first idea, the sampling variance of Study 2 is actually lower, which suggests that even though its SD is higher, the study is more precise.
> Am I interpreting the results wrong? Or could it be that the first idea already incorporates the inverse variance calculation?
> Thank you for your help!
> Best wishes,
> Dr Tommy van Steen
> Research Associate in Psychology
> University of Bath
> On 26 Nov 2017, at 10:54, Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
> I agree that ideally one would want to use the raw mean (or raw mean minus neutral point) as the outcome measure. However, if studies differ in terms of the number of answering possibilities, then the raw values are not really comparable.
> Two ideas:
> 1) Divide by the SD. So you then compute d = (mean - neutral point) / SD. Then the (large-sample) sampling variance can be estimated with:
> 1/n + d^2/(2*n)
> 2) Divide by the possible range (not the observed one!). So you then compute d = (mean - neutral point) / range. Then the sampling variance can be estimated with:
> SD^2 / (n * range^2)
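[Editor's note: a minimal sketch of fitting a meta-analysis with these hand-computed effect sizes and sampling variances via metafor's rma(), using the fictional two-study data from later in this thread; the data frame 'dat' and its columns are my own illustration, not from the original message.]

```r
library(metafor)

# Hypothetical data: two studies, mean = 5, neutral point = 4,
# possible range = 6, n = 100, SDs of 1 and 2.
dat <- data.frame(m = c(5, 5), sd = c(1, 2), n = c(100, 100),
                  neutral = 4, range = 6)

# Idea 2: standardize by the possible range (not the observed one).
dat$yi <- (dat$m - dat$neutral) / dat$range
dat$vi <- dat$sd^2 / (dat$n * dat$range^2)

# Random-effects model with user-supplied yi and vi.
res <- rma(yi, vi, data = dat)
```

Idea 1 works the same way, with yi = (m - neutral)/sd and vi = 1/n + yi^2/(2*n) supplied to rma() instead.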