[R-meta] Choice of measure in escalc().

Cedric Ginestet c.ginestet05 sending from googlemail.com
Thu May 9 17:50:43 CEST 2019


Thank you very much for such a comprehensive answer.

It has been extremely helpful.


On 06/05/2019 15:37, James Pustejovsky wrote:
> Cedric,
> Here are some initial comments. It would be great for others to share 
> their perspectives, as well as to share any references that cover 
> this set of ES measures. (I don't know of good ones. My usual go-to 
> ref is Borenstein in the Handbook of Research Synthesis and 
> Meta-Analysis, but he covers only MC and SMCR.)
> I think the over-arching point to keep in mind is that ES measures are 
> really modeling assumptions about the equivalence or approximate 
> equivalence of effects across studies. Thus, choosing an appropriate 
> ES is analogous to determining an appropriate assumption for any other 
> statistical model---the choice should be based on consideration of the 
> substantive context of the analysis, the data available to you, and on 
> evidence in the data you have.
>       * "MC" for the raw mean change.
> Raw mean change is only appropriate if all of the studies use a common 
> scale. If that is the case, it is an attractive measure because the 
> meta-analysis results can then be interpreted on the same scale as the 
> results of each primary study. If studies use different measurement 
> scales, then MC is not appropriate.
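[Editor's note: the MC computation can be sketched in a few lines of base R. The formulas below follow the usual paired-design result for a mean difference (and should match what escalc(measure="MC", m1i=, m2i=, sd1i=, sd2i=, ni=, ri=) computes, though that equivalence is not checked here); all of the numbers are hypothetical.]

```r
# Raw mean change (MC): difference of the two occasion means, with a
# sampling variance that accounts for the pre-post correlation.
m1i  <- 30; m2i <- 25   # means at the two occasions (hypothetical)
sd1i <- 8;  sd2i <- 7   # SDs at the two occasions
ni   <- 40              # sample size
ri   <- 0.6             # correlation between the paired measurements

yi <- m1i - m2i                                      # effect size
vi <- (sd1i^2 + sd2i^2 - 2 * ri * sd1i * sd2i) / ni  # sampling variance
```

With these numbers, yi = 5 on the original outcome scale (the interpretive advantage of MC) and vi = (64 + 49 - 67.2)/40 = 1.145.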
>       * "ROMC" for the log-transformed ratio of means (Lajeunesse, 2011).
> ROMC is appropriate if all of the studies use measures that can be 
> treated as ratio scales (i.e., where the zero of the scale really 
> corresponds to absence of the outcome, so that it is sensible to talk 
> about percentage changes in the outcome). The central assumption of 
> ROMC is that effects (whether treatment effects or natural changes 
> over time) are approximately proportionate to the baseline level of 
> the outcome.
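[Editor's note: a minimal base-R sketch of ROMC, using the large-sample variance for a paired log ratio of means as given by Lajeunesse (2011); hypothetical numbers, and the means must be positive for the log to make sense.]

```r
# Log-transformed ratio of means for dependent samples (ROMC).
m1i  <- 30; m2i <- 25   # occasion means (must be > 0 on a ratio scale)
sd1i <- 8;  sd2i <- 7
ni   <- 40
ri   <- 0.6             # pre-post correlation

yi <- log(m1i / m2i)    # effect size on the log scale
vi <- sd1i^2 / (ni * m1i^2) + sd2i^2 / (ni * m2i^2) -
      2 * ri * sd1i * sd2i / (ni * m1i * m2i)
```

Here yi = log(1.2), i.e. roughly an 18% difference on the log scale, which back-transforms to a 20% change — the "percentage change" interpretation that requires a true ratio scale.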
> If studies all use a common scale, then MC or ROMC might be 
> appropriate. The choice between them really depends on whether you 
> think effects are (approximately) proportionate to baseline or are 
> (more or less) unrelated to baseline levels.
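[Editor's note: one way to see the difference is that, when the true effect is a fixed percentage change, studies with different baseline levels give different raw mean changes but identical log ratios. A toy illustration with made-up numbers:]

```r
# Two hypothetical studies, each showing the same 20% reduction
pre  <- c(50, 10)       # baseline means on a common ratio scale
post <- pre * 0.8       # post-test means: 40 and 8

mc   <- pre - post      # raw mean changes: 10 and 2 (baseline-dependent)
romc <- log(pre / post) # log ratios: both log(1.25), identical
```

Under proportional effects, MC spreads the two studies apart while ROMC treats them as estimating the same quantity — which is exactly the proportionality assumption at work.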
> The remaining effect sizes are all forms of standardized mean 
> difference. They are useful when studies all make use of different 
> scales. Each measure uses a different component of variance to 
> standardize the effects, so you can think of the measures as making 
> different assumptions about how best to (linearly) equate the outcome 
> scales across studies.
>       * "SMCR" for the standardized mean change using raw score
>         standardization.
>       * "SMCRH" for the standardized mean change using raw score
>         standardization with heteroscedastic population variances at
>         the two measurement occasions (Bonett, 2008).
> SMCR and SMCRH use the standard deviation of the raw scores at 
> pre-test to equate effects across studies. This is sensible if a) the 
> magnitudes of effects are (more or less) unrelated to baseline levels 
> and b) the scales do not have range restriction or unreliability 
> issues at baseline. If (b) is a concern, then it might be more 
> sensible to use the post-test SD instead (see comments in ?escalc to 
> that effect). As far as I can tell, the only difference between SMCR 
> and SMCRH is in how their sampling variances are computed. SMCRH 
> allows that the population variance at post-test may differ from the 
> population variance at pre-test, whereas SMCR assumes that the 
> population variances are equal.
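[Editor's note: a sketch of raw-score standardization in base R, using the large-sample variance from Becker (1988) and the usual small-sample bias correction; hypothetical numbers, and escalc(measure="SMCR", ...) may differ in details of the variance formula.]

```r
# Standardized mean change, raw-score standardization (SMCR-style).
m1i <- 30; m2i <- 25; sd1i <- 8; ni <- 40; ri <- 0.6

# small-sample bias correction (Hedges-type, with df = ni - 1)
cmi <- 1 - 3 / (4 * (ni - 1) - 1)

yi <- cmi * (m1i - m2i) / sd1i             # standardize by the pre-test SD
vi <- 2 * (1 - ri) / ni + yi^2 / (2 * ni)  # large-sample variance (Becker, 1988)
```

Note that the pre-test SD (sd1i) is the only scale information used, which is why baseline range restriction or unreliability directly distorts this measure.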
>       * "SMCC" for the standardized mean change using change score
>         standardization.
> SMCC uses the standard deviation of the outcome change scores to 
> equate effects across studies. In practice, it is sometimes possible 
> to compute SMCC even when SMCR or SMCRH cannot be computed, which I 
> think is one of the main reasons SMCC is used. Personally, I think of 
> SMCC as an ES of last resort because I don't think most scales are 
> designed to have stable change score variation. For example, a small 
> change in the reliability of a scale could create fairly large changes 
> in the SD of change scores on that scale. In contrast, if the scale is 
> fairly reliable, a small change in reliability will have only a small 
> effect on the SD of the raw score, making the SMCR/SMCRH less 
> sensitive to differential measurement reliability.
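[Editor's note: the change-score version can be sketched the same way; the key difference from the block above is the denominator, and the numbers are again hypothetical.]

```r
# Standardized mean change, change-score standardization (SMCC-style).
m1i <- 30; m2i <- 25; sd1i <- 8; sd2i <- 7; ni <- 40; ri <- 0.6

# SD of the change scores, derived from the occasion SDs and correlation
sd_change <- sqrt(sd1i^2 + sd2i^2 - 2 * ri * sd1i * sd2i)

cmi <- 1 - 3 / (4 * (ni - 1) - 1)     # small-sample bias correction
yi  <- cmi * (m1i - m2i) / sd_change  # standardize by the change-score SD
vi  <- 1 / ni + yi^2 / (2 * ni)       # large-sample variance
```

The sensitivity argument above is visible in sd_change: with ri = 0.6 it is about 6.8, but raising ri to 0.8 drops it to about 4.8, so a modest shift in the pre-post correlation (e.g. from differential reliability) moves SMCC substantially while leaving the raw-score SDs untouched.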
> James


More information about the R-sig-meta-analysis mailing list