[R-meta] Score Normalization for Moderator Analysis in Meta-Analysis
Kiet Huynh
kietduchuynh at gmail.com
Thu Sep 14 17:09:45 CEST 2023
Hi Wolfgang,
Thanks for the reminder about including links when cross-posting.
I appreciate the helpful explanation of the 'proportion/percentage of maximum possible' (POMP) score method for moderator analysis. Especially helpful was the tip on letting the scale type interact with the POMP score mean to determine whether the relationship between social support and the strength of the association between LGBTQ+ discrimination and mental health differs depending on the scale used. Do you have a sense of how many effect sizes would be needed for that?
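If it helps to make the question concrete, here is the kind of model I have in mind, as a minimal sketch with the metafor package (the data frame dat and the variables ri, ni, pomp, and scale_type below are placeholder names, not my actual data):

library(metafor)

# r-to-z transform the correlations and obtain their sampling variances
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)

# let the POMP score mean of the social support scale (placeholder: pomp)
# interact with the coded type of scale used (placeholder: scale_type)
res <- rma(yi, vi, mods = ~ pomp * scale_type, data = dat)
summary(res)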
Best,
Kiet
> On Sep 13, 2023, at 3:25 AM, Viechtbauer, Wolfgang (NP) <wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
> Dear Kiet,
>
> I don't mind cross-posting, but when doing so, please indicate this in your posts, so that duplicate efforts can be avoided in case answers are provided elsewhere. For reference, this question was also posted here:
>
> https://stats.stackexchange.com/questions/626306/score-normalization-for-moderator-analysis-in-meta-analysis
>
> What you describe under 2 is the 'proportion/percentage of maximum possible' (POMP) score method, which is nicely discussed in this article:
>
> Cohen, P., Cohen, J., Aiken, L. S., & West, S. G. (1999). The problem of units and the circumstance for POMP. Multivariate Behavioral Research, 34(3), 315-346. https://doi.org/10.1207/S15327906MBR3403_2
>
> This approach assumes that the observed values on one scale are linear transformations of the observed values on other scales. Of course that is never exactly true, but it can hold as a rough approximation. In fact, this is also the assumption underlying various effect size / outcome measures (e.g., standardized mean differences, correlation coefficients), so it is an implicit assumption in many meta-analyses anyway (except that you are now also applying this assumption to the moderator variable). There was a thread related to this in April:
>
> https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2023-April/004529.html
>
> When this assumption is not correct (with respect to the variables involved in computing the correlations or with respect to the moderator variable), then this becomes one of the sources of (residual) heterogeneity. Of course, we have random/mixed-effects models to account for (residual) heterogeneity, so this is not the end of the world. But if scales are measuring entirely different constructs, then we should be more worried if we lump them together.
>
> If you have enough studies, you can also code the type of scale used to measure social support (e.g., MSPSS versus other scales, or even more fine-grained categories if the data allow), include this in your moderator analysis, and allow it to interact with the POMP score mean of the social support scale. That way, you can examine whether the relationship between social support and the strength of the association between LGBTQ+ discrimination and mental health differs depending on the scale used.
>
> Best,
> Wolfgang
>
>> -----Original Message-----
>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>> Behalf Of Kiet Huynh via R-sig-meta-analysis
>> Sent: Tuesday, 12 September, 2023 21:56
>> To: R meta
>> Cc: Kiet Huynh
>> Subject: [R-meta] Score Normalization for Moderator Analysis in Meta-Analysis
>>
>> Hello colleagues,
>>
>> I’m conducting a meta-analysis of the association between LGBTQ+ discrimination
>> and mental health. Both are continuous variables, and I am analyzing correlation
>> coefficients. I’m interested in looking at moderators (continuous) of the
>> relationship between these two variables. One such moderator is social support
>> (continuous). I am considering two approaches for running the moderator analysis:
>>
>> 1) Many of the studies used the same MSPSS social support scale. I plan to use
>> the mean value of the MSPSS as a continuous moderator variable of the
>> discrimination-mental health relationship.
>>
>> 2) Most studies, however, use different measures of social support. I plan to use
>> the min-max normalization method to put all the social support measures on the
>> same scale, and then use that normalized mean as the moderator variable of the
>> discrimination-mental health relationship. For an example use of the min-max
>> normalization method, see Endendijk et al. (2020). However, the Endendijk et al.
>> (2020) study uses the min-max normalization method for the outcome and not for a
>> moderator. The formula for the min-max normalization method is:
>>
>> x’ = (x - min) / (max - min)
>>
>> where x’ is the normalized mean, x is the sample mean, min is the minimum
>> possible value of the scale, and max is the maximum possible value of the scale.
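>>
>> As a rough sketch of how I would compute this per study in R (the column
>> names m_support, scale_min, and scale_max are only illustrative):
>>
>> # min-max normalized mean of each study's social support scale
>> dat$support_norm <- (dat$m_support - dat$scale_min) / (dat$scale_max - dat$scale_min)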
>>
>> The benefit of the second approach is that I can include more studies in this
>> moderator analysis, not just the studies using the same measure of social
>> support.
>>
>> My question is whether both approaches are valid methods for moderator
>> analysis. Are there any issues with using the min-max normalization method for
>> moderator analysis?
>>
>> Thank you,
>>
>> - KH
>>
>> Reference:
>>
>> Endendijk, J. J., van Baar, A. L., & Deković, M. (2020). He is a stud, she is a
>> slut! A meta-analysis on the continued existence of sexual double standards.
>> Personality and Social Psychology Review, 24(2), 163–190.
>> https://doi.org/10.1177/1088868319891310
>>
>> ----
>>
>> Kiet Huynh, PhD (he/him)
>> (hear pronunciation <https://www.name-coach.com/kiet-huynh-94be0772-1bfd-4ece-afba-14699186f2b9>)
>> Assistant Professor
>> Department of Psychology
>>
>> Terrill Hall Rm # 336
>> University of North Texas
>> Denton, TX 76203
>> Kiet.Huynh using unt.edu