[R-meta] Score Normalization for Moderator Analysis in Meta-Analysis

Viechtbauer, Wolfgang (NP) wolfgang.viechtbauer at maastrichtuniversity.nl
Fri Sep 29 13:27:56 CEST 2023


I just noticed that the last question has remained unanswered: 

Depends on what you mean by "need". To run such an analysis, assuming 'scale' is just a two-level factor and you want to run a model with '~ factor(scale) * pompmean', then you will need five effect sizes, two for the first and three for the second level of 'scale'. That will give you just enough information to fit such a model and estimate the amount of residual heterogeneity.

But I assume that this is not what you mean by "need". If you meant something along the lines of 'having enough power', then I cannot give you an answer to that question, because it is like asking: "I want to run a study - how many subjects do I need?" (although turns out that the answer to that question is: "three patients" -- https://www.youtube.com/watch?v=Hz1fyhVOjr4). To give an informed answer to that question, one would have to do a power analysis:

Hedges, L. V., & Pigott, T. D. (2004). The power of statistical tests for moderators in meta-analysis. Psychological Methods, 9(4), 426-445. 

If you meant something along the lines of 'so that reviewers are not going to complain that my sample size is too small', then one could refer to rules of thumb like what you can find in the Cochrane Handbook:

https://training.cochrane.org/handbook/current/chapter-10#section-10-11-5-1

"It is very unlikely that an investigation of heterogeneity will produce useful findings unless there is a substantial number of studies. Typical advice for undertaking simple regression analyses is that at least ten observations (i.e. ten studies in a meta-analysis) should be available for each characteristic modelled. However, even this will be too few when the covariates are unevenly distributed across studies."

To be clear, this is an entirely arbitrary rule (and one also finds suggestions like '5 studies per characteristic'). Also, what exactly 'for each characteristic modelled' means is not entirely clear, but say we interpret this as 'per model coefficient'. The model above has 4 model coefficients (including the intercept), so then we would need at least 40 effect sizes.
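For illustration (my sketch, not part of the original exchange), the coefficient count for a model like '~ factor(scale) * pompmean' with a two-level factor, and the resulting rule-of-thumb minimum, can be worked out as:

```python
def n_coefficients(factor_levels: int) -> int:
    """Coefficients in an intercept + factor * slope model:
    intercept, (levels - 1) factor dummies, one slope, (levels - 1) interactions."""
    return 1 + (factor_levels - 1) + 1 + (factor_levels - 1)

def min_studies(factor_levels: int, per_coef: int = 10) -> int:
    """Rule-of-thumb minimum number of effect sizes (default: 10 per coefficient)."""
    return per_coef * n_coefficients(factor_levels)

print(n_coefficients(2))  # 4 coefficients for a two-level factor
print(min_studies(2))     # 40 effect sizes under the '10 per coefficient' rule
```

With the alternative '5 studies per characteristic' suggestion, min_studies(2, per_coef=5) gives 20 instead.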

To be fair, this rule does relate somewhat to the issue of overfitting, since more complex models require more data points to avoid overfitting. But even then, one would have to articulate more precisely what exactly one is concerned about.
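As an aside (again my illustration, not from the original thread), the min-max normalization quoted further down, x' = (x - min)/(max - min), is a one-line transformation; the POMP score is simply this proportion times 100:

```python
def min_max(x: float, scale_min: float, scale_max: float) -> float:
    """Min-max normalization: rescales a scale mean to the 0-1 range,
    where min/max are the minimum/maximum possible values of the scale."""
    return (x - scale_min) / (scale_max - scale_min)

def pomp(x: float, scale_min: float, scale_max: float) -> float:
    """Percentage-of-maximum-possible (POMP) score: min-max proportion times 100."""
    return 100 * min_max(x, scale_min, scale_max)

# e.g., a sample mean of 5.2 on a 1-7 scale:
print(round(min_max(5.2, 1, 7), 2))  # 0.7
print(round(pomp(5.2, 1, 7), 1))     # 70.0
```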

Best,
Wolfgang

>-----Original Message-----
>From: Kiet Huynh [mailto:kietduchuynh at gmail.com]
>Sent: Thursday, 14 September, 2023 17:10
>To: Viechtbauer, Wolfgang (NP)
>Cc: R Special Interest Group for Meta-Analysis
>Subject: Re: [R-meta] Score Normalization for Moderator Analysis in Meta-Analysis
>
>Hi Wolfgang,
>
>Thanks for the reminder about including links when cross posting.
>
>I appreciate the helpful explanation of the 'proportion/percentage of maximum
>possible' (POMP) score method for moderation analysis. Especially helpful was the
>tip on using the scale type to interact with the POMP score mean to determine if
>the relationship between social support and the strength of the association
>between LGBTQ+ discrimination and mental health differs depending on the scale
>used. Do you have a sense of how many effect sizes would be needed for that?
>
>Best,
>
>Kiet
>
>On Sep 13, 2023, at 3:25 AM, Viechtbauer, Wolfgang (NP)
><wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
>
>Dear Kiet,
>
>I don't mind cross-posting, but when doing so, please indicate this in posts, so
>in case answers are provided elsewhere, duplicate efforts can be avoided. For
>reference, this question was also posted here:
>
>https://stats.stackexchange.com/questions/626306/score-normalization-for-moderator-analysis-in-meta-analysis
>
>What you describe under 2 is the 'proportion/percentage of maximum possible'
>(POMP) score method, which is nicely discussed in this article:
>
>Cohen, P., Cohen, J., Aiken, L. S., & West, S. G. (1999). The problem of units
>and the circumstance for POMP. Multivariate Behavioral Research, 34(3), 315-346.
>https://doi.org/10.1207/S15327906MBR3403_2
>
>This approach assumes that the observed values on one scale are linear
>transformations of the observed values on other scales. Of course that is never
>quite true, but can hold as a rough approximation. In fact, this is also the
>assumption underlying various effect size / outcome measures (e.g., standardized
>mean differences, correlation coefficients), so it is an implicit assumption in
>many meta-analyses anyway (except that you are now also applying this assumption
>to the moderator variable). There was a thread related to this in April:
>
>https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2023-April/004529.html
>
>When this assumption is not correct (with respect to the variables involved in
>computing the correlations or with respect to the moderator variable), then this
>becomes one of the sources of (residual) heterogeneity. Of course, we have
>random/mixed-effects models to account for (residual) heterogeneity, so this is
>not the end of the world. But if scales are measuring entirely different
>constructs, then we should be more worried if we lump them together.
>
>If you have enough studies, then you can also code the type of scale used to
>measure social support (e.g., MSPSS versus other or even more fine-grained if you
>have enough studies) and include this in your moderator analysis and allow it to
>interact with the POMP score mean of the social support scale. That way, you can
>examine if the relationship between social support and the strength of the
>association between LGBTQ+ discrimination and mental health differs depending on
>the scale used.
>
>Best,
>Wolfgang
>
>-----Original Message-----
>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On
>Behalf Of Kiet Huynh via R-sig-meta-analysis
>Sent: Tuesday, 12 September, 2023 21:56
>To: R meta
>Cc: Kiet Huynh
>Subject: [R-meta] Score Normalization for Moderator Analysis in Meta-Analysis
>
>Hello colleagues,
>
>I’m conducting a meta-analysis of the association between LGBTQ+ discrimination
>and mental health. Both are continuous variables, and I am analyzing correlation
>coefficients. I’m interested in looking at moderators (continuous) of the
>relationship between these two variables. One such moderator is social support
>(continuous). I am considering two approaches for running the moderator analysis:
>
>1) Many of the studies used the same MSPSS social support scale. I plan to use
>the mean value of the MSPSS as a continuous moderator variable of the
>discrimination-mental health relationship.
>
>2) Most studies, however, use different measures of social support. I plan to use
>the min-max normalization method to put all the social support measures on the
>same scale, and then use that normalized mean as the moderator variable of the
>discrimination-mental health relationship. For an example use of min-max
>normalization method, see Endendijk et al. (2020). However, the Endendijk et al.
>(2020) study uses the min-max normalization method for the outcome and not for a
>moderator. The formula for the min-max normalization method is:
>
>x’ = (x - min)/(max - min)
>x’ is the normalized mean, x is the mean of the sample, min is the minimum
>possible value of the scale, and max is the maximum possible value of the scale.
>
>The benefit to the second approach is that I can include more studies in this
>moderator analysis, and not just the studies using the same measure of social
>support.
>
>My question is whether both approaches are valid methods for moderator
>analysis. Are there any issues with using the min-max normalization method for
>moderator analysis?
>
>Thank you,
>
>- KH
>
>Reference:
>
>Endendijk, J. J., van Baar, A. L., & Deković, M. (2020). He is a stud, she is a
>slut! A meta-analysis on the continued existence of sexual double standards.
>Personality and Social Psychology Review, 24(2), 163–190.
>https://doi.org/10.1177/1088868319891310
>
>----
>
>Kiet Huynh, PhD (he/him)
>(hear pronunciation <https://www.name-coach.com/kiet-huynh-94be0772-1bfd-4ece-afba-14699186f2b9>)
>Assistant Professor
>Department of Psychology
>
>Terrill Hall Rm # 336
>University of North Texas
>Denton, TX 76203
>Kiet.Huynh at unt.edu <mailto:Kiet.Huynh at unt.edu>