[R-meta] Meta-analysis approach for physical qualities benchmarks

James Pustejovsky jepusto at gmail.com
Tue Jul 2 20:00:13 CEST 2024


Hi Tzlil,

From my perspective, your approach seems reasonable as a starting point for
characterizing the distribution of each of these quantities, but I would be
cautious about trying to create benchmarks based on the results of two
separate models. It seems like the benchmarks would be a non-linear
function of both the Ms and the SDs. Evaluating a non-linear function at
average values of the inputs does not produce the same result as evaluating
the average of a non-linear function of individual inputs, and it can be
poor even as an approximation. I would think that it would be preferable to
work towards a joint model for the Ms and SDs---treating them as two
dimensions of a bivariate effect size measure. I think this would be
feasible using multivariate meta-analysis models, which the metafor
documentation covers extensively. See also Gasparrini and
Armstrong (2011; https://doi.org/10.1002/sim.4226) and Sera et al. (2019;
https://doi.org/10.1002/sim.8362).
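
To make this concrete, here is a rough sketch of how a joint model could be
set up in metafor, reusing the variable names from your code (Final.Outcome,
Final.SD, Sample.Size, Study.id); the stacked data frame and the "outcome"
factor are my own additions, and for brevity I use a diagonal V (under
normality, the sample mean and log SD from the same sample are independent)
together with RVE, rather than a full imputed covariance matrix:

dat_M  <- escalc(measure = "MN", mi = Final.Outcome, sdi = Final.SD,
                 ni = Sample.Size, data = data)
dat_SD <- escalc(measure = "SDLN", sdi = Final.SD, ni = Sample.Size,
                 data = data)
dat_M$outcome  <- "M"       # label each row by effect type
dat_SD$outcome <- "logSD"
dat_joint <- rbind(dat_M, dat_SD)

# separate pooled estimates for M and log(SD), with correlated
# study-level random effects across the two outcomes
rma_joint <- rma.mv(yi, vi,
                    mods = ~ outcome - 1,
                    random = ~ outcome | Study.id,
                    struct = "UN",
                    data = dat_joint, method = "REML", test = "t")
robust_joint <- robust(rma_joint, cluster = Study.id, clubSandwich = TRUE)

The estimated correlation between the two study-level random effects (the
rho reported by rma.mv) then tells you directly how strongly the Ms and
SDs travel together across studies.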

A further reason to consider a joint (multivariate) model is that for many
distributions other than the Gaussian, mean parameters and variance
parameters tend to be related. For instance, count data distributions
typically have variances that grow larger as the mean grows larger. If the
physical quantities that you are modeling follow such distributions, then
capturing the interrelationship between the M and SD could be important
both for purposes of obtaining precise summary estimates and for the
interpretation of the results.
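
As a toy illustration of the mean-variance point (simulated counts, nothing
to do with your sprint data): for the Poisson distribution the variance
equals the mean, so the SD is pinned to sqrt(mean):

set.seed(1)
mus <- c(2, 5, 10, 20)
sds <- sapply(mus, function(m) sd(rpois(1e5, m)))  # empirical SDs
round(cbind(mean = mus, sd = sds, sqrt_mean = sqrt(mus)), 2)

Running separate meta-analyses of the Ms and SDs would ignore exactly this
kind of structure.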

One other small note about your code: for purposes of creating a sampling
variance covariance matrix, it makes sense to impute covariances between
effect size estimates that are based on the same sample (or at least
partially overlapping samples). I see from your rma.mv code that you have
random effects for effect sizes nested in groups nested in studies. If the
groups within a study are independent (e.g., separate samples of male and
female athletes), then the effect sizes from different groups should
probably be treated as independent. In this case, your call to
impute_covariance_matrix() should cluster by Group.id instead of by
Study.id. But for purposes of computing robust standard errors, you would
still use cluster = Study.id.
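
In code, that suggestion amounts to something like this (reusing your
object names; treating Group.id as the clustering variable for V is the
assumption here):

V_means <- impute_covariance_matrix(vi = data_means$vi,
                                    cluster = data_means$Group.id,  # V: groups independent
                                    r = .7,
                                    smooth_vi = TRUE)

# ... fit rma.mv() with V_means as before, then cluster the RVE at the
# study level, since groups from the same study share study-level effects:
robust_means_model <- robust(rma_means_model,
                             cluster = data_means$Study.id,
                             clubSandwich = TRUE)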

James

On Sun, Jun 30, 2024 at 7:31 PM Tzlil Shushan via R-sig-meta-analysis <
r-sig-meta-analysis at r-project.org> wrote:

> Dear Wolfgang and R-sig-meta-analysis community,
>
> I would like to ask for your thoughts on an approach I am using in my
> current meta-analysis research.
>
> We are conducting a meta-analysis on a range of physical qualities. The
> primary objective of these meta-analyses is to create benchmarks for
> previous and future observations.
>
> For example, one of the physical qualities includes sprint times from
> discrete distances (5m to 40m). We have gathered descriptive data (means
> and standard deviations) from approximately 250 studies.
>
> We aim to provide practitioners in the field with tools to compare the
> results of their athletes to this benchmarking meta-analysis. Therefore, we
> want to include commonly used tools in our field, such as z-scores and
> percentiles, to facilitate these comparisons, alongside measures of
> uncertainty using CIs and PIs.
>
> Given that these approaches require the sample/population standard
> deviations, I have conducted separate multilevel mixed-effects
> meta-analyses for means and standard deviations.
>
> Below is an example of the approach I am considering:
>
> ############
> Meta-analysis of means:
>
> data_means <- escalc(measure = "MN",
>                mi = Final.Outcome,
>                sdi = Final.SD,
>                ni = Sample.Size,
>                data = data)
>
> V <- impute_covariance_matrix(vi = data_means$vi,
>
>                               cluster = data_means$Study.id,
>
>                               r = .7,
>
>                               smooth_vi = T)
>
>
> rma_means_model <- rma_means_model <- rma.mv(yi,
>
>                     V_means,
>                     random = list(~ 1 | Study.id/Group.id/ES.id),
>                     digits = 2,
>                     data = data_means,
>                     method = "REML",
>                     test = "t",
>                     control=list(optimizer="optim",
> optmethod="Nelder-Mead"))
>
> robust_means_model <- robust.rma.mv(rma_means_model,
>                               cluster = data_means$Study.id
>                               adjust = T,
>                               clubSandwich = T)
>
>
> est_robust_means_model <- predict.rma(robust_means_model, digits = 2, level
> = .9)
>
>
> ############
> Meta-analysis of SDs:
>
> data_sd <- escalc(measure = "SDLN",
>                sdi = Final.SD,
>                ni = Sample.Size,
>                data = data)
>
> V <- impute_covariance_matrix(vi = data_sd$vi,
>
>                               cluster = data_sd$Study.id,
>
>                               r = .7,
>
>                               smooth_vi = T)
>
>
> rma_sd_model <- rma.mv(yi,
>                     V_sd,
>                     random = list(~ 1 | Study.id./Group.id/ES.id),
>                     digits = 2,
>                     data = data_sd,
>                     method = "REML",
>                     test = "t",
>                     control=list(optimizer="optim",
> optmethod="Nelder-Mead"))
>
> robust_sd_model <- robust.rma.mv(rma_sd_model,
>                               cluster = data_sd$Study.id,
>                               adjust = T,
>                               clubSandwich = T)
>
>
> est_robust_sd_model <- predict.rma(robust_sd_model, digits = 2, transf =
> transf.exp.int, level = .9)
>
> I would greatly appreciate your thoughts/feedback on whether this approach
> is statistically sound. Specifically, is it appropriate to conduct separate
> meta-analyses for means and SDs and then use the pooled estimates for
> creating benchmarks? Are there any potential pitfalls or alternative
> methods you would recommend?
>
> Tzlil Shushan | Sport Scientist, Physical Preparation Coach
>
> BEd Physical Education and Exercise Science
> MSc Exercise Science - High Performance Sports: Strength &
> Conditioning, CSCS
> PhD Human Performance Science & Sports Analytics
>
