[R-meta] Meta-analysis approach for physical qualities benchmarks

Tzlil Shushan tzlil21092 at gmail.com
Mon Jul 1 02:30:12 CEST 2024


Dear Wolfgang and R-sig-meta-analysis community,

I would like to pick your brains about an approach I am using in my
current meta-analysis research.

We are conducting a meta-analysis on a range of physical qualities. The
primary objective of these meta-analyses is to create benchmarks for
previous and future observations.

For example, one of the physical qualities includes sprint times from
discrete distances (5m to 40m). We have gathered descriptive data (means
and standard deviations) from approximately 250 studies.

We aim to provide practitioners in the field with tools to compare the
results of their athletes to this benchmarking meta-analysis. Therefore, we
want to include commonly used tools in our field, such as z-scores and
percentiles, to facilitate these comparisons, alongside measures of
uncertainty using CIs and PIs.
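To make the intended use concrete, the comparison for an individual athlete
would look something like the following (all numbers are purely illustrative
placeholders, not results from our analysis):

pooled_mean  <- 1.80   # hypothetical pooled 10-m sprint time (s)
pooled_sd    <- 0.10   # hypothetical pooled SD (s)
athlete_time <- 1.65   # an individual athlete's observed time

# z-score relative to the benchmark distribution
z <- (athlete_time - pooled_mean) / pooled_sd

# percentile, assuming approximate normality of the underlying scores
# (for sprint times, lower is better, so the lower tail is taken directly)
percentile <- pnorm(athlete_time, mean = pooled_mean, sd = pooled_sd) * 100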

Given that these approaches require the sample/population standard
deviations, I have conducted separate multilevel mixed-effects
meta-analyses for means and standard deviations.

Below is an example of the approach I am considering:

############
Meta-analysis of means:

library(metafor)
library(clubSandwich)

data_means <- escalc(measure = "MN",
                     mi = Final.Outcome,
                     sdi = Final.SD,
                     ni = Sample.Size,
                     data = data)

V_means <- impute_covariance_matrix(vi = data_means$vi,
                                    cluster = data_means$Study.id,
                                    r = .7,
                                    smooth_vi = TRUE)

rma_means_model <- rma.mv(yi,
                          V_means,
                          random = list(~ 1 | Study.id/Group.id/ES.id),
                          digits = 2,
                          data = data_means,
                          method = "REML",
                          test = "t",
                          control = list(optimizer = "optim",
                                         optmethod = "Nelder-Mead"))

robust_means_model <- robust(rma_means_model,
                             cluster = data_means$Study.id,
                             adjust = TRUE,
                             clubSandwich = TRUE)

est_robust_means_model <- predict(robust_means_model, digits = 2,
                                  level = .9)


############
Meta-analysis of SDs:

data_sd <- escalc(measure = "SDLN",
                  sdi = Final.SD,
                  ni = Sample.Size,
                  data = data)

V_sd <- impute_covariance_matrix(vi = data_sd$vi,
                                 cluster = data_sd$Study.id,
                                 r = .7,
                                 smooth_vi = TRUE)

rma_sd_model <- rma.mv(yi,
                       V_sd,
                       random = list(~ 1 | Study.id/Group.id/ES.id),
                       digits = 2,
                       data = data_sd,
                       method = "REML",
                       test = "t",
                       control = list(optimizer = "optim",
                                      optmethod = "Nelder-Mead"))

robust_sd_model <- robust(rma_sd_model,
                          cluster = data_sd$Study.id,
                          adjust = TRUE,
                          clubSandwich = TRUE)

est_robust_sd_model <- predict(robust_sd_model, digits = 2,
                               transf = transf.exp.int, level = .9)
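Downstream, the idea is to combine the two pooled estimates roughly as
follows (this sketch treats the estimated mean and back-transformed SD as if
they were known parameters of the benchmark distribution, which ignores
their estimation uncertainty; y_new is a hypothetical new observation):

# pooled mean and (back-transformed) pooled SD from the two models above
bench_mean <- est_robust_means_model$pred
bench_sd   <- est_robust_sd_model$pred

# z-score and percentile for a new observation
y_new   <- 1.70
z_new   <- (y_new - bench_mean) / bench_sd
pct_new <- pnorm(z_new) * 100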

I would greatly appreciate your thoughts/feedback on whether this approach
is statistically sound. Specifically, is it appropriate to conduct separate
meta-analyses for means and SDs and then use the pooled estimates for
creating benchmarks? Are there any potential pitfalls or alternative
methods you would recommend?

Tzlil Shushan | Sport Scientist, Physical Preparation Coach

BEd Physical Education and Exercise Science
MSc Exercise Science - High Performance Sports: Strength &
Conditioning, CSCS
PhD Human Performance Science & Sports Analytics




More information about the R-sig-meta-analysis mailing list