[R-meta] Performing a multilevel meta-analysis

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Mon Aug 17 22:41:05 CEST 2020


Dear Tzlil,

Just to let you know (so you don't keep waiting for a response from me): I have no suggestions for how one would meta-analyze such values.

Best,
Wolfgang

>-----Original Message-----
>From: Tzlil Shushan [mailto:tzlil21092 using gmail.com]
>Sent: Saturday, 15 August, 2020 5:10
>To: Viechtbauer, Wolfgang (SP)
>Cc: r-sig-meta-analysis using r-project.org
>Subject: Re: [R-meta] Performing a multilevel meta-analysis
>
>Dear Wolfgang,
>
>First, thank you so much for the quick response and the time you dedicate to
>my questions. And yes, I looked at the mailing list and have seen some
>meaningful discussions around some of my questions. Based on those readings,
>I assume that extending my multilevel model with robust variance inference
>is a good idea.
>
>However, I would still like to pursue the second question I had, and I'll
>try to be more specific this time. I hope you (or others in this group) can
>help me with that.
>
>One of the effect sizes in the meta-analysis is the 'standard error of
>measurement' (SEM) of heart rate from a test-retest (reliability)
>assessment. Simply described, this assessment was performed twice on a
>matched group and I'm interested in the variability of this measure. This
>effect size is derived from the pooled standard deviation (mean test-retest
>SD) and intraclass correlation (ICC) of a test-retest. For example, if the
>mean ± SD of test one is 80.0 ± 4.0 and test two is 80.5 ± 4.8, and
>intraclass correlation is 0.95, the SEM will be 4.4*√(1-0.95)= 0.98.
>Practically, this effect size is a form of SD value.
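>
>For illustration, the same calculation in R (the variable names are just
>mine):
>
>  sd1 <- 4.0; sd2 <- 4.8                  # test and retest SDs
>  icc <- 0.95                             # intraclass correlation
>  sd_pooled <- sqrt((sd1^2 + sd2^2) / 2)  # pooled SD, about 4.4
>  sem <- sd_pooled * sqrt(1 - icc)        # SEM, about 0.98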
>
>I'm aware that the first thing I should probably do if I want to use the
>metafor package is to convert these values into a coefficient of variation
>(CV%). However, because the outcome measure (heart rate) is already
>expressed in percentage values (% of heart rate maximum), we'd like to
>meta-analyse the SEM on the original raw scale. Further, using this effect
>size is important for the practical implications of the paper.
>
>I've seen some discussion in the mailing list
>(https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2018-May/000828.html)
>on CV% from matched groups with escalc(measure="CVRC", y = logCV_1 -
>logCV_2). However, I'd like to know if there is a way to adapt the escalc
>approach to the SEM values (which is only one value from each paired test)?
>Or alternatively, are there other approaches I should consider?
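>
>To make the question concrete: one idea I have been toying with (purely my
>own assumption, not something I have seen validated for SEMs) is to treat
>each SEM as an SD-like value on the log scale, along these lines:
>
>  # sketch only: 'sem' and 'n' are my column names; measure="SDLN" is meant
>  # for ordinary SDs, so its sampling variance 1/(2*(n-1)) is only a rough
>  # placeholder for an SEM derived from a pooled SD and an ICC
>  dat_sem <- escalc(measure = "SDLN", sdi = sem, ni = n, data = dat_sem)
>
>but I'm not sure whether this is defensible, hence my question.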
>
>Kind regards,
>
>Tzlil Shushan | Sport Scientist, Physical Preparation Coach
>
>BEd Physical Education and Exercise Science
>MSc Exercise Science - High Performance Sports: Strength &
>Conditioning, CSCS
>PhD Candidate Human Performance Science & Sports Analytics
>
>On Wed, 12 Aug 2020 at 4:46, Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>Dear Tzlil,
>
>Your questions are a bit too general for me to give meaningful answers.
>Also, some of your questions (with regard to modeling dependent effects and
>using cluster robust methods) have been extensively discussed on this
>mailing list, so no need to repeat all of that. But yes, if you use cluster
>robust inference methods, I would use them not just for the 'overall model'
>but also for models including moderators.
>
>Best,
>Wolfgang
>
>>-----Original Message-----
>>From: Tzlil Shushan [mailto:tzlil21092 using gmail.com]
>>Sent: Thursday, 06 August, 2020 16:05
>>To: Viechtbauer, Wolfgang (SP)
>>Cc: r-sig-meta-analysis using r-project.org
>>Subject: Re: [R-meta] Performing a multilevel meta-analysis
>>
>>Dear Wolfgang,
>>
>>Thanks for your quick reply and sorry in advance for the long 'essay'..
>>
>>It is probably better if I give an overview of my analysis. Generally, I am
>>conducting a meta-analysis on the reliability and validity of the heart rate
>>response during sub-maximal assessments. We were able to compute three
>>different effect sizes reflecting reliability (mean differences, ICC, and
>>the standard error of measurement from a test-retest design), while for
>>validity we computed the correlation coefficient between heart rate values
>>and maximal aerobic fitness.
>>
>>Since both measurement properties (i.e., reliability/validity) of heart rate
>>can be analysed at different intensities during the assessment (for example,
>>70, 80 and 90% of heart rate maximum), different modalities of tests (e.g.,
>>running, cycling), and multiple time points across the year (e.g., before
>>season, in-season), one sample can have more than one effect size.
>>
>>I decided to employ a three-level meta-analysis, with levels two and three
>>pertaining to within- and between-sample variance, respectively. Then, I
>>include moderator effects within and between samples.
>>
>>Regarding the weights, the only reason I wonder whether I need to adjust
>>them is the wide range of effect sizes per sample (1-4 per sample), and I
>>thought of using the approach you discussed in your recent post here:
>>http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models
>>
>>However, as I understand it, the default W in rma.mv will work quite well?
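>>
>>For reference, the model I have in mind looks roughly like this (a minimal
>>sketch; dat, yi, vi, sample and es_id are just my own names):
>>
>>  library(metafor)
>>  res <- rma.mv(yi, vi,
>>                random = ~ 1 | sample/es_id,  # level 3: sample; level 2: ES within sample
>>                data = dat)
>>  summary(res)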
>>
>>With regards to the above (i.e., multiple effect sizes per sample), I am
>>considering adding cluster-robust inference to get more accurate standard
>>error values. As I understand it, this may be a good option to account for
>>the unknown correlations between effect sizes from the same sample.
>>First, do you think it is necessary? If so, would you apply cluster-robust
>>inference just to the overall model or also to the models including
>>moderators?
>>Second, is it reasonable to report the results obtained from both the
>>multilevel and the cluster-robust analyses in the paper?
>>Of note, my dataset isn't large and includes between 15-20 samples
>>(clusters), of which around 50-60% have multiple effect sizes.
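>>
>>Concretely, I was thinking of something along these lines (a sketch only;
>>res is the rma.mv fit from above and sample is my clustering variable):
>>
>>  sav <- robust(res, cluster = dat$sample)  # cluster-robust (sandwich-type) SEs and tests
>>  sav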
>>
>>With regards to the second question in the original email, we compute the
>>standard error of measurement (usually obtained as the pooled SD of the
>>test-retest multiplied by the square root of 1-ICC). Practically, these
>>effect sizes are SD values. I haven't seen many meta-analyses using the
>>standard error of measurement as an effect size, and I wonder if you can
>>suggest what would be a decent approach for this?
>>
>>Cheers,
>>
>>On Thu, 6 Aug 2020 at 22:30, Viechtbauer, Wolfgang (SP)
>><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>>Dear Tzlil,
>>
>>Unless you have good reasons to do so, do not use custom weights. rma.mv()
>>uses weights and the default ones are usually fine.
>>
>>weights(res, type="rowsum") will only (currently) work in the 'devel'
>>version of metafor, which you can install as described here:
>>
>>https://wviechtb.github.io/metafor/#installation
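>>
>>That is, roughly (assuming the 'remotes' package is available):
>>
>>  install.packages("remotes")
>>  remotes::install_github("wviechtb/metafor")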
>>
>>I can't really comment on the second question, because answering this would
>>require knowing all details of what is being computed/reported.
>>
>>As for the last question ("is there a straightforward way in metafor to
>>specify the analysis with Chi-square values"): No, chi-square values are
>>test statistics, not an effect size / outcome measure, so they cannot be
>>used for a meta-analysis (at least not with metafor).
>>
>>Best,
>>Wolfgang
>>
>>>-----Original Message-----
>>>From: R-sig-meta-analysis
>>>[mailto:r-sig-meta-analysis-bounces using r-project.org]
>>>On Behalf Of Tzlil Shushan
>>>Sent: Wednesday, 05 August, 2020 5:45
>>>To: r-sig-meta-analysis using r-project.org
>>>Subject: [R-meta] Performing a multilevel meta-analysis
>>>
>>>Hi R legends!
>>>
>>>My name is Tzlil and I'm a PhD candidate in Sport Science - Human
>>>performance science and sports analytics
>>>
>>>I'm currently working on a multilevel meta-analysis using the metafor
>>>package.
>>>
>>>My first question is around the methods used to assign weights within
>>>rma.mv models.
>>>
>>>I'd like to know if there is a conventional or 'most conservative'
>>>approach to continue with. Since I haven't found a consistent methodology
>>>within the multilevel meta-analyses papers I read, I originally applied a
>>>weight which pertains to the variance (vi) and the number of effect sizes
>>>from the same study. I found this method in a lecture by Joshua R. Polanin:
>>>https://www.youtube.com/watch?v=rJjeRRf23L8&t=1719s (from 28:00).
>>>
>>>W = 1/vi, then divided by the number of ES for that study.
>>>For example, a study with vi = 0.0402 and 2 different ES will be weighted
>>>as follows:
>>>1/0.0402 = 24.88, then 24.88/2 = 12.44 (finally, converted into
>>>percentages based on the overall weights in the analysis).
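>>>
>>>In code, what I did is roughly this (a sketch; wi and k_study are just my
>>>own names for the custom weights and the number of ES per study):
>>>
>>>  dat$k_study <- ave(dat$vi, dat$study, FUN = length)  # number of ES per study
>>>  dat$wi <- (1 / dat$vi) / dat$k_study                 # 1/vi divided by number of ES
>>>  res_w <- rma.mv(yi, vi, W = dat$wi, random = ~ 1 | study/es_id, data = dat)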
>>>
>>>After reading some of the great posts in recent threads here, such as
>>>http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models and
>>>https://www.jepusto.com/weighting-in-multivariate-meta-analysis/
>>>I wonder if this is not correct and whether I need to modify the way I use
>>>weights in my model.
>>>
>>>I also tried to imitate the approach used in the first link above.
>>>However, I get an error every time I try to specify
>>>weights(res, type="rowsum"):
>>>Error in match.arg(type, c("diagonal", "matrix")) :
>>>  'arg' should be one of "diagonal", "matrix"
>>>
>>>My second question is related to the way I meta-analyse a specific ES. My
>>>meta-analysis involves the reliability and convergent validity of heart
>>>rate during a specific task, which is measured in relative values (i.e.,
>>>percentages). Therefore, my meta-analysis includes four different ES
>>>parameters (mean difference; MD, intraclass correlation; ICC, standard
>>>error of measurement; SEM, and correlation coefficient; r).
>>>
>>>I wonder how I need to handle the SEM before starting the analysis. I've
>>>seen some papers which squared and log transformed the SEM before
>>>performing a meta-analysis, while others converted the SEM into CV%. Due
>>>to the original scale of our ES (which is already in percentages), I'd
>>>like to perform the analysis without converting it into CV% values. Should
>>>I use the SEM as the reported values, or only log transform it? Further,
>>>is there a straightforward way in metafor to specify the analysis with
>>>Chi-square values (as "ZCOR" for correlations)?
>>>
>>>Thanks in advance!
>>>
>>>Kind regards,
>>>
>>>Tzlil Shushan | Sport Scientist, Physical Preparation Coach
>>>
>>>BEd Physical Education and Exercise Science
>>>MSc Exercise Science - High Performance Sports: Strength &
>>>Conditioning, CSCS
>>>PhD Candidate Human Performance Science & Sports Analytics

