[R-meta] Performing a multilevel meta-analysis

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer using maastrichtuniversity.nl
Tue Aug 11 20:46:31 CEST 2020


Dear Tzlil,

Your questions are a bit too general for me to give meaningful answers. Also, some of your questions (with regard to modeling dependent effects and using cluster robust methods) have been extensively discussed on this mailing list, so no need to repeat all of that. But yes, if you use cluster robust inference methods, I would use them not just for the 'overall model' but also for models including moderators.
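
For instance, a minimal sketch of what that could look like (the data frame
'dat' and its columns yi, vi, sample, es_id and intensity are hypothetical
placeholders, not your actual variables):

library(metafor)

# three-level model: effect sizes (level 2) nested within samples (level 3)
res <- rma.mv(yi, vi, random = ~ 1 | sample/es_id, data=dat)

# cluster-robust inference for the overall model
robust(res, cluster=dat$sample)

# the same cluster-robust adjustment applied to a moderator model
res.mod <- rma.mv(yi, vi, mods = ~ intensity,
                  random = ~ 1 | sample/es_id, data=dat)
robust(res.mod, cluster=dat$sample)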

Best,
Wolfgang

>-----Original Message-----
>From: Tzlil Shushan [mailto:tzlil21092 using gmail.com]
>Sent: Thursday, 06 August, 2020 16:05
>To: Viechtbauer, Wolfgang (SP)
>Cc: r-sig-meta-analysis using r-project.org
>Subject: Re: [R-meta] Performing a multilevel meta-analysis
>
>Dear Wolfgang,
>
>Thanks for your quick reply and sorry in advance for the long ‘essay’...
>
>It is probably better if I give an overview of my analysis. Generally, I
>am conducting a meta-analysis on the reliability and validity of the heart
>rate response during sub-maximal assessments. We were able to compute three
>different effect sizes reflecting reliability (mean differences, ICC and the
>standard error of measurement of a test-retest design), while for validity
>we computed the correlation coefficient between heart rate values and
>maximal aerobic fitness.
>
>Since both measurement properties (i.e., reliability/validity) of heart rate
>can be analysed at different intensities during the assessment (for
>example, 70, 80 and 90% of maximum heart rate), different test modalities
>(e.g., running, cycling), and multiple time points across the year (e.g.,
>before the season, in-season), one sample can have more than one effect
>size.
>
>I decided to employ a three-level meta-analysis, with levels two and three
>pertaining to the within- and between-sample variance, respectively. I then
>include moderator effects within and between samples.
>
>Regarding the weights, the only reason I wonder whether I need to adjust
>them is the wide range of effect sizes per sample (1-4 per sample), and I
>thought of using the approach you discussed in your recent post here:
>http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models
>
>However, as I understand it, the default weights in rma.mv() will work
>quite well?
>
>With regard to the above (i.e., multiple effect sizes per sample), I am
>considering adding cluster-robust tests to obtain more accurate standard
>errors. As I understand it, this may be a good option to account for the
>natural (unknown) correlations between effect sizes from the same sample.
>First, do you think this is necessary? If so, would you apply the
>cluster-robust test just to the overall model or also to additional models
>including moderators?
>Second, is it reasonable to report the results obtained from both the
>multilevel and the cluster-robust analyses in the paper?
>Of note, my dataset isn't large and includes between 15-20 samples
>(clusters), of which around 50-60% have multiple effect sizes.
>
>With regard to the second question in the original email, we computed the
>standard error of measurement (usually obtained by multiplying the pooled
>SD of the test-retest by the square root of 1-ICC). Practically, these
>effect sizes are SD values. I haven't seen many meta-analyses using the
>standard error of measurement as an effect size, so I wonder whether you
>can suggest a decent approach for this?
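
As an aside, one possible way to handle this, sketched here only as an
illustration (the data frame 'dat' and the column names 'sem', 'ni',
'sample' and 'es_id' are hypothetical placeholders), would be to treat the
SEM like a standard deviation and meta-analyse it on the log scale:

library(metafor)

# log-transformed SD-type outcome; escalc() also supplies an approximate
# sampling variance (roughly 1/(2*(ni-1)))
dat <- escalc(measure="SDLN", sdi=sem, ni=ni, data=dat)

# three-level model with effect sizes nested within samples
res <- rma.mv(yi, vi, random = ~ 1 | sample/es_id, data=dat)

# back-transform the pooled estimate to the original SEM scale
predict(res, transf=exp)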
>
>Cheers,
>
>On Thu, 6 Aug 2020 at 22:30, Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>Dear Tzlil,
>
>Unless you have good reasons to do so, do not use custom weights. rma.mv()
>uses weights and the default ones are usually fine.
>
>weights(res, type="rowsum") will only (currently) work in the 'devel'
>version of metafor, which you can install as described here:
>
>https://wviechtb.github.io/metafor/#installation
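
For example (a sketch; 'res' stands for a previously fitted rma.mv() model):

# install the development version of metafor as described at the link above
install.packages("remotes")
remotes::install_github("wviechtb/metafor")

library(metafor)

# row-sum weights (only available in the devel version at the time of writing)
weights(res, type="rowsum")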
>
>I can't really comment on the second question, because answering this would
>require knowing all details of what is being computed/reported.
>
>As for the last question ("is there a straightforward way in metafor to
>specify the analysis with Chi-square values"): No, chi-square values are
>test statistics, not an effect size / outcome measure, so they cannot be
>used for a meta-analysis (at least not with metafor).
>
>Best,
>Wolfgang
>
>>-----Original Message-----
>>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org]
>>On Behalf Of Tzlil Shushan
>>Sent: Wednesday, 05 August, 2020 5:45
>>To: r-sig-meta-analysis using r-project.org
>>Subject: [R-meta] Performing a multilevel meta-analysis
>>
>>Hi R legends!
>>
>>My name is Tzlil and I'm a PhD candidate in Sport Science - Human
>>Performance Science and Sports Analytics.
>>
>>I'm currently working on a multilevel meta-analysis using the metafor
>>package.
>>
>>My first question is around the methods used to assign weights within
>>rma.mv models.
>>
>>I'd like to know if there is a conventional or 'most conservative' approach
>>to proceed with. Since I haven't found a consistent methodology within the
>>multilevel meta-analysis papers I have read, I originally applied a weight
>>that accounts for the variance (vi) and the number of effect sizes from the
>>same study. I found this method in a lecture by Joshua R. Polanin
>>(https://www.youtube.com/watch?v=rJjeRRf23L8&t=1719s, from 28:00).
>>
>>W = 1/vi, then divided by the number of effect sizes for a study.
>>For example, a study with vi = 0.0402 and 2 different effect sizes would be
>>weighted as follows:
>>1/0.0402 = 24.88, then 24.88/2 = 12.44 (finally, converted into a
>>percentage based on the overall weights in the analysis).
>>
>>After reading some of the great posts provided in recent threads here, such
>>as
>>http://www.metafor-project.org/doku.php/tips:weights_in_rma.mv_models and
>>https://www.jepusto.com/weighting-in-multivariate-meta-analysis/
>>I wonder whether this is incorrect and I need to modify the way I use
>>weights in my model.
>>
>>I also tried to imitate the approach used in the first link above. However,
>>for some reason I get an error every time I try to specify
>>weights(res, type="rowsum"):
>>Error in match.arg(type, c("diagonal", "matrix")) :
>>  'arg' should be one of “diagonal”, “matrix”
>>
>>My second question is related to the way I meta-analyse a specific effect
>>size. My meta-analysis involves the reliability and convergent validity of
>>heart rate during a specific task, which is measured in relative values
>>(i.e., percentages). Therefore, my meta-analysis includes four different
>>effect size parameters (mean difference; MD, intraclass correlation; ICC,
>>standard error of measurement; SEM, and correlation coefficient; r).
>>
>>I wonder how I should handle the SEM before starting the analysis. I've
>>seen some papers that squared and log-transformed the SEM before performing
>>a meta-analysis, while others converted the SEM into a CV%. Given the
>>original scale of our effect size (which is already in percentages), I'd
>>like to perform the analysis without converting it into CV% values. Should
>>I use the SEM as reported, or only log-transform it? Further, is there a
>>straightforward way in metafor to specify the analysis with chi-square
>>values (as "ZCOR" for correlations)?
>>
>>Thanks in advance!
>>
>>Kind regards,
>>
>>Tzlil Shushan | Sport Scientist, Physical Preparation Coach
>>
>>BEd Physical Education and Exercise Science
>>MSc Exercise Science - High Performance Sports: Strength &
>>Conditioning, CSCS
>>PhD Candidate Human Performance Science & Sports Analytics

