[R-meta] Dear Wolfgang

Dr. Gerta Rücker ruecker at imbi.uni-freiburg.de
Mon Mar 30 20:42:59 CEST 2020


Dear Ju,

Another (maybe simplistic) solution could be to use only one (e.g., 
always the first) time point for those studies that report repeated 
measurements. This can be justified because you wrote that the "large 
majority of studies measure this over short term experiment and thus on 
a single time point" - so, as I understand, you it is only a minority of 
studies that causes the multiplicity issue.
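For instance, a minimal sketch in base R of this selection, assuming a data frame dat with one row per effect size and hypothetical columns study and time (the names are placeholders, not from your data):

dat <- dat[order(dat$study, dat$time), ]   # sort by study and then by time
dat_first <- dat[!duplicated(dat$study), ] # keep only the earliest time point per study

Studies with a single time point are unaffected, since their only row is also their first.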

A similar simplification is often done when including a small number of 
cross-over trials into a meta-analysis of RCTs with parallel-group design.

Best,

Gerta

On 30.03.2020 at 20:36, Nicky Welton wrote:
> If you are interested in modelling the time-course relationship then there is a new package in R to do this within a network meta-analysis framework (although it can also be used if there are only 2 interventions):
> https://cran.r-project.org/web/packages/MBNMAtime/index.html
>
> Best wishes,
>
> Nicky
>
> -----Original Message-----
> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org> On Behalf Of Viechtbauer, Wolfgang (SP)
> Sent: 30 March 2020 19:00
> To: Ju Lee <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org
> Subject: Re: [R-meta] Dear Wolfgang
>
> Thanks for the clarification.
>
> Computing a time-averaged d (or g) value is tricky because the values are not independent. So, if you meta-analyze them, the standard error of the pooled estimate would not be correct unless you take the dependency into consideration. And to answer one of your questions, squaring the standard error from the model would give you the sampling variance, but again, that value would not be correct.
>
> Basically what you have is the 'multiple-endpoint' case described here:
>
> http://www.metafor-project.org/doku.php/analyses:gleser2009#multiple-endpoint_studies
>
> You would need an estimate of the correlation between the repeated measurements (and if you have more than two time points, then an entire correlation matrix) to construct the V matrix for each study before meta-analyzing the values. Then you could use:
>
> res <- rma.mv(yi, V, data=dat)
>
> to pool the estimates; coef(res) then gives the time-averaged estimate and vcov(res) its sampling variance. But the difficult part is constructing V.
>
> Maybe you can make some reasonable assumptions about the size of the correlation (which should probably be lower the further apart the time points are, although if there are seasonal effects, then measurements taken during similar seasons - even if they are further apart - may tend to be more correlated again). Based on equations (19.26) and (19.27) from the Gleser and Olkin (2009) chapter, you can then construct the V matrix.
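> For illustration, a minimal sketch for a single study with several repeated measurements, following the equations mentioned above; the object name dat1, the assumed correlation r = 0.6, and the columns yi, vi, n1i, n2i (as produced by escalc(measure="SMD", ...)) are assumptions for the example, not something taken from your data:
>
> r  <- 0.6           # assumed correlation between repeated measurements
> n1 <- dat1$n1i[1]   # group sample sizes (taken as constant across time points here)
> n2 <- dat1$n2i[1]
> V  <- r * (1/n1 + 1/n2) + r^2 * outer(dat1$yi, dat1$yi) / (2 * (n1 + n2))
> diag(V) <- dat1$vi  # sampling variances on the diagonal
> res <- rma.mv(yi, V, data=dat1)
> coef(res)           # time-averaged estimate
> vcov(res)           # its sampling variance
>
> For several such studies, the study-specific V matrices can be combined into one block-diagonal matrix with metafor's bldiag() before fitting the overall model; and if you want different assumed correlations for different pairs of time points, V can be built from a full correlation matrix instead of a single r.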
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: Ju Lee [mailto:juhyung2 using stanford.edu]
> Sent: Monday, 30 March, 2020 18:16
> To: Michael Dewey; Viechtbauer, Wolfgang (SP); r-sig-meta-analysis using r-project.org
> Subject: Re: [R-meta] Dear Wolfgang
>
> Dear Wolfgang, Michael
>   
> These questions are important, and thank you for pointing them out.
>   
> In answer to your questions:
>   
> 1. The studies measure predation or herbivory rates on experimental prey under control and treatment conditions in the field. The large majority of studies measure this in a short-term experiment and thus at a single time point (e.g., after 24 hr of field exposure).
>
> 2. However, some studies monitor these responses over a long-term period spanning multiple seasons to understand the seasonal dynamics. The issue here is that the responses show seasonal fluctuation, which is what those studies were looking for. The reviewer of our study has warned against using a single representative time point from these multiple measures (e.g., the peak season) and has instead asked us to time-average the multiple measurements to reduce time-related bias.
>
> 3. So, back to the point: in all of these studies, the same treatment vs. control groups are compared at each time point using the same sampling method. We have the mean and SD for all of these data points, separately for the control and treatment groups at each time point of the multiple measurements.
>
> 4. I am using Hedges' d as the effect size measure.
>   
> Thank you and I hope this clarifies the question better!
> Best,
> JU
>
> ________________________________________
> From: Michael Dewey <lists using dewey.myzen.co.uk>
> Sent: Monday, March 30, 2020 5:32 AM
> To: Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer using maastrichtuniversity.nl>; Ju Lee <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org <r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] Dear Wolfgang
>   
> And in addition to Wolfgang's comments it would be helpful to know what scientific question underlies the decision to measure at multiple time points. Presumably the authors of primary studies did not do it for fun.
>
> Michael
>
> On 30/03/2020 11:37, Viechtbauer, Wolfgang (SP) wrote:
>> Dear Ju,
>>
>> Before I can try to address your actual questions, please say a bit more about the studies that measure responses at a single time point. Are groups (e.g., treatment versus control) being compared within these studies? Are the 'responses' continuous (such that means and SDs are being reported) or dichotomous (such that counts or proportions are being reported) or something else? And related to this, what effect size measure are you using for quantifying the group difference within studies? Standardized mean differences (which would make sense when means/SDs are being reported), risk differences or (log) risk/odds ratios (based on counts/proportions), or something else?
>>
>> And the studies that measure responses at multiple time points: Are they just doing the same thing that the 'single time point studies' are doing, but at multiple time points? For example, instead of reporting the means and SDs of the treatment and control group once, there are several follow-ups, such that the means and SDs of the two groups are reported at each follow-up time point?
>>
>> Best,
>> Wolfgang
>>
>> -----Original Message-----
>> From: R-sig-meta-analysis
>> [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Ju Lee
>> Sent: Sunday, 29 March, 2020 20:16
>> To: r-sig-meta-analysis using r-project.org
>> Subject: [R-meta] Dear Wolfgang
>>
>> Dear Wolfgang,
>>
>> I sincerely hope you are well and healthy.
>> I wanted to reach out regarding ways to incorporate studies with repeated measures into the overall mixed-effects model.
>>
>> My data is almost entirely composed of studies measuring responses at a single time point, but there are a few studies that measured responses multiple times throughout the study seasons. I was advised that time-averaging these multiple responses makes more sense for these studies.
>>
>> My understanding was that you could 1) do a fixed-effect meta-analysis of these studies to generate a single mean effect size and sampling variance from the repeated measurements and then 2) incorporate that single effect size and variance into the overall mixed-effects model. Is this a correct approach?
>>
>> If so, how would I calculate the sampling variance from the fixed-effect model in step 1? Is it based on the SE output of the fixed-effect model?
>>
>> Thank you very much, and I look forward to hearing from you!
>> Best,
>> JU
> _______________________________________________
> R-sig-meta-analysis mailing list
> R-sig-meta-analysis using r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis


