[R-meta] Dear Wolfgang
Michael Dewey
lists at dewey.myzen.co.uk
Wed Apr 1 14:28:03 CEST 2020
Dear Ju
Based on that I would definitely include a moderator to differentiate
short term from long term.
Michael
On 31/03/2020 21:44, Ju Lee wrote:
> Dear Wolfgang and all,
>
> Thank you very much for these different suggestions. Since time-averaging
> effect sizes is less straightforward in our case, we may lean toward
> using the single final data point, on the logic that the responses
> measured were, to a general extent, cumulative. We also considered
> standardizing this by using the first sampling instead (as Gerta
> suggested), but these multiple-time-point studies were conducted for a
> reason: in their particular study systems and responses, it naturally
> takes longer to see treatment effects than in the short-term studies.
> Perhaps short- vs. long-term measurement could serve as a supplementary
> moderator, to see whether it drove any variation in responses?
>
> Wolfgang: I have looked into the chapter you suggested in Gleser and
> Olkin (2009), but after reading the relevant section I am still unclear
> how to obtain the correlation estimate between Y_j and Y_j* in equations
> 19.26 and 19.27. Do you have any suggestions for relatively
> straightforward ways to calculate or estimate the sample correlation
> from the means of the same treatment or control groups across the
> multiple sampling time points?
>
> In this section of Gleser and Olkin (2009), they say: "When the sample
> covariance matrix of the endpoint measures for the study is published,
> ρ̂_jj* can be taken to be the sample correlation, r_jj*. Otherwise,
> ρ̂_jj* will have to be imputed from other available information, for
> example, from the sample correlation for endpoints Y_j and Y_j* taken
> from another study."
>
> Thank you everyone for your time and inputs!
> Sincerely,
> JU
> ------------------------------------------------------------------------
> *From:* Michael Dewey <lists using dewey.myzen.co.uk>
> *Sent:* Tuesday, March 31, 2020 4:28 AM
> *To:* Dr. Gerta Rücker <ruecker using imbi.uni-freiburg.de>; Nicky Welton
> <Nicky.Welton using bristol.ac.uk>; Viechtbauer, Wolfgang (SP)
> <wolfgang.viechtbauer using maastrichtuniversity.nl>; Ju Lee
> <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org
> <r-sig-meta-analysis using r-project.org>
> *Subject:* Re: [R-meta] Dear Wolfgang
> Indeed, if the single-time-point studies used different times, Ju will
> probably want to do a meta-regression with time as a moderator, in which
> case it would not matter too much which single value Ju chose from the
> multiple-time-point studies. That would avoid the complexity of
> estimating the V matrix.
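> In metafor, such a meta-regression can be sketched as follows (the data
> and the variable time_days are purely hypothetical, for illustration):

```r
library(metafor)

# hypothetical data: one Hedges' g (yi) with its sampling variance (vi)
# per study, each measured at a different time (in days)
dat <- data.frame(yi = c(0.42, 0.31, 0.55, 0.12),
                  vi = c(0.04, 0.03, 0.05, 0.02),
                  time_days = c(1, 1, 90, 180))

# meta-regression with measurement time as a moderator
res <- rma(yi, vi, mods = ~ time_days, data = dat)
summary(res)
```

> The slope for time_days then absorbs timing differences, so which single
> value is chosen from the multiple-time-point studies matters less.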
>
> Michael
>
> On 30/03/2020 19:42, Dr. Gerta Rücker wrote:
>> Dear Ju,
>>
>> Another (maybe simplistic) solution could be to use only one time point
>> (e.g., always the first) for those studies that report repeated
>> measurements. This can be justified because you wrote that the "large
>> majority of studies measure this over short term experiment and thus on
>> a single time point" - so, as I understand it, only a minority of
>> studies causes the multiplicity issue.
>>
>> A similar simplification is often done when including a small number of
>> cross-over trials into a meta-analysis of RCTs with parallel-group design.
>>
>> Best,
>>
>> Gerta
>>
>> Am 30.03.2020 um 20:36 schrieb Nicky Welton:
>>> If you are interested in modelling the time-course relationship then
>>> there is a new package in R to do this within a network meta-analysis
>>> framework (although it can also be used if there are only 2
>>> interventions):
>>> https://cran.r-project.org/web/packages/MBNMAtime/index.html
>>>
>>> Best wishes,
>>>
>>> Nicky
>>>
>>> -----Original Message-----
>>> From: R-sig-meta-analysis <r-sig-meta-analysis-bounces using r-project.org>
>>> On Behalf Of Viechtbauer, Wolfgang (SP)
>>> Sent: 30 March 2020 19:00
>>> To: Ju Lee <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org
>>> Subject: Re: [R-meta] Dear Wolfgang
>>>
>>> Thanks for the clarification.
>>>
>>> Computing a time-averaged d (or g) value is tricky because the values
>>> are not independent. So, if you meta-analyze them, the standard error
>>> of the pooled estimate would not be correct unless you take the
>>> dependency into consideration. And to answer one of your questions,
>>> squaring the standard error from the model would give you the sampling
>>> variance, but again, that value would not be correct.
>>>
>>> Basically what you have is the 'multiple-endpoint' case described here:
>>>
>>> http://www.metafor-project.org/doku.php/analyses:gleser2009#multiple-endpoint_studies
>>>
>>> You would need an estimate of the correlation between the repeated
>>> measurements (and if you have more than two time points, then an
>>> entire correlation matrix) to construct the V matrix for each study
>>> before meta-analyzing the values. Then you could use:
>>>
>>> res <- rma.mv(yi, V, data=dat)
>>>
>>> to pool the estimates into a time-averaged estimate, coef(res), and
>>> vcov(res) would give you the sampling variance. But the difficult part
>>> is constructing V.
>>>
>>> Maybe you can make some reasonable assumptions about the size of the
>>> correlation (which should probably be lower the further apart the time
>>> points are, although if there are seasonal effects, then measurements
>>> taken during similar seasons - even if further apart - may tend to be
>>> more highly correlated again). Based on equations (19.26) and (19.27)
>>> from the Gleser and Olkin (2009) chapter, you can then construct the V
>>> matrix.
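>>> As a minimal sketch of this construction (the data, the assumed r = 0.6,
>>> and the AR(1)-like decay over time points are all illustrative
>>> assumptions, not part of the original data):

```r
library(metafor)

# hypothetical multiple-endpoint data: study 1 has 3 time points, study 2 one
dat <- data.frame(study = c(1, 1, 1, 2),
                  yi = c(0.40, 0.35, 0.20, 0.50),   # Hedges' g
                  vi = c(0.05, 0.05, 0.06, 0.04))   # sampling variances

# assumed correlation between adjacent repeated measurements (a guess)
r <- 0.6

# per study: correlation decays with distance between time points (AR(1)-like),
# then scale by the standard errors to get the covariance block
Vlist <- lapply(split(dat$vi, dat$study), function(v) {
  R <- r^abs(outer(seq_along(v), seq_along(v), "-"))
  diag(sqrt(v), nrow = length(v)) %*% R %*% diag(sqrt(v), nrow = length(v))
})
V <- bldiag(Vlist)   # block-diagonal V across studies

res <- rma.mv(yi, V, data = dat)
coef(res)   # time-averaged pooled estimate
vcov(res)   # its sampling variance
```

>>> bldiag() stacks the per-study covariance blocks into the single V
>>> matrix that rma.mv() expects.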
>>>
>>> Best,
>>> Wolfgang
>>>
>>> -----Original Message-----
>>> From: Ju Lee [mailto:juhyung2 using stanford.edu]
>>> Sent: Monday, 30 March, 2020 18:16
>>> To: Michael Dewey; Viechtbauer, Wolfgang (SP);
>>> r-sig-meta-analysis using r-project.org
>>> Subject: Re: [R-meta] Dear Wolfgang
>>>
>>> Dear Wolfgang, Michael
>>> These questions are important, and thank you for pointing them out.
>>> In answers to your questions:
>>> 1. Studies measure predation or herbivory rates on experimental prey
>>> under control and treatment conditions in the field. The large
>>> majority of studies measure this in a short-term experiment and thus
>>> at a single time point (e.g., after 24 h of field exposure).
>>>
>>> 2. However, some studies monitor these responses over a long-term
>>> period spanning multiple seasons to understand the seasonal dynamics.
>>> The issue is that the responses show seasonal fluctuation, which is
>>> what those studies were looking for. The reviewer of our study has
>>> warned against using a single representative time point from these
>>> multiple measures (e.g., peak season) and instead recommends
>>> time-averaging the multiple measurements to reduce time-related bias.
>>>
>>> 3. So, back to the point: in all of these studies, the same treatment
>>> and control groups are compared at each of the multiple time points
>>> using the same sampling method. We have the mean and SD for every data
>>> point, separately for the control and treatment groups at each time
>>> point.
>>>
>>> 4. I am using Hedges' d as the effect size measure.
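>>> With means, SDs, and sample sizes per group at each time point, the
>>> per-time-point effect sizes can be computed with metafor's escalc();
>>> measure "SMD" gives the bias-corrected standardized mean difference
>>> (Hedges' g). All data and column names below are hypothetical:

```r
library(metafor)

# hypothetical summary data: one row per time point of a study, with
# mean/SD/n for the treatment (m1i/sd1i/n1i) and control (m2i/sd2i/n2i) groups
dat <- data.frame(time = c(1, 2, 3),
                  m1i = c(5.1, 6.0, 5.5), sd1i = c(1.2, 1.1, 1.3), n1i = c(20, 20, 18),
                  m2i = c(4.0, 4.2, 4.4), sd2i = c(1.0, 1.2, 1.1), n2i = c(20, 19, 18))

# bias-corrected SMD (Hedges' g) and its sampling variance per time point
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
dat   # now contains yi (effect size) and vi (sampling variance) columns
```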
>>> Thank you and I hope this clarifies the question better!
>>> Best,
>>> JU
>>>
>>> ________________________________________
>>> From: Michael Dewey <lists using dewey.myzen.co.uk>
>>> Sent: Monday, March 30, 2020 5:32 AM
>>> To: Viechtbauer, Wolfgang (SP)
>>> <wolfgang.viechtbauer using maastrichtuniversity.nl>; Ju Lee
>>> <juhyung2 using stanford.edu>; r-sig-meta-analysis using r-project.org
>>> <r-sig-meta-analysis using r-project.org>
>>> Subject: Re: [R-meta] Dear Wolfgang
>>> And in addition to Wolfgang's comments it would be helpful to know
>>> what scientific question underlies the decision to measure at multiple
>>> time points. Presumably the authors of primary studies did not do it
>>> for fun.
>>>
>>> Michael
>>>
>>> On 30/03/2020 11:37, Viechtbauer, Wolfgang (SP) wrote:
>>>> Dear Ju,
>>>>
>>>> Before I can try to address your actual questions, please say a bit
>>>> more about the studies that measure responses at a single time point.
>>>> Are groups (e.g., treatment versus control) being compared within
>>>> these studies? Are the 'responses' continuous (such that means and
>>>> SDs are being reported) or dichotomous (such that counts or
>>>> proportions are being reported) or something else? And related to
>>>> this, what effect size measure are you using for quantifying the
>>>> group difference within studies? Standardized mean differences (which
>>>> would make sense when means/SDs are being reported), risk differences
>>>> or (log) risk/odds ratios (based on counts/proportions), or something
>>>> else?
>>>>
>>>> And the studies that measure responses at multiple time points: Are
>>>> they just doing the same thing that the 'single time point studies'
>>>> are doing, but at multiple time points? For example, instead of
>>>> reporting the means and SDs of the treatment and control group once,
>>>> there are several follow-ups, such that the means and SDs of the
>>>> two groups are reported at each follow-up time point?
>>>>
>>>> Best,
>>>> Wolfgang
>>>>
>>>> -----Original Message-----
>>>> From: R-sig-meta-analysis
>>>> [mailto:r-sig-meta-analysis-bounces using r-project.org] On Behalf Of Ju Lee
>>>> Sent: Sunday, 29 March, 2020 20:16
>>>> To: r-sig-meta-analysis using r-project.org
>>>> Subject: [R-meta] Dear Wolfgang
>>>>
>>>> Dear Wolfgang,
>>>>
>>>> I sincerely hope you are well and healthy.
>>>> I wanted to reach out regarding ways to incorporate studies with
>>>> repeated-measures to overall mixed effect models.
>>>>
>>>> My data is almost entirely composed of studies measuring responses at
>>>> a single time point, but there are a few studies that measured
>>>> responses multiple times throughout the study seasons. I was advised
>>>> that time-averaging these multiple responses makes more sense for
>>>> these studies.
>>>>
>>>> My understanding was that you could 1) do a fixed-effect
>>>> meta-analysis of these studies to generate a single mean effect size
>>>> and sampling variance from the repeated measurements and then 2)
>>>> incorporate that single effect size and variance into the overall
>>>> mixed-effect model. Is this a correct approach?
>>>>
>>>> If so, how would I calculate the sampling variance from the
>>>> fixed-effect model in step 1? Is it based on the SE output of the
>>>> fixed-effect model?
>>>>
>>>> Thank you very much, and I look forward to hearing from you!
>>>> Best,
>>>> JU
>>> _______________________________________________
>>> R-sig-meta-analysis mailing list
>>> R-sig-meta-analysis using r-project.org
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>>>
>>
>
> --
> Michael
> http://www.dewey.myzen.co.uk/home.html
>
--
Michael
http://www.dewey.myzen.co.uk/home.html