[R-meta] Violation in non-independence of errors (head-to-head studies and multilevel meta-analysis)?

Emily Finne emily.finne at uni-bielefeld.de
Sat Mar 24 21:40:49 CET 2018


Dear Wolfgang,


oh yes, many people were sick during the last few weeks, here too. I 
hope you're feeling better by now.

Yes, that is exactly the data structure I have. I completely failed to 
think of the problem as one of crossed random effects! Thank you!

I am quite sure that I constructed the var-cov matrix V correctly. I 
used the formulas by Gleser & Olkin and James Pustejovsky and 
double-checked the resulting matrix. Additionally, I used robust 
estimation, since most correlations between outcomes were only a best 
guess.

Just to make sure that I understand the point about the random effects 
correctly: I code the two different treatment groups within one study 
with different numbers starting at 1 (for example) and then use the 
code you provided for the crossed random effects. But the numbers given 
to the different treatments are arbitrary and do not mean that the 
group with 'treatment = 1' always received the same treatment. The 
coding only indicates that treatments 1 and 2 within one study are 
different (say, medications A and B, each compared against a placebo 
control), not that 1 and 2 always mean the same thing across studies 
(in another study they could stand for medications B and C vs. 
control). Am I right?
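
For illustration, a hypothetical coding (made-up data, just to show 
what I mean):

dat <- data.frame(study = c(1, 1, 2, 2),
                  trt   = c(1, 2, 1, 2),
                  yi    = c(0.30, 0.25, 0.40, 0.10))
# 'trt' only distinguishes arms within a study: trt = 1 in study 1
# (say, medication A) and trt = 1 in study 2 (say, medication B)
# need not be the same treatment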

The treatments we look at in our analysis are in fact all different in 
some respects, although they pursue the same goal. We then use 
characteristics of the treatments as moderators and hope to explain 
differences in effect sizes.

Again, thank you so much for your detailed help.

I will check whether the model with the crossed random effects 
converges. Otherwise, I will stick to the old model (only 
random = ~ outcome | study, struct="UN") and discuss this as a 
limitation.


Best,

Emily



On 24.03.2018 at 14:19, Viechtbauer Wolfgang (SP) wrote:
> Due to illness (I was sick twice this month), I have not found any time to respond to questions here. I am trying to catch up now.
>
> I am not entirely sure if I understand your data structure, but it seems to me that it is something like this:
>
> id  study  trt  outcome
> -----------------------
> 1   1      1    1
> 2   1      2    1
> 3   2      1    1
> 4   2      1    2
> 5   3      1    1
> 6   3      1    2
> 7   3      2    1
> 8   3      2    2
> ...
>
> So, study 1 has two treatment groups compared against a common control group, and outcome 1 was measured. Study 2 has a single treatment group, but in addition to outcome 1, also outcome 2 was measured. And study 3 has both: two treatment groups and both outcomes.
>
> It is not clear to me whether you are actually computing covariances for the sampling errors or not. For the case of multiple treatment groups (compared to the same control group), no additional information is needed except what is typically required for computing the effect sizes themselves. For the case of two outcomes, additional information is needed (i.e., the correlation between the two outcomes). Code for computing the covariances for various effect size measures is provided here:
>
> http://www.metafor-project.org/doku.php/analyses:gleser2009
>
> When both cases occur together, things get tricky. One can derive the covariances, but this takes a bit of work. James Pustejovsky has posted some work on his blog that shows how one can derive those covariances: https://jepusto.github.io/archive/
>
> So, for the illustrative case above, the V matrix should be block-diagonal with blocks of 2x2, 2x2, and 4x4.
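>
> For instance, a minimal sketch of assembling V (assuming the per-study blocks V1 (2x2), V2 (2x2), and V3 (4x4) have already been computed with the formulas linked above):
>
> library(metafor)
> V <- bldiag(list(V1, V2, V3))  # block-diagonal V, in the row order of the dataset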
>
> Aside from this, there is the question of what random effects to add. First: Things like 'random = ~ outcome | study/id' do not work. I am surprised that you did not get an error when you tried this, but I may have added the check for this more recently. So, make sure you install the 'devel' version:
>
> https://github.com/wviechtb/metafor#installation
>
> so that it will catch this and throw an error. I can't recall what this would actually do before I added the check, but whatever it is, it's probably nonsense.
>
> In the case above, we want to allow for correlation between the true effects for the two different outcomes. That could be captured with:
>
> random = ~ outcome | study, struct="UN"
>
> which, btw, is like the Berkey et al. example:
>
> http://www.metafor-project.org/doku.php/analyses:berkey1998
>
> But we also want to allow for correlation between the true effects when there are two different treatment groups. That could be captured with:
>
> random = ~ trt | study
>
> Note that I did not set 'struct' (so it is using the default struct="CS"). Unless trt=1 and trt=2 always have the same meaning across studies, the values 1 and 2 are essentially arbitrary, so we just use them to distinguish the two treatments, but we do not want to allow for a different tau^2 for trt=1 vs trt=2.
>
> So, combining these two things would lead to:
>
> random = list(~ outcome | study, ~ trt | study), struct=c("UN","CS")
>
> This is a model with crossed random effects. I don't know whether this model can really be fitted with your data. If it does converge, profile the variance/correlation components with:
>
> profile(res)
>
> (where 'res' is the fitted model). All profile likelihood plots should peak and show no flatness (horizontal lines). That's a good indication that the variance/correlation components are identifiable.
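>
> Putting the pieces together, the full call would look something like this (a sketch, assuming 'dat' contains yi, study, trt, and outcome, and 'V' is the block-diagonal matrix discussed above):
>
> res <- rma.mv(yi, V,
>               random = list(~ outcome | study, ~ trt | study),
>               struct = c("UN","CS"), data = dat)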
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-project.org] On Behalf Of Emily Finne
> Sent: Monday, 05 March, 2018 21:39
> To: r-sig-meta-analysis at r-project.org
> Subject: Re: [R-meta] Violation in non-independence of errors (head-to-head studies and multilevel meta-analysis)?
>
> Update:
>
> I had a closer look at the models and what changes in the estimates:
>
> When I add "id" as another random effect level, I find that sigma^2 is
> 0.000 for Outcome and id (or very near to that in the models with
> moderators). Effect estimates for moderators remain nearly unchanged
> (although some CIs change). It therefore does not look to me as if it
> makes sense to include "id" in addition to the original multivariate model.
>
> So, if it is not really wrong, I would prefer to stick to the original
> model.
>
> But why is the effect size estimate different (SMD about 0.1 higher with
> a three-level model or when including the id random effect) and which
> one is correct?
>
> Best,
>
> Emily
>
> On 05.03.2018 at 17:12, Emily Finne wrote:
>> Dear Wolfgang,
>>
>> may I just chime in on your conversation? After reading it, I am
>> getting quite uncertain about our own analysis...
>>
>> We have a combination of studies with multiple treatments compared to
>> the same control group (in some studies) and of two different outcome
>> measures (one outcome was present in all studies; the second was
>> additionally present in only a subset of the studies). We first looked
>> at the overall effect and in a next step tested different moderators.
>>
>> We followed a multivariate approach with rma.mv and used the multivariate
>> parameterization as described in the konstantopoulos2011 example on the
>> metafor website. So we have:
>>
>> random = ~ Outcome | study
>>
>> However, we also have studies with multiple treatment groups. After
>> reading your example code (from your reply below), I am not sure if it
>> would be correct to add another random effect for each effect size, i.e.
>>
>> random = ~ Outcome | study/id
>>
>> We did not do that, because we thought that with Outcome as inner factor
>> we had added random variation between the different effect sizes within
>> each study (for those cases where more than one effect size is included).
>>
>> After trying out random = ~ Outcome | study/id, however, we get a
>> different (higher) overall effect.
>>
>> And after reading the website example again, I also compared the results
>> for a three-level parameterization (random = ~ 1 | study/Outcome) versus
>> a multivariate parameterization (random = ~ Outcome | study).
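>>
>> In code, the two parameterizations were roughly (only a sketch; 'dat'
>> and 'V' as in our analysis):
>>
>> res.ml <- rma.mv(yi, V, random = ~ 1 | study/Outcome, data=dat)            # three-level
>> res.mv <- rma.mv(yi, V, random = ~ Outcome | study, struct="UN", data=dat) # multivariate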
>>
>> In fact, these results also differ, and the overall estimated effect
>> size for the three-level model is very nearly the same (in terms of
>> robust CIs) as for the model with random = ~ Outcome | study/id.
>>
>> Are we making a mistake if we ignore the additional "id" level random
>> effect? Or do we mistakenly add this random effect twice, since we
>> have already incorporated random variation within the studies by using
>> Outcome as inner factor?
>>
>> There are, in fact, 2 studies which had both: 2 outcomes and also 2 or 3
>> treatment groups. So this may be the part we have missed so far by
>> ignoring "id" as an additional level?
>>
>> We have 6 studies which had both outcomes but only one treatment group.
>> I am therefore not sure if we would overparameterize if we included "id",
>> because these trials have two lines in the dataset (2 ids) that also
>> stand for the 2 Outcomes.
>>
>> The variance-covariance matrix includes covariances for different
>> outcomes within the same study, for different treatment groups within
>> one study, or for both, as appropriate. The profile likelihood plots for
>> our original multivariate (~ Outcome | study) model looked fine.
>>
>> Or would it be better to stick to the three-level model? We describe,
>> but do not further analyze or discuss, differences between the two
>> outcomes, because both are intended to measure the same outcome with
>> different instruments (though one gives somewhat higher estimates). For
>> the analysis of moderators, which is our main question, it makes more
>> sense to look at only one moderator effect instead of one for each
>> outcome measure, since only some studies used both outcome measures.
>> But of course, we would like to take account of the fact that different
>> outcome measures were used.
>>
>> Would a change in the strategy likely result in changes in the fixed
>> effects of the moderators?
>>
>> I hope it is to some extent clear what I mean. Any help would be very
>> much appreciated!
>>
>> Thanks in advance!
>>
>> Best,
>>
>> Emily
>>
>> On 05.03.2018 at 10:12, Viechtbauer Wolfgang (SP) wrote:
>>> Dear Natan,
>>>
>>> If you reuse the information from a placebo group to compute multiple effects (i.e., treatment 1 vs placebo, treatment 2 vs placebo, etc.), then this automatically induces dependency in the sampling errors of the estimates. Code to compute the covariance for various effect size measures can be found here:
>>>
>>> http://www.metafor-project.org/doku.php/analyses:gleser2009
>>>
>>> So, you need to construct the full V matrix, use rma.mv(), and include appropriate random effects (at least for studies and for each row of the dataset) in the model. Something like this:
>>>
>>> dat$id <- 1:nrow(dat)
>>> res <- rma.mv(yi, V, mods = ~ <whatever fixed effects you think are needed>,
>>>                  random = ~ 1 | study/id, data=dat)
>>>
>>> I am a bit confused about:
>>>
>>>> We are trying to avoid network meta-analysis, given we want our results
>>>> to be adjusted by several moderators that affect antidepressant response.
>>> Why do you think that network meta-analysis is not compatible with 'adjustment by moderators'?
>>>
>>> Best,
>>> Wolfgang
>>>
>>>> -----Original Message-----
>>>> From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces at r-
>>>> project.org] On Behalf Of Natan Gosmann
>>>> Sent: Saturday, 03 March, 2018 20:46
>>>> To: r-sig-meta-analysis at r-project.org
>>>> Subject: [R-meta] Violation in non-independence of errors (head-to-head
>>>> studies and multilevel meta-analysis)?
>>>>
>>>> Hello all,
>>>>
>>>> We are conducting a large multilevel meta-analysis using the metafor
>>>> package, considering all RCTs that assessed medications vs placebo for
>>>> psychiatric disorders.
>>>>
>>>> We included all available outcomes from each study and, therefore, we
>>>> are considering study and assessment instrument (scale) as random
>>>> variables in the model. The yi values come from differences in
>>>> standardized mean change between medication and placebo for each study.
>>>>
>>>> We are trying to avoid network meta-analysis, given we want our results
>>>> to
>>>> be adjusted by several moderators that affect antidepressant response.
>>>>
>>>> However, we have doubts about how to handle head-to-head studies
>>>> (studies with more than one medication) and studies with distinct
>>>> dosages of the same medication. We were thinking of just calculating
>>>> differences from placebo (but placebo would be the same group for
>>>> those studies - it would be the contrast group for more than one
>>>> medication or dosage group). Does including study ID as a random
>>>> variable already account for the violation of non-independence of
>>>> errors? Is that an appropriate way of doing this?
>>>>
>>>> Alternatively, should we select only one medication from head to head
>>>> trials?
>>>>
>>>> I would very much appreciate it if you could help us with this.
>>>>
>>>> Best regards,
>>>> Natan

