[R-meta] 3 candidate random structures
jepusto using gmail.com
Mon Aug 16 16:43:49 CEST 2021
On Sat, Aug 14, 2021 at 10:28 PM Timothy MacKenzie <fswfswt using gmail.com> wrote:
> Dear James,
> Thank you, this is very helpful to know. Is there currently a way to
> statistically test individual correlations between outcome levels (assuming
> a "UN" structure, say in outcome | study, where the outcome has 3 levels)?
Yes. This can be tested using a likelihood ratio test comparing the full
model to a model with one (or more) of the correlations fixed to specified
values. You can fix these values using the rho argument in rma.mv. See the
section "Fixing Variance Components and/or Correlations" of ?rma.mv.
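As a hedged sketch (the data frame `dat` and its columns `yi`, `V`, `outcome`, and `study` are hypothetical placeholders), such a likelihood ratio test could look something like this:

```r
library(metafor)

## Full model: unstructured variance-covariance matrix ("UN") among the
## 3 outcome levels within studies.
full <- rma.mv(yi, V, random = ~ outcome | study, struct = "UN", data = dat)

## Reduced model: fix one correlation (here, the first in lower-triangular
## order) to 0; NA entries are left free to be estimated.
reduced <- rma.mv(yi, V, random = ~ outcome | study, struct = "UN",
                  rho = c(0, NA, NA), data = dat)

## Likelihood ratio test of the fixed correlation
anova(full, reduced)
```

Since both models have identical fixed effects and differ only in the variance components, they can be compared with the default REML fitting; `anova()` then reports the LRT.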
> Also, do you think that struct="GEN" might allow such correlations to be
> investigated more thoroughly? For example, if I specify my random
> part as `random = ~ treat_length * outcome | study, struct = "GEN"`, would
> that allow understanding of how the correlations among outcome levels
> change across various values of `treat_length`?
Does that syntax even work? I'm not sure how you would interpret it.
> Also, in your hypothetical D specification, you suggested `~ outcome |
> interaction(study,gr,time)` as one of the terms, which had me wondering why
> you took `outcome` to be nested in `time` (I always thought it was the
> other way around, i.e., time nested in outcome).
Either way (outcomes nested in timepoints or timepoints nested in outcomes)
entails some simplifying assumptions. It seems more plausible that there
would be some structured correlation between all of the effect sizes within
a given study (including correlation between effects from different
outcomes at different time points) but I don't think it's possible to fit
something like that with rma.mv().
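For concreteness, a hedged sketch of the quoted term (with `dat` and the columns `yi`, `V`, `outcome`, `study`, `gr`, and `time` as hypothetical placeholders):

```r
library(metafor)

## Outcomes treated as multivariate within each study-by-group-by-time
## combination. The interaction() term defines the grouping units.
res <- rma.mv(yi, V,
              random = ~ outcome | interaction(study, gr, time),
              struct = "UN", data = dat)
```

Note the simplifying assumption this encodes: effects on different outcomes measured at different time points within the same study are treated as uncorrelated, which is exactly the limitation described above.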
> Kind regards,
> On Sat, Aug 14, 2021 at 9:37 PM James Pustejovsky <jepusto using gmail.com> wrote:
>> See responses below.
>> On Fri, Aug 6, 2021 at 12:27 PM Timothy MacKenzie <fswfswt using gmail.com> wrote:
>>> Dear James,
>>> I seem to have forgotten to answer your question at the start of your
>>> answer. Yes, my outcomes are comparable across my studies. However, I have
>>> no intention of generalizing beyond my outcome levels. This is because the
>>> levels of my outcome correspond to a specific theory in my area of research
>>> and can't be beyond what the theory describes.
>> This makes sense. In multivariate models like these, generalization is to
>> a (hypothetical) population of the units on the right-hand side of the | in
>> the random formula---that is, the units corresponding to the IDs---not to
>> the levels of the outcomes. For instance, say that the outcome variable has
>> levels A, B, C. If you use random = ~ outcome | study, then the model is
>> describing the multivariate distribution of the outcomes (the joint
>> distribution of A, B, C) in a population of studies, of which you have a
>> sample. Some studies in the sample might report only a subset of outcomes
>> (only A, or only A and B), but we could imagine that all of the outcomes
>> *could* have been measured in every study.
>>> However, I want to take a somewhat multivariate approach and let my
>>> outcome levels correlate with one another across my studies because I
>>> actually want to investigate the interrelationships among the existing
>>> levels of my outcome themselves.
>> That's a good reason to use a multivariate model.
>>> Given this context, can I ignore the generalizability aspect of the
>>> multilevel/multivariate approach, and instead take this approach because it
>>> allows for the correlation among the existing outcome levels to be
>>> estimated?
>> Yes, I think so.
More information about the R-sig-meta-analysis mailing list