[R-meta] MLMA - shared control group

Reza Norouzian rnorouz|@n @end|ng |rom gm@||@com
Tue Aug 31 18:25:40 CEST 2021

Dear Jorge,

The idea behind impute_covariance_matrix() is to construct a general
covariance matrix specifying the correlational structure of the effect size
estimates in each study. By a general matrix, I mean one whose diagonal
elements are the sampling variances (vi) of your observed effect sizes
(yi), and whose off-diagonal elements are obtained by multiplying your
assumed constant correlation (what you input as a single "r" value) by the
square root of the product of each unique pair of "vi" values in that study
(if you input a single value for "r", the same "r" is applied across all
studies).

You can see this in the simple R code below. So, this is not really
related to the context-specific formulas discussed in Gleser &
Olkin's chapter and demonstrated by Wolfgang. This is a very general and,
in many cases, "guesstimatey" approach for dealing with several (and
perhaps messy) sources of sampling dependence.

Note that estimating the dependence among effect size estimates in a study
due to a shared control group doesn't require knowledge of any correlation
reported by the primary author(s). So, if a good number of studies in your
study pool are dependent due to sharing a control group, then you can
manually calculate the covariance among the effect size estimates in a
couple of those studies using Gleser & Olkin's relevant formula to get a
better feel for your range of "r", or even to defend your choice of a
particular value for "r" (maybe the average of the "r" estimates in your
manually calculated studies), especially if your results end up being
sensitive to the choice of "r" when using impute_covariance_matrix().
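As a sketch of that manual calculation (the group sizes and SMDs below are
made-up numbers; the formulas are the large-sample expressions for
standardized mean differences from the Gleser & Olkin approach shown on the
metafor website):

```r
# Sketch of the shared-control covariance for standardized mean
# differences (SMDs). All numbers below are made up for illustration.
n1 <- 25; n2 <- 25; nC <- 25   # two treatment arms and a shared control
N  <- n1 + n2 + nC             # total study sample size
d1 <- 0.40; d2 <- 0.60         # observed SMDs for the two arms

# Large-sample sampling variances of the SMDs:
v1 <- 1/n1 + 1/nC + d1^2 / (2*N)
v2 <- 1/n2 + 1/nC + d2^2 / (2*N)

# Covariance induced by the shared control group (Gleser & Olkin):
cov12 <- 1/nC + d1*d2 / (2*N)

# Implied correlation -- one data-based anchor for your choice of "r":
cov12 / sqrt(v1 * v2)   # ~0.51 for these numbers
```

Repeating this for a few studies that do share a control group gives you an
empirical range for "r" rather than a pure guess.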

These are important analytic decisions, yet they are perhaps confusing to
many consumers of research. So, I would clearly explain in my paper what I
did, share my data, and share the exact code I used, for replicability
purposes.
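One transparent way to report that decision is a small sensitivity
analysis over plausible "r" values. A minimal sketch, using made-up toy
data and an assumed effect-size ID column (esid):

```r
library(clubSandwich)
library(metafor)

# Toy data (made up): 3 studies, 2 effect size estimates each
dat <- data.frame(study = rep(1:3, each = 2),
                  esid  = rep(1:2, times = 3),
                  yi    = c(0.5, 0.7, 0.2, 0.4, 0.9, 1.1),
                  vi    = c(0.10, 0.12, 0.08, 0.09, 0.15, 0.14))

# Refit the three-level model under several assumed correlations
# and compare the pooled estimates:
for (r in c(.3, .5, .7)) {
  V   <- with(dat, impute_covariance_matrix(vi, study, r = r))
  fit <- rma.mv(yi, V, random = ~ 1 | study/esid, data = dat)
  cat("r =", r, " pooled estimate =", round(coef(fit), 3), "\n")
}
```

If the pooled estimate and its standard error barely move across the grid
of "r" values, the exact choice of "r" matters little; if they do move,
that is worth reporting.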


library(clubSandwich) # for impute_covariance_matrix()

txt <- "
  study        yi        vi
1     1 1.7581030 0.3947423
2     1 1.9324494 0.8075765
3     1 0.1225808 0.5933262
"
data <- read.table(text = txt, header = TRUE)

# Assume an r for this and ALL other studies not shown:
r <- .6

# off-diagonal elements:
r * sqrt(0.3947423 * 0.8075765)
r * sqrt(0.3947423 * 0.5933262)
r * sqrt(0.8075765 * 0.5933262)

# diagonal elements are simply the sampling variances:
data$vi

# Now compare the above elements with those in the function's output:
with(data, impute_covariance_matrix(vi, study, r = r))
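Equivalently, the whole per-study block can be assembled at once with base
R's outer() and checked against the function's output (a small sketch using
the same three sampling variances as above):

```r
library(clubSandwich) # for impute_covariance_matrix()

vi <- c(0.3947423, 0.8075765, 0.5933262) # sampling variances from above
r  <- .6

# diagonal = vi; off-diagonal = r * sqrt(vi_i * vi_j)
V <- r * sqrt(outer(vi, vi))
diag(V) <- vi
V

# Same block that impute_covariance_matrix() returns for this study:
impute_covariance_matrix(vi, cluster = rep(1, 3), r = r)
```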

*Reza Norouzian (he/him/his)*

On Tue, Aug 31, 2021 at 6:40 AM Jorge Teixeira <jorgemmtteixeira using gmail.com> wrote:

> Thank you. :)
> Viechtbauer, Wolfgang (SP) <wolfgang.viechtbauer using maastrichtuniversity.nl>
> wrote on Tuesday, 31/08/2021 at 09:57:
>> >-----Original Message-----
>> >From: Jorge Teixeira [mailto:jorgemmtteixeira using gmail.com]
>> >Sent: Tuesday, 31 August, 2021 10:05
>> >To: Viechtbauer, Wolfgang (SP)
>> >Cc: Reza Norouzian; R meta
>> >Subject: Re: [R-meta] MLMA - shared control group
>> >
>> >Thanks Wolfgang and Reza - I have made some progress, at least.
>> >
>> >Yes, I am thinking about 3-level MA.
>> >
>> >Just 2 last points:
>> >
>> >1) Is V** supposed to be equivalent to a certain default correlation
>> >value in impute_covariance_matrix() (i.e., r = 0.5)?
>> >
>> >(** --> V <- bldiag(lapply(split(dat, dat$study), calc.v)))
>> >
>> >The 2 methods seem to give different results across multiple r values.
>> It's not clear what exactly you are comparing, but I guess you are
>> comparing impute_covariance_matrix() with the code you found on the metafor
>> website, namely:
>> https://www.metafor-project.org/doku.php/analyses:gleser2009
>> Those are different approaches, so they are not expected to give the same
>> results.
>> >2) Are r values pretty much based on "expert" opinion and faith? We
>> >don't have tools to assess which value would be the best choice?
>> The correlations should be based on the actual data, like in this example:
>> https://www.metafor-project.org/doku.php/analyses:gleser2009#multiple-endpoint_studies
>> If you don't know the correlations, then one can make a 'guesstimate'.
>> Maybe a few studies do report the correlations, so one can base this
>> guesstimate on that.
>> But no, there isn't really a way of assessing which guesstimate is 'best'
>> (well, one can imagine some rather complex methods that might go in this
>> direction, but this is beyond the scope of this discussion).
>> Best,
>> Wolfgang

