[R-meta] MLMA - shared control group
Reza Norouzian
rnorouzian at gmail.com
Mon Aug 30 17:35:42 CEST 2021
Dear Wolfgang,
Jorge may benefit from using cluster-robust estimates of the fixed
effects in his (perhaps 3-level) model. However, my current
understanding is that assuming Cov(e_{ijk}, e_{ij'k}) = 0 for two
observed estimates from the same person on, say, two outcomes in the
same study, even when that assumption does not hold in reality
(perhaps by a wide margin), still gives Type I error rates and
confidence interval coverage that are nearly accurate.
Surely, using cluster-robust estimates of the fixed effects may
further improve this; but, as I indicated in my response, the bigger
issue is the systematically biased estimates of the variance
components (a between-study variance that is likely overestimated,
and a within-study variance that is likely underestimated).
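
For instance, purely as a sketch (the data frame 'dat' and its columns
yi, vi, study, and es_id are hypothetical), cluster-robust tests and
confidence intervals for the fixed effects of a three-level model could
be obtained along these lines:

library(metafor)

# Three-level model that (incorrectly) treats the sampling errors
# within studies as independent:
res <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)

# Cluster-robust inference for the fixed effects (CR2 estimator with
# Satterthwaite dfs via the clubSandwich package), clustering by study:
robust(res, cluster = dat$study, clubSandwich = TRUE)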
Kind regards,
Reza
On Mon, Aug 30, 2021 at 4:35 AM Viechtbauer, Wolfgang (SP)
<wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>
> Please see below for a note.
>
> Best,
> Wolfgang
>
> >-----Original Message-----
> >From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
> >Behalf Of Reza Norouzian
> >Sent: Saturday, 28 August, 2021 22:51
> >To: Jorge Teixeira
> >Cc: R meta
> >Subject: Re: [R-meta] MLMA - shared control group
> >
> >Please see below.
> >
> >1) If I get things right, can we copy+paste the matrix code and will
> >it always work in similar cases?
> >
> >If ALL studies are structured like what Wolfgang demonstrated based on
> >Gleser & Olkin's chapter, yes. But note that this formula assumes
> >that, for example, all studies have measured their subjects on a
> >single outcome. If some studies, in addition to having several
> >treatment groups, have more than one outcome, or have used one or
> >more post-tests, then this may not be useful in those cases
> >(although extensions are possible).
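> >
> >Purely as a sketch of that basic case (the column names n.control and
> >n.total are made up), the covariances for standardized mean differences
> >sharing a control group could be built along the lines of the
> >multiple-treatment example based on Gleser & Olkin (2009) on the
> >metafor website:
> >
> >library(metafor)
> >
> ># cov(d_i, d_j) = 1/n_C + d_i*d_j / (2*N) for two SMDs that share a
> ># control group, with n_C = control group size and N = total study N
> >calc_v <- function(x) {
> >  v <- matrix(1 / x$n.control[1] + outer(x$yi, x$yi) / (2 * x$n.total[1]),
> >              nrow = nrow(x), ncol = nrow(x))
> >  diag(v) <- x$vi  # the diagonal holds the usual sampling variances
> >  v
> >}
> >
> ># block-diagonal V across studies (assumes 'dat' is sorted by study)
> >V <- bldiag(lapply(split(dat, dat$study), calc_v))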
> >
> >One way to avoid all the headache is to guesstimate the correlation
> >among effects due to (perhaps several) sources of sampling dependence
> >using V <- clubSandwich::impute_covariance_matrix(), and then feed it
> >into the rma.mv() function via its V argument.
> >
> >There are plenty of examples of this if you search the archives.
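> >
> >For instance (just a sketch; the column names and the value of r are
> >made up):
> >
> >library(metafor)
> >library(clubSandwich)
> >
> ># guesstimate a common correlation (say r = 0.6) among the sampling
> ># errors within each study and build an approximate block-diagonal V:
> >V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)
> >
> >res <- rma.mv(yi, V, random = ~ 1 | study/es_id, data = dat)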
> >
> >2) For meta-regression, we also have to use V, not vi, correct?
> >
> >In the end, you need to input either vi or V. If you use vi, then
> >you're ignoring sampling dependence. If you ignore such sampling
> >dependence, no major harm is done to your estimates of average effects
> >(fixed effects), but your estimates of how variable your effects are
> >at each level may be systematically biased (i.e., even with a very
> >large dataset, you may still not obtain the true value of the
> >heterogeneity).
> >
> >If you don't care about the heterogeneity of the effect sizes, then
> >knowing about "any correlation among effect sizes" is not necessary,
> >and you can just use vi.
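> >
> >For example (with a hypothetical moderator 'mod' and the approximate V
> >from above):
> >
> >fit_vi <- rma.mv(yi, vi, mods = ~ mod, random = ~ 1 | study/es_id, data = dat)
> >fit_V  <- rma.mv(yi, V,  mods = ~ mod, random = ~ 1 | study/es_id, data = dat)
> >
> ># the fixed-effect estimates are usually close, but comparing
> ># fit_vi$sigma2 with fit_V$sigma2 shows how the variance components shift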
>
> Not only are the estimates of heterogeneity off, but the SEs of the fixed effects also cannot be trusted. So, generally speaking, if you know that your sampling errors are not independent (e.g., due to the computation of multiple effects based on the same sample of subjects or due to the use of a shared control group), then you should try to compute those sampling error covariances and put them into the V matrix, or construct an approximate V matrix using, for example, impute_covariance_matrix().
>
> One *could* also ignore the covariances and then use cluster-robust inference methods so that the SEs are (at least asymptotically) correct. This won't 'fix' the estimates of heterogeneity, though, so those still cannot be trusted.
>
> Also, as explained by Reza, if the V matrix is only an approximate one, then one could also use cluster-robust inference methods.
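>
> For example (a sketch, combining an approximate V like the one above with cluster-robust inference via clubSandwich):
>
> res <- rma.mv(yi, V, random = ~ 1 | study/es_id, data = dat)
> clubSandwich::coef_test(res, vcov = "CR2", cluster = dat$study)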
>
> [snip]