[R-meta] Response Ratios in nested studies

Farzad Keyhan f.keyhaniha using gmail.com
Tue Oct 19 17:06:53 CEST 2021


Dear Reza and James,

Thank you both, as always, for your valuable advice. Could we
possibly combine your two suggestions?

That is, could we first correct the naive sampling variances (which
ignore clustering) and then also apply cluster-robust methods from
the clubSandwich package? A rough sketch of what I mean is below.

My reason is that finding the correct ICC is one source of
uncertainty, and assuming that the ICC is the same across groups is
another; together they could make such a correction somewhat
imprecise, so robust standard errors on top might guard against that.
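
For concreteness, here is a minimal sketch of what I have in mind, assuming a data frame dat with the usual summary columns, an average cluster size n_lower, a study identifier, an effect size identifier esid, and a purely illustrative ICC of 0.10:

library(metafor)
library(clubSandwich)

# log response ratios and naive sampling variances
dat <- escalc(measure = "ROM", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

icc <- 0.10                              # assumed ICC (placeholder)
def <- (dat$n_lower - 1) * icc + 1       # design effect per study
dat$vi_adj <- def * dat$vi               # corrected sampling variances

# model with corrected variances, then cluster-robust (CR2)
# standard errors at the study level via clubSandwich
res <- rma.mv(yi, vi_adj, random = ~ 1 | study/esid, data = dat)
coef_test(res, vcov = "CR2", cluster = dat$study)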

Thanks much,
Fred


On Tue, Oct 19, 2021 at 9:30 AM James Pustejovsky <jepusto using gmail.com> wrote:
>
> Hi Fred,
>
> This is a good question. I am in the same boat as Reza, as I don't know of any methods work that examines the issue (though it seems like the sort of thing that must be out there?). I'm going to respond under the assumption that you don't have access to raw data and are just working with reported summary statistics from a set of studies, some or all of which ignored the clustering issue.
>
> My first thought would be to use the same sort of cluster-correction that is used for raw or standardized mean differences. The variance of the LRR is based on a delta method approximation, and it can be expressed as
>
> vi = se1^2 / m1^2 + se2^2 / m2^2,
>
> where se1 = sd1 / sqrt(n1) and se2 = sd2 / sqrt(n2) are the standard errors of the means in each group (calculated ignoring clustering, assuming a sample of independent observations). The issue with clustered data is that the usual standard errors are too small because of dependent observations. The usual way to correct the issue is to inflate the standard errors by the square root of the design effect, defined as
>
> DEF = (n_lower - 1) * ICC + 1,
>
> where n_lower is the number of lower-level observations per cluster (or the average number of observations per cluster, if there is variation in cluster size) and ICC is an intra-class correlation describing the proportion of the total variation in the outcome that is between clusters.
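>
> For instance, in R that inflation might look like the following for one group (a minimal sketch; all numbers are just illustrative placeholders):
>
> sd1 <- 4.2; n1 <- 60                # reported SD and sample size, group 1
> se1 <- sd1 / sqrt(n1)               # naive SE, ignoring clustering
> icc <- 0.15                         # assumed ICC (illustrative)
> n_lower <- 20                       # average observations per cluster
> DEF <- (n_lower - 1) * icc + 1      # design effect
> se1_adj <- sqrt(DEF) * se1          # clustering-corrected standard error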
>
> If we assume that the ICC is the same in each group, then the design effect hits both standard errors the same way, and so we can just use
>
> vi = DEF * (se1^2 / m1^2 + se2^2 / m2^2).
>
> In some areas of application, it can be hard to find empirical information about ICCs, in which case you may just have to make some rough assumptions when calculating the DEF and then conduct sensitivity analyses across a range of plausible ICC values.
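>
> As a minimal sketch of that kind of sensitivity analysis, assuming dat already contains yi and vi from escalc(measure = "ROM") plus an average cluster size n_lower (the ICC grid here is just an example):
>
> library(metafor)
>
> sens <- sapply(c(0.05, 0.10, 0.20), function(icc) {
>   DEF    <- (dat$n_lower - 1) * icc + 1   # design effect per study
>   vi_adj <- DEF * dat$vi                  # inflated sampling variances
>   fit    <- rma(dat$yi, vi_adj)           # random-effects model
>   c(icc = icc, estimate = coef(fit), se = fit$se)
> })
> t(sens)                                   # estimates and SEs across ICC values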
>
> If my initial assumption is wrong and you do have access to raw data, then the following recent article might be of help:
> https://doi.org/10.1002/sim.9226
>
> Best,
> James
>
> On Fri, Oct 15, 2021 at 9:00 PM Farzad Keyhan <f.keyhaniha using gmail.com> wrote:
>>
>> Hello All,
>>
>> I recently came across a post
>> (https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2021-October/003330.html)
>> that discussed an issue that is relevant to my meta-analysis.
>>
>> In short, if some studies have nested structures, and the effect size
>> of interest is log response ratio (LRR), is there a way to adjust the
>> sampling variances (below) before modeling the effect sizes?
>>
>> vi = sd1i^2/(n1i*m1i^2) + sd2i^2/(n2i*m2i^2)
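>>
>> For reference, this vi is what metafor's escalc() computes for the log response ratio; a minimal sketch, assuming the usual column names in a data frame dat:
>>
>> library(metafor)
>>
>> # yi = log(m1i/m2i), vi = sd1i^2/(n1i*m1i^2) + sd2i^2/(n2i*m2i^2)
>> dat <- escalc(measure = "ROM",
>>               m1i = m1i, sd1i = sd1i, n1i = n1i,
>>               m2i = m2i, sd2i = sd2i, n2i = n2i,
>>               data = dat)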
>>
>> Thank you,
>> Fred
>>


