[R-meta] mean-variance relationships introduce additional heterogeneity, how?

James Pustejovsky jepusto at gmail.com
Mon Oct 25 14:50:39 CEST 2021


Hi Luke,

Whoops, I switched notation halfway through writing my reply and evidently missed a couple of spots. Yes, for pi_Ai read m_Ai and for pi_Bi read m_Bi.

As far as graphical diagnostics, it's a bit easier in your case than in the poisson-gamma model. Since you're working with proportions, I would suggest graphing the SDs of the control groups against the means of the control groups (and the same for the treatment groups). If some of your outcomes are raw counts, you'd first want to translate the summary statistics into proportions by using m_i / Ti and sd_i / Ti.

Actually, if you know the number of trials going into each proportion, then an even better option than plotting sd_Ai versus m_Ai would be to factor out the number of trials, i.e., plot sd_Ai * sqrt(Ti) versus m_Ai.
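
Something along these lines, for instance (just a rough sketch; I'm assuming a data frame with one row per study and made-up column names m_A, sd_A, m_B, sd_B, and T_trials that you would need to adapt to your own data):

# Rough sketch: assumes a data frame 'dat' with one row per study and
# placeholder columns m_A, sd_A (control), m_B, sd_B (treatment), T_trials.
# If an outcome is a raw count, first divide its mean and SD by the number of trials.
par(mfrow = c(1, 2))
plot(dat$m_A, dat$sd_A * sqrt(dat$T_trials),
     xlab = "Control mean (proportion)", ylab = "Control SD * sqrt(T)")
curve(sqrt(x * (1 - x)), add = TRUE, lty = 2)  # reference curve under a pure binomial model (an added assumption)
plot(dat$m_B, dat$sd_B * sqrt(dat$T_trials),
     xlab = "Treatment mean (proportion)", ylab = "Treatment SD * sqrt(T)")
curve(sqrt(x * (1 - x)), add = TRUE, lty = 2)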

Another potentially useful diagnostic is simply to plot each effect size metric (SMD and response ratio) against m_Ai and against Ti. If my speculative model is in the ballpark, we would expect the SMD to be moderated by both m_Ai and Ti.
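
With metafor, that could look something like this (again only a sketch; the column names are placeholders for your own):

# Rough sketch using metafor's escalc(); column names are placeholders.
library(metafor)
dat_smd <- escalc(measure = "SMD", m1i = m_B, sd1i = sd_B, n1i = n_B,
                  m2i = m_A, sd2i = sd_A, n2i = n_A, data = dat)
dat_lrr <- escalc(measure = "ROM", m1i = m_B, sd1i = sd_B, n1i = n_B,
                  m2i = m_A, sd2i = sd_A, n2i = n_A, data = dat)
par(mfrow = c(2, 2))
plot(dat$m_A, dat_smd$yi, xlab = "m_A", ylab = "SMD")
plot(dat$T_trials, dat_smd$yi, xlab = "T (trials)", ylab = "SMD")
plot(dat$m_A, dat_lrr$yi, xlab = "m_A", ylab = "log response ratio")
plot(dat$T_trials, dat_lrr$yi, xlab = "T (trials)", ylab = "log response ratio")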

James



On Sun, Oct 24, 2021 at 6:08 PM Luke Martinez <martinezlukerm using gmail.com> wrote:
> 
> Many thanks James!
> 
> Just to make sure, do pi_Ai and pi_Bi represent (in my context of
> participants taking a test) the proportion of correct responses in each
> group (treatment vs. control), respectively?
> 
> We're saying that even though these proportions (pi_Ai and pi_Bi) are
> linked by the same multiplicative coefficient lambda across all the
> studies, so that the treatment effect is constant across studies, we
> still get varying SMDs, and that is simply due to the natural mean-SD
> relationship underlying the data-generating process for the raw scores
> (counts of some sort).
> 
> If yes, I guess the difficult part (for me) becomes providing
> empirical support that there is at least some evidence of a mean-SD
> relation in my data, so as to justify the assumed data-generating
> process. In my case (with longitudinal primary studies), for example,
> the smallest study gives the following results.
> 
> Can I plot anything here (or for any other study) to empirically show
> mean-sd relation?
> 
> Many thanks,
> Luke
> 
> #----------------------------------------------
> d="
> Mt     SDt    nt Mc     SDc    nc time
> 0.0799 0.0367 21 0.0763 0.0389 26 0
> 0.113  0.0472 21 0.1095 0.0537 26 1"
> 
> dat <- read.table(text = d, header = TRUE)
> #----------------------------------------------
> 
> On Sun, Oct 24, 2021 at 1:42 PM James Pustejovsky <jepusto using gmail.com> wrote:
>> Hi Luke,
>> My original response to your question was not specifically about mean-variance relationships within a study across multiple time points, but rather was more generally about mean-variance relationships in summary statistics. The simplest case here is where there's just one effect size per study and where the studies vary in the mean level of the outcome in a given group. I'll give you an example of the sort of data-generating process that I had in mind. It's important to bear in mind, though, that this is all entirely speculative---I'm *not* asserting that this model is appropriate for your data specifically, only that the example is one plausible situation where mean-variance relationships arise.
>> Suppose that we have k studies, each involving a two-group comparison, with groups of equal size. In study i, the outcomes in group A follow a poisson distribution with mean m_Ai, so that the variance of the outcomes in group A is also m_Ai, for i = 1,...,k. The outcomes in group B follow a poisson distribution with mean m_Bi, so the variance is also m_Bi. Now, suppose that there is a fixed, proportional relationship between pi_Bi and pi_Ai, so that pi_Bi = lambda pi_Ai for some lambda > 0. In other words, the treatment contrast is *constant* on the scale of the response ratio. However, the means in group A vary from study to study, according to some distribution, say a gamma distribution with shape parameter alpha and rate parameter beta.  What does this model imply about the distribution of standardized mean differences across this set of studies?
>> The SMD parameter for study i (call it delta_i) is the ratio of the mean difference to the square root of the pooled variance. So:
>> delta_i = (m_Bi - m_Ai) / sqrt[(m_Bi + m_Ai) / 2]
>> = (lambda - 1) m_Ai / sqrt[(1 + lambda) m_Ai / 2]
>> = sqrt(m_Ai) * (lambda - 1) * sqrt[2 / (lambda + 1)]
>> The second and third terms in the above expression are constants that depend only on the size of the response ratio. The first term is random because we have assumed that the group A means vary from study to study. It will therefore create heterogeneity in the SMD parameters: the greater the variance of the m_Ai's, the greater the heterogeneity in delta_i.
>> I've used poisson distributions for the outcomes and a gamma distribution for the m_Ai's only for the sake of simplicity. Something very similar would hold if we considered outcomes that were binomially distributed, because the binomial distribution has a variance that is strongly related to its mean. If the mean proportions vary from study to study, this implies that the variances will also vary from study to study in a non-linear way, which will induce heterogeneity in the SMD parameters.
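>> To make that concrete, here is a quick simulation sketch (purely illustrative; the parameter values are arbitrary):
>> # Simulate k studies in which the response ratio lambda is constant
>> # but the group A means m_Ai vary across studies (gamma distributed).
>> set.seed(1)
>> k <- 1000
>> lambda <- 1.5
>> m_A <- rgamma(k, shape = 4, rate = 2)
>> m_B <- lambda * m_A
>> # SMD parameter implied by the poisson model (variance = mean):
>> delta <- (m_B - m_A) / sqrt((m_B + m_A) / 2)
>> sd(log(m_B / m_A))  # zero: no heterogeneity on the response ratio scale
>> sd(delta)           # positive: heterogeneity induced by the varying m_Ai's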
>> James
>> On Sat, Oct 23, 2021 at 11:53 PM Luke Martinez <martinezlukerm using gmail.com> wrote:
>>> Hello All,
>>> I wanted to follow up on an answer
>>> (https://stat.ethz.ch/pipermail/r-sig-meta-analysis/2021-October/003354.html)
>>> on the list that, in a nutshell, says:
>>> The existence of a relationship between the mean and SD for a given study
>>> group (e.g., treatment), over a couple of time points, can potentially
>>> introduce additional heterogeneity in the SMD effect sizes across the
>>> studies.
>>> I was wondering how exactly this heterogeneity comes about?
>>> A part of me says that such a mean-SD relationship for a given study
>>> group over time is indicative of what Hedges (1981;
>>> https://doi.org/10.3102/10769986006002107) refers to as
>>> "subject-treatment interaction".
>>> Another part of me says, no, "subject-treatment interaction" is
>>> controllable by adding a random effect for individuals, and thus it is
>>> different.
>>> I would highly appreciate your insights regarding HOW a mean-SD
>>> relationship for study groups over time can potentially introduce
>>> additional heterogeneity in SMDs across studies.
>>> Luke
>>> PS:
>>> Here is the relation I see between the mean and SD of a study group
>>> across 3 time points:
>>> group1_Ms = c(.39, .18, .13)
>>> group1_SDs = c(.25, .16, .13)
>>> plot(group1_Ms, group1_SDs, type="l")


