[R-meta] Alternative view of fixed effects in meta-regression
Lukasz Stasielowicz
lukasz.stasielowicz sending from uni-osnabrueck.de
Sat Aug 28 21:17:27 CEST 2021
Dear Fred,
Isn't it sufficient to include two variables rather than four
when disentangling within-group effects and between-group effects?
Some references:
*Bell, A., Fairbrother, M. & Jones, K. Fixed and random effects models:
Making an informed choice. Qual Quant 53, 1051–1074 (2019).
https://doi.org/10.1007/s11135-018-0802-x
*https://strengejacke.github.io/mixed-models-snippets/random-effects-within-between-effects-model.html#the-complex-random-effect-within-between-model-rewb
According to the cited literature, one could include two variables in
multilevel models: X_within_group and X_btw_group.
X_btw_group is the group mean (e.g., mean age in study j: x_j).
X_within_group is the difference between each observation and its
group mean (x_ij - x_j).
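A minimal base-R sketch of this decomposition (data and variable names are hypothetical):

```r
# Toy data: x is an observation-level moderator, study is the grouping variable
dat <- data.frame(
  study = c(1, 1, 1, 2, 2, 3),
  x     = c(20, 24, 28, 30, 34, 40)
)

# X_btw_group: the group mean x_j, repeated for every observation in group j
dat$x_btw <- ave(dat$x, dat$study, FUN = mean)

# X_within_group: deviation of each observation from its group mean (x_ij - x_j)
dat$x_wthn <- dat$x - dat$x_btw

# The two components add back up to the original moderator
all(dat$x_btw + dat$x_wthn == dat$x)  # TRUE
```

The two columns would then replace the raw moderator in the model formula, e.g. yi ~ x_wthn + x_btw.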
Best,
Lukasz
--
Lukasz Stasielowicz
Osnabrück University
Institute for Psychology
Research methods, psychological assessment, and evaluation
Seminarstraße 20
49074 Osnabrück (Germany)
On 28.08.2021 at 05:32, r-sig-meta-analysis-request using r-project.org wrote:
> Send R-sig-meta-analysis mailing list submissions to
> r-sig-meta-analysis using r-project.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
> or, via email, send a message with subject or body 'help' to
> r-sig-meta-analysis-request using r-project.org
>
> You can reach the person managing the list at
> r-sig-meta-analysis-owner using r-project.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of R-sig-meta-analysis digest..."
>
>
> Today's Topics:
>
> 1. Re: robust variance estimation with small number of elements
> within the cluster (Diego Grados Bedoya)
> 2. Re: Alternative view of fixed effects in meta-regression
> (Farzad Keyhan)
> 3. Publication bias with multivariate meta analysis (Huang Wu)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 27 Aug 2021 15:06:27 +0200
> From: Diego Grados Bedoya <diegogradosb using gmail.com>
> To: James Pustejovsky <jepusto using gmail.com>
> Cc: R meta <r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] robust variance estimation with small number of
> elements within the cluster
> Message-ID:
> <CANkiHXmA+XyAYZN=2M3t2-UU+f+cr=6QTnOw7RsSPDPV-fNgVQ using mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi James,
>
> Thank you very much for your nice explanation.
>
> Greetings,
>
> Diego
>
> On Thu, 26 Aug 2021 at 18:11, James Pustejovsky <jepusto using gmail.com> wrote:
>
>> Hi Diego,
>>
>> clubSandwich uses cluster-robust (or "sandwich") variance estimators.
>> These estimators quantify the uncertainty in an average effect (or fixed
>> effect in a meta-regression model) *using only the between-cluster
>> variation* in effect sizes. Therefore, they simply will not work if all of
>> the information about a given category of effect sizes comes from a single
>> cluster.
>>
>> As a rough heuristic, you can think of cluster-robust variance estimators
>> as involving the following. Consider your example where you have a
>> multi-level mixed model that has a categorical moderator and omits the
>> intercept. The model coefficients are therefore interpreted as average
>> effect sizes for each category of the moderator. Imagine aggregating the
>> effect sizes to the level of the study, so that there is only one average
>> effect size per category per study. Denote the aggregated effect for
>> category c in study j as T_cj. Suppose that there are k_c studies that
>> include effect sizes in category c. The overall average effect for category
>> c is then going to be, roughly, a weighted average of the study-level
>> aggregated effects for category c:
>>
>> beta_c = sum_{j=1}^{k_c} w_cj T_cj
>>
>> for some weights w_cj. (This is a rough approximation because, if you're
>> using a multilevel model, the estimate beta_c will actually also involve
>> the average effect sizes for categories other than c. But let's ignore that
>> wrinkle for purposes of building intuition.)
>>
>> The robust estimator of the sampling variance of beta_c is going to be a
>> weighted version of the sample variance of the T_cj's:
>>
>> V_c = sum_{j=1}^{k_c} (w_cj)^2 (T_cj - beta_c)^2
>>
>> If the weights are roughly equal, then the robust estimator ends up having
>> the even simpler form
>>
>> V_c = S_c^2 / k_c,
>>
>> where S_c^2 is the sample variance of the T_cj's. As you can see from
>> this, if there is only one study that includes effect size estimates in
>> category c, then S_c^2 will be zero by definition, but it can't be correct
>> that there is no uncertainty in the average effect size. If there's only
>> one observation, there's no information available to estimate the
>> uncertainty in the average. The cluster-robust variance estimator just
>> won't work.
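The equal-weights case can be checked numerically in base R (toy numbers, purely illustrative):

```r
# Check that with equal weights w_cj = 1/k_c the cluster-robust variance
# sum_j w_cj^2 (T_cj - beta_c)^2 reduces to S_c^2 / k_c, where S_c^2 is
# the (uncorrected) sample variance of the T_cj.
T_cj <- c(0.2, 0.5, 0.3, 0.6)        # toy study-level aggregated effects
k_c  <- length(T_cj)
w_cj <- rep(1 / k_c, k_c)            # equal weights summing to 1
beta_c <- sum(w_cj * T_cj)           # weighted average effect
V_robust <- sum(w_cj^2 * (T_cj - beta_c)^2)
S2_c <- mean((T_cj - mean(T_cj))^2)  # uncorrected sample variance
all.equal(V_robust, S2_c / k_c)      # TRUE
```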
>>
>> James
>>
>> On Thu, Aug 26, 2021 at 3:36 AM Diego Grados Bedoya <
>> diegogradosb using gmail.com> wrote:
>>
>>> Dear all,
>>>
>>> I would like to report the parameters obtained using a robust variance
>>> estimation of a multilevel mixed model (including a categorical moderator)
>>> without the intercept (sample size is 42). The categorical variable has 17
>>> levels of which 7 of them only have 1 study (due to the nature of the
>>> intervention). I am using the study ID as the cluster since some studies
>>> contributed more than one effect size. Using the clubSandwich R
>>> package, 4 out of the 7 levels of the moderator reported NA for the df and
>>> CI values. I think the reason is that the estimates for these levels
>>> coincide exactly with the effect sizes obtained from the studies (se = 0).
>>> Has anyone faced something similar? (The robust function from the
>>> metafor package reports the same estimates.)
>>>
>>> Any hint is more than welcome,
>>>
>>> Diego
>>>
>>>
>>> _______________________________________________
>>> R-sig-meta-analysis mailing list
>>> R-sig-meta-analysis using r-project.org
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis
>>>
>>
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 27 Aug 2021 20:31:55 -0500
> From: Farzad Keyhan <f.keyhaniha using gmail.com>
> To: Timothy MacKenzie <fswfswt using gmail.com>
> Cc: R meta <r-sig-meta-analysis using r-project.org>
> Subject: Re: [R-meta] Alternative view of fixed effects in
> meta-regression
> Message-ID:
> <CAEvy2r2yqKiR4UkL=9PNDC2n4c78sawxL+PXODzELCEoWnyfqQ using mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Tim,
>
> Unconditional 3-level models (i.e., models with no moderators) fitted by
> --rma.mv()-- assume (A) normality of individual effects within
> studies, (B) normality of level-specific effects, and (C) that the
> relationship among the effects at each level is univariate linear.
>
> (If your model is a multivariate one, then those relationships are
> assumed to be multivariate linear).
>
> Applying these assumptions to the model you referred to (j cases
> nested in k studies) means that the potential linear relationship
> between case-specific effects and a case-level moderator (e.g.,
> --Age_jk--) can be estimated by adding that moderator to the model.
>
> Now, if you add a moderator that varies among the cases, then your
> fixed-effect coefficient for --Age_jk-- would denote the amount of
> change in case-specific true effects (which are averages of individual
> effect sizes for each case) for a one-year increase in --Age_jk--.
>
> Or equivalently: “the difference in average effect sizes between cases
> that differ in age by one year”.
>
> So, you can add moderators at any level, and interpret the fixed
> effects for those moderators as: the amount of change in
> level-specific true effects relative to 1 unit increase in those
> moderators.
>
> To (partially) answer your final question: for moderators that can
> vary at more than one level, a single regression coefficient is a
> mix of the moderator's effects on more than one level's true effects.
> Thus, it is a good idea to disentangle these effects. In the context
> of multilevel meta-regression, I’m not sure if there is a suggested
> procedure to do so. But *conceptually* something like what follows
> *might* make sense:
>
> 1- Create a variable called “X_btw_study”: Average X in each study.
> 2- Create a variable called “X_btw_outcome”: Average X in each
> outcome in each study.
> 3- Create a variable called “X_btw_outcome_study”: Subtract (1) from (2).
> 4- Create a variable called “X_wthn_study”: Subtract (1) from each
> X value in each study.
> 5- Create a variable called “X_wthn_outcome”: Subtract (2) from X
> value of that outcome in each study.
> 6- Fit the following model: >> rma.mv(yi ~ X_btw_study +
> X_btw_outcome + X_btw_outcome_study + X_wthn_study + X_wthn_outcome,
> random=~1 | study/outcome) <<
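>
A base-R sketch of steps 1-5 above (data and variable names hypothetical):

```r
# Hypothetical data: several X values per outcome, outcomes nested in studies
dat <- data.frame(
  study   = c(1, 1, 1, 1, 2, 2),
  outcome = c("a", "a", "b", "b", "a", "a"),
  X       = c(2, 4, 6, 8, 10, 14)
)

dat$X_btw_study         <- ave(dat$X, dat$study, FUN = mean)               # step 1
dat$X_btw_outcome       <- ave(dat$X, dat$study, dat$outcome, FUN = mean)  # step 2
dat$X_btw_outcome_study <- dat$X_btw_outcome - dat$X_btw_study             # step 3
dat$X_wthn_study        <- dat$X - dat$X_btw_study                         # step 4
dat$X_wthn_outcome      <- dat$X - dat$X_btw_outcome                       # step 5
```

These columns can then be supplied to the step-6 rma.mv() call. Note that, as constructed, X_btw_outcome equals X_btw_study + X_btw_outcome_study, so including all three terms in one model would be collinear.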
>
> In my conceptual description above, I divided X into five parts
> across two levels. But I leave it to other meta-regression experts to
> comment on whether I've missed something or whether they know of a
> practical way to deal with moderators that can vary across more
> than one level.
>
> Best,
> Fred
>
> On Sat, Aug 21, 2021 at 9:09 AM Timothy MacKenzie <fswfswt using gmail.com> wrote:
>>
>> Dear Colleagues,
>>
>> I have some clarification questions.
>>
>> In multilevel models, what do the fixed-effect coefficients exactly
>> predict? (change in the 'observed' effect yi for 1 unit of increase in
>> moderator X OR change in some form of 'true effect' [depending on the
>> random-part specification] for 1 unit of increase in moderator X)
>>
>> The reason I ask this is the bottom of p.26 of this paper (
>> https://osf.io/4fe6u/). In this paper, Dr. Pustejovsky describes a 3-level
>> model (j cases in k studies):
>>
>> R_jk = Y_0 + Y_1 (Age)_jk + V_k + U_jk + e_jk
>>
>> Then, he interprets Age's fixed effect coefficient as: *"the difference in
>> average effect sizes between cases [level 2] that differ in age by one
>> year"*.
>>
>> I wonder how this interpretation is possible and can be extended to other
>> models (see below)?
>>
>> Say X is a continuous moderator that can vary between 'studies' and
>> 'outcomes'. How can we apply Dr. Pustejovsky's logic to the
>> interpretation of the fixed coefficient for 'X', and how do the
>> interpretations differ between:
>>
>> (A) 'rma.mv(yi ~ X, random=~1 | study)'
>> vs.
>> (B) 'rma.mv(yi ~ X, random=~1 | study/outcome)'?
>>
>> Thank you very much,
>> Tim
>>
>
> ------------------------------
>
> Message: 3
> Date: Sat, 28 Aug 2021 01:19:29 +0000
> From: Huang Wu <huang.wu using wmich.edu>
> To: "r-sig-meta-analysis using r-project.org"
> <r-sig-meta-analysis using r-project.org>
> Subject: [R-meta] Publication bias with multivariate meta analysis
> Message-ID:
> <DM6PR08MB6266ED645DABFE96F7F0C99F81C99 using DM6PR08MB6266.namprd08.prod.outlook.com>
>
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
>
> I am conducting a multivariate meta-analysis using rma.mv. I want to test for publication bias.
> I noticed in a previous post, Dr. Pustejovsky provided the following code for Egger’s test.
>
> egger_multi <- rma.mv(yi = yi, V = sei^2,
>                       random = ~ 1 | studyID/effectID,
>                       mods = ~ sei, data = dat)
> coef_test(egger_multi, vcov = "CR2")
>
> Because I conducted a multivariate meta-analysis assuming rho = 0.8, I wonder whether, for the Egger's test, I need to set V equal to the imputed covariance matrix.
> Would anyone help me check whether the following code is correct? Thanks.
>
> V_listm <- impute_covariance_matrix(vi = meta$dv,
>                                     cluster = meta$Study.ID,
>                                     r = 0.8)
> egger_multi <- rma.mv(yi = Cohen.s.d, V = V_listm,
>                       random = ~ 1 | Study.ID/IID,
>                       mods = ~ sqrt(dv), data = meta)
> coef_test(egger_multi, vcov = "CR2")
>
> Also, I have tried V = V_listm and V = dv, and they gave different
> results: with V = V_listm the effect is no longer statistically
> significant, but with V = dv it is still significant.
> Does that mean my results are sensitive to the value of rho? Thanks.
>
> By the way, does anyone have any suggestions/codes for other methods of testing publication bias? Many thanks.
>
> Best wishes
> Huang
>
>
>
> ------------------------------
>
> End of R-sig-meta-analysis Digest, Vol 51, Issue 41
> ***************************************************
>