[R-sig-ME] Calculating effect sizes of fixed effects in lmer
Viechtbauer, Wolfgang (SP)
wolfgang.viechtbauer sending from maastrichtuniversity.nl
Thu Sep 24 11:44:51 CEST 2020
Dear Amie,
I would say the answer to "is there a current standard practice for calculating effect sizes of fixed effects?" is No. The difficulty is how to standardize the predictors/outcome. In standard regression models, the 'standardized coefficients' (often referred to as 'beta') can be easily obtained by standardizing the outcome and the predictor variables before fitting the model (which is equivalent to computing beta = b * sd(x) / sd(y), the equation usually shown in textbooks for computing standardized regression coefficients). An example:
x1 <- c(2,4,3,5,6,7,4,6)
x2 <- c(0,0,0,0,0,1,1,1)
y  <- c(4,3,2,4,5,4,7,4)

res <- lm(y ~ x1 + x2)
coef(res)[2] * sd(x1) / sd(y)   # standardized coefficient for x1

# same value, obtained by standardizing all variables before fitting
res <- lm(scale(y) ~ I(scale(x1)) + I(scale(x2)))
coef(res)[2]
One could in principle do the same in mixed-effects models, but such models are often used for data that have some kind of multilevel structure (e.g., repeated measurements within subjects, and/or subjects nested within some higher-level grouping variable, such as pupils nested within schools). We then try to account for this structure by modeling different sources of variability (e.g., variance between schools versus variance between pupils). Computing sd(y) and sd(x1) as above would ignore this structure and just lump everything together. Of course, there are all kinds of proposals out there for how one could do this more 'correctly' in the context of such models, but I don't think there is general agreement on how it should be done.
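Just to illustrate the kind of thing such proposals do (this is a sketch of one possible approach, not a recommendation or a standard), one could take the 'total' SD of the outcome to be the square root of the sum of the variance components from the fitted model, rather than the raw sd(y). Using the sleepstudy data that ships with lme4:

```r
library(lme4)

# random-intercept model for the built-in sleepstudy data
fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)

# 'total' SD of y implied by the model: sqrt(between-subject
# variance + residual variance), rather than raw sd(Reaction)
vc   <- as.data.frame(VarCorr(fit))
sd_y <- sqrt(sum(vc$vcov))

# one possible 'standardized' coefficient for Days
b    <- fixef(fit)["Days"]
beta <- b * sd(sleepstudy$Days) / sd_y
beta
```

Note that sd(Days) here also lumps the within- and between-subject variation in the predictor together, which is exactly the sort of choice on which the various proposals differ.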
Indeed, reviewers often ask authors to report some kind of 'effect size'. Nothing wrong with that, but unfortunately a lot of people interpret the term 'effect size' to refer to some kind of *standardized* measure. To me, that is an overly narrow definition of what an effect size is. For example, the (unstandardized) difference in means between two groups (e.g., treated versus control) is an effect size. And so is an unstandardized regression coefficient.
Standardized effect sizes are a crutch we use, for example, in meta-analysis to make results from different studies more comparable to each other, because unstandardized coefficients / effects are only directly comparable if the units of y and x are the same across studies.
But for interpreting the results from a single study, an unstandardized effect size is perfectly fine as long as we start to have an appreciation for the units of the scales that we work with. If I tell an experienced clinician that some treatment for depression on average leads to a 10 point reduction on the Beck Depression Inventory, they should be able to understand what that means and how clinically relevant that is. Or to use Cohen's own words (from his infamous 1994 paper 'The earth is round (p < .05)'):
"To work constructively with 'raw' regression coefficients and confidence intervals, psychologists have to start respecting the units they work with, or develop measurement units they can respect enough so that researchers in a given field or subfield can agree to use them. In this way, there can be hope that researchers' knowledge can be cumulative. (p. 1001).
I went on a bit of a rant there towards the end, but this insistence on standardized effect sizes is a bit of a pet peeve of mine.
Best,
Wolfgang
>-----Original Message-----
>From: R-sig-mixed-models [mailto:r-sig-mixed-models-bounces using r-project.org]
>On Behalf Of FAIRS Amie
>Sent: Thursday, 24 September, 2020 10:49
>To: r-sig-mixed-models using r-project.org
>Subject: [R-sig-ME] Calculating effect sizes of fixed effects in lmer
>
>Dear list,
>
>I’m hoping someone knows the current practice or wisdom regarding calculating
>(standardised) effect sizes of fixed effects in a mixed model (I fit all
>mine with lmer). By effect size I mean something akin to a Cohen’s d type
>value. I’ve followed this list for the past few years and my understanding
>is that there is no easy way to do this, because of working out the degrees
>of freedom of the random structure (I hope I’ve understood that correctly).
>
>However, in searching the list archives for the past two years I have seen
>some discussion about it, in October 2019, and I have also seen that the
>emmeans package has a function called eff_size (though calculating the
>required values for the parameters seems like it could be prone to error for
>myself), so I thought I would ask: is there a current standard practice for
>calculating effect sizes of fixed effects?
>
>I’m not doing this in response to reviewer comments, but I anticipate I will
>get a comment like this for something I want to submit soon 😊
>
>Best,
>
>Amie
>
>------------------
>Dr. Amie Fairs
>Post-doctorant
>Aix-Marseille Université
>Laboratoire Parole et Langage (LPL) | CNRS UMR 7309 | 5 Avenue Pasteur |
>13100 Aix-en-Provence
>Email : amie.fairs using univ-amu.fr
>
>While I may send this email outside of typical working hours, I have no
>expectation to receive an email outside of your typical hours.