[R-sig-ME] Most principled reporting of mixed-effect model regression coefficients

Ades, James jades sending from health.ucsd.edu
Mon Feb 17 05:31:30 CET 2020


Thanks, Maarten. So I was planning on reporting R^2 (along with AIC) for the overall model fit, not for each predictor, since the regression coefficients themselves give a good indication of the relationships (though I wasn't aware that R^2 is "riddled with complications"). Is Henrik saying this only with regard to LMMs and GLMMs?

When you say "there is no agreed-upon way to calculate effect sizes," I'm a little confused. I read through your Stack Exchange post, but Henrik's answer refers to standardized effect sizes, and further down you write, "Whenever possible, we report unstandardized effect sizes which is in line with general recommendation of how to report effect sizes."

I'm also working on a systematic review where there's disagreement over whether effect sizes should be standardized, but it does seem that to yield any kind of meaningful comparison, effect sizes would have to be standardized. I don't usually report standardized effect sizes; however, there are times when I z-score IVs to put them on the same scale, and I suppose the output of that would be a standardized effect size. I wasn't aware of pushback against that practice. What issues would arise from it?
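
For concreteness, a minimal sketch of the z-scoring I mean (the variable names score, age, ses, the grouping factor school, and the data frame dat are all hypothetical):

```
library(lme4)

## z-scoring the IVs puts the slopes on a common (per-SD) scale;
## all names here are hypothetical
m_std <- lmer(score ~ scale(age) + scale(ses) + (1 | school), data = dat)
fixef(m_std)  # each slope is now the change in score per 1 SD of that IV
```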

I learned that mixed models are used predominantly for overall prediction rather than for individual coefficients, but I was still under the impression that one could derive effect sizes from predictor variables, and that doing so was largely sound. Am I incorrect?

In this particular study, there are four timepoints with 1,286 students overall, though at each timepoint there are roughly 1,000 students. All students complete the same executive function tasks, so in that regard there isn't really a formal factorial design at play, though there are multiple independent variables.

Best,

James
________________________________
From: Maarten Jung <Maarten.Jung using mailbox.tu-dresden.de>
Sent: Sunday, February 16, 2020 12:36 AM
To: Ades, James <jades using health.ucsd.edu>
Cc: r-sig-mixed-models using r-project.org <r-sig-mixed-models using r-project.org>
Subject: Re: [R-sig-ME] Most principled reporting of mixed-effect model regression coefficients

Dear James,

I think most people in psychology (and many in neuroscience, though
probably more dependent on the subfield) are used to seeing the value
of a test statistic (often an F statistic), the number(s) of degrees
of freedom, the corresponding p-value, and some sort of effect-size
measure.
To calculate the semi-partial R-squared values for each categorical
predictor (if you have a factorial design) as described in Jaeger,
Edwards, Das, and Sen (2017) [1], you can use the r2glmm::r2beta()
function, which you should get from GitHub.
Note that there is (to my knowledge) no agreed-upon way to calculate
effect sizes for (linear) mixed models. My answer here [2] might be
helpful for a quick overview and my personal take on this topic.
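
A minimal sketch of that call (assuming the GitHub version of r2glmm
and a model fit with lme4; method = "kr" requests the Kenward-Roger
based approach of Jaeger et al., 2017):

```
library(lme4)
library(r2glmm)

m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

## model-level R^2 plus a semi-partial R^2 for each fixed effect
r2beta(m, method = "kr", partial = TRUE)
```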

[1] https://doi.org/10.1080/02664763.2016.1193725
[2] https://stats.stackexchange.com/a/439842/136579

Best,
Maarten

On Sun, Feb 16, 2020 at 8:30 AM Ades, James <jades using health.ucsd.edu> wrote:
>
> Thanks a lot for the responses! These are great.
>
> Sorry, I should've clarified the field...it's neuroscience/psychology. Still, speaking for that specific field, I can't find any unanimous agreement.
>
> Daniel, your sjPlot package is amazing! I'm using it for our current paper and incorporated the patchwork package to create something I never could've otherwise. It's easy to learn, simple and intuitive to use, and plentiful with options.
>
> I just found this paper on reporting R^2 for mixed models: https://besjournals.onlinelibrary.wiley.com/doi/epdf/10.1111/j.2041-210x.2012.00261.x I think I'll also include that, as R^2 (adjusted), at least to me, provides a more intuitive interpretation for people both in and out of the scientific community. Thanks for referencing the "performance" package; I'll look into it.
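>
> A sketch of getting the marginal and conditional R^2 from that paper (Nakagawa & Schielzeth, 2013) for a fitted lme4 model m, via the "performance" package referenced below:
>
> ```
> ## marginal R^2: fixed effects only; conditional R^2: fixed + random
> performance::r2(m)
> ```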
>
> Awesome! Thanks much, John, Thierry, and Daniel
>
> ________________________________
> From: Daniel Lüdecke <d.luedecke using uke.de>
> Sent: Saturday, February 15, 2020 7:00 AM
> To: Ades, James <jades using health.ucsd.edu>
> Cc: r-sig-mixed-models using r-project.org <r-sig-mixed-models using r-project.org>
> Subject: Re: [R-sig-ME] Most principled reporting of mixed-effect model regression coefficients
>
> The "parameters" package (https://easystats.github.io/parameters/) offers some convenient functions to extract standard errors, p-values or confidence intervals for a vast range of models. Just use "model_parameters()" or "ci()" if you are only interested in CIs. Note that there is a small issue with p-values/CIs based on Kenward-Roger or Satterthwaite approximated degrees of freedom with the current CRAN version, however, these issues are fixed in the latest GitHub version.
>
> Regarding your original question: it really depends on the field, or even on the journal, what information is required. I would say estimate, CI, and p-value are often the "standard". Some information on the random-effect variances (which you can also get with the parameters package, using "random_parameters()") and/or the R2/ICC is also useful, as these measure the proportion of explained variance that can be attributed to the random-effect parameters (R2 and ICC, in turn, are available in the "performance" package - https://easystats.github.io/performance/).
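>
> A sketch of the random-effects side, again assuming a fitted model m:
>
> ```
> parameters::random_parameters(m)  # random-effect variances
> performance::icc(m)               # intraclass correlation coefficient
> ```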
>
> I personally usually report estimates, CIs, p-values, within- and between-group-variances and ICC (and here again: "group" is sometimes called "subjects", sometimes "clusters", depending on the discipline).
>
> Best
> Daniel
>
>
> -----Original Message-----
> From: R-sig-mixed-models <r-sig-mixed-models-bounces using r-project.org> On Behalf Of Ades, James
> Sent: Saturday, February 15, 2020 1:29 AM
> To: Thierry Onkelinx <thierry.onkelinx using inbo.be>
> Cc: r-sig-mixed-models using r-project.org
> Subject: Re: [R-sig-ME] Most principled reporting of mixed-effect model regression coefficients
>
> Thanks, Thierry. This is what I was looking for!
>
> When I try confint(lme4_model), I get the following error:
>
> ```{r}
>
> Computing profile confidence intervals ...
> Error in zeta(shiftpar, start = opt[seqpar1][-w]) :
>   profiling detected new, lower deviance
>
> ```
> Is there an easier way of extracting confidence intervals for fixed effects in lme4 than computing them by hand as the point estimate +/- z * SE?
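>
> One documented way around the profiling step (a sketch; confint.merMod also accepts method = "Wald" and method = "boot"):
>
> ```
> ## Wald CIs, i.e. estimate +/- z * SE, for the fixed effects only
> confint(lme4_model, method = "Wald", parm = "beta_")
>
> ## or parametric-bootstrap CIs: slower, but avoids the profile
> confint(lme4_model, method = "boot", nsim = 500)
> ```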
>
> Best,
> James
> ________________________________
> From: Thierry Onkelinx <thierry.onkelinx using inbo.be>
> Sent: Friday, February 14, 2020 1:47 AM
> To: Ades, James <jades using health.ucsd.edu>
> Cc: r-sig-mixed-models using r-project.org <r-sig-mixed-models using r-project.org>
> Subject: Re: [R-sig-ME] Most principled reporting of mixed-effect model regression coefficients
>
> Dear James,
>
> IMHO the estimate and its CI work best. They instantly provide the range of uncertainty around the estimate without the reader having to do the math. CIs also work with skewed distributions, and p-values don't offer much added value over a CI.
> Below are a few examples of four estimates and their uncertainties. The first line displays the estimate and its SE; the second the estimate, SE, and p-value; the third the estimate and a relative error; and the last the estimate and its 95% CI.
>
> Keep in mind that readers are more likely to understand CI rather than SE.
>
> "1.2 � 0.3"  "10.5 � 4.5" "0.0 � 0.3"  "0.0 � 5.0"
> "1.2 � 0.3 (p = 0.0001)"  "10.5 � 4.5 (p = 0.0196)" "0.0 � 0.3 (p = 1.0000)"  "0.0 � 5.0 (p = 1.0000)"
>  "1.2 � 25.0%"  "10.5 � 42.9%" "0.0 � Inf%"   "0.0 � Inf%"
> "1.2 (0.6; 1.8)"   "10.5 (1.7; 19.3)" "0.0 (-0.6; 0.6)"  "0.0 (-9.8; 9.8)"
>
> Best regards,
>
> Thierry
>
> ir. Thierry Onkelinx
> Statisticus / Statistician
>
> Vlaamse Overheid / Government of Flanders
> INSTITUUT VOOR NATUUR- EN BOSONDERZOEK / RESEARCH INSTITUTE FOR NATURE AND FOREST
> Team Biometrie & Kwaliteitszorg / Team Biometrics & Quality Assurance
> thierry.onkelinx using inbo.be
> Havenlaan 88 bus 73, 1000 Brussel
> www.inbo.be
>
> ///////////////////////////////////////////////////////////////////////////////////////////
> To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of. ~ Sir Ronald Aylmer Fisher
> The plural of anecdote is not data. ~ Roger Brinner
> The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. ~ John Tukey
> ///////////////////////////////////////////////////////////////////////////////////////////
>
>
>
> On Fri, Feb 14, 2020 at 9:31 AM Ades, James <jades using health.ucsd.edu> wrote:
> Hi all,
>
>
>
> It's been surprisingly difficult to find the most principled way of reporting mixed-effect model regression coefficients (for individual fixed effects). One Stack Overflow post led me to this paper, a systematic review of the incorporation and reporting of GLMMs (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0112653#pone.0112653.s001), which references a paper by Ben Bolker (https://www.sciencedirect.com/science/article/pii/S0169534709000196). Oddly, I don't really find an answer to this in either of those. I've heard mixed things regarding fixed-effect coefficients in LMMs (that LMMs and GLMMs are more about the predictive power of the entire model than about the individual predictors themselves), but overall my understanding is that it's kosher (and informative) to look at effect sizes of regression (fixed-effect) coefficients; it's only that lme4 doesn't currently provide p-values (though lmerTest does).
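>
> On the p-value point, a minimal sketch (hypothetical variables y, x, id and data frame dat) of what loading lmerTest instead of plain lme4 changes:
>
> ```
> library(lmerTest)  # masks lme4::lmer; adds Satterthwaite df and p-values
>
> m <- lmer(y ~ x + (1 | id), data = dat)
> summary(m)  # the coefficient table now includes df and Pr(>|t|)
> ```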
>
>
>
> It seems like reporting effect sizes of regression coefficients and their SEs should suffice, though sometimes people report CIs with those as well (but isn't that a little redundant?). My PI is telling me to include p-values. So many different things, so little agreement.
>
>
>
> I figured I'd turn here for something of a "definitive" answer.
>
>
>
> Ben, I definitely need to go back and read through your paper more thoroughly for a deeper understanding of the nuances of GLMMs. I'm currently watching (and reading) McElreath's Statistical Rethinking, but I'm not quite at the level of implementing MCMC.
>
>
> Much thanks,
>
>
> James
>
>
>
>
>
> _______________________________________________
> R-sig-mixed-models using r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models



