[R-sig-ME] Testing all random-effects correlations in lme4

Ben Bolker bbolker at gmail.com
Fri Mar 11 17:37:14 CET 2022


   [please keep r-sig-mixed-models in the cc: list]

   You can have the confidence intervals computed in parallel (see 
?lme4::profile.merMod; the parallel-related arguments are passed 
through via the '...' argument of confint.merMod).
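
A minimal sketch of that, using the built-in sleepstudy example (the
number of cores is arbitrary; on Windows you would need parallel =
"snow" instead, since forking is unavailable there):

```r
## assumes the lme4 package and its 'sleepstudy' data set
library(lme4)

fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## confint() forwards 'parallel' and 'ncpus' through '...' to
## profile.merMod(), so the likelihood profiling runs on 4 cores
ci <- confint(fm1, method = "profile",
              parallel = "multicore", ncpus = 4)
```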

   Alternatively, you could use the merDeriv package, e.g.

library(lme4)
library(merDeriv)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
## sd(int), sd(days), cor(int,days), resid
sdcor <- as.data.frame(VarCorr(fm1))[["sdcor"]]
vv <- vcov(fm1, full = TRUE, ranpar = "sd")
se <- sqrt(diag(vv))[c(3,5,4,6)]  ## different order!

then construct the Wald CIs yourself
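
That last step is just estimate +/- z*SE; a sketch with made-up numbers
standing in for the 'sdcor' and reordered 'se' vectors computed above
(the real values would come from VarCorr() and merDeriv's vcov()):

```r
## illustrative stand-ins: sd(int), sd(days), cor(int,days), resid sd
est <- c(24.74, 5.92, 0.07, 25.59)
se  <- c(5.38, 1.25, 0.31, 1.51)   # hypothetical Wald standard errors

z <- qnorm(0.975)                  # ~1.96 for a 95% interval
wald_ci <- cbind(lower = est - z * se,
                 upper = est + z * se)
## note: a Wald interval for a correlation can spill outside [-1, 1],
## and one for an SD can go below 0 -- a symptom of their unreliability
```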

  I should really add something like this to broom.mixed, although Wald 
CIs of random-effect parameters should come with a giant WARNING SIGN 
that says they are often unreliable measures of uncertainty (which is 
why Doug Bates didn't bother to implement them in the first place ...)

   cheers
    Ben Bolker



On 3/10/22 11:50 PM, Timothy MacKenzie wrote:
> Anyway, this takes a huge amount of time using the profile method. Any
> other possible alternative?
> 
> On Thu, Mar 10, 2022 at 10:49 PM Timothy MacKenzie <fswfswt at gmail.com> wrote:
>>
>> Oh just saw the documentation, although my data is huge.
>>
>> On Thu, Mar 10, 2022 at 10:44 PM Timothy MacKenzie <fswfswt at gmail.com> wrote:
>>>
>>> Thanks Ben.
>>>
>>> confint(full_model, method = "Wald", oldNames=TRUE) returns all NAs
>>> for the random effects; am I missing something?
>>>
>>>
>>>                                                                                   2.5 %      97.5 %
>>> sd_item_typemultiple-choice_grammar|user_id                                          NA          NA
>>> cor_item_typemultiple-choice_vocabulary.item_typemultiple-choice_grammar|user_id     NA          NA
>>> cor_item_typeproduction_grammar.item_typemultiple-choice_grammar|user_id             NA          NA
>>> cor_item_typeproduction_vocabulary.item_typemultiple-choice_grammar|user_id          NA          NA
>>> sd_item_typemultiple-choice_vocabulary|user_id                                       NA          NA
>>> cor_item_typeproduction_grammar.item_typemultiple-choice_vocabulary|user_id          NA          NA
>>> cor_item_typeproduction_vocabulary.item_typemultiple-choice_vocabulary|user_id       NA          NA
>>> sd_item_typeproduction_grammar|user_id                                               NA          NA
>>> cor_item_typeproduction_vocabulary.item_typeproduction_grammar|user_id               NA          NA
>>> sd_item_typeproduction_vocabulary|user_id                                            NA          NA
>>> (Intercept)                                                                 -0.02832784   0.2143555
>>> item_typemultiple-choice_vocabulary                                          1.46685879   4.7702998
>>> item_typeproduction_grammar                                                 -0.65403796  -0.2758050
>>> item_typeproduction_vocabulary                                               1.00471913   1.5127176
>>> timePost-test                                                                0.57518030   0.8745273
>>> item_typemultiple-choice_vocabulary:timePost-test                           -0.22191717   0.8354483
>>> item_typeproduction_grammar:timePost-test                                    0.16377639   0.6762834
>>> item_typeproduction_vocabulary:timePost-test                                 0.06959180   0.6428313
>>>
>>> On Thu, Mar 10, 2022 at 10:30 PM Ben Bolker <bbolker at gmail.com> wrote:
>>>>
>>>>      Calculate the confidence intervals?
>>>>
>>>>      confint(full_model) ...
>>>>
>>>>     (By the way, *none* of the correlation parameters are equal to zero.
>>>>    But you can test whether they're significantly different from zero or
>>>> not :-) )
>>>>
>>>>     Besides which, constraining a single correlation coefficient to zero
>>>> may be harder than you think ...
>>>>
>>>> On 3/10/22 11:27 PM, Timothy MacKenzie wrote:
>>>>> Dear All,
>>>>>
>>>>> Am I right that, to test whether each correlation estimate in G (below)
>>>>> is 0 or not, the only way is to fit a full model and then fit a series
>>>>> of models, each fixing one correlation estimate to 0, and compare the
>>>>> fit of the full model to that of each constrained model?
>>>>>
>>>>> Is there a quicker way, so that I don't have to repeat this process
>>>>> manually so many times?
>>>>>
>>>>> Thanks,
>>>>> Tim M
>>>>>
>>>>> library(lme4)
>>>>>
>>>>> dat <- read.csv("https://raw.githubusercontent.com/fpqq/w/main/d.csv")
>>>>>
>>>>> form3 <- y ~ item_type*time + (0 + item_type | user_id)
>>>>>
>>>>> full_model <- glmer(form3, family = binomial, data = dat,
>>>>>               control =
>>>>>                 glmerControl(optimizer = "bobyqa",
>>>>>                              optCtrl=list(maxfun=1e4)))
>>>>>
>>>>> G <- VarCorr(full_model)
>>>>>
>>>>> attr(G$user_id, "correlation")
>>>>>
>>>>> _______________________________________________
>>>>> R-sig-mixed-models at r-project.org mailing list
>>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>>>
>>>> --
>>>> Dr. Benjamin Bolker
>>>> Professor, Mathematics & Statistics and Biology, McMaster University
>>>> Director, School of Computational Science and Engineering
>>>> Graduate chair, Mathematics & Statistics

-- 
Dr. Benjamin Bolker
Professor, Mathematics & Statistics and Biology, McMaster University
Director, School of Computational Science and Engineering
(Acting) Graduate chair, Mathematics & Statistics


