[R-sig-ME] lmer code for multiple random slopes

Peter R Law prldb at protonmail.com
Thu Feb 25 05:16:13 CET 2021


Alas, I am still puzzled. I have extracted some data into Trial.txt (no categorical variables) and attached some code in trial.R. I am only using these data to test code that I hope to apply to a larger data set obtained from multiple populations, and so with more structure. The results themselves are therefore not important, only whether the code does what I think it should. It occurred to me to test each model with both lme and lmer.

For a model with only a random intercept plus fixed effects, lme and lmer returned the same results, except that lme reports estimates to more decimal places (so here lmer returned zero variance for the random intercept while lme returned a very small number).
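
For concreteness, the intercept-only comparison I have in mind looks roughly like this (a sketch only, using the column names from the attached file; the fixed-effect part is illustrative rather than exactly what I ran):

library(nlme)
library(lme4)
## Random intercept only, fit by ML in both packages (sketch; column names
## Response, P, A and Group as in Trial.txt).
M1.lme  <- lme(Response ~ P + A, random = ~ 1 | Group, data = Trial, method = "ML")
M1.lmer <- lmer(Response ~ P + A + (1 | Group), data = Trial, REML = FALSE)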

When I added a random slope, the main difference in the results was that lme and lmer returned very different estimates of the slope variance. Is that surprising?
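
The single-random-slope version is of this general form (again just a sketch, with P as the slope variable; the same pattern applies with A or the normalized predictors):

## Random intercept and a random slope for P, including their correlation (sketch).
M2.lme  <- lme(Response ~ P + A, random = ~ P | Group, data = Trial, method = "ML")
M2.lmer <- lmer(Response ~ P + A + (P | Group), data = Trial, REML = FALSE)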

For a model with two random slopes, lme returned results as expected, but lmer still refuses the fit, reporting 78 random effects, which is the number of free parameters one would get from a 12 x 12 covariance matrix. Where a 12 would come from is a mystery to me, especially as lme does what I expected (I ran this model with both raw and normalized predictor values to see whether something fishy was happening there, but no):

> M5n <- lme(Response~normP+normA, random=~normP+normA|Group,data=Trial, method="ML")
> summary(M5n)
Linear mixed-effects model fit by maximum likelihood
 Data: Trial
       AIC      BIC    logLik
  536.4562 559.4969 -258.2281

Random effects:
 Formula: ~P + A | Group
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev       Corr
(Intercept) 3.061144e-04 (Intr) P
P           1.535752e-09 0
A           8.517774e-13 0      0
Residual    7.929821e+00

Fixed effects: Response ~ P + A
                Value Std.Error DF    t-value p-value
(Intercept) 25.453398 10.828841 46  2.3505192  0.0231
P           -0.029307  0.035939 46 -0.8154787  0.4190
A            0.140819  0.282018 46  0.4993255  0.6199
 Correlation:
  (Intr) P
P -0.033
A -0.975 -0.173

Standardized Within-Group Residuals:
       Min         Q1        Med         Q3        Max
-1.8447131 -0.5563605 -0.2613520  0.2755547  5.5643551

Number of Observations: 74
Number of Groups: 26
> M5 <- lmer(Response ~ normP + normA + (normP + normA | Group), REML = FALSE, data = Trial)
Error: number of observations (=74) <= number of random effects (=78) for term (normP + normA | Group); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable.

The error message from lmer was the same whichever pair of the three predictors I used, but lme only returned results for P + A; any pair involving PR apparently ran into computational issues, which is a separate matter from how the code is interpreted.
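
In case it helps the diagnosis, I gather one can inspect the structure lmer would build without actually fitting the model, roughly along these lines (a sketch using lme4's lFormula, which I have not tried myself; the control setting is only there to bypass the identifiability check so the object can be built at all):

## Sketch: build the model structure without fitting it.
lf <- lFormula(Response ~ P + A + (P + A | Group), data = Trial,
               control = lmerControl(check.nobs.vs.nRE = "ignore"))
nrow(lf$reTrms$Zt)       # total number of random effects (groups x terms per group)
length(lf$reTrms$theta)  # number of covariance parameters for the 3 x 3 matrix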

In lmer, replacing (x + y | Group) by either (x + y || Group) or (x | Group) + (y | Group) returned the expected results (in the sense that lme4 interpreted the code as I expected). Is there lme code for either of these models?
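
For what it's worth, my guess at the nlme counterpart of the double-bar form is a pdDiag specification along these lines (an untested sketch; I am less sure whether anything in nlme corresponds to the (x | Group) + (y | Group) form):

## Sketch: independent variances for the intercept and both slopes,
## with no correlations, via a diagonal pdMat (nlme's pdDiag).
M5.diag <- lme(Response ~ P + A, random = list(Group = pdDiag(~ P + A)),
               data = Trial, method = "ML")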

Many thanks for any help.

Peter

P.S. I just noticed that replying to Ben's email sends my message to Ben, not to r-sig. I hope that sending it to r-sig directly is the right thing to do and doesn't break the thread from my previous email.


Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, February 17, 2021 10:40 PM, Peter R Law <prldb at protonmail.com> wrote:

> Thanks for your quick response. I take it that the code should mean what I thought it would and that somehow lmer is not interpreting what I wrote in my actual example as intended. None of the variables are categorical but I'll give it some more thought and see if I can figure it out. If not, I'll provide more details.
>
> Peter
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Tuesday, February 16, 2021 10:04 AM, Ben Bolker <bbolker at gmail.com> wrote:
>
> > I second Phillip's point. The example below works as expected (gets
> > a singular fit, but there are 6 covariance parameters as expected).
> > Based on what you've told us so far, the most plausible explanation is
> > that one or both of your covariates (x and/or z) are factors
> > (categorical) rather than numeric.
> > Ben Bolker
> > ========================================
> > ## simulate data with a 3x3 random-effects covariance (6 covariance parameters)
> > library(lme4)
> > set.seed(101)
> > dd <- data.frame(x = rnorm(500), z = rnorm(500),
> >                  g = factor(sample(1:6, size = 500, replace = TRUE)))
> > form <- y ~ x + z + (x + z | g)
> > dd$y <- simulate(form[-2],
> >                  newdata = dd,
> >                  newparams = list(beta = rep(0, 3),
> >                                   theta = rep(1, 6),
> >                                   sigma = 1))[[1]]
> > m1 <- lmer(form, data = dd)
> > VarCorr(m1)
> > On 2/16/21 8:18 AM, Phillip Alday wrote:
> >
> > > I suspect we'll need to know a bit more about your data to answer this
> > > question. Can you share it in any form (e.g. variables renamed and
> > > levels of factors changed to something opaque)?
> > > Best,
> > > Phillip
> > > On 16/2/21 4:02 am, Peter R Law via R-sig-mixed-models wrote:
> > >
> > > > I am trying to fit a model with two covariates, x and z say, for response y, with a random factor g, and want each of x and z to have a random slope. I expected
> > > > lmer(y ~ x + z + (x+z|g),...)
> > > > to fit a model with 6 random variance components, the intercept, two slopes and three correlations. But I got an error message saying there were 74 random variance components and my data was insufficient to fit the model. Yet
> > > > lmer(y ~ x + z + (x+z||g),...)
> > > > returned what I expected, a model with the random intercept and two slopes but no correlations. How is lmer interpreting the first line of code above, and how would I code for what I want? I have not been able to find any examples in the literature or online that help, but I may easily have missed something, so if anyone knows of a useful link that would be great. The only examples of multiple random slopes I've seen take the form
> > > > lmer(y~x + z +(x|g) + (z|g),...)
> > > > specifically excluding correlations between the random terms for the two predictors. Even if the latter is a more sensible approach, I'd like to understand the coding issue.
> > > > Thanks.
> > > > Peter
> > > > Sent with ProtonMail Secure Email.


-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: Trial.txt
URL: <https://stat.ethz.ch/pipermail/r-sig-mixed-models/attachments/20210225/877b0eb9/attachment.txt>

