[R-sig-ME] Latent variable regression in lme4 as in HLM
Ben Bolker
bbolker at gmail.com
Thu May 28 03:17:06 CEST 2020
On 5/26/20 10:12 PM, Simon Harmel wrote:
> Dear James,
>
> Thanks for your response. I should research this a bit more. I have
> the feeling that the HLM software, under the /Latent Variable
> Regression/ tab, might be doing something different. But I highly
> appreciate your insightful response.
>
> Dear Ben, lmerControl(check.nobs.vs.nRE="ignore") didn't work.
Really? (Whenever you say "didn't work" you should give more
detail; as I show below, it worked on my platform. "It didn't work"
could mean a lot of different things [e.g., it really didn't work on
your computer because of differences in lme4 version, OS, R version,
...; you misinterpreted the results; ...]. Knowing exactly what didn't
work gives us a big head start in helping you resolve the problem.)
I think it did what it was intended to do, which is to bypass the
check of the number of observations against the number of
random-effect levels.
I get:
Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.0028078 (tol = 0.002,
component 1)
However, the results look sensible. Further exploration:
aa <- allFit(m1)
> aa
original model:
y ~ year + (year | stid)
data: dat
optimizers (7): bobyqa, Nelder_Mead, nlminbwrap, nmkbw,
optimx.L-BFGS-B, nloptwrap.NLOPT_LN_N...
differences in negative log-likelihoods:
max= 4.25e-06 ; std dev= 1.58e-06
summary(aa)
Differences among parameter estimates, log-likelihoods, etc. are all
very small, so the convergence warning is a false positive.
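
For concreteness, the kind of comparison I mean (a minimal sketch;
the component names follow ?allFit and may differ across lme4
versions):

ss <- summary(aa)
ss$fixef   # fixed-effect estimates, one row per optimizer
ss$llik    # log-likelihoods across optimizers
ss$sdcor   # random-effect SDs and correlations
ss$msgs    # any warnings/messages, per optimizer
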
cheers
Ben Bolker
>
> library(lme4)
> dat <- read.csv('https://raw.githubusercontent.com/hkil/m/master/z.csv')
> m1 <- lmer(y ~ year + (year | stid), data = dat,
>            control = lmerControl(check.nobs.vs.nRE = "ignore"))
>
> On Tue, May 26, 2020 at 7:38 PM Ben Bolker <bbolker at gmail.com> wrote:
>
> For what it's worth, *if* you're sufficiently sure that your model
> is identifiable, you can override the checks that test the relative
> numbers of observations/levels/etc.; see the "check.*" options in
> ?lme4::lmerControl, and set the relevant ones to "ignore".
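>
> A sketch of the pattern (check.nobs.vs.nRE is the check relevant to
> your error; see ?lmerControl for the full list of check.* names):
>
> ctrl <- lmerControl(check.nobs.vs.nRE = "ignore")
> m1 <- lmer(y ~ year + (year | stid), data = dat, control = ctrl)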
>
> On Tue, May 26, 2020 at 7:12 PM Uanhoro, James
> <uanhoro.1 at buckeyemail.osu.edu> wrote:
> >
> > Hello Simon,
> >
> > I'm not sure what HLM does. However: if your question is about using
> > the random intercepts (individuals' starting points) to predict the
> > random slopes (their linear growth rate), then the model you need is:
> >
> > summary(m2 <- lmer(y ~ year + (1 + year | stid), dat))
> >
> > which returns the random intercept and a random slope on time.
> >
> > The correlation between the two random effects is the regression
> > coefficient from regressing the slope on the intercept (or vice
> > versa) when both variables are standardized.
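> >
> > A minimal sketch of that conversion (assuming the usual row order
> > of as.data.frame(VarCorr(m2)): intercept SD, slope SD, then their
> > correlation):
> >
> > vc <- as.data.frame(VarCorr(m2))
> > sd_int <- vc$sdcor[1]  # SD of random intercepts
> > sd_slp <- vc$sdcor[2]  # SD of random slopes
> > r <- vc$sdcor[3]       # intercept-slope correlation
> > r * sd_slp / sd_int    # unstandardized slope-on-intercept coefficient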
> >
> > More generally, you can always obtain regression coefficients from a
> > correlation/covariance matrix of random effects. With a two-by-two
> > correlation matrix, the single correlation is the coefficient (in
> > both directions). In a larger matrix of random effects, you can use
> > the solve() function in R to obtain coefficients from the matrix.
> > See here:
> >
> > https://stackoverflow.com/questions/40762865/how-do-i-get-regression-coefficients-from-a-variance-covariance-matrix-in-r
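> >
> > A small sketch of the general case, with a hypothetical 3x3
> > random-effect covariance matrix S (predicting the third effect
> > from the first two):
> >
> > S <- matrix(c(1.0, 0.3, 0.2,
> >               0.3, 1.0, 0.4,
> >               0.2, 0.4, 1.0), nrow = 3)
> > solve(S[1:2, 1:2], S[1:2, 3])  # regression coefficients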
> >
> > I tried your exact example, and m2 above will not fit because some
> > of your participants have fewer than 2 time points while the
> > maximum number of time points is 3; the software complains because
> > it would be estimating more random-effect values than there are
> > rows in the data. Also, it is a good idea to rescale that y
> > variable prior to analysis. I was able to get the model to run by
> > limiting the data to cases with more than 1 recorded time point:
> >
> > summary(m3 <- lmer(y.s ~ year + (1 + year | stid), data = t.dat,
> >                    subset = n > 1))
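> >
> > (A sketch of how those variables might be constructed - 't.dat',
> > 'n', and 'y.s' are names of my own making, not in your data:)
> >
> > t.dat <- transform(dat,
> >                    n   = ave(y, stid, FUN = length),  # time points per student
> >                    y.s = as.numeric(scale(y)))        # rescaled outcome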
> >
> > I arrived at a correlation/coefficient of -0.06.
> >
> > Hope this helps, -James.
> >
> > From: R-sig-mixed-models <r-sig-mixed-models-bounces at r-project.org>
> > on behalf of Simon Harmel <sim.harmel at gmail.com>
> > Sent: Tuesday, May 26, 2020, 18:27
> > To: r-sig-mixed-models
> > Subject: [R-sig-ME] Latent variable regression in lme4 as in HLM
> >
> > Dear All,
> >
> > I know that in the HLM software it is possible to use the
> > "intercept" (e.g., students' initial status at year "0") as the
> > *predictor* of the "slope" (e.g., the fixed rate of change across
> > years) under the *Latent Variable Regression* tab.
> >
> > I was wondering whether this is also possible in "lme4" or any
> > other mixed-modeling package in R? *Thanks, Simon*
> >
> > *## Here is an example dataset for demonstration:*
> > library(lme4)
> > dat <- read.csv('https://raw.githubusercontent.com/hkil/m/master/z.csv')
> > m1 <- lmer(y ~ year + (1|stid), data = dat)  ## 'stid' = student id
> >