[R-sig-ME] Speed estimation for lmer?

Douglas Bates bates at stat.wisc.edu
Thu Sep 11 21:04:21 CEST 2008


On Thu, Sep 11, 2008 at 10:09 AM, Ben Bolker <bolker at ufl.edu> wrote:
> Adam D. I. Kramer wrote:
>> Hi,
>>
>>     I'm about to estimate what I expect to be a fairly involved model,
>> like this one:
>>
>> l <- lmer(y ~ x1*x2*x3 + (x1*x2*x3|grp) )
>>
>> ...the data set has 3,232,255 rows, for about 18000 grps, each of which
>> has around 700 observations;

Are you sure?  That works out to about 180 observations per group, not 700:

> 3232255/18000
[1] 179.5697

>>  x1, x2, x3 are continuous variables.
>>
>> Is there any way I can estimate how long this run will take? Obviously this
>> depends on things like memory, processor, etc....but perhaps I could run it
>> on 5 groups and then multiply the amount of time it takes, or something
>> like that?

I would start with a sample of groups and all observations for each of
the groups in the sample.  Something like 1000 groups should not take
a huge amount of time (easily under an hour) if you simplify the
random effects specification.  I would start with something like the following:

gsamp <- with(mydata, sample(levels(grp), 1000))  # sample 1000 group labels
datasamp <- subset(mydata, grp %in% gsamp)        # all observations for those groups
system.time(fm1 <- lmer(y ~ x1 * x2 * x3 + (1|grp), datasamp, verbose = TRUE))
system.time(fm2 <- lmer(y ~ x1 * x2 * x3 + (x1 + x2 + x3|grp),
                        datasamp, verbose = TRUE))
system.time(fm3 <- lmer(y ~ x1 * x2 * x3 + (x1 * x2 * x3|grp),
                        datasamp, verbose = TRUE))

The reason I suggest verbose = TRUE is that you will be able to see
the progress of the iterations and get a feeling for the amount of
time per iteration.  A large number of observations and of groups will
increase the time per iteration, but my guess is that the cost is more
or less linear in the number of observations; I wouldn't expect it to
be worse than quadratic.
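
If you want to check that scaling directly, one rough approach (just a
sketch on my part, reusing the names mydata, y, x1, x2, x3 and grp from
above) is to time the same simple fit on subsamples with an increasing
number of groups and see how the elapsed time grows:

## time the same (1|grp) fit on subsamples of increasing size
## (the sizes here are arbitrary)
sizes <- c(250, 500, 1000)
times <- sapply(sizes, function(n) {
    g <- sample(levels(mydata$grp), n)
    d <- subset(mydata, grp %in% g)
    system.time(lmer(y ~ x1 * x2 * x3 + (1|grp), d))["elapsed"]
})
cbind(groups = sizes, elapsed = times)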

However, increasing the number of random effects per group could cost
you substantially, because it increases the number of parameters to be
estimated in the nonlinear optimization.  The iteration output for fm1
will show a single parameter being optimized.  Many other parameters
are involved in the model, but their estimates are calculated directly
given that one parameter, which happens to be the ratio of the
standard deviation of the random effects to the standard deviation of
the residuals.  The iteration output for fm2 will show 10 parameters
in the nonlinear optimization, corresponding to 4 variances and 6
covariances of the random effects.  I'm not sure exactly how many
parameters will be in the nonlinear optimization for fm3, but it will
be a large number.  I would certainly do some graphical exploration
before embarking on a model of that complexity.
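
For what it's worth, if each random-effects term gets an unstructured
variance-covariance matrix (the usual assumption), a term with q random
effects per group contributes q variances and q(q-1)/2 covariances.  A
quick back-of-the-envelope count (mine, not checked against the
iteration output) is then:

## q random effects per group for the three specifications:
## (1|grp) -> 1, (x1+x2+x3|grp) -> 4, (x1*x2*x3|grp) -> 8
q <- c(fm1 = 1, fm2 = 4, fm3 = 8)
q * (q + 1) / 2    # 1, 10 and 36 parameters in the nonlinear optimization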

I would be interested in hearing what your experience is.

>  I don't know exactly how it will scale, but I would guess offhand that
> it wouldn't be much worse than linear in number of points and in number
> of groups (???).  I would suggest you try it for 5, 10, 20, and 100
> groups and extrapolate from there ...  it does seem frighteningly big
> to me.
>
>> Also, given this information, is there some faster way to run the model? In
>> theory, I'd be interested in systematically checking which random effects I
>> could drop, but not if it would take weeks. Some prior posts to this list
>> (which I have only been actively reading since yesterday) suggest that lmer
>> is likely faster than lmer2, but there doesn't seem to be much discussion
>> on the speed of various modeling functions (lme, lmer, lmer2, glmer,
>> glmmPQL, etc.).
>
>  Roughly speaking: lmer and lmer2 aren't (I think) different any more;
> they were different branches of the same software.  They should both
> be much faster than lme.  glmer (from lme4) and glmmPQL (from nlme)
> should not be necessary unless you have binomial, Poisson, etc. data
> rather than normally distributed responses.

Ben is correct.  There are functions lmer and lmer2 in the current
lme4 package but lmer2 is just a stub that turns around and calls
lmer.  If your response y is on a continuous scale then it should be
lmer that you use.

It will help to have a machine with a lot of memory so you don't end
up swapping.  I would recommend using a machine running 64-bit Linux
(we don't have a 64-bit Windows version of R because we are waiting on
freely available compilers).  You may also want to try an accelerated
BLAS; however, if you have a multi-core processor, I would check the
timings both with and without it.  Sometimes a multithreaded
accelerated BLAS makes lmer run slower, not faster, because the calls
to the BLAS are predominantly on small matrices and the communication
overhead of the multithreaded versions more than offsets the
performance gain from using multiple cores.
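
If you do try a multithreaded BLAS, one way to make the comparison (a
sketch on my part; the exact variable name depends on which BLAS is
installed) is to pin it to a single thread before starting R and then
repeat the same timing:

## in the shell, before starting R (variable name depends on the BLAS),
## e.g. for GotoBLAS:
##   export GOTO_NUM_THREADS=1
## (OPENBLAS_NUM_THREADS or OMP_NUM_THREADS for other libraries), then
## re-run the timing in R and compare with the multithreaded run:
system.time(fm1 <- lmer(y ~ x1 * x2 * x3 + (1|grp), datasamp, verbose = TRUE))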



