[R-sig-ME] Nonlinear mixed models with nlme::nlme() and lme4::nlmer()

Ben Bolker bbolker at gmail.com
Sat Feb 1 00:55:59 CET 2025


    Unfortunately nlmer needs help, and has needed it for a long time 
now.  It does sound like nlmer is doing something weird; my best guess 
without spending any more time on it is that it's analogous to this 
problem: https://github.com/lme4/lme4/issues/47

    Are you willing to open an issue on the lme4 GitHub site, which is a 
slightly better venue for this kind of bug-hunting?

On 2025-01-31 11:29 a.m., David Sirl wrote:
> Hi all,
> 
> Following up my email below about getting different results when fitting
> the same model to the same data using nlme::nlme() and lme4::nlmer()
> [they agree fairly well on reported deviance, quartiles of residuals,
> estimated random effect variances and estimated fixed effects; but
> differ wildly on the standard errors and correlations associated with
> fixed effects]; I wonder if anyone has any suggestion/s for where or how
> I might be able to tempt someone to speculate what's going on please?
> 
> (I am very willing to follow instructions / try things out to help
> figure out what's happening; though due to other commitments I may be a
> few days between opportunities to do so.)
> 
> Best wishes,
> Dave
> 
> 
> On 21/11/2024 21:12, David Sirl wrote:
>>
>> Hi R-sig-mixed-models-ers,
>>
>> I'm fairly new to working with mixed models (and only have a couple of
>> years of working with R behind me) and have reason to want to fit some
>> nonlinear mixed models, so I'm working through the analysis of the
>> Soybean data that's in Sec 6.3 of Pinheiro & Bates' book
>> "Mixed-Effects Models in S and S-plus" (2000, Springer series on
>> Statistics & Computing).
>>
>> To help learn about these models I've been reproducing this analysis
>> using both nlme::nlme() (as P&B use, though in S rather than R) and
>> lme4::nlmer() (which I understand to have broadly similar
>> functionality but with better numerics in some places) and the results
>> from the two different packages agreed fairly well for the first
>> analyses I did. But now I have tried to fit a particular model (to the
>> same data) using these two methods and I get results that (i) suggest
>> I really am fitting the same model to the same data, (ii) in some
>> respects agree closely, but (iii) in other respects differ wildly.
>>
>> A couple of colleagues have checked my code and don't see any issues,
>> so I wonder if anyone here might be able to suggest what's going wrong
>> with either one of these functions (which I presume is not very
>> likely) or my use of these functions (presumably much more likely!)
>> please?
>>
>> I've set out what I've done below; and I'd be very grateful for any
>> suggestions that anyone can offer re what has gone wrong and where.
>>
>> Best wishes,
>> Dave
>>
>>
>> *A first sanity check*
>>
>> First I'll just write out the key parts of the calls I make to these
>> functions, in case I've been really daft and not noticed that my calls
>> should not be expected to fit the same model:
>>
>> nlme::nlme(weight ~ SSlogis(Time, Asym, xmid, scal),
>>             data = ...,
>>             fixed = Asym+xmid+scal~1,
>>             random = Asym~1,
>>             groups = ~Plot,
>>             start = ...)
>>
>> lme4::nlmer(weight ~ SSlogis(Time, Asym, xmid, scal) ~ (Asym + xmid +
>> scal) + (Asym|Plot),
>>              data = ...,
>>              start = ...)
>>
>>
>> *Some more detail*
>>
>> Assuming/hoping that the code above suggests that I am fitting the
>> same model with both methods, here's a bit more detail of the context,
>> what I'm trying to do and what's happening that suggests to me
>> something is going wrong.
>>
>> The dataset is Soybean (has "weight" recorded at 8-10 values of "Time"
>> for each of 48 values of the grouping factor "Plot") and the model I'm
>> trying to fit is of logistic growth using SSlogis(Time, Asym, xmid,
>> scal) with a random effect for Asym only (and fixed effects for all
>> three parameters).
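>>
>> (A quick way to see this structure, for anyone following along - this
>> is just a sketch using the copy of Soybean shipped with nlme:)
>>
>> library(nlme)
>> str(Soybean)    # columns Plot, Variety, Year, Time, weight
>> plot(Soybean)   # trellis plot of weight against Time for each Plot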
>>
>> The starting parameter values I use are the (marginal) medians of the
>> parameter estimates from separate least-squares fits of the logistic
>> function to the data for each Plot (fitted using nlme::nlsList()) -
>> I've omitted this step and entered the values explicitly here:
>>
>> startParams = c(Asym=19.813031, xmid=56.243394, scal=8.718537)
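>>
>> (For completeness, the starting values came from something along these
>> lines - the coefficient-extraction step here is a sketch rather than
>> my verbatim code:)
>>
>> soyList <- nlme::nlsList(weight ~ SSlogis(Time, Asym, xmid, scal) | Plot,
>>                          data = nlme::Soybean)
>> # per-Plot estimates; take column-wise medians, skipping failed fits
>> startParams <- apply(coef(soyList), 2, median, na.rm = TRUE)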
>>
>> fm3Soy.nlme =
>> nlme::nlme(weight ~ SSlogis(Time, Asym, xmid, scal),
>>               data = nlme::Soybean,
>>               fixed = Asym+xmid+scal~1,
>>               random = Asym~1,
>>               groups = ~Plot,
>>               start = startParams)
>>
>>
>> fm3Soy.nlmer =
>> lme4::nlmer(weight ~ SSlogis(Time, Asym, xmid, scal) ~ (Asym + xmid +
>> scal) + (Asym|Plot),
>>                data = nlme::Soybean,
>>                start = startParams)
>>
>>  From the summary() of each fit I get what appear to be exactly the
>> same parameters being estimated / reported on, and they're very much
>> in agreement regarding likelihood/AIC/BIC values, variances of random
>> effects, point estimates of fixed effects. But they are qualitatively
>> very different in terms of standard errors and correlations for fixed
>> effects. The standard errors differ by a factor of ~100, and the
>> correlations are in the range 0.3 to 0.7 for one fit versus -0.3 to
>> 0.0 for the other; based on what I know of the data (and the output in
>> P&B's book!) I'm fairly confident that it is the results from
>> lme4::nlmer() that are incorrect.
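>>
>> (The comparison I'm making is roughly the following; as far as I know
>> vcov() methods exist for both kinds of fitted object, so any mistake
>> on my side should be easy to spot:)
>>
>> c(nlme = logLik(fm3Soy.nlme), nlmer = logLik(fm3Soy.nlmer))
>> cbind(nlme  = sqrt(diag(vcov(fm3Soy.nlme))),     # fixed-effect SEs
>>       nlmer = sqrt(diag(vcov(fm3Soy.nlmer))))
>> cov2cor(as.matrix(vcov(fm3Soy.nlme)))            # fixed-effect correlations
>> cov2cor(as.matrix(vcov(fm3Soy.nlmer)))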
>>
>> A further suggestion of things going wrong arises if I try to find CIs
>> for model parameters [using intervals(fm3Soy.nlme) and
>> confint(fm3Soy.nlmer)]. The latter gives an error message "Error in
>> eval(expr, envir, enclos): step factor reduced below 0.001 without
>> reducing pwrss" which I don't understand and for which my Google-ing
>> doesn't yield any help - I only see help for when this error arises in
>> fitting the model.
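>>
>> (One thing I could try, if it's of any diagnostic value: confint() for
>> merMod objects takes a method argument, so Wald intervals might be
>> obtainable even though the default profiling fails - though of course
>> they would inherit whatever is wrong with the standard errors:)
>>
>> confint(fm3Soy.nlmer, method = "Wald")
>> intervals(fm3Soy.nlme)   # nlme's intervals, for comparison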
>>
>> Another possibly relevant point is that this is not a model that's
>> actually fitted in P&B (the first thing they do is deal with the
>> non-constant error variance). But even if some of the model
>> assumptions are violated I am still fitting the same model to the same
>> data, so I feel quite unnerved by the big discrepancy in the results
>> from the two methods.
>>
>> The fits both seem reasonably good in the sense that the augmented
>> predictions [e.g. with nlme::augPred(fm3Soy.nlme, level=0:1) |>
>> plot()] for both methods agree with each other and, visually, agree
>> fairly well with the data. (It's only standard errors and correlations
>> of fixed effects that differ.)
>>
>> (I have also fitted the model in a Bayesian way using brms::brm() and
>> get results that are in keeping with those from nlme::nlme() above.)
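>>
>> (For reference, that brms fit was along roughly these lines - brm()
>> doesn't use SSlogis, so the logistic is written out by hand in the
>> same parameterisation, and the priors shown here are placeholders
>> rather than the exact ones I used:)
>>
>> library(brms)
>> bform <- bf(weight ~ Asym / (1 + exp((xmid - Time) / scal)),
>>             Asym ~ 1 + (1 | Plot), xmid ~ 1, scal ~ 1,
>>             nl = TRUE)
>> bprior <- c(prior(normal(20, 10), nlpar = "Asym"),
>>             prior(normal(55, 20), nlpar = "xmid"),
>>             prior(normal(9, 5),   nlpar = "scal"))
>> fm3Soy.brm <- brm(bform, data = nlme::Soybean, prior = bprior)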
>>
>> Thanks very much if you've read this far. I hope I've included all
>> relevant information to help track down what might be happening, but
>> if someone is interested in helping and would like some more
>> information then please do let me know.
>>

-- 
Dr. Benjamin Bolker
Professor, Mathematics & Statistics and Biology, McMaster University
Director, School of Computational Science and Engineering
 > E-mail is sent at my convenience; I don't expect replies outside of 
working hours.
