[R-sig-ME] [R] linear mixed model required for the U.S. FDA
bbolker at gmail.com
Mon Aug 19 20:46:10 CEST 2019
On 2019-08-19 10:00 a.m., Helmut Schütz wrote:
> Dear Thierry,
> Thierry Onkelinx wrote on 2019-08-19 13:00:
>> […] The model does not converge on my machine.
>> model2 <- lme(log(PK) ~ period + sequence + treatment, random = ~
>> treatment | subject, data = data, weights = varIdent(form = ~ 1 | treatment))
> Switching the optimizer from the default "nlminb" to the old one "optim"
> makes the model converge:
> mod <- lme(log(PK) ~ period + sequence + treatment,
> random = ~ treatment | subject,
> data = data, weights = varIdent(form = ~ 1 | treatment),
> method = "REML", na.action = na.exclude,
> control = list(opt = "optim"))
> Now I get a CI of 1.0710967-1.2518824, which is slightly more
> conservative than 1.0710440-1.2489393.
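[For readers following along: the CIs quoted above come from back-transforming
the treatment contrast on the log scale. A minimal sketch of that extraction,
on a simulated toy 2x2 crossover with a plain random intercept, since the
thread's dataset isn't posted -- all numbers are made up:]

```r
library(nlme)
set.seed(1)
## Toy 2x2 crossover data, purely to demonstrate the extraction mechanics.
d <- data.frame(subject  = factor(rep(1:12, each = 2)),
                period   = factor(rep(1:2, 12)),
                sequence = factor(rep(c("RT", "TR"), each = 12)))
## RT subjects get R in period 1 and T in period 2; TR subjects the reverse.
d$treatment <- factor(ifelse((d$sequence == "RT") == (d$period == "1"), "R", "T"))
d$PK <- exp(0.1 + rnorm(12, 0, 0.3)[as.integer(d$subject)] +
            0.1 * (d$treatment == "T") + rnorm(24, 0, 0.15))
mod <- lme(log(PK) ~ period + sequence + treatment,
           random = ~ 1 | subject, data = d, method = "REML")
## 90% Wald CI for the T-vs-R ratio, back-transformed from the log scale
exp(intervals(mod, level = 0.90, which = "fixed")$fixed["treatmentT", ])
```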
This is certainly worth pursuing, but again I want to point out that
these are *barely* different in quantitative terms -- a difference of
0.2% in the upper CI (and note that CI boundaries are inherently *more*
uncertain than the point estimates themselves). I wouldn't be at all
surprised if small changes in the underlying computational platform
(operating system, compiler, etc.) could make differences this big.
It would be good to know what level of match is really required. It's
unlikely that you're going to be able to get an *exact* match to the
floating-point results that SAS gives. It would also be worth checking
the log-likelihood/REML criterion for each fit -- what happens if lme4
is getting a slightly *better* fit than SAS? (Also note that the
difference between Wald and profile CIs is about the same magnitude as
the numerical differences you're seeing between packages.)
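A sketch of that log-likelihood check, fitting the same model under both
optimizers on a simulated fully replicated TRTR/RTRT design (the thread's
real data isn't posted, so all simulation parameters here are invented):

```r
library(nlme)
set.seed(42)
## Simulate a fully replicated 2x4 crossover (sequences TRTR / RTRT).
n_subj <- 20
d <- expand.grid(subject = factor(1:n_subj), period = 1:4)
d$sequence  <- factor(ifelse(as.integer(d$subject) <= n_subj / 2, "TRTR", "RTRT"))
trt <- list(TRTR = c("T", "R", "T", "R"), RTRT = c("R", "T", "R", "T"))
d$treatment <- factor(mapply(function(s, p) trt[[s]][p],
                             as.character(d$sequence), d$period))
u0 <- rnorm(n_subj, 0, 0.30)   # subject random intercepts
uT <- rnorm(n_subj, 0, 0.10)   # subject-by-treatment random effects
i  <- as.integer(d$subject)
d$PK <- exp(0.2 + u0[i] + (d$treatment == "T") * (0.10 + uT[i]) +
            0.02 * d$period +
            rnorm(nrow(d), 0, ifelse(d$treatment == "T", 0.15, 0.10)))
d$period <- factor(d$period)

## Same model, two optimizers; try() because, as the thread shows,
## either one may fail to converge on a given dataset.
fit_with <- function(optimizer) {
  try(lme(log(PK) ~ period + sequence + treatment,
          random  = ~ treatment | subject,
          weights = varIdent(form = ~ 1 | treatment),
          data = d, method = "REML",
          control = lmeControl(opt = optimizer)),
      silent = TRUE)
}
fit_nlminb <- fit_with("nlminb")
fit_optim  <- fit_with("optim")

## Where both converge, the fit with the higher REML log-likelihood found
## the better optimum; that comparison is more informative than eyeballing
## small CI discrepancies between packages.
for (f in list(fit_nlminb, fit_optim))
  if (inherits(f, "lme")) print(logLik(f))
```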
"The man with two watches never knows what time it is."
> A small step for a man but a giant leap for mankind. Of course, it
> requires a lot of testing to check whether this is /always/ the case.
> All the best,