[R-sig-ME] mysterious syntax error in call to nlmer

Douglas Bates bates at stat.wisc.edu
Tue Apr 5 22:59:56 CEST 2011


On Sun, Mar 27, 2011 at 4:36 PM, Ben Bolker <bbolker at gmail.com> wrote:
> Jacob Wegelin <jacobwegelin at ...> writes:
>
>
>  Thank you for your very clear and reproducible example!
>
>> I'm trying to fit my first nlmer model. I follow Doug Bates's code
>> at
>> http://lme4.r-forge.r-project.org/slides/2009-07-01-Lausanne/8NLMMD.pdf
>> and use lme4_0.999375-39.  A call to nls works fine, but when I turn
>> it into a call to nlmer and add the random component, errors are
>> returned. Providing start as a named vector returns one kind of
>> error; providing start as a list returns a different error.  My data
>> are simulated but they contain noise (per the warning on the nlmer
>> help page).  You can see a plot of the data at
>> http://jacobwegelin.net/r-help/2011_03_27_1311_simulated.pdf.  The
>> error messages are as follows. Code to reproduce the entire problem
>> is below under Details.
>
>
>
> set.seed(1)
> N <- 100; nIDs <- 20
> X <- seq(1, 19, length = N)
> Y <- sin(X) + rnorm(N)/10
> IDs <- 1:nIDs
>
> ## I have a silly tangential question.  Why put the commas
> ##  on the next line?  I haven't seen this convention much before,
> ##  but it has started appearing on this list.  In general it
> ##  makes much more sense to me (and occasionally makes a
> ##  difference) to put connectors (commas, arithmetic operators,
> ##  etc.) at the end of the previous line rather than at the
> ##  beginning of the next (see the commented sketch after the
> ##  call below) ...
>
> trueRandomEffects <- data.frame(
>         xintercept = rnorm(nIDs)/3
>         , amplitude = 1 + rnorm(nIDs)/9
>         , frequency = 1 + rnorm(nIDs)/10
>         , xshift = runif(nIDs) * pi
>         )
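> ## for comparison, the same call written with the connectors at
> ## the end of each line (commented out, so that the random-number
> ## stream used by the simulation above is not disturbed):
> ## trueRandomEffects <- data.frame(
> ##         xintercept = rnorm(nIDs)/3,
> ##         amplitude = 1 + rnorm(nIDs)/9,
> ##         frequency = 1 + rnorm(nIDs)/10,
> ##         xshift = runif(nIDs) * pi)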
> rownames(trueRandomEffects) <- IDs
> mdoot <- DAT <- data.frame(ID = IDs[1], X = X, Y = Y)
> for (thisperson in IDs[-1]) {
>         nextOne <- DAT
>         nextOne$Y <- (
>                 trueRandomEffects[thisperson, "xintercept"]
>                 + trueRandomEffects[thisperson, "amplitude"]
>                 * sin(trueRandomEffects[thisperson, "frequency"] * X
>                       - trueRandomEffects[thisperson, "xshift"])
>                 + rnorm(N)/10
>                 )
>         nextOne$ID <- thisperson
>         mdoot <- rbind(mdoot, nextOne)
> }
> attr(mdoot, "ranef") <- trueRandomEffects
>
> require(ggplot2)
> print(
>     ggplot(data = mdoot, mapping = aes(x = X, y = Y, group = ID, color = ID))
>     + geom_path())
>
> whump <- nls(
>         Y ~ xintercept + amplitude * sin(frequency * X - xshift),
>         data = mdoot,
>         start = list(xintercept = 1, amplitude = 1.5, frequency = 1,
>                      xshift = pi/2))
> print(whump)
>
> print(summary(whump))
> mdoot$nlsfit <- predict(whump)
> print(head(mdoot))
>
> print(
>     ggplot(data = mdoot, mapping = aes(x = X, y = Y, group = ID, color = ID)) +
>         geom_path() +
>         geom_point(mapping = aes(x = X, y = nlsfit))
> )
>
> require(lme4)
>
> ## at the moment nlmer assumes that the objective function has
> ##  an associated gradient attribute.  *If* you have a simple
> ##  objective function (i.e. one that R's built-in deriv()
> ##  function can handle), this is simple to address
>
> parnames <- c("xintercept", "amplitude", "frequency", "xshift")
> dd <- deriv(expression(xintercept + amplitude *
>                 sin(frequency * X - xshift)),
>             namevec = parnames, function.arg = c(parnames, "X"))
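> ## the function that deriv() builds returns its value with a
> ## "gradient" attribute attached (a matrix with one column per
> ## parameter), which is the form nlmer expects; a quick check
> ## on a few values of X:
> str(attr(dd(1, 1.5, 1, pi/2, X = 1:3), "gradient"))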
>
> roop <- nlmer(
>     Y ~ dd(xintercept, amplitude, frequency, xshift, X) ~ xshift | ID,
>     data = mdoot,
>     start = list(fixef = c(xintercept = 1, amplitude = 1.5,
>                            frequency = 1, xshift = pi/2)))
> summary(roop)
> detach("package:lme4")
>
> ## however, you can also do this (possibly better?)
> ## with nlme:
>
> library(nlme)
>
>
> roop2 <- nlme(Y ~ dd(xintercept, amplitude, frequency, xshift, X),
>               data = mdoot,
>               fixed = xintercept + amplitude + frequency + xshift ~ 1,
>               random = xshift ~ 1 | ID,
>               start = c(xintercept = 1, amplitude = 1.5,
>                         frequency = 1, xshift = pi/2))
>
> summary(roop2)
>
> ## it's a little bit alarming that nlme finds a non-zero xshift
> ## variance while nlmer doesn't ...

It might, in fact, be the other way around.  Compare the
log-likelihoods from the two model fits: if I haven't botched the
calculations (which is entirely possible), the nlmer fit is better
than the nlme fit.  It may be that the two log-likelihoods are not
comparable because they are calculated in different ways, but I think
they should be.

Also, if you look at the verbose output from the nlmer fit


> roop <-nlmer(
+             Y ~ dd(xintercept,amplitude,frequency,xshift,X)~xshift|ID,
+             data=mdoot, verbose=1L,
+             start=list(fixef=c(xintercept=1, amplitude=1.5,
+                     frequency=1, xshift=pi/2))
+             )
  0:     3674.0646: 0.163299  1.00000  1.50000  1.00000  1.57080
  1:     2272.5273: 0.973647 0.530645  1.29359 0.721209  1.62290
  2:     2205.2381: 0.983334 0.268247  1.04419  1.65332  1.61921
  3:     1455.5351:  1.00046 0.0718815 0.587221  1.70113  1.62596
  4:     1142.6573:  1.03086 0.0733082 0.103787  1.57847  1.64384
  5:     1124.2213:  1.02956 0.0703039 0.0140006  1.58682  1.64493
  6:     1123.1203:  1.02950 0.0756640 -0.00544582  1.58649  1.64516
  7:     1123.0674:  1.02949 0.0767893 -0.00860760  1.58666  1.64509
  8:     1123.0623:  1.02952 0.0808718 -0.0108216  1.58862  1.64064
  9:     1123.0551:  1.02935 0.0753004 -0.0137590  1.58970  1.63856
 10:     1123.0272:  1.02922 0.0774450 -0.0109283  1.59142  1.63312
 11:     1123.0199:  1.02904 0.0789350 -0.0160090  1.59543  1.63213
 12:     1122.8941:  1.03216 0.0702361 -0.0223557  1.61759  1.59772
 13:     1122.5006:  1.03967 0.0634442 -0.0291930  1.64563  1.51852
 14:     1121.9584:  1.03941 0.0795832 -0.0274689  1.65187  1.51782
 15:     1121.5795:  1.03778 0.0660772 -0.0362778  1.68783  1.51358
 16:     1121.0556:  1.02974 0.0854996 -0.0365876  1.71205  1.44101
 17:     1120.7508:  1.01744 0.0746180 -0.0410671  1.73067  1.36579
 18:     1120.5819: 0.976987 0.0787048 -0.0405667  1.73702  1.29796
 19:     1120.5238: 0.940846 0.0741361 -0.0351745  1.72659  1.28897
 20:     1119.8662: 0.704002 0.0724824 -0.0444264  1.72337 0.936691
 21:     1119.8438: 0.339802 0.0914764 -0.0442596  1.70652 0.719874
 22:     1118.9071: 0.130980 0.0775044 -0.0582427  1.71383 0.751868
 23:     1118.4469: 0.102282 0.0691767 -0.0647637  1.69347 0.960971
 24:     1118.2405:  0.00000 0.0786482 -0.0690935  1.71164  1.11381
 25:     1118.2366:  0.00000 0.0782510 -0.0712534  1.69999 0.971243
 26:     1118.2176:  0.00000 0.0756729 -0.0705572  1.70607  1.04360
 27:     1118.2155: 0.00737987 0.0770602 -0.0705417  1.70661  1.04500
 28:     1118.2152:  0.00000 0.0766721 -0.0709915  1.70552  1.03690
 29:     1118.2150:  0.00000 0.0768363 -0.0707948  1.70614  1.04127
 30:     1118.2150:  0.00000 0.0767926 -0.0707770  1.70609  1.04074
 31:     1118.2150:  0.00000 0.0767976 -0.0707763  1.70608  1.04074
 32:     1118.2150: 5.81067e-06 0.0767978 -0.0707754  1.70608  1.04074

you will see that the first parameter, which is the relative standard
deviation of the random effects (the standard deviation of the xshift
random effect expressed as a multiple of the residual standard
deviation), stays close to 1 for a long time before it drops.
Furthermore, the difference in the deviance between, say, iteration 20
and iteration 32 is very small.  So I think the lesson is that the
standard deviation of the random effects is very poorly determined.
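
A direct way to see how the two fits differ on this point, again
assuming they are stored as roop and roop2, is to compare the
estimated variance components; both lme4 and nlme provide a VarCorr()
extractor, although the printed formats differ:

  VarCorr(roop)    # lme4: the xshift standard deviation is essentially zero
  VarCorr(roop2)   # nlme: reports a non-zero xshift standard deviation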



