[R] Re: mgcv, testing gamm vs lme, which degrees of freedom?
Joris Meys
jorismeys at gmail.com
Tue Jun 22 15:30:51 CEST 2010
On Mon, Jun 21, 2010 at 10:01 PM, Bert Gunter <gunter.berton at gene.com> wrote:
>
> 1. This discussion probably belongs on the sig-mixed-models list.
Correct, but I'll keep it here for now for archive purposes.
>
> 2. Your claim is incorrect, I think. The structure of the random errors
> (= model covariance) can be parameterized in various ways, and one can
> try to test significance of nested parameterizations (for a particular
> fixed effects parameterization). Whether you can do so meaningfully --
> especially in the gamm context -- is another issue, but if you check
> e.g. Bates and Pinheiro, anova for different random effects
> parameterizations is advocated, and is implemented in the anova.lme()
> function of nlme.
>
It is a matter of debate and interpretation. Mind you, I never said it
can't be done; I just wanted to point out that in most cases it
shouldn't be done. As somebody else noted, both Wood and Pinheiro and
Bates suggest testing the random components only _if the fixed effects
are the same_. Yet even then it is difficult to formulate the correct
hypothesis. In Carlo's example, H0 is actually not correct:
1) The "7 extra random components" are not really 7 extra random
components, as they are definitely not independent, but actually all
part of the same spline fit.
2) According to my understanding, the degrees of freedom for a
likelihood is determined by the amount of parameters fitted, not the
amount of variables in the model. The parameters linked to the random
effect are part of the variance structure, and the number of
parameters to be fitted in this variance structure is not dependent on
the number of knots, but on the number of smooths. Indeed, in the gamm
model the variance of the parameters b for the random effect is
considered to be equal for all b's related to the same smooth.
3) it appears to me that you violate the restriction both
Pinheiro/Bates and Wood put on the testing of models with LR : you can
only do so if none of the variance parameters are restriced to the
edge of the feasible parameter space. Again as I see it, the model
with less knots implies the assumption that some variance components
are zero.
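To see why the boundary matters, here is a minimal simulation sketch
(my own illustration, not Carlo's setting): when the true variance of
a random intercept is zero, the LR statistic is, by the classical
mixture result, a 50:50 mix of a point mass at zero and a
chi-squared(1), so the naive chi-squared(1) test is conservative.
library(nlme)
set.seed(1)
lrt <- replicate(200, {
  d <- data.frame(y = rnorm(100), id = rep(1:10, each = 10)) # H0 holds
  m0 <- lm(y ~ 1, data = d) # no random effect
  m1 <- lme(y ~ 1, random = ~1 | id, data = d, method = "ML")
  max(0, as.numeric(2 * (logLik(m1) - logLik(m0))))
})
mean(lrt <= qchisq(0.95, 1)) # close to 0.975 rather than 0.95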
Hence, you cannot use an LR test to compare both models. anova.lme()
won't carry out the test anyway:
library(mgcv)
library(nlme)
set.seed(123)
x <- runif(100, 1, 10)                            # regressor
b0 <- rep(rnorm(10, mean = 1, sd = 2), each = 10) # random intercept
id <- rep(1:10, each = 10)                        # identifier
y <- b0 + x - 0.1 * x^3 + rnorm(100, 0, 1)        # dependent variable
f1 <- gamm(y ~ s(x, k = 3, bs = "cr"), random = list(id = ~1), method = "ML")
f2 <- gamm(y ~ s(x, k = 10, bs = "cr"), random = list(id = ~1), method = "ML")
> anova(f1$lme,f2$lme)
Model df AIC BIC logLik
f1$lme 1 5 466.6232 479.6491 -228.3116
f2$lme 2 5 347.6293 360.6551 -168.8146
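The df column can be checked directly on the fitted likelihoods: as I
read it, both models carry 5 parameters (2 fixed coefficients plus 3
variance parameters), whatever the value of k.
> attr(logLik(f1$lme), "df")
[1] 5
> attr(logLik(f2$lme), "df")
[1] 5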
If you change a random term that is not related to the splines,
anova.lme() does carry out the test. In that case, you can test the
random effects as explained by Pinheiro/Bates.
> f3 <- gamm(y ~ s(x, k=10, bs="cr"),method="ML" )
> anova(f2$lme,f3$lme)
Model df AIC BIC logLik Test L.Ratio p-value
f2$lme 1 5 347.6293 360.6551 -168.8146
f3$lme 2 4 446.2704 456.6910 -219.1352 1 vs 2 100.6411 <.0001
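Note that Pinheiro/Bates actually recommend REML likelihoods when
comparing models that differ only in the random structure. Since f2
and f3 share the same fixed effects, the test can be redone that way
(a sketch, with the same boundary caveat as above):
f2r <- gamm(y ~ s(x, k = 10, bs = "cr"), random = list(id = ~1), method = "REML")
f3r <- gamm(y ~ s(x, k = 10, bs = "cr"), method = "REML")
anova(f2r$lme, f3r$lme) # REML-based LR test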
As an illustration, the variance of the random effects:
> f1$lme
...
Random effects:
Formula: ~Xr.1 - 1 | g.1
Xr.1
StdDev: 163.8058
Formula: ~1 | id %in% g.1
(Intercept) Residual
StdDev: 1.686341 2.063269
Number of Observations: 100
Number of Groups:
g.1 id %in% g.1
1 10
> f2$lme
...
Random effects:
Formula: ~Xr.1 - 1 | g.1
Structure: pdIdnot
Xr.11 Xr.12 Xr.13 Xr.14 Xr.15 Xr.16 Xr.17 Xr.18
StdDev: 2.242349 2.242349 2.242349 2.242349 2.242349 2.242349 2.242349 2.242349
...
Clearly, the SD for all parameters b is exactly the same, and hence
there is only 1 such parameter in the likelihood function. So both
models won't differ in df when using ML estimation. This does not mean
that the b-coefficients for the random effects cannot be obtained;
they are just not free parameters in the optimization of the
likelihood. Or, as Wood says about random effects: you don't estimate
them, although you might want to predict them.
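For instance, the predicted b's are readily available from the lme
part of the fit (using the objects above):
ranef(f2$lme) # predicted random effects for the smooth (Xr) and for id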
Cheers
Joris
--
Joris Meys
Statistical consultant
Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control
tel : +32 9 264 59 87
Joris.Meys at Ugent.be
-------------------------------
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php