# [R-sig-ME] models with no fixed effects

Andy Fugard a.fugard at ed.ac.uk
Fri Sep 12 13:35:38 CEST 2008

```I wonder whether you get something analogous if you just stick in a
Gaussian-distributed x, just as a way to get p > 0 for fit0.

> df$y <- with(df <- data.frame(i = gl(10, 10), b = rep(25 +
>   rnorm(10), each = 10)), b + rnorm(100))
> df$x <- rnorm(100, 0, 5)
>
> fit1 <- lmer(y ~ 1 + x + (1 | i), df, REML = FALSE)
> fit0 <- lmer(y ~ 0 + x + (1 | i), df, REML = FALSE)

So the intercept in fit1 != 0 (by design):

> fixef(fit1)
(Intercept)
25

And indeed, on average, the random-effect estimates in fit0 are about 25
larger than they were in fit1:

> summary(ranef(fit0)$i - ranef(fit1)$i)
(Intercept)
Min.   :24.4
1st Qu.:24.5
Median :24.6
Mean   :24.6
3rd Qu.:24.6
Max.   :24.8

The residual variance is the same in fit0 and fit1, but the variance of
the i intercepts is much larger in fit0, as you'd expect:

> summary(fit1)
...
Random effects:
Groups   Name        Variance Std.Dev.
i        (Intercept) 0.379    0.616
Residual             0.790    0.889
...

> summary(fit0)
...
Random effects:
Groups   Name        Variance Std.Dev.
i        (Intercept) 605.18   24.600
Residual               0.79    0.889
...

But fit1 looks better by AIC, BIC, and the likelihood-ratio test:

> anova(fit0,fit1)
...
Df  AIC  BIC logLik Chisq Chi Df Pr(>Chisq)
fit0  3  356  363   -175
fit1  4  286  296   -139  71.9      1     <2e-16 ***

So the variance of all the random effects, not only the residuals,
affects the fit(?). I don't know whether this is an artefact of drawing
x ~ N(0, 5).
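
# A quick, hedged sanity check (specific to this simulation, not a
# general identity): forcing the fixed part to have no intercept pushes
# the grand mean into the random intercepts, so fit0's between-group
# variance should be inflated by roughly the squared mean shift seen in
# the ranef() comparison above:
24.6^2   # = 605.16, essentially the 605.18 that fit0 reports for i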

Trying again with the intercept at 0, there is (most of the time!) no
difference between the models:

> df$y <- with(df <- data.frame(i = gl(10, 10), b = rep(0 + rnorm(10),
>   each = 10)), b + rnorm(100))
> df$x <- rnorm(100, 0, 5)
>
> fit1 <- lmer(y ~ 1 + x + (1 | i), df, REML = FALSE)
> fit0 <- lmer(y ~ 0 + x + (1 | i), df, REML = FALSE)
>
> anova(fit0,fit1)
Data: df
Models:
...
Df  AIC  BIC logLik Chisq Chi Df Pr(>Chisq)
fit0  3  299  306   -146
fit1  4  300  311   -146  0.25      1       0.62
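
# For completeness, the model the subject line is really about -- no
# fixed effects at all, i.e. p = 0 -- would be written as below. As Doug
# Bates notes in the quoted message, lmer at the time refused to fit it
# because it forces E[Y] = 0, so the call is left commented out:
# fit00 <- lmer(y ~ 0 + (1 | i), df, REML = FALSE)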

Andy

Jeff Powell wrote:
>
> On Sep 11, 2008, at 10:15 PM, r-sig-mixed-models-request at r-project.org wrote:
>
>
>> Message: 7
>> Date: Thu, 11 Sep 2008 21:15:21 +0100
>> From: Andy Fugard <a.fugard at ed.ac.uk>
>> Subject: Re: [R-sig-ME] models with no fixed effects
>> To: Peter Dixon <peter.dixon at ualberta.ca>
>> Cc: r-sig-mixed-models at r-project.org
>> Message-ID: <48C97C59.6050103 at ed.ac.uk>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>> Peter Dixon wrote:
>>> On Sep 11, 2008, at 1:15 PM, Douglas Bates wrote:
>>>
>>>> I should definitely add a check on p to the validate method.  (In some
>>>> ways I'm surprised that it got as far as mer_finalize before kicking
>>>> an error).  I suppose that p = 0 could be allowed and I could add some
>>>> conditional code in the appropriate places but does it really make
>>>> sense to have p = 0?  The random effects are defined to have mean
>>>> zero.  If you have p = 0 that means that E[Y] = 0.  I would have
>>>> difficulty imagining when I would want to make that restriction.
>>>>
>>>> Let me make this offer - if someone could suggest circumstances in
>>>> which such a model would make sense, I will add the appropriate
>>>> conditional code to allow for p = 0. For the time being I will just
>>>> add a requirement of p > 0 to the validate method.
>>>
>>> I think it would make sense to consider a model in which E[Y] = 0 when
>>> the data are (either explicitly or implicitly) difference scores. (In
>>> fact, I tried to fit such a model with lmer a few months ago and ran
>>> into exactly this problem.)
>>
>> Wouldn't you still need the intercept?  The fixed effect tells you
>> whether on average the difference differs from zero.  The random effect
>> estimates tell you by how much each individual's difference differs from
>> the mean difference.
>
> If you are looking for correlations between difference scores, the model
> needs to be constrained to pass through the origin. Otherwise the fit
> depends on the direction in which the differences are calculated
> (positive/negative scores). I ran into this error earlier this week
> while trying a mixed model to evaluate some phylogenetically independent
> contrasts.
>
> Jeff
>
>
>
> __________________________________
>
> Jeff R. Powell, Ph.D.
> Postdoctoral Fellow
> Freie Universität Berlin
> Institut für Biologie - Ökologie der Pflanzen
> Altensteinstraße 6
> 14195 Berlin
> Germany
>
> Ph: +49 (0)30 838-53145
> Fx: +49 (0)30 838-55434
> skype: jeff-powell
> jeffpowell2 at gmail.com
>
> __________________________________
>

--
Andy Fugard, Postgraduate Research Student
Psychology (Room S6), The University of Edinburgh,
7 George Square, Edinburgh EH8 9JZ, UK
+44 (0)78 123 87190   http://figuraleffect.googlepages.com/

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

```