[R-sig-ME] models with no fixed effects

Andy Fugard a.fugard at ed.ac.uk
Fri Sep 12 00:57:07 CEST 2008


On 11 Sep 2008, at 22:06, Peter Dixon wrote:

>
> On Sep 11, 2008, at 2:15 PM, Andy Fugard wrote:
>
>> Peter Dixon wrote:
>>> On Sep 11, 2008, at 1:15 PM, Douglas Bates wrote:
>>>> I should definitely add a check on p to the validate method.  (In
>>>> some ways I'm surprised that it got as far as mer_finalize before
>>>> kicking an error).  I suppose that p = 0 could be allowed and I
>>>> could add some conditional code in the appropriate places but does
>>>> it really make sense to have p = 0?  The random effects are
>>>> defined to have mean zero.  If you have p = 0 that means that
>>>> E[Y] = 0.  I would have difficulty imagining when I would want to
>>>> make that restriction.
>>>>
>>>> Let me make this offer - if someone could suggest circumstances in
>>>> which such a model would make sense, I will add the appropriate
>>>> conditional code to allow for p = 0.  For the time being I will
>>>> just add a requirement of p > 0 to the validate method.
>>> I think it would make sense to consider a model in which E[Y] = 0
>>> when the data are (either explicitly or implicitly) difference
>>> scores. (In fact, I tried to fit such a model with lmer a few
>>> months ago and ran into exactly this problem.)
>>
>> Wouldn't you still need the intercept?  The fixed effect tells you
>> whether on average the difference differs from zero.  The random
>> effect estimates tell you by how much each individual's difference
>> differs from the mean difference.
>>
>> A
>>
>
>
> In the context in which this arose, I was interested in assessing the
> evidence for an overall positive difference score (i.e., that E(Y)>0),
> and my strategy was to compare the fit of two models, essentially
> D ~ 0 + (1|Subject) and D ~ 1 + (1|Subject), using AIC values. To get a sensible
> assessment of the evidence for the fixed effect, it seemed to me that
> one would want to have the same random effects in the two models being
> compared. The second model is the more obvious one, but
> D ~ 0 + (1|Subject) could be interpreted as saying that subjects
> differ randomly in their response to the treatment but that there is
> no consistent effect in the population.
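
For reference, in lmer syntax that comparison would be something like
the following.  This is just a sketch: the data frame `dat' with
columns D and Subject is hypothetical, and it assumes a version of
lmer that accepts a model with no fixed effects (the point of this
thread being that the current one does not).  ML fits, since the two
models differ in their fixed effects:

   library(lme4)

   ## no fixed effects: the model forces E[Y] = 0
   m0 <- lmer(D ~ 0 + (1 | Subject), data = dat, REML = FALSE)
   ## fixed intercept a, so E[Y] = a
   m1 <- lmer(D ~ 1 + (1 | Subject), data = dat, REML = FALSE)

   ## same random effects; the models differ only in the intercept
   AIC(m0, m1)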

I /think/ I get this, by analogy with how I use AIC/BIC/LRTs to test
predictors, but I'm still a bit confused.  The two models are:

   y_ij = a + b_j + e_ij     (1)
   y_ij = c_j + e_ij         (2)

Suppose a != 0 in model 1.  Then in model 2:

    c_j = b_j + a.

(Maybe it's not as simple as this!)  But I'm not sure what effect that
would have on the e_ij's, and my intuition says that's what's going to
affect the fit.  Also, I would have thought model 2 would be favoured
by AIC and BIC, since having one fewer parameter incurs a smaller
penalty.
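
One way to probe this would be a quick simulation along these lines (a
sketch only; nsubj, nrep, and a are all made-up values):

   library(lme4)

   set.seed(1)
   nsubj <- 30                    # number of subjects (made up)
   nrep  <- 10                    # observations per subject (made up)
   a     <- 0.5                   # true fixed intercept (made up)
   b     <- rnorm(nsubj)          # subject effects, mean zero
   dat   <- data.frame(Subject = factor(rep(seq_len(nsubj), each = nrep)))
   dat$y <- a + b[as.integer(dat$Subject)] + rnorm(nrow(dat))

   ## ML fits, since the models differ in their fixed effects
   m1 <- lmer(y ~ 1 + (1 | Subject), data = dat, REML = FALSE)  # model 1
   m2 <- lmer(y ~ 0 + (1 | Subject), data = dat, REML = FALSE)  # model 2
   AIC(m1, m2)

If a is well away from zero, I would expect model 2's log-likelihood,
and not just its parameter count, to suffer, since (as Doug points
out) the c_j are defined to have mean zero and so cannot absorb a.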

Andy
