[R-sig-ME] repeated measure in partially crossed design

matteo dossena m.dossena at qmul.ac.uk
Wed Feb 1 11:53:16 CET 2012


Really appreciate it, Ben,

this really makes things clearer now; it seems like (season|subject) could be the appropriate structure.

However, one last doubt still troubles me.

Having (season|subject) fitted as a random effect, does the model take into account the pseudoreplication (repeated measures on subjects)?
If I were doing this analysis with lme(), I would fit one model with the argument correlation = corCompSymm(form = ~ 1 | subject)
and one model without the correlation structure, then compare the two to assess whether or not there is a violation of independence.
Is this a sensible thing to do?
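
For instance, a minimal sketch of that comparison (assuming a data frame "dat" with columns V1, V2, treatment, season and subject; untested):

  library(nlme)
  ## random intercept only: within-subject errors treated as independent
  m0 <- lme(V1 ~ treatment * season + V2, random = ~ 1 | subject, data = dat)
  ## same model plus a compound-symmetry correlation within subject
  m1 <- lme(V1 ~ treatment * season + V2, random = ~ 1 | subject, data = dat,
            correlation = corCompSymm(form = ~ 1 | subject))
  anova(m0, m1)   ## likelihood-ratio comparison of the two error structures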

Since I'm working with lmer(), how can I check whether a correlation structure like this has to be included in the model?
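
Would something like this be the lmer() analogue (just a sketch of what I have in mind, with "dat" as above)?

  library(lme4)
  f0 <- lmer(V1 ~ treatment * season + V2 + (1 | subject), data = dat)
  f1 <- lmer(V1 ~ treatment * season + V2 + (season | subject), data = dat)
  anova(f0, f1)   ## likelihood-ratio comparison of the two random-effects structures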

Cheers
m.

On 1 Feb 2012, at 02:15, Ben Bolker wrote:

> matteo dossena <m.dossena at ...> writes:
> 
>> Dear all,
> 
>> sorry to write again on this topic, but I feel like I haven't made
>> myself clear.  I'll try to rephrase my question; I hope I'm not annoying
>> you.  So, given that each level of season - e.g. April and Oct -
>> occurs at each level of subject, while each level of treatment
>> - e.g. high or control - only occurs in half of the subjects,
>> respectively and randomly, should I specify the random effects in
>> the model as
> 
> If you really want to "... assess[] the effect of treatment, season
> and their interaction on the relationship between the two variables",
> you may want treatment*season*V2 as a fixed effect (so you can tell whether
> the V1~V2 relationship changes with treatment and season).
> 
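>  In formula terms, that might look something like this (a sketch only;
> "dat" is a placeholder for your data frame, and the random term here is
> just illustrative):
> 
>   lmer(V1 ~ treatment * season * V2 + (1 | subject), data = dat)
> 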
>  Having any *factor* included as both a fixed effect and a random
> effect will cause trouble, e.g. in your model (2).  (On the other
> hand, it does sometimes make sense to include a _continuous_ predictor
> as both fixed (which will estimate a linear trend) and random (which
> will consider variation around the linear trend) -- this only makes
> sense if you have multiple measurements per value of the predictor,
> though; see the sketch below.)  Another apparent exception to this is
> subject in the (1|treatment/subject) term, which is only included as
> subject nested within treatment.
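> 
>  For example, a minimal sketch with the continuous covariate V2:
> 
>   V1 ~ treatment * season + V2 + (V2 | subject)
> 
> estimates an overall linear trend in V2 (fixed) plus subject-level
> variation around that trend (a random slope per subject).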
> 
>> (1) having subject nested within treatment and crossed with date, 
>> V1 ~ treatment * season + V2 + (1|treatment/subject) + (1|season)
> 
>  Here both treatment and season are included as both fixed and random --
> probably not a good idea.
> 
>> (2) subject crossed with date ignoring the nesting with treatment,
>> V1 ~ treatment * season + V2 + (1|subject) + (1|season)
> 
>  Still, you probably don't want both season and (1|season).
>> 
> 
>> (3) random effects on subject only ignoring the crossed and nested
>> data structure V1 ~ treatment * season + V2 + (1|subject)
> 
>   This is not unreasonable.  You could consider (season|subject),
> or (1|subject)+(0+season|subject) [which fits the intercept and slope
> independently], since you have both seasons assessed for each individual.
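>  Spelled out as full calls (a sketch; "dat" is a placeholder for your
> data frame):
> 
>   lmer(V1 ~ treatment * season + V2 + (season | subject), data = dat)
>   lmer(V1 ~ treatment * season + V2 + (1 | subject) + (0 + season | subject),
>        data = dat)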
> 
>   This gets raised a lot on this list, but: I would generally only
> drop a random effect from the model if it actually appears overfitted
> (i.e.  estimated as zeros, or a perfect +1/-1 correlation between
> random effects), and not if it is merely non-significant (Hurlbert
> calls this "sacrificial pseudoreplication").  I've been very impressed
> by the results from the blme package, which incorporates a weak
> Bayesian prior to push underdetermined variance components away from
> zero ...
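> 
>  For example (a sketch; blmer() in blme is intended as a drop-in
> replacement for lmer(), so the call mirrors the ones above -- see
> ?blmer for how its default covariance prior is specified):
> 
>   library(blme)
>   blmer(V1 ~ treatment * season + V2 + (season | subject), data = dat)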
> 
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models



