[R-sig-ME] Current recommended method to test fixed-effects?

Geoff Schultz Geoff.Schultz at albertahealthservices.ca
Fri Mar 25 15:56:46 CET 2011


Thank you for the link (and suggestions).
Cheers,
Geoff

-----Original Message-----
From: Ben Bolker [mailto:bbolker at gmail.com]
Sent: March 24, 2011 13:33
To: Geoff Schultz
Cc: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Current recommended method to test fixed-effects?

On 03/24/2011 02:02 PM, Geoff Schultz wrote:
> Hi, I'm hoping that someone can help a person new to the lme4
> package. My analysis should be dead simple, but the more I read,
> the less clear I am on the current recommended approach for
> evaluating the significance of fixed effects. It seems that
> anova(model) in lme4 no longer provides the p values that
> anova(model) on an nlme::lme fit provided, and I'm not clear on
> the reason for this.

<http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-are-p_002dvalues-not-displayed-when-using-lmer_0028_0029_003f>

<http://glmm.wikidot.com/faq> [search within the page for "p values"]

> I'm also under the impression that using a
> model comparison technique leading to a LRT is misleading for
> evaluating fixed effects as well (from Pinheiro and Bates).

 Potentially yes, if the data set is "small" (measured in terms of
(# observations - # parameters), or in terms of the number of
random-effect levels)

  Hummm...
> any clarity would be appreciated.
>
> The dataset is a remarkably simple repeated measures design with one
> unbalanced fixed effect;
>
> SUBJECT  TIME  GRAD  PAIN
> 1        Pre   Yes   9.0
> 1        Post  Yes   5.0
> 2        Pre   No    8.6
> 2        Post  No    6.8
> ...
> 52       Pre   Yes   6.3
> 52       Post  Yes   6.0
>
> (41 'GRAD Yes' SUBJECTS and 11 'GRAD No' SUBJECTS)
>
> Using    model <- lmer(PAIN~1+TIME*GRAD+(1|SUBJECT),data)
>
> Thanks. Geoff

   (1) You could go ahead and run this in lme (
lme(PAIN~TIME*GRAD,random=~1|SUBJECT,data) ) to get denominator degrees
of freedom (which should be reasonably well defined for this simple case)
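For concreteness, option (1) might look like the sketch below. The data frame here is simulated purely to stand in for the real data (same layout: 52 subjects, Pre/Post, 41 vs 11 GRAD); nlme ships with standard R distributions.

```r
library(nlme)  # provides lme()

## Simulated stand-in for the posted data (structure only; values are made up)
set.seed(101)
n <- 52
data <- data.frame(
  SUBJECT = factor(rep(1:n, each = 2)),
  TIME    = factor(rep(c("Pre", "Post"), n), levels = c("Pre", "Post")),
  GRAD    = factor(rep(c(rep("Yes", 41), rep("No", 11)), each = 2)),
  PAIN    = 7 + rnorm(2 * n) + rep(rnorm(n), each = 2)  # subject-level noise
)

fit <- lme(PAIN ~ TIME * GRAD, random = ~ 1 | SUBJECT, data = data)
anova(fit)  # F tests with denominator degrees of freedom filled in
```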

  (2) you could figure out the denominator df yourself: it's something
like 52 (you have 1 df available within each subject for estimating the
among-subject variance, which seems to be the appropriate 'denominator'
for testing GRAD, which is unreplicated within subjects)

  (3) since your denominator df are at least 50, details about the df
are likely to be very unimportant
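Point (3) is easy to check numerically: with ~50 denominator df, the t distribution is already close to the normal, so getting the df slightly wrong barely moves a p-value. The t statistic below is made up purely for illustration.

```r
tstat <- 2.1                               # hypothetical t statistic
p_t50 <- 2 * pt(-abs(tstat), df = 50)      # two-sided p with 50 df
p_t40 <- 2 * pt(-abs(tstat), df = 40)      # ...or 40 df: barely different
p_z   <- 2 * pnorm(-abs(tstat))            # normal (infinite-df) approximation
round(c(t50 = p_t50, t40 = p_t40, z = p_z), 4)
```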

  (4) since you only have two treatments to contrast within each
subject, I think you could just calculate "pre-post"

library(plyr)
newdata <- ddply(data, "SUBJECT", function(x) {
  with(x, data.frame(GRAD = GRAD[1],  # one value per subject
                     PAINdiff = PAIN[TIME == "Post"] - PAIN[TIME == "Pre"],
                     PAINmean = mean(PAIN)))
})

then a t-test of PAINdiff against a null hypothesis of PAINdiff == 0 is
your main effect of TIME; a t-test of PAINdiff for GRAD=="Yes" vs
GRAD=="No" is your interaction between TIME and GRAD; and a t-test of
PAINmean by GRAD is your main effect of GRAD ...





