[R-sig-eco] lme post-hoc help
Nicholas Lewin-Koh
nikko at hailmail.net
Thu Mar 4 17:32:56 CET 2010
Hi Nathan,
I don't use SPSS, so I can't comment on what it is doing,
but if you look at the bottom of the output from multcomp
it says: (Adjusted p values reported -- single-step method)
What that means is that multcomp is adjusting for the fact
that you are doing six comparisons. So a quick and dirty
explanation: in the worst case (Bonferroni), if you were
rejecting at 0.05, you would have to reject at 0.05/6 = 0.0083,
and the p-values are adjusted accordingly. The single-step
adjustment multcomp uses by default is not as severe as that.
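To make the adjustment concrete, here is a minimal illustration using
p.adjust() with made-up p-values (these are not your actual results --
they are just for demonstration):

```r
# Six illustrative unadjusted p-values, one per pairwise comparison
p_raw <- c(0.012, 0.030, 0.040, 0.110, 0.400, 0.700)

# Worst-case (Bonferroni) adjustment: each p is multiplied by the
# number of comparisons (capped at 1)
p.adjust(p_raw, method = "bonferroni")

# So a raw p of 0.012 becomes 0.012 * 6 = 0.072 -- no longer
# significant at the 0.05 level after adjustment
```

The single-step method multcomp uses accounts for the correlation among
the comparisons, so it is less conservative than Bonferroni, but the
basic effect is the same: individually "significant" raw p-values can
fail to survive the multiplicity adjustment.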
Might I ask how many days you have? If you only have a small
number of days, you may not be able to estimate the variance
of the random effects very accurately, and you are better off
putting it in as a fixed effect.
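A sketch of what that would look like, using simulated data in place of
your algae data frame (the column names habitat, day, and grazing are
taken from your model call; the numbers are made up):

```r
set.seed(1)
# Simulated stand-in for the real data: 4 habitats x 3 days x 3 reps
algae <- expand.grid(habitat = c("Complex", "Fake", "Far", "Near"),
                     day     = factor(1:3),
                     rep     = 1:3)
algae$grazing <- rnorm(nrow(algae))

# With only a few days, treat day as a fixed blocking factor in an
# ordinary linear model rather than estimating a random-effect variance
al_lm <- lm(grazing ~ habitat + day, data = algae)
anova(al_lm)
```

With day fixed, you avoid trying to estimate a between-day variance
component from only a handful of levels.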
Lastly, your effect sizes aren't all that big; you may
need to look at your measurement instrument. Are you using
% cover, or some other measure?
Best
Nicholas
> Message: 2
> Date: Wed, 3 Mar 2010 13:04:54 -0600
> From: Nathan Lemoine <lemoine.nathan at gmail.com>
> To: r-sig-ecology at r-project.org
> Subject: [R-sig-eco] lme post-hoc help
> Message-ID: <DE063D70-0155-4DE7-8DE6-569E6AFB9121 at gmail.com>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
> Hi all,
>
> I'm attempting to analyze some data on log-transformed algae grazing
> rates that I've collected in different habitats. When collecting the
> data, I blocked by day to account for temporal variation in grazing
> intensity, and I'm considering DAY as a random factor in my model. As
> such, I've used lme to construct the mixed-effects model as
> follows:
>
> > al_lme <- lme(grazing~habitat, random = ~1|day, data=algae)
>
> The ANOVA summary shows a significant result:
>
> > anova(al_lme)
>
> numDF denDF F-value p-value
> (Intercept) 1 32 174.97322 <.0001
> HAB 3 32 3.31776 0.0321
>
> Yet, when I do a post-hoc comparison, none of the pairwise tests are
> significant:
>
> > pairs <- glht(al_lme, linfct = mcp(habitat = "Tukey"))
> > summary(pairs)
>
> Estimate Std. Error z value Pr(>|z|)
> Fake - Complex == 0 0.2125 0.5390 0.394 0.978
> Far - Complex == 0 1.1937 0.5390 2.215 0.114
> Near - Complex == 0 0.7758 0.3623 2.142 0.134
> Far - Fake == 0 0.9813 0.4437 2.211 0.115
> Near - Fake == 0 0.5633 0.5390 1.045 0.715
> Near - Far == 0 -0.4179 0.5390 -0.775 0.861
> (Adjusted p values reported -- single-step method)
>
> How is this possible? In visually inspecting the data, it is apparent
> that at least the Far-Complex ought to be significant. To be sure, I
> double checked my statistics using SPSS, which is where I'm getting
> more confused.
>
> In SPSS, I built a blocked, general linear model with Loss as the
> dependent, Habitat as the fixed factor, and Day as a random factor. I
> used the default Type III SS because the design was not balanced. SPSS
> also returns a significant effect:
>
> Habitat F = 4.741, denom df = 33, p = 0.015
>
> and the Tukey's HSD post-hoc test returns a significant difference
> between the Far-Complex habitats, like expected. My questions are:
> First, how can I receive a significant result in R and have no
> significant pairwise effects? Second, what are the differences between
> SPSS and R, that SPSS uses a different denominator df to calculate the
> F-statistic? This is probably the reason that the p-value for SPSS is
> lower, but I'm not sure that this is part of the reason for the
> different post-hoc results.
>
> Thanks for any help,
>
> Nate Lemoine
>