[R-sig-eco] lme post-hoc help

Nicholas Lewin-Koh nikko at hailmail.net
Sat Mar 6 17:28:03 CET 2010


Hi Nathan,
Seeing as no one else has replied: the problem seems to be that you have
no overlap in your blocking :( so the model you would like to fit is not
really well identified. You have Near and Complex in one experiment and
Far and Fake in another. The only way around it would be to recode Far
and Fake as days 1 and 2, which assumes days 4 and 5 behave the same as
days 1 and 2 -- and I wouldn't do that.
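To see the confounding concretely, here is a minimal base-R sketch (habitat and day labels are taken from the design table quoted below; the 4 replicates per cell are assumed):

```r
# Rebuild the habitat-by-day layout described in the thread.
algae_design <- expand.grid(rep = 1:4,
                            habitat = c("Near", "Complex", "Far", "Fake"),
                            day = 1:5)
# Keep only the cells that were actually run.
keep <- with(algae_design,
             (habitat %in% c("Near", "Complex") & day %in% 1:3) |
             (habitat %in% c("Far", "Fake") & day %in% 4:5))
design <- algae_design[keep, ]

# The incidence table shows zero overlap: no day contains habitats from
# both halves of the design, so habitat is confounded with day.
xtabs(~ habitat + day, design)
```

Every nonzero cell sits in either the day 1-3 block or the day 4-5 block, never both, which is why the random day effect cannot be separated from the Near/Complex vs. Far/Fake contrast.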

Nicholas

On Thu, 04 Mar 2010 15:03 -0600, "Nathan Lemoine"
<lemoine.nathan at gmail.com> wrote:
> Thanks for the responses,
> 
> So my design is actually an unbalanced, incomplete randomized block
> design. There are 4 replicates per day over the course of three days
> for the Near and Complex habitats, and 4 replicates per day over the
> course of 2 separate days for the Far and Fake habitats.
> 
> So it looks like this:
> Habitat    Day
> Near        1
> Near        2
> Near        3
> Complex     1
> Complex     2
> Complex     3
> Far         4
> Far         5
> Fake        4
> Fake        5
> 
> With four replicates in each of those categories. The metric I used  
> was the hourly % loss of a tethered set of algae. Basically, I set the  
> tether out, came back an hour later, and quantified the percent loss  
> in terms of mass.
> 
> 
> 
> On Mar 4, 2010, at 10:32 AM, Nicholas Lewin-Koh wrote:
> 
> > Hi Nathan,
> > I don't use SPSS, so I can't comment on what it is doing,
> > but if you look at the bottom of the output from multcomp
> > it says: (Adjusted p values reported -- single-step method).
> > What that means is that multcomp is adjusting for the fact
> > that you are doing six comparisons. So a quick and dirty
> > explanation is that in the worst case (Bonferroni), if you were
> > rejecting at 0.05, you would have to reject at 0.05/6 = 0.0083,
> > and the p-values are adjusted accordingly. The adjustment
> > multcomp uses by default is not as severe as that.
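A quick numeric sketch of that adjustment in base R (the six raw p-values below are made up purely for illustration, not from the algae data):

```r
alpha <- 0.05
m <- 6                          # pairwise comparisons among 4 habitats
bonferroni_cutoff <- alpha / m  # 0.05 / 6 = 0.00833...

# Equivalently, inflate the p-values instead of shrinking the cut-off.
p_raw <- c(0.012, 0.048, 0.007, 0.300, 0.650, 0.900)  # hypothetical raw values
p_adj <- p.adjust(p_raw, method = "bonferroni")       # pmin(p_raw * m, 1)
p_adj
```

Only the 0.007 comparison survives here, since 0.007 * 6 = 0.042 < 0.05; the 0.012 and 0.048 comparisons, nominally significant, do not.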
> >
> > Might I ask how many days you have? If you only have a small
> > number of days, you may not be able to estimate the variance
> > of the random effects very accurately, and you are better off
> > putting it in as a fixed effect.
> >
> > Lastly, your effect sizes aren't all that big; you may
> > need to look at your measurement instrument. Are you using
> > % cover, or some other measure?
> >
> > Best
> > Nicholas
> >
> >> Message: 2
> >> Date: Wed, 3 Mar 2010 13:04:54 -0600
> >> From: Nathan Lemoine <lemoine.nathan at gmail.com>
> >> To: r-sig-ecology at r-project.org
> >> Subject: [R-sig-eco] lme post-hoc help
> >> Message-ID: <DE063D70-0155-4DE7-8DE6-569E6AFB9121 at gmail.com>
> >> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
> >>
> >> Hi all,
> >>
> >> I'm attempting to analyze some data on log-transformed algae grazing
> >> rates that I've collected in different habitats. When collecting the
> >> data, I blocked by day to account for temporal variation in grazing
> >> intensity, and I'm treating day as a random factor in my model. As
> >> such, I've used lme to construct the mixed-effects model as
> >> follows:
> >>
> >>> al_lme <- lme(grazing~habitat, random = ~1|day, data=algae)
> >>
> >> The ANOVA summary shows a significant result:
> >>
> >>> anova(al_lme)
> >>
> >>             numDF denDF   F-value p-value
> >> (Intercept)     1    32 174.97322  <.0001
> >> HAB             3    32   3.31776  0.0321
> >>
> >> Yet, when I do a post-hoc comparison, none of the pairwise tests are
> >> significant:
> >>
> >>> pairs <- glht(al_lme, linfct = mcp("habitat" = "Tukey"))
> >>> summary(pairs)
> >>
> >>                     Estimate Std. Error z value Pr(>|z|)
> >> Fake - Complex == 0   0.2125     0.5390   0.394    0.978
> >> Far - Complex == 0    1.1937     0.5390   2.215    0.114
> >> Near - Complex == 0   0.7758     0.3623   2.142    0.134
> >> Far - Fake == 0       0.9813     0.4437   2.211    0.115
> >> Near - Fake == 0      0.5633     0.5390   1.045    0.715
> >> Near - Far == 0      -0.4179     0.5390  -0.775    0.861
> >> (Adjusted p values reported -- single-step method)
> >>
> >> How is this possible? Visually inspecting the data, it is apparent
> >> that at least the Far-Complex comparison ought to be significant. To
> >> be sure, I double-checked my statistics in SPSS, which is where I'm
> >> getting more confused.
> >>
> >> In SPSS, I built a blocked general linear model with Loss as the
> >> dependent variable, Habitat as a fixed factor, and Day as a random
> >> factor. I used the default Type III SS because the design was not
> >> balanced. SPSS also returns a significant effect:
> >>
> >> Habitat F = 4.741, denom df = 33, p = 0.015
> >>
> >> and Tukey's HSD post-hoc test returns a significant difference
> >> between the Far and Complex habitats, as expected. My questions are:
> >> first, how can I get a significant result in R and yet have no
> >> significant pairwise effects? Second, why does SPSS use a different
> >> denominator df to calculate the F-statistic? This is probably why
> >> the SPSS p-value is lower, but I'm not sure whether it also explains
> >> the different post-hoc results.
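On the first question: a significant omnibus F alongside no significant pairwise comparisons is indeed possible, because the F-test pools evidence across all four groups while each Tukey contrast pays a multiplicity penalty. A self-contained base-R sketch with simulated data (no relation to the algae measurements; whether the gap actually appears depends on the random draw):

```r
set.seed(1)
# Four groups whose means drift apart gradually: the overall spread can
# register on the omnibus F-test even when no single pair separates
# cleanly after family-wise Tukey adjustment.
g <- factor(rep(c("Near", "Complex", "Far", "Fake"), each = 8))
y <- rnorm(32, mean = rep(c(0, 0.4, 0.8, 1.2), each = 8))

fit <- aov(y ~ g)
summary(fit)       # omnibus F-test for the group effect
TukeyHSD(fit)      # all 6 pairwise comparisons, family-wise adjusted
```

With means that increase in small steps like this, the overall trend contributes to the F-statistic, while each individual Tukey interval is widened by the adjustment, so the two answers can legitimately disagree.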
> >>
> >> Thanks for any help,
> >>
> >> Nate Lemoine
> >>
> >>
> >>
> >>
> 
>


