[R-sig-ME] Interpretation of lmer output in R

Douglas Bates bates at stat.wisc.edu
Wed Feb 23 19:20:44 CET 2011


On Wed, Feb 23, 2011 at 6:17 AM, Julia Sommerfeld
<Julia.Sommerfeld at utas.edu.au> wrote:
> Dear Douglas and list,

> Again, thanks for the thorough answers.

> Regarding my last email: I'm mixing up my own data! Sorry for that.

> 1. plogis(x)

> SameSite=1 means that birds did use the SAME nest site, hence: they show
> site-fidelity.

> SameSite=0 means that birds did CHANGE nest site, hence: no site-fidelity.

> This means that site-fidelity of "bird" who failed breeding corresponds to a
> probability of 44% and that bird who was successful corresponds to a
> probability of 72% (see below).

Yes.  When you code the response as 0 and 1 then the model provides
the probability that the response will be a 1.

> Thus, with "plogis" of the Intercept and/or Intercept+Fixed Effect Estimate
> I can calculate the probability of my observed event (R-help: plogis gives
> the distribution function; being the inverse logit-function)?

Yes.  It happens that the logit function (see
http://en.wikipedia.org/wiki/Logit) is the quantile function for the
standard logistic distribution (the R function qlogis) and its inverse
is the cumulative probability function for the standard logistic (the
R function plogis).  Because I know that the qlogis and plogis
functions are carefully written to handle difficult cases gracefully I
use them instead of writing out the expressions explicitly.
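For example, in base R (no extra packages needed):

```r
## qlogis() is the logit (the quantile function of the standard logistic);
## plogis() is its inverse (the cumulative distribution function).
p <- 0.72
x <- qlogis(p)   # log-odds corresponding to a probability of 0.72
plogis(x)        # recovers 0.72

## One of the "difficult cases": far out in the tail, plogis() still gives
## an accurate result on the log scale, where the hand-written expression
## log(1 / (1 + exp(800))) collapses to log(0) = -Inf.
plogis(-800, log.p = TRUE)   # approximately -800
```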

> That is: plogis of the intercept estimate is based upon BreedSuc1=0,
> SameSite=1 and plogis of the intercept+BreedSuc1 Estimate is based upon
> BreedSuc1=1, SameSite=1?

Yes.  Another way of looking at it would be to extract the model
matrix for your fitted model, using the function model.matrix.  You
will see that the first column is all 1's and the second column will
be 0's and 1's according to that bird's breeding success.  Technically
the fixed-effects part of the predicted log-odds for a particular bird
is (Intercept) * first_column + BreedSuc1 * second_column but, because
every row is either (1, 0) or (1, 1), the only possible values of the
predicted log-odds are (Intercept) and (Intercept) + BreedSuc1.
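A toy illustration (the data frame here is invented; only the coefficient
values -0.2135 and 1.1831 are taken from the fit quoted later in this
thread):

```r
## Hypothetical data frame with a 0/1 breeding-success factor
toy <- data.frame(BreedSuc1 = factor(c(0, 1, 1, 0)))

X <- model.matrix(~ BreedSuc1, data = toy)
X   # first column all 1's, second column 0/1 according to BreedSuc1

## Fixed-effects part of the predicted log-odds, X %*% beta
beta <- c(-0.2135, 1.1831)
drop(X %*% beta)   # every value is -0.2135 or -0.2135 + 1.1831 = 0.9696
```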

> And the "result" will always depend upon my coding (BreedSuc1=1 means
> successful, SameSite=1 means site-fidelity). This is still a bit
> confusing...

I would consider the two factors separately.  Regarding the SameSite
encoding, just think of it this way: you are eventually modeling the
probability of an "event" for which there are two possible outcomes.
You can call them Failure/Success or Off/On or No/Yes or whatever you
want but when you get to a numerical coding as 0/1 the model will be
for the probability of getting a 1.

As for the breeding success factor, you again need to characterize it
according to some numerical encoding to be able to create the model
matrix.  You happened to choose a 0/1 encoding so in the model the
coefficient for that term is added to the intercept when there is a 1
for that factor and not added when there is a 0.

> My question was:
> What if both codings change? I.e. BreedSuc1=0 means successful and
> SameSite=0 means site-fidelity? Would it then still be  1-p instead of p?

If you change the encoding of the response, then your coefficients will
all flip signs.  What was previously a log-odds of site fidelity of,
say, 1.2 will now become a log-odds of site switching of -1.2.  You
can check that

plogis(-x) = 1 - plogis(x)
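For example:

```r
## The complement identity for the inverse logit, checked at one value
## (it holds for any x)
x <- 1.2
plogis(-x)        # probability of the re-coded "1" (e.g. site switching)
1 - plogis(x)     # complement of the original probability
all.equal(plogis(-x), 1 - plogis(x))   # TRUE
```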

If you change the encoding of BreedSuc then the (Intercept)
coefficient will be the log-odds of a '1' outcome (whether that means
site fidelity or site switching) for a bird who was successful and
(Intercept) + BreedFail1 will be the log-odds of a '1' outcome for a
bird whose breeding failed.
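Using the estimates quoted later in this thread (-0.2135 and 1.1831) as a
numerical check (the BreedFail re-coding is hypothetical):

```r
b0 <- -0.2135   # (Intercept) under the BreedSuc coding (0 = failure)
b1 <-  1.1831   # BreedSuc1 coefficient

## Probabilities of SameSite = 1 under the original coding
plogis(b0)        # unsuccessful bird: about 0.447
plogis(b0 + b1)   # successful bird:   about 0.725

## Re-coded as BreedFail (0 = success, 1 = failure): the intercept becomes
## the log-odds for a successful bird and the coefficient flips sign, but
## the fitted probabilities are unchanged.
b0f <- b0 + b1    # new (Intercept)
b1f <- -b1        # BreedFail1 coefficient
plogis(b0f)         # successful bird:   still about 0.725
plogis(b0f + b1f)   # unsuccessful bird: still about 0.447
```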

I think part of the confusion for you is that you are trying to
combine the encoding of site-fidelity and breeding success.  It is
best to keep them distinct because site-fidelity is the response and
breeding success is a covariate.

> 2. Z-value:
>
> I assume that z-value is the same as z-score? I've found the following link
> (http://resources.esri.com/help/9.3/arcgisdesktop/com/gp_toolref/spatial_statistics_toolbox/what_is_a_z_score_what_is_a_p_value.htm).
>

> There, the z-score is defined as:
>
> "The Z score is a test of statistical significance that helps you decide
> whether or not to reject the null hypothesis".
>
> "Z scores are measures of standard deviation. For example, if a tool returns
> a Z score of +2.5 it is interpreted as "+2.5 standard deviations away from
> the mean". P-values are probabilities. Both statistics are associated with
> the standard normal distribution. This distribution relates standard
> deviations with probabilities and allows significance and confidence to be
> attached to Z scores and p-values".

> Is this correct?

Well, not really.  The logic behind statistical hypothesis testing is
subtle and frequently misunderstood.  When you try to simplify the
explanation, as is done above, it often ends up not quite accurate.

The way I present it to students is that we have two competing claims,
which for us will boil down to a simpler model being adequate (called
the "null hypothesis" as in "nothing going on here, folks") or the
simpler model is not adequate and we need to extend it.  The second
claim is called the alternative hypothesis.  In your case the null
hypothesis is that breeding success doesn't influence the probability
of site fidelity and the alternative hypothesis is that it does.  You
want to establish the alternative hypothesis but, because of the
variability in the data, you can't "prove" it directly.  You have to
use an indirect method, which is to assume that it is not the case
(i.e. the null hypothesis is true) and demonstrate that it is unlikely
to obtain the data that you did, or something even more unusual, if
the null hypothesis were true.

So that is the subtlety.  You can't prove your point directly so you
set up the "straw man" argument (the null hypothesis) and try to
refute it.  But nothing is certain.  You can't prove that it is
impossible to get the data that you did if breeding success did not
influence site fidelity.  The best you can do is say it is extremely
unlikely.

Now we need to formalize this argument by settling on a "test
statistic", which is a quantity that can be calculated from the
observed data.  Along with the test statistic we need a reference
distribution which is the distribution of the test statistic when the
null hypothesis is true.  Formally, we are supposed to set up all the
rules about the test statistic and the reference distribution before
we see the data - sort of like the "pistols at dawn, march ten paces
then turn and fire" rules of a duel.  In practice we sometimes look at
the data before we decide exactly what hypotheses we will test.  This
is the "data snooping" that Ben wrote about.

Anyway, when we have set up the rules and we have the data we evaluate
the test statistic and then determine the probability of seeing that
value or something even more extreme when the null hypothesis is true.
 This is what we call the p-value.  If that probability is very small
then either we encountered a rare case or the null hypothesis is
false.  That's why you, as the researcher, want the p-value to be
small, when you are trying to establish the alternative.

(As an aside, this is where most people lose track of the argument.
They think the p-value is related to the probability of one of the
hypotheses being true.  That is not the case.  The p-value is the
probability of seeing the data that we did, or something more unusual,
if the null hypothesis - the "no change" assumption - were true.)
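That distinction can be seen in a small simulation (invented data, and
plain glm() rather than glmer() so it runs in base R alone): when the
null hypothesis really is true, small p-values occur only at their
nominal rate.

```r
set.seed(42)
pvals <- replicate(2000, {
  y <- rbinom(54, 1, 0.5)   # response generated with NO covariate effect
  x <- rbinom(54, 1, 0.5)   # a covariate unrelated to y
  f <- glm(y ~ x, family = binomial)   # model with the (useless) term
  r <- glm(y ~ 1, family = binomial)   # null model
  pchisq(deviance(r) - deviance(f), df = 1, lower.tail = FALSE)
})
mean(pvals < 0.05)   # close to 0.05: a small p-value is rare under the null
```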

Now we get to the point of how we calculate the p-value.  We need
to come up with a way of characterizing "unusual" data.  What I prefer
to do is to see how good the fit is without allowing for differences
due to breeding success and how good it is when allowing for
differences due to breeding success.  The measure of "how good is the
fit" is called the deviance.  So we fit the model with the BreedSuc1
term and without it and compare the deviance values.  The reference
distribution for this difference is generally taken to be a
chi-squared distribution.  Technically all we can say is that when the
sample sizes get very large, the difference in the deviance values
tends to a chi-squared distribution.  But I would claim that it is a
reasonable way of judging the difference in quality of fit for finite
samples too.
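A self-contained sketch of that comparison, using invented data and plain
glm() (with a mixed model the calls would be glmer() from lme4 and
anova(), used in exactly the same way):

```r
set.seed(1)
success  <- rbinom(54, 1, 0.5)                           # invented covariate
fidelity <- rbinom(54, 1, plogis(-0.2 + 1.2 * success))  # invented response

full    <- glm(fidelity ~ success, family = binomial)
reduced <- glm(fidelity ~ 1,       family = binomial)

## Difference in deviance = likelihood ratio test statistic
lrt <- deviance(reduced) - deviance(full)
pchisq(lrt, df = 1, lower.tail = FALSE)   # p-value from chi-squared(1)

## anova() performs the same comparison
anova(reduced, full, test = "Chisq")
```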

An alternative approach to judging if a coefficient is "significantly
different" from zero is to take the value of the coefficient and
divide by its standard error.   If everything is behaving well, this
ratio would have a standard Gaussian (or "normal") distribution when
the null hypothesis is true.  We write a standard normal random
variable as Z so this ratio is called a z-statistic.

For me the problem with using a z-statistic is that you are comparing
two models (with and without the term of interest) by fitting only one
of them and then extrapolating to decide what the other fit should be
like.  This was a necessary short-cut when fitting a model might
involve weeks of hand calculation.  But if fitting a model involves
only a few seconds of computer time, why not fit both and do a
comparison of the actual difference in the quality of the fits?

So, why should we even bother printing the z-statistics?  I consider
them to be a guide but if I actually want to do the hypothesis test
involving a simple model versus a more complex model I fit both
models.
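A quick numerical illustration of the relationship (again invented data
and glm() instead of glmer(), to keep it self-contained):

```r
set.seed(1)
success  <- rbinom(54, 1, 0.5)
fidelity <- rbinom(54, 1, plogis(-0.2 + 1.2 * success))

full    <- glm(fidelity ~ success, family = binomial)
reduced <- glm(fidelity ~ 1,       family = binomial)

## Wald z-statistic: estimate divided by its standard error
z <- coef(summary(full))["success", "z value"]

## Likelihood ratio statistic: drop in deviance between the two fits
lrt <- deviance(reduced) - deviance(full)

c(z_squared = z^2, LRT = lrt)   # close, but not identical
```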

I hope this is more illuminating than confusing.

>>
>> #site-fidelity (SameSite=1)  of "bird"
>> who was unsuccessful in breeding in the previous season (BreedSuc1=0),
>> corresponds to a probability of 44%:
>> plogis(-0.2135)
>> [1] 0.4468268
>
> Perhaps I misunderstood your original posting.  I thought that
> SameSite=0 meant that the bird did not return to the same nest site.
> That is, site fidelity corresponds to SameSite = 1.
>
> In any case the probabilities constructed as you have done are the
> probabilities for SameSite = 1.
>
>> Further, site-fidelity (SameSite=0)  of "bird" who was successful in
>> breeding in the previous season (BreedSuc1=1), corresponds to a
>> probability
>> of 72%:
>> plogis(-0.2135 + 1.1831)
>> [1] 0.7250398
>>
>>
>> Is my interpretation correct?
>
>
>> So this output is always based on SameSite=0?
>
>> (SameSite=0 means that birds did NOT change nest site, i.e. they show high
>> site-fidelity). But what if SameSite=1 would mean high site-fidelity (no
>> change of nests?)?
>
> Then the 0's and 1's would be reversed for that response variable and
> your probabilities would be the complement (i.e. 1-p instead of p) of
> those calculated above.
>
>
>
>
> 2011/2/22 Douglas Bates <bates at stat.wisc.edu>
>>
>> On Mon, Feb 21, 2011 at 8:26 AM, Julia Sommerfeld
>> <Julia.Sommerfeld at utas.edu.au> wrote:
>> > Dear Douglas and list member,
>> >
>> > Thank you heaps for your answers. The interpretation of the summary
>> > output
>> > (lmer) is becoming much clearer now. I have to admit I had a slightly
>> > (not
>> > to say HUGE) different idea of the summary output.
>> >
>> > But a few questions still remain...
>> >
>> > I have tried out the suggestions with the following results:
>> >
>> > 1. I tested if "Sex" is an important factor in the model:
>> >
>> > fm<-lmer(SameSite~BreedSuc1+Sex+(1|Bird), family="binomial")
>> > fm1<-lmer(SameSite~BreedSuc1+(1|Bird), family="binomial")
>> > anova(fm1,fm)
>> >
>> > Data:
>> > Models:
>> > fm1: SameSite ~ BreedSuc1 + (1 | Bird)
>> > fm: SameSite ~ BreedSuc1 + Sex + (1 | Bird)
>> >     Df    AIC    BIC  logLik  Chisq Chi Df Pr(>Chisq)
>> > fm1  3 75.518 81.485 -34.759
>> > fm   4 77.379 85.335 -34.690 0.1387      1     0.7096
>>
>> So I would conclude that even though sex was recorded it turns out
>> that it is not a significant predictor of the probability of site
>> fidelity and I would omit that term from the model.  As Ben described,
>> he would not be in favor of this approach because it verges on "data
>> snooping".  One can argue either way and, in this case it wouldn't
>> make much difference in the final conclusion whether or not the Sex
>> term is included.
>>
>> > 2. Since Sex is not "important",  I compared fm1 with fm2 to test if
>> > BreedSuc1 is an important factor:
>> >
>> > fm2<-lmer(SameSite ~ 1 + (1|Bird), family="binomial")
>> > anova(fm2,fm1)
>> >
>> > Data:
>> > Models:
>> > fm2: SameSite ~ 1 + (1 | Bird)
>> > fm1: SameSite ~ BreedSuc1 + (1 | Bird)
>> >         Df    AIC        BIC      logLik      Chisq     Chi Df
>> >  Pr(>Chisq)
>> > fm2  2     77.617     81.595     -36.808
>> > fm1  3     75.518      81.485     -34.759     4.0991          1
>> >  0.04291 *
>>
>> So if I needed to quote a p-value for the BreedSuc factor this is what
>> I would quote.
>>
>> > BUT: Ben (thanks for the input) disagreed here:
>> > "I'm afraid I have to disagree with Doug here. This kind of model
>> > reduction,
>> > while seemingly sensible (and not as abusive as large-scale, automated
>> > stepwise regression), is a mild form of data snooping. Don't do it ..."
>> >
>> > If I don't do it, what could be an alternative? Simply rely on the
>> > output of
>> > fm1 without looking at fm2?
>>
>> The alternative is to fit a model of the form
>>
>> fm2a <- lmer(SameSite ~ 1 + Sex + (1|Bird), family="binomial")
>>
>> and compare it to the original model, fm, using
>>
>> anova(fm2a, fm)
>>
>> The general idea of testing whether BreedSuc makes a significant
>> contribution to predicting the probability of site fidelity is to fit
>> a model with the term and then fit the model without the term and
>> compare the quality of the fits.  To me the most sensible way to
>> compare the quality of the fits is to consider the likelihood ratio.
>> The model with the term will always do better than the one without the
>> term - the question is, "Is it significantly better?".  One way to
>> answer that question is to convert the likelihood ratio test (LRT)
>> statistic to a probability or p-value using the result that, under the
>> null hypothesis (that the term does not make a significant
>> contribution) the LRT statistic has a chi-squared distribution with 1
>> degree of freedom.  One can set up other criteria; for example AIC
>> penalizes each parameter as 2 units on the deviance scale (negative
>> twice the log-likelihood).  BIC is a bit more complicated in that the
>> number of units of penalty per parameter on the deviance scale depends
>> on the number of observations in the data set.
>>
>> I would claim that the LRT statistic is always a good way of
>> evaluating the difference in the quality of fit for two models - it is
>> how you convert it to a p-value that is not clear when you have small
>> sample sizes.
>>
>> The difference between what I would advise and what Ben would advise
>> regarding the LRT is what the null and alternative models are.  I
>> would remove the Sex term from both.  He would retain the Sex term in
>> both.  This will result in slightly different conclusions.
>>
>> This, by the way, emphasizes the point that a test statistic and its
>> corresponding p-value is not a property of the BreedSuc term.  It
>> results from comparing the quality of fits of two models - one with
>> the term and one without.  When we quote t- or z-statistics, and
>> p-values, in a coefficients table we are providing a summary of many
>> different types of tests simultaneously.  Unfortunately the conclusion
>> that is often drawn from the table is that the p-value is a property
>> of the term itself, which is wrong.
>>
>>
>> > 3.  From the output above, I conclude that BreedSuc1 has an effect on
>> > SameSite:
>> >
>> > summary(fm1)
>> >
>> > Generalized linear mixed model fit by the Laplace approximation
>> > Formula: SameSite ~ BreedSuc1 + (1 | Bird)
>> >    AIC       BIC         logLik     deviance
>> >  75.52     81.48     -34.76    69.52
>> > Random effects:
>> >  Groups Name            Variance     Std.Dev.
>> >  Bird   (Intercept)         0.12332      0.35117
>> > Number of obs: 54, groups: Bird, 46
>> >
>> > Fixed effects:
>> >                     Estimate     Std. Error     z value     Pr(>|z|)
>> > (Intercept)      -0.2135       0.3794          -0.563        0.5736
>> > BreedSuc11    1.1831         0.5921           1.998        0.0457 *
>> > ---
>> > Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>> >
>> > Correlation of Fixed Effects:
>> >            (Intr)
>> > BreedSuc11 -0.638
>> >
>> >
>> > Now, again the interpretation of the Fixed effects:
>> >
>> > BreedSuc1 has a significant effect on SameSite.
>> >
>> > #site-fidelity (SameSite=0)  of "bird" (bird(!), since I dropped
>> > "Sex"?)
>> > who was unsuccessful in breeding in the previous season (BreedSuc1=0),
>> > corresponds to a probability of 44%:
>> > plogis(-0.2135)
>> > [1] 0.4468268
>>
>> Perhaps I misunderstood your original posting.  I thought that
>> SameSite=0 meant that the bird did not return to the same nest site.
>> That is, site fidelity corresponds to SameSite = 1.
>>
>> In any case the probabilities constructed as you have done are the
>> probabilities for SameSite = 1.
>>
>> > Further, site-fidelity (SameSite=0)  of "bird" who was successful in
>> > breeding in the previous season (BreedSuc1=1), corresponds to a
>> > probability
>> > of 72%:
>> > plogis(-0.2135 + 1.1831)
>> > [1] 0.7250398
>> >
>> >
>> > Is my interpretation correct?
>>
>>
>> > So this output is always based on SameSite=0?
>>
>> > (SameSite=0 means that birds did NOT change nest site, i.e. they show
>> > high
>> > site-fidelity). But what if SameSite=1 would mean high site-fidelity (no
>> > change of nests?)?
>>
>> Then the 0's and 1's would be reversed for that response variable and
>> your probabilities would be the complement (i.e. 1-p instead of p) of
>> those calculated above.
>>
>> > 4. if I don't drop the term "Sex":
>> >
>> > summary(fm)
>> >
>> > Generalized linear mixed model fit by the Laplace approximation
>> > Formula: SameSite ~ BreedSuc1 + Sex + (1 | Bird)
>> >    AIC   BIC logLik deviance
>> >  77.38 85.34 -34.69    69.38
>> > Random effects:
>> >  Groups Name        Variance     Std.Dev.
>> >  Bird   (Intercept)     0.14080       0.37524
>> > Number of obs: 54, groups: Bird, 46
>> >
>> > Fixed effects:
>> >                     Estimate         Std. Error     z value     Pr(>|z|)
>> > (Intercept)     -0.3294         0.4890         -0.674     0.5006
>> > BreedSuc11   1.1988          0.5957        2.012      0.0442 *
>> > Sex    M             0.2215           0.5877           0.377      0.7062
>> > ---
>> > Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>> >
>> > Correlation of Fixed Effects:
>> >            (Intr)             BrdS11
>> > BreedSuc11     -0.536
>> > SexM           -0.628      0.065
>> >
>> >
>> > site fidelity for a female bird (i.e. SexM is 0) who was unsuccessful in
>> > breeding the previous season (i.e.
>> > BreedSuc1 is 0) is -0.3294, corresponding to a probability of about 42%
>> >> plogis(-0.3294)
>> > [1] 0.4183866
>> >
>> > The log-odds of site fidelity for a female bird who was successful in
>> > breeding is -0.3294 + 1.1988, corresponding to a probability of about
>> > 70%
>> >> plogis(-0.3294 + 1.1988)
>> > [1] 0.7046208
>> >
>> >
>> > 5. The z-value: Sorry, I still have some trouble with this value....
>> >
>> > In model fm (without "Sex") the z-value of BreedSuc1 corresponds to
>> > 1.998.
>> >
>> > In model fm1("Sex" included) the z-value corresponds to 2.012.
>> >
>> > Nearly the same value in both models. But what can someone conclude
>> > from: p<
>> > 0.05, z=1.998 ??? Because this is what many people write in their result
>> > section (I was told to do so the same....).
>> >
>> > "You should see a LRT test statistic close to, but not exactly the same
>> > as,
>> > the square of the z value when you compare the models with and without
>> > that
>> > term.
>> > This is the sense in which the z-value is an approximation".
>> >
>> > I don't really understand how the z-value can be seen as an
>> > approximation?
>> > Am I missing some background knowledge here?
>>
>> I may have been too terse in my explanations.  As mentioned above, I
>> would claim that the LRT statistic is a reasonable way to compare the
>> fits of two models, because it is based upon fitting the model with
>> and without the term of interest.  In the case of a linear model
>> without random effects it is not necessary to fit the model without
>> the term just to discover what the LRT statistic would be.  You can
>> tell from the model fit with the term what the maximum value for the
>> likelihood of any sub-model will be.
>>
>> In the case of a linear mixed model or a generalized linear mixed
>> model you can't decide on the basis of the one model fit what the
>> likelihood for the other will be.  You can approximate but you
>> don't get an exact value.  When it took a very long time to do a model
>> fit we just used the approximation.  Now that these fits can be done
>> much more quickly, it makes sense to fit both with and without.
>>
>> The nature of the approximation is to take the parameter estimate for
>> BreedSuc and divide it by its approximate standard error.  We call
>> this the z-statistic because, when everything is working properly,
>> this should have a distribution close to a standard normal, which we
>> often write as Z.  The LRT statistic is the difference in the deviance
>> of the model without and the model with the term.  To me, that is the
>> quantity of interest and the fact that it should be approximately the
>> square of the z-statistic is helpful in making rough decisions but I
>> still want to calculate the difference in the deviance before making a
>> final decision.
>>
>> In the summary of fm1 the z-statistic is 1.998 whereas the LRT
>> statistic comparing fm2 to fm1 is 4.0991.  The square of the
>> z-statistic will be close to, but not exactly the same as, the LRT
>> statistic.
>>
>> > Again, thanks heaps for your help!!!!
>>
>> You're welcome.  Thanks for the question.
>>
>> > 2011/2/19 Douglas Bates <bates at stat.wisc.edu>
>> >>
>> >> Thank you for your questions and for transferring the discussion to
>> >> the R-SIG-Mixed-Models mailing list, as we had discussed.  I have also
>> >> copied the mailing list for a class on mixed-effects models that I am
>> >> teaching.
>> >>
>> >> I particularly appreciate your desire to learn about the model instead
>> >> of just quoting a p-value.  I often lament to my classes that
>> >> statisticians have been far too successful in propagating the idea of
>> >> p-values, to the extent that some researchers believe that is all that
>> >> is needed to learn about an analysis.
>> >>
>> >> On Sat, Feb 19, 2011 at 3:05 AM, Julia Sommerfeld
>> >> <Julia.Sommerfeld at utas.edu.au> wrote:
>> >> > Dear Douglas and list members,
>> >> >
>> >> > Apologies in advance if you might consider my questions as too simple
>> >> > to
>> >> > be asking the godfather of lme4 for an answer...thus, please feel
>> >> > free
>> >> > to
>> >> > ignore my email or to forward it to someone else.
>> >> >
>> >> > I'm a PhD student (Australia/Germany) working on tropical seabirds.
>> >> > As
>> >> > many of my PhD-collegues, I'm having some difficulties with the
>> >> > analysis
>> >> > of my data using lmer (family=binomial). While some say: What do you
>> >> > care
>> >> >
>> >> > about all the other values as long as you've got a p-value... I do
>> >> > believe
>> >> > that it is essential to understand WHAT I'm doing here and WHAT all
>> >> > these
>> >> > numbers/values mean.
>> >> >
>> >> > I've read the Chapters (lme4 Book Chapters) and publications about
>> >> > the
>> >> > use
>> >> > of lmer and searched the forums - but I don't find a satisfying
>> >> > answer.
>> >> > And I have the feeling that 1. the statistic lecture at my university
>> >> > was
>> >> > a joke (sad to say this) 2. that I need a huge
>> >> > statistical/mathematical
>> >> > background to fully understand GLMMs.
>> >> >
>> >> >
>> >> > One of the question I would like to answer is:
>> >> > Does the previous breeding success influence nest site fidelity?
>> >> >
>> >> > I have binomial data:
>> >> > SameSite=1 means birds use the same site
>> >> >
>> >> > SameSite=0 means birds change nest site
>> >> >
>> >> > BreedSuc1=1 Birds were successful in previous breeding season
>> >> > BreedSuc1=0 Birds were not successful "     "        "
>> >> >
>> >> > Sex= male, female
>> >> > Bird= Bird ID
>> >> >
>> >> > This is my model:
>> >>
>> >> > fm<-lmer(SameSite~BreedSuc1+Sex+(1|Bird), family="binomial")
>> >>
>> >> > where Bird is my random factor (same birds were sampled more than
>> >> > once)
>> >>
>> >> One thing to note is that there are 46 different birds in the 54
>> >> observations.  Most birds will have just one observation so a random
>> >> effect for bird may not be necessary.
>> >>
>> >> > summary(fm)
>> >> >
>> >> > Generalized linear mixed model fit by the Laplace approximation
>> >> >
>> >> > Formula: SameSite ~ BreedSuc1 + Sex + (1 | Bird)
>> >> >   AIC   BIC logLik deviance
>> >> >  77.38 85.34 -34.69    69.38
>> >> > Random effects:
>> >> >  Groups Name        Variance Std.Dev.
>> >> >  Bird   (Intercept) 0.14080  0.37524
>> >> > Number of obs: 54, groups: Bird, 46
>> >> >
>> >> > Fixed effects:
>> >> >            Estimate Std. Error z value Pr(>|z|)
>> >> > (Intercept)  -0.3294     0.4890  -0.674   0.5006
>> >> > BreedSuc11    1.1988     0.5957   2.012   0.0442 *
>> >> > SexM          0.2215     0.5877   0.377   0.7062
>> >>
>> >> this suggests that sex is not an important factor in the model.  The
>> >> (Intercept) term is close to zero, relative to its standard error, but
>> >> we would retain it in the model as explained below.
>> >>
>> >> > ---
>> >> > Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>> >> >
>> >> > Correlation of Fixed Effects:
>> >> >           (Intr) BrdS11
>> >> > BreedSuc11 -0.536
>> >> > SexM       -0.628  0.065
>> >> >
>> >> >
>> >> > From this summary output I do understand that the Breeding Success
>> >> > has a
>> >> > significant effect on nest-site fidelity (p<0.05).
>> >>
>> >> Yes, but ... this p-value should be used as a guide only.  As
>> >> described below a p-value must be viewed in context.  It is not a
>> >> property of the Breeding Success factor; it comes from a comparison of
>> >> two models and we should bear in mind what these models are before
>> >> interpreting this number.
>> >>
>> >> The interpretation of a p-value for a particular coefficient is that
>> >> it is an approximation to the p-value we would get from comparing the
>> >> model that has been fit to the model fit without this particular
>> >> coefficient.  In this case the coefficient corresponds to one of the
>> >> terms in the model and I would advocate performing a likelihood ratio
>> >> test comparing the two models
>> >>
>> >> fm <- glmer(SameSite~BreedSuc1+Sex+(1|Bird), family="binomial")
>> >> fm0 <- glmer(SameSite~Sex+(1|Bird), family="binomial")  # the null
>> >> hypothesis model
>> >> anova(fm0, fm)
>> >>
>> >> Even though the function is called anova it will, in this case,
>> >> perform a likelihood ratio test (LRT).  It also prints the values of
>> >> AIC and BIC if you prefer to compare models according to one of those
>> >> criteria but I prefer using the likelihood ratio for nested models.
>> >>
>> >> However, before doing that comparison you should ask yourself whether
>> >> you want to compare models that have the, apparently unnecessary term
>> >> for Sex in them.  The way I would approach the model building is first
>> >> to reduce the model to
>> >>
>> >> fm1 <- lmer(SameSite~BreedSuc1+(1|Bird), family="binomial")
>> >>
>> >> You could then compare
>> >>
>> >> anova(fm1, fm)
>> >>
>> >> which I presume will give a large p-value for the LRT, so we prefer
>> >> the simpler model, fm1.  After that, I would compare
>> >>
>> >> fm2 <- lmer(SameSite ~ 1 + (1|Bird), family="binomial")
>> >> anova(fm2, fm1)
>> >>
>> >> to see if the BreedSuc1 factor is an important predictor in its own
>> >> right.
>> >>
>> >> Note that we don't drop the implicit "(Intercept)" term, even though
>> >> it has a high p-value in the coefficient table.  The reason is that
>> >> the interpretation of the (Intercept) coefficient depends on the
>> >> coding of BreedSuc1.
>> >>
>> >> In model fm, the log-odds of site fidelity for a female bird (i.e.
>> >> SexM is 0) who was unsuccessful in breeding the previous season (i.e.
>> >> BreedSuc1 is 0) is -0.3294, corresponding to a probability of about
>> >> 42%
>> >>
>> >> > plogis(-0.3294)
>> >> [1] 0.4183866
>> >>
>> >> The log-odds of site fidelity for a female bird who was successful in
>> >> breeding is -0.3294 + 1.1988, corresponding to a probability of about
>> >> 70%
>> >>
>> >> > plogis(-0.3294 + 1.1988)
>> >> [1] 0.7046208
>> >>
>> >> If you had reversed the meaning of BreedSuc to BreedFail, where 0
>> >> indicates no failure at breeding and 1 indicates failure, then the
>> >> coefficient would change sign (i.e. the coefficient for BreedFail
>> >> would be -1.1988) and the intercept would change to
>> >>
>> >> > -0.3294 + 1.1988
>> >> [1] 0.8694
>> >>
>> >> because the reference level would now be a female bird who was
>> >> successful in breeding.
>> >>
>> >> Because the interpretation of the intercept depends upon the coding of
>> >> other factors, we retain it in the model whenever other terms are
>> >> retained.
>> >>
>> >>
>> >>
>> >> > But what else can I conclude from this model?
>> >> >
>> >> > Questions:
>> >> >
>> >> > 1.Random effects: What does the Random Effect table - the Variance,
>> >> > Std.
>> >> > Dev. and Intercept - tells me: Is there a random effect that my model
>> >> > has
>> >> > to account for?
>> >>
>> >> First I would remove the apparently unnecessary Sex term then,
>> >> ideally, I would check by comparing the fit of the reduced model to
>> >> that of a GLM without the random effect for Bird.  Unfortunately, I
>> >> don't think the definition of deviance for a glm fit is compatible
>> >> with that for a model fit by glmer.  This is something we will need to
>> >> fix.  For the time being I would instead examine the "caterpillar
>> >> plot" obtained with
>> >>
>> >> dotplot(ranef(fm1, postVar=TRUE))
>> >>
>> >> which represent the 95% prediction intervals for each of the birds.
>> >> If these all overlap zero comfortably I would conclude that the random
>> >> effect is not needed and fit a glm without a random effect for bird.
>> >> > Random effects:
>> >> >  Groups Name        Variance Std.Dev.
>> >> >  Bird   (Intercept) 0.14080  0.37524
>> >> > Number of obs: 54, groups: Bird, 46
>> >>
>> >> That estimated standard deviation is fairly large. We would expect a
>> >> range of contributions on the log-odds scale of about +/- 2 sd which,
>> >> at this point of the logistic curve corresponds to considerable
>> >> variability in predicted probabilities for birds with the same
>> >> characteristics.
>> >>
>> >> > 2. Fixed Effects: Again the Intercept? Not sure if I understand the
>> >> > meaning of it (sorry, explanation in Chapter I also doesn't help
>> >> > much)
>> >>
>> >> Actually in this model it is a bit different from the models described
>> >> in chapter 1.  I hope the explanation above makes sense.  Think of it
>> >> as the log-odds of site fidelity for a bird in the "reference group"
>> >> where reference group means that all the other variables are at the
>> >> zero level.
>> >>
>> >> > Fixed effects:
>> >> >            Estimate Std. Error z value Pr(>|z|)
>> >> > (Intercept)  -0.3294     0.4890  -0.674   0.5006
>> >> > BreedSuc11    1.1988     0.5957   2.012   0.0442 *
>> >> >
>> >> > SexM          0.2215     0.5877   0.377   0.7062
>> >> >
>> >> > 3. Meaning of the z-value? Why shall I mention it in the result
>> >> > section?
>> >>
>> >> I would regard the z-value as an approximation.  The quantity of
>> >> interest is the likelihood ratio test statistic which has a
>> >> chi-squared distribution under the null hypothesis (i.e. the term can
>> >> be deleted from the model without getting a significantly worse fit).
>> >> It happens that this would be a chi-squared distribution with 1 degree
>> >> of freedom, which corresponds to the square of a standard normal
>> >> distribution.  You should see a LRT test statistic close to, but not
>> >> exactly the same as, the square of the z value when you compare the
>> >> models with and without that term.  This is the sense in which the
>> >> z-value is an approximation.  To me the LRT statistic is more reliable
>> >> because it is based upon actually refitting the model.
>> >>
>> >> > 4. Estimate and Std. Error of the fixed effects? How can I tell from
>> >> > these
>> >> > values WHAT kind of effect (positiv, negativ?) these parameter have
>> >> > on
>> >> > nest-site fidelity? Do birds that were successful during the previous
>> >> > breeding success show a higher nest-site fidelity? Remember, I have
>> >> > binomial data...
>> >>
>> >> That is described above.  If you want the estimate of the site
>> >> fidelity for bird with certain characteristics you evaluate the
>> >> corresponding combination of coefficients and apply plogis to the
>> >> result.
>> >> > I would highly appreciate your feedback and/or suggestions of
>> >> >
>> >> > papers/chapters I could read for a better understanding of the
>> >> > output.
>> >> >
>> >> > Best regards,
>> >> >
>> >> >
>> >> > Julia
>> >>
>> >> I hope this helps.
>> >
>> >
>> >
>
>
>
> --
> Julia Sommerfeld - PhD Candidate
> Institute for Marine and Antarctic Studies
> University of Tasmania
> Private Bag 129, Hobart
> TAS 7001
>
> Phone: +61 458 247 348
> Email: julia.somma at gmx.de
> Julia.Sommerfeld at utas.edu.au
>



