[R] Is it possible to use glm() with 30 observations?

(Ted Harding) Ted.Harding at nessie.mcc.ac.uk
Sat Jul 2 12:00:56 CEST 2005


On 02-Jul-05 Kerry Bush wrote:
> I have a very simple problem. When using glm to fit
> binary logistic regression model, sometimes I receive
> the following warning:
> 
> Warning messages:
> 1: fitted probabilities numerically 0 or 1 occurred
> in: glm.fit(x = X, y = Y, weights = weights, start =
> start, etastart = etastart,  
> 2: fitted probabilities numerically 0 or 1 occurred
> in: glm.fit(x = X, y = Y, weights = weights, start =
> start, etastart = etastart,  
> 
> What does this output tell me? Since I only have 30
> observations, i assume this is a small sample problem.

It isn't. Spencer Graves has shown clearly with two examples
that you can get a fit with no warnings with 3 observations,
and a fit with warnings for 1000 observations. As he says,
it arises when you get "perfect separation" with respect to
the linear model.

However, it may be worth expanding Spencer's explanation.
With a single explanatory variable x (as in Spencer's examples),
"perfect separation" occurs when y = 0 for all x <= some x0,
and y = 1 for all x > x0.
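
For illustration, here is a minimal sketch (with invented data;
the x-values and the cut-off at 5 are arbitrary) of exactly that
situation, which will make glm() complain in just the way you
describe:

  ## Invented data: y = 0 for all x <= 5 and y = 1 for all x > 5,
  ## so the 0s and 1s are perfectly separated along x.
  x <- 1:10
  y <- as.numeric(x > 5)
  fit <- glm(y ~ x, family = binomial)
  ## Typically warns: "glm.fit: algorithm did not converge" and
  ## "fitted probabilities numerically 0 or 1 occurred"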

One of the parameters in the linear model is the "scale parameter"
(i.e. the reciprocal of the "slope"). If you express the model
in the form

  logit(P(Y=1;x)) = (x - mu)/sigma

then sigma is the scale parameter in question.

As sigma -> 0, P(Y=1;x) -> 0 for x < mu, and -> 1 for x > mu.
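
To see that limit concretely, here is a small sketch (values chosen
arbitrarily, with mu fixed at 5) evaluating the logistic curve at a
few x-values as sigma shrinks towards 0:

  ## plogis() is the logistic CDF, so plogis((x - mu)/sigma) = P(Y=1; x).
  x  <- c(4, 4.9, 5.1, 6)
  mu <- 5
  sapply(c(1, 0.1, 0.01), function(sigma) round(plogis((x - mu)/sigma), 4))
  ## One column per sigma; as sigma shrinks the values collapse to 0 or 1.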

Therefore, for any value of mu between x0 (at and below which
all y=0 in your data) and x1 (the next larger value of x, at
and above which all y=1), letting sigma -> 0 gives a fit
which perfectly predicts your y-values: it predicts P(Y=1) = 0,
i.e. P(Y=0) = 1, for x < mu, and predicts P(Y=1) = 1 for x > mu;
and this is exactly what you have observed in the data.

So it's not a disaster -- in model-fitting terms, it is a
resounding success!
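
Indeed, if you inspect the fit from the sketch above, you can see
this "success" directly: the slope estimate (1/sigma) is enormous,
and the fitted probabilities are numerically 0 or 1 -- which is
precisely what triggers the warning:

  coef(fit)              # huge slope, i.e. the estimate of sigma driven to 0
  round(fitted(fit), 3)  # effectively 0 for x <= 5 and 1 for x > 5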

However, in real life one does not expect to be dealing with
a situation where the outcomes are so perfectly predictable,
and therefore one views such a result with due mistrust.
One attributes the "perfect separation" not to perfect
predictability, but to the possibility that, by chance,
all the "y=0" occur at lower values of x, and all the "y=1"
at higher values of x.

> Is it possible to fit this model in R with only 30
> observations? Could any expert provide suggestions to
> avoid the warning?

Yes! As Spencer showed, it is possible with 3 -- but of
course it depends on the outcomes y.

As to a suggestion to "avoid the warning" -- what you really
need to avoid is data where the x-values are so sparse in
the neighbourhood of the "P(Y=1;x) = 0.5" area that it becomes
likely that you will get y-values showing perfect separation.

What that means in practice is that, over the range of x-values
such that P(Y=1;x) rises from (say) 0.2 to 0.8 (chosen for
illustration), you should have several x values in your data.
Then the phenomenon of "perfect separation" becomes unlikely.

But what that means in real life is that you need enough data,
over the relevant range of x-values, to enable you to obtain
(with high probability) a non-zero estimate of sigma (i.e. an
estimate of slope 1/sigma which is not infinite) -- i.e. that
you have enough data, and in the right places, to estimate the
rate at which the probability increases from low to high values.

(Theoretically, it is possible that you get "perfect separation"
even with well-distributed x-values and sigma > 0; but with
well-distributed x-values the chance that this would occur is so
small that it can be neglected).
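
By way of contrast, here is a sketch (again with invented data, and
an arbitrary "true" sigma of 1.5) in which several x-values fall in
the region where P(Y=1;x) climbs from low to high; the 0s and 1s
then overlap, and glm() will normally fit without any warnings:

  x2 <- seq(0, 10, by = 0.5)
  set.seed(2)                            # arbitrary seed, for reproducibility
  y2 <- rbinom(length(x2), 1, plogis((x2 - 5)/1.5))
  fit2 <- glm(y2 ~ x2, family = binomial)
  summary(fit2)$coefficients             # finite slope with a usable std. error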

So, to come back to your particular case, the really meaningful
suggestion for "avoiding the warning" is that you need better
data. If your study is such that you have to take the x-values
as they come (as opposed to a designed experiment where you
can decide what they are to be), then this suggestion boils
down to "get more data".

What that would mean depends on having information about the
smallest value of sigma (largest value of slope) that is
*plausible* in your context. Your data are not particularly
useful, since they positively encourage adopting sigma=0.
So objective information about this could only come from
independent knowledge.

However, as a rule of thumb, in such a situation I would try
to get more data until I had, say, 10 x-values roughly evenly
distributed between the largest for which y=0 and the smallest
for which y=1. If that didn't work the first time, then repeat,
using the extended data as the starting point.

Or simply sample more data until the phenomenon of perfect
separation was well avoided, and the S.D. of the x-coefficient
was distinctly smaller than the value of the x-coefficient.
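
Using the hypothetical fit2 from the sketch above, that check is
simply a comparison of the x-coefficient with its standard error:

  est <- coef(summary(fit2))["x2", "Estimate"]
  se  <- coef(summary(fit2))["x2", "Std. Error"]
  se / abs(est)          # want this ratio to be well below 1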

Hoping this helps,
Ted.


--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding at nessie.mcc.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 02-Jul-05                                       Time: 10:45:04
------------------------------ XFMail ------------------------------



