[R] Separation issue in binary response models - glm, brglm, logistf

Ben Bolker bbolker at gmail.com
Thu Feb 28 17:22:17 CET 2013


Xochitl CORMON <Xochitl.Cormon <at> ifremer.fr> writes:

> Dear all,
> 
> I am encountering some issues with my data and need some help.
> I am trying to run a GLM analysis with a presence/absence variable as 
> the response variable and several explanatory variables (time, location, 
> presence/absence data, abundance data).
> 
> First I tried to use the glm() function; however, I got two warnings 
> from glm.fit():
> # 1: glm.fit: algorithm did not converge
> # 2: glm.fit: fitted probabilities numerically 0 or 1 occurred
> After some investigation I found out that the problem was most probably 
> quasi-complete separation, and therefore decided to use brglm and/or logistf.
> 
> * logistf : the analysis does not run.
> When running logistf() I get an error message saying:
> # error in chol.default(x) :
> # leading minor 39 is not positive definite
> I looked into the logistf package manual, on the Internet, and in the 
> theoretical and technical papers of Heinze and Ploner, and cannot find 
> where this function is used or whether the error can be fixed by some 
> settings.

 chol.default is a function for Cholesky decomposition, which is
going to be embedded fairly deeply in the code ...
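 If you want to see exactly where the Cholesky decomposition is being
called, you can ask R for the call stack right after the error; a
minimal sketch (using the variable names from your post, with a
deliberately cut-down formula):

    library(logistf)
    ## re-run the failing fit ...
    logistf(Presence.S ~ Presence.BW + CPUE.BW, data = CPUE_table)
    ## ... then, after the "leading minor ... not positive definite"
    ## error, print the call stack of the last error:
    traceback()

 The error itself usually means the (penalized) information matrix is
not positive definite, which is consistent with separation or with
redundant/collinear predictors.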

> * brglm : the analysis runs.
> However, I get a warning message saying:
> # In fit.proc(x = X, y = Y, weights = weights, start = start,
> #    etastart = etastart, ... :
> #    Iteration limit reached
> As before, I cannot find where and why this function is used while 
> running the package, or whether the warning can be fixed by adjusting 
> some settings.
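 The iteration-limit warning can sometimes be silenced simply by
raising the limit.  If I remember the brglm interface correctly (check
?brglm.control -- this is an untested sketch, not a recipe), something
like:

    library(brglm)
    brglm_fit <- brglm(Presence.S ~ Presence.BW + CPUE.BW,
                       family = binomial, data = CPUE_table,
                       control.brglm = brglm.control(br.maxit = 1000))

although hitting the limit may also be a symptom of the overfitting
issues discussed below.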
> 
> In a more general way, I was wondering what the fundamental 
> differences between these packages are.

 You might also take a crack at bayesglm() in the arm package,
which should (?) be able to overcome the separation problem by
specifying a not-completely-uninformative prior.
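 A minimal sketch (again with a cut-down set of main effects just for
illustration; the default Cauchy priors with scale 2.5 on the rescaled
coefficients are what should keep the estimates finite under
separation):

    library(arm)
    bayes_binomPres <- bayesglm(Presence.S ~ Presence.BW + Presence.W +
                                    CPUE.BW + Year + Quarter +
                                    Latitude + Longitude,
                                family = binomial, data = CPUE_table)
    display(bayes_binomPres)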

> I hope this makes enough sense, and I am sorry if this is the kind of 
> statistically obvious point that I'm not aware of.
> 
> -----------------------------------------------------------------------
> 
> Here is an extract of my table and the different formulas I run:
> 
> > head(CPUE_table)
>   Year Quarter Subarea Latitude Longitude Presence.S CPUE.S Presence.H CPUE.H
> 1 2000       1    31F1    51.25       1.5          0      0          0      0
>   Presence.NP CPUE.NP Presence.BW CPUE.BW Presence.C CPUE.C Presence.P CPUE.P
> 1           0       0           0       0          1 76.002          0      0
>   Presence.W   CPUE.W
> 1          1 3358.667

 [snip]

> logistf_binomPres <- logistf(Presence.S ~ (Presence.BW + Presence.W +
>     Presence.C + Presence.NP + Presence.P + Presence.H + CPUE.BW +
>     CPUE.H + CPUE.P + CPUE.NP + CPUE.W + CPUE.C + Year + Quarter +
>     Latitude + Longitude)^2, data = CPUE_table)
> 
> Brglm_binomPres <- brglm(Presence.S ~ (Presence.BW + Presence.W +
>     Presence.C + Presence.NP + Presence.P + Presence.H + CPUE.BW +
>     CPUE.H + CPUE.P + CPUE.NP + CPUE.W + CPUE.C + Year + Quarter +
>     Latitude + Longitude)^2, family = binomial, data = CPUE_table)

   It's not much to go on, but:

* are you overfitting your data?  That is, do you have at least 20 times
as many 1's or 0's (whichever is rarer) as the number of parameters you
are trying to estimate?  (A formula with all two-way interactions of 16
predictors implies well over a hundred parameters.)
* have you examined your data graphically and looked for any strong
outliers that might be throwing off the fit?
* do you have some strongly correlated/multicollinear predictors?
* for what it's worth, it looks like several of your variables might
be dummy variables, which you can often express more compactly by
using a factor variable and letting R construct the design matrix
(i.e. generating the dummy variables on the fly), although that
shouldn't change your results.  A sketch of these checks follows
this list.
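 Roughly (a sketch, assuming the variable names from your post):

    ## events per parameter: count of the rarer outcome vs. the number
    ## of coefficients the two-way-interaction formula implies
    min(table(CPUE_table$Presence.S))
    ncol(model.matrix(~ (Presence.BW + Presence.W + Presence.C +
        Presence.NP + Presence.P + Presence.H + CPUE.BW + CPUE.H +
        CPUE.P + CPUE.NP + CPUE.W + CPUE.C + Year + Quarter + Latitude +
        Longitude)^2, data = CPUE_table))

    ## pairwise correlations among the abundance predictors
    cor(CPUE_table[, c("CPUE.BW", "CPUE.H", "CPUE.P", "CPUE.NP",
                       "CPUE.W", "CPUE.C")])

    ## let R build dummy variables on the fly from a factor
    CPUE_table$Quarter <- factor(CPUE_table$Quarter)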


