[R] Gaussian frailty leads to segmentation fault

Thomas Lumley tlumley at u.washington.edu
Fri Jul 30 00:53:59 CEST 2004


The crash happens when the penalty on the frailty term gets so small that
the model is singular and there are no degrees of freedom left for the
treatment effect.  The attached revision for survival/R/coxph.wtest.s
prevents the crash, but the model is still singular.

If you use method="aic" instead of the default method="reml", the model
converges. Interestingly, the estimated frailty variance is then very
small, so the two methods of estimating the frailty variance disagree
completely on these data.
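
For example, reusing the call from your message below, the only change is
the method argument inside frailty():

coxph(Surv(time,cens) ~
       as.factor(treatment) + frailty(study, distribution="gaussian",
                                      method="aic"),
       data=data)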

This may well indicate that the penalized likelihood approach to the
frailty should be viewed with strong distrust for these data.

	-thomas

On Thu, 29 Jul 2004, Christian Lederer wrote:

> Dear Thomas,
>
> attached you find a data frame which produces the error.
> I am using survival 2.11-5 under R 1.9.1-1 and 1.9.0-1.
>
> By the way, if I randomly omit 50% of the data, I usually
> get no crash, but a warning message like this:
> Inner loop failed to coverge for iterations 1 2 3 in: coxpenal.fit(X,
> Y, strats, offset, init = init, control, weights = weights,
>
> Maybe the model is not appropriate for this kind of data.
> But on the other hand, as soon as the treatment group (study == 1,
> treatment == 1) is smaller than the randomized placebo group
> (study == 1, treatment == 0), the warnings disappear
> and the model gives reasonable results in my first simulations
> with normally distributed study effects.
>
> Christian
>
>
>
>
> Thomas Lumley wrote:
> > We really need a reproducible example to find segmentation faults.  Can
> > you make one?
> >
> > 	-thomas
> >
> >
> > On Wed, 28 Jul 2004, Christian Lederer wrote:
> >
> >
> >>Dear R gurus,
> >>
> >>For a simulation concerning study effects and historical controls
> >>in survival analysis, I would like to experiment with a Gaussian
> >>frailty model.
> >>
> >>The simulated scenario consists of a randomized trial
> >>(treatment and placebo) and historical controls (only placebo).
> >>
> >>So the simulated data frames consist of four columns
> >>$time, $cens, $study, $treat.
> >>$time, $cens are the usual survival data.
> >>For the binary treatment indicator we have
> >>$treat == 0 or 1 if $study == 1, and
> >>$treat == 1 if $study > 1.
> >>
> >>Typical parameters for my simulations are:
> >>sample sizes (per arm):         between 100 and 200
> >>number of historical studies:   between 7 and 15
> >>hazard ratio treatment/placebo: between 0.7 and 1
> >>variance of the study effect:   between 0 and 0.3
> >>
> >>Depending on the sample sizes, the following call sometimes leads to
> >>a segmentation fault:
> >>
> >>coxph(Surv(time,cens) ~
> >>       as.factor(treatment) + frailty(study, distribution="gaussian"),
> >>       data=data)
> >>
> >>I noticed that this segmentation fault occurs most frequently if the
> >>number of randomized treatment patients is higher than the number of
> >>randomized placebo patients and the number of historical studies is
> >>large.
> >>There seems to be no problem if there are at least as many randomized
> >>placebo patients as treated patients. Unfortunately, this is not the
> >>situation I want to investigate (historical controls should be used
> >>to decrease the number of treated patients).
> >>
> >>Is there a way to circumvent this problem?
> >>
> >>Christian
> >>
> >>P.S.
> >>Is it allowed to attach gzipped sample data sets on this mailing list?
> >>
> >
> >
> > Thomas Lumley			Assoc. Professor, Biostatistics
> > tlumley at u.washington.edu	University of Washington, Seattle
> >
>
>

Thomas Lumley			Assoc. Professor, Biostatistics
tlumley at u.washington.edu	University of Washington, Seattle
-------------- next part --------------
# SCCS @(#)coxph.wtest.s	1.2 10/28/98
#
# A Wald test routine, used by the Cox model
#  Why not just do  sum(b * solve(var, b))? -- because the solve
#  function chokes on singular matrices.
#
coxph.wtest <- function(var, b, toler.chol=1e-9) {
    if (is.matrix(b)) {
        nvar <- nrow(b)
        ntest<- ncol(b)
        }
    else {
        nvar <- length(b)
        ntest<- 1
        }
    
    if (length(var)==1) {
        if (nvar ==1) return(list(test=b*b/var, df=1, solve=b/var))
        else stop("Argument lengths do not match")
    }

    if (length(var)==0){
        if (nvar==0) return(list(test=numeric(0),df=0,solve=0))
        else  stop("Argument lengths do not match")
    }


    if (!is.matrix(var) || (nrow(var) != ncol(var)))
            stop("First argument must be a square matrix")
    if (nrow(var) != nvar) stop("Argument lengths do not match")

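    # The compiled routine 'coxph_wtest' factors 'var' by a Cholesky
    # decomposition with tolerance 'toler.chol' and returns the rank it
    # finds in 'df', so a singular matrix yields a reduced-df Wald test
    # rather than an error from solve().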
    temp <- .C('coxph_wtest', df=as.integer(nvar),
                              as.integer(ntest),
                              as.double(var),
                              tests= as.double(b),
                              solve= double(nvar*ntest),
                              as.double(toler.chol),
               PACKAGE="survival")
    if (ntest==1) list(test=temp$tests[1], df=temp$df, solve=temp$solve)
    else          list(test=temp$tests[1:ntest], df=temp$df, 
                       solve=matrix(temp$solve, nvar, ntest))
    }
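
# Usage sketch (assumes the survival package is loaded so that the
# compiled 'coxph_wtest' routine is available; the values are made up):
# with a singular variance matrix the function reports a reduced df
# instead of the error that solve() itself would give.
#
#     V <- matrix(1, 2, 2)        # rank-1 "variance" matrix
#     b <- c(0.5, 0.5)
#     coxph.wtest(V, b)$df        # 1: one redundant coefficient dropped
#     try(solve(V, b))            # plain solve() fails on this matrix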

