[R-sig-ME] Problems with convergence
Ken Beath
ken.beath at mq.edu.au
Mon Mar 30 07:22:05 CEST 2015
Yes, I was demonstrating that it fails to converge and, as a consequence,
fails to profile. I have my doubts about convergence for the bobyqa
algorithm; I have other applications where it doesn't converge properly.
For some of my own work I've used nlminb followed by Nelder-Mead if there
is a convergence failure. Not optimal, but it seems to work.
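For glmer itself, a similar fallback can be sketched via glmerControl. This is only a minimal illustration, not the code I actually use; it assumes the optimx package is installed (lme4's "optimx" optimizer option is a wrapper around it), and treating any warning as a failure may be more aggressive than you want:

```r
library(lme4)

form <- cbind(nEvents, total - nEvents) ~ trt + factor(id) + (0 + trt | id)

## First try: nlminb, reached through lme4's optimx wrapper
fit <- tryCatch(
  glmer(form, data = thedata, family = binomial,
        control = glmerControl(optimizer = "optimx",
                               optCtrl = list(method = "nlminb"))),
  warning = function(w) NULL, error = function(e) NULL)

## Fall back to Nelder-Mead if the first attempt failed or warned
if (is.null(fit)) {
  fit <- glmer(form, data = thedata, family = binomial,
               control = glmerControl(optimizer = "Nelder_Mead"))
}
```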
While it is fairly heavily parameterised, it is a real model: a frequentist
implementation of Smith, T. C., Spiegelhalter, D. J., & Thomas, A. (1995).
Bayesian approaches to random-effects meta-analysis: a comparative study.
Statistics in Medicine, 14(24), 2685–99. The reason for having studies as
fixed effects is probably philosophical: the overall success rates are not
likely to be given by normally distributed random effects, and are in many
cases specifically chosen.
I did find that one of my own data sets also failed, but it fitted with a
commercial program that is based on the EM algorithm. For this type of
problem EM is actually faster, as any type of quasi-Newton method needs to
calculate lots of derivatives.
Anyway, I'm going to keep looking at the methods, and eventually at the
code for glmer, and may have some suggestions.
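The scaled-gradient check that Ben discusses below can be inspected directly after a fit. A minimal sketch, assuming glmer1 is the fitted model from the code further down and that lme4 has stored the optimizer derivatives in the optinfo slot (as current versions do):

```r
library(lme4)

## Gradient and Hessian recorded at the optimizer's stopping point
derivs <- glmer1@optinfo$derivs

## lme4's convergence warning is based on the gradient scaled by the
## Hessian (roughly a Newton step length), not on the raw gradient
scaled_grad <- with(derivs, solve(Hessian, gradient))
max(abs(scaled_grad))  ## compare against the warning tolerance (default 0.002)
```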
On 19 March 2015 at 14:45, Ben Bolker <bbolker at gmail.com> wrote:
> Ken Beath <ken.beath at ...> writes:
>
>
> > The following code shows that there are convergence problem messages
> > where there is a problem with convergence. The profiling shows that
> > the maximum found is not the correct one. This is simulated data for
> > a binary meta-analysis with fixed effect for study and random effect
> > for treatment.
>
> I'm not 100% sure what the point is here (that's a real question,
> not a criticism). I *think* it's that we can get a convergence
> failure with a computed gradient size that seems fairly innocuous
> (only 0.003, which is perhaps below where I would have set
> the new cutoff to eliminate a lot of the probably-false-positives
> that people have been getting with larger data sets), which is
> nevertheless a real convergence failure. This underscores the
> need to set a gradient tolerance that is data-set-size-dependent
> (which the developers have discussed a bit, but maybe not in a
> public forum).
>
> However, may I comment that this is a slightly ridiculous scenario?
> The data set here has 40 observations, and the model tries to fit 22
> parameters. The model that treats id as a random effect works much
> better. I can believe there are scenarios where you really do
> want study as a fixed effect, but did you expect it to be practical
> here?
>
> But maybe you're just trying to show that this is a "true positive"
> case for the convergence warnings.
>
> Some random code I wrote while diagnosing what was going on:
>
> library(ggplot2); theme_set(theme_bw())
>
> ## proportion + weights is a little easier to handle
> thedata <- transform(thedata,prop=nEvents/total)
>
> ggplot(thedata,aes(trt,prop))+geom_point(aes(size=total))+
> geom_line(aes(group=id),colour="gray")
> glmer1 <- glmer(prop~trt+factor(id)+(0+trt|id),
> weights=total,data=thedata,family=binomial)
>
> ## id as RE
> glmer2 <- glmer(prop~trt+(1|id)+(0+trt|id),
> weights=total,data=thedata,family=binomial)
>
> dd <- update(glmer1,devFunOnly=TRUE)
> pars <- unlist(getME(glmer1,c("theta","fixef")))
> library("bbmle")
> ss <- slice2D(pars,dd)
> library("lattice")
> plot(ss)
> ## too complex, but too much work to cut down significantly
>
>
> > library(lme4)
> >
> > thedata <- structure(list(nEvents=c(10L,53L,17L,18L,22L,6L,16L,
> > 14L,13L,18L,15L,19L,52L,19L,8L,16L,50L,8L,9L,4L,
> > 26L,45L,18L,20L,5L,16L,18L,7L,3L,19L,30L,26L,66L,
> > 23L,29L,18L,72L,25L,9L,2L),total=c(200,200,200,200,
> > 200,200,200,200,200,200,200,200,200,200,200,200,200,
> > 200,200,200,200,200,200,200,200,200,200,200,200,200,
> > 200,200,200,200,200,200,200,200,200,200),trt=c(0,
> > 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,
> > 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1),id=structure(c(1L,
> > 2L,3L,4L,5L,6L,7L,8L,9L,10L,11L,12L,13L,14L,15L,
> > 16L,17L,18L,19L,20L,1L,2L,3L,4L,5L,6L,7L,8L,9L,
> > 10L,11L,12L,13L,14L,15L,16L,17L,18L,19L,20L),.Label=c("1",
> > "2","3","4","5","6","7","8","9","10","11","12","13",
> > "14","15","16","17","18","19","20"),class="factor")),.Names=c("nEvents",
> > "total","trt","id"),row.names=c(NA,40L),class="data.frame")
> >
> > glmer1 <- glmer(cbind(nEvents,total-nEvents)~trt+factor(id)+
> >   (0+trt|id),data=thedata,family=binomial)
> >
> > # while glmer reports problems with component 9, it is component 8
> > # that has a problem profile
> > # I've used devtol so the discrepancy is printed
> > prof.glmer1<-profile(glmer1,which=8,devtol=1.0e-3)
> >
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>
--
*Ken Beath*
Lecturer
Statistics Department
MACQUARIE UNIVERSITY NSW 2109, Australia
Phone: +61 (0)2 9850 8516
Building E4A, room 526
http://stat.mq.edu.au/our_staff/staff_-_alphabetical/staff/beath,_ken/
CRICOS Provider No 00002J