Yes, initially it didn't work, but thanks to one of the examples in the help
file I found out that I need to set maximize = TRUE. Thanks for your
suggestion anyway.
I mainly work with state space models and I'm currently dealing with a case
where the estimation time is halved (!!!) by spg().
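For reference, a minimal sketch of the two maximization flags (the objective
below is a made-up quadratic, not my state space likelihood): optim()
maximizes when fnscale = -1, while spg() takes maximize = TRUE in its
control list.

```r
# Toy objective with maximum 1 at (0, 0) -- a stand-in, not the real model.
f <- function(x) -sum(x^2) + 1

# optim() maximizes via fnscale = -1:
fit1 <- optim(par = c(2, 2), fn = f, method = "BFGS",
              control = list(fnscale = -1))

# spg() in the BB package maximizes via maximize = TRUE:
if (requireNamespace("BB", quietly = TRUE)) {
  fit2 <- BB::spg(par = c(2, 2), fn = f,
                  control = list(maximize = TRUE, trace = FALSE))
}
```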
Shimrit
On Tue, Feb 24, 2009 at 3:25 PM, Ravi Varadhan wrote:
> Hi Shimrit,
>
> Make sure that you set maximize=TRUE in the control settings (since you
> have fnscale = -1 in your optim() call).
>
> A nice feature of spg() is that the entire code is in R and can be readily
> inspected just by typing the function name at the R prompt. On small
> problems (with only a few parameters) it is usually slower than optim() or
> nlminb(), since those perform much of their computing in C and/or Fortran.
> But the difference in speed is not that important on small problems anyway,
> and spg() is faster on large-scale problems.
>
> Best,
> Ravi.
>
>
>
> -----------------------------------------------------------------------------------
>
> Ravi Varadhan, Ph.D.
>
> Assistant Professor, The Center on Aging and Health
>
> Division of Geriatric Medicine and Gerontology
>
> Johns Hopkins University
>
> Ph: (410) 502-2619
>
> Fax: (410) 614-9625
>
> Email: rvaradhan@jhmi.edu
>
> Webpage: http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html
>
>
>
>
> -----------------------------------------------------------------------------------
>
>
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org]
> On Behalf Of Shimrit Abraham
> Sent: Tuesday, February 24, 2009 10:15 AM
> To: Ravi Varadhan
> Cc: r-help@r-project.org
> Subject: Re: [R] Tracing gradient during optimization
>
> Hi Ravi,
>
> Thanks for your great suggestion; it does exactly what I need, as it
> provides more insight into what is going on inside the 'black box'. In
> addition, it's much faster than optim(). I will use this function in the
> future.
>
> Kind Regards,
>
> Shimrit
>
>
>
> On Tue, Feb 24, 2009 at 2:33 PM, Ravi Varadhan wrote:
>
> > Hi,
> >
> > If you look at the source code for optim() in the optim.c file, you
> > will see the following lines for "BFGS":
> >
> > if (trace && (iter % nREPORT == 0))
> > Rprintf("iter%4d value %f\n", iter, f);
> >
> > This means that "BFGS" does not output gradient values when you
> > "trace" the iterations. Let us look at the code for "L-BFGS-B":
> >
> > if(trace == 1 && (iter % nREPORT == 0)) {
> > Rprintf("iter %4d value %f\n", iter, f);
> >
> > So it seems that the "L-BFGS-B" algorithm is not going to be useful to
> > you either.
> >
> >
> > You can use the spg() function in the "BB" package. Its usage is very
> > similar to that of optim(). When you specify trace=TRUE, it will give
> > you both function and (projected) gradient information. You can use
> > the "triter" parameter to control the frequency of output; e.g., setting
> > triter=1 will give you the fn and gr values at each iteration.
> >
> > library(BB)
> > ?spg
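To make the traced output concrete, here is a minimal sketch with a made-up
quadratic objective (not the poster's model); with triter = 1, spg() prints
the iteration number, function value, and projected-gradient norm at every
iteration.

```r
# Hypothetical objective and its exact gradient, for illustration only.
f  <- function(x) sum((x - 1)^2)
gr <- function(x) 2 * (x - 1)

if (requireNamespace("BB", quietly = TRUE)) {
  # trace = TRUE, triter = 1: one line of fn/gradient output per iteration.
  ans <- BB::spg(par = c(5, -3), fn = f, gr = gr,
                 control = list(trace = TRUE, triter = 1))
} else {
  ans <- NULL  # BB not installed; skip the demonstration
}
```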
> >
> > Hope this helps,
> > Ravi.
> >
> > -----Original Message-----
> > From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org]
> > On Behalf Of Shimrit Abraham
> > Sent: Tuesday, February 24, 2009 9:00 AM
> > To: r-help@r-project.org
> > Subject: [R] Tracing gradient during optimization
> >
> > Hi everyone,
> >
> > I am currently using optim() to maximize/minimize functions, and I would
> > like to see more output from the optimization procedure, in particular
> > the numerical gradient of the objective with respect to the parameter
> > vector at each iteration. The documentation of optim() says that the
> > trace parameter should allow one to trace the progress of the
> > optimization.
> > I use the following command:
> >
> > optim(par = vPar,
> >       fn = calcLogLik,
> >       method = "BFGS",
> >       control = list(trace = TRUE, fnscale = -1, maxit = 2000))
> >
> > which gives very little information:
> >
> > initial value 3.056998
> > final value 2.978351
> > converged
> >
> > Specifying trace > 1, for instance trace = 20, does not result in more
> > information. Is there a way to view more details of the progress,
> > perhaps by using another optimizer?
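One base-R workaround is to print the gradient yourself: wrap the objective
so that every evaluation also reports a central-difference gradient,
independently of what the optimizer's trace shows. calcLogLik below is a
stand-in quadratic, not the poster's likelihood.

```r
# Stand-in log-likelihood with maximum at (1, 2) -- purely illustrative.
calcLogLik <- function(p) -sum((p - c(1, 2))^2)

# Central-difference numerical gradient.
numGrad <- function(f, p, eps = 1e-6) {
  sapply(seq_along(p), function(i) {
    h <- replace(numeric(length(p)), i, eps)
    (f(p + h) - f(p - h)) / (2 * eps)
  })
}

# Wrapper that prints the parameters and gradient at every evaluation.
tracedLogLik <- function(p) {
  cat("par:", signif(p, 4), " grad:", signif(numGrad(calcLogLik, p), 4), "\n")
  calcLogLik(p)
}

fit <- optim(par = c(0, 0), fn = tracedLogLik, method = "BFGS",
             control = list(fnscale = -1, maxit = 2000))
```

Note this prints at every function evaluation (including line-search steps),
not once per iteration.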
> >
> > Thanks,
> >
> > Shimrit Abraham
> >
> >
> > ______________________________________________
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> >
>
>