[R] Dealing with -Inf in a maximisation problem.

ProfJCNash profjcnash at gmail.com
Mon Nov 7 02:14:07 CET 2016


Rolf, what optimizers did you try? There are a few in the optimrx package on R-forge that handle bounds, and it may be
useful to set bounds in this case. Transformations using log or exp can be helpful if done carefully, but as you note,
they can make the function more difficult to optimize.
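
For illustration, a minimal sketch of setting bounds (the objective here is a
made-up stand-in, and base R's optim() is used in place of optimrx):

    ## Keep b away from 0 with a small positive lower bound.
    ## fn is an illustrative objective, not the poster's actual function.
    fn <- function(par) {
      a <- 2                  # fixed during the search, per the original post
      b <- par[1]
      -(a * log(b) - b)       # negate because optim() minimises by default
    }
    optim(par = 1, fn = fn, method = "L-BFGS-B", lower = 1e-8)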

Be cautious about using the default numerical gradient approximations. optimrx allows selection of the numDeriv grad()
function, which is quite good. A complex-step derivative would be better still, but it requires a function that can be
evaluated with complex arguments. Unfortunately, numerical gradients often step over the cliff edge of computability of
the function: the bounds are not checked when taking the step h used to compute quantities like (f(x+h) - f(x))/h.
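
A hedged illustration of supplying an explicit numDeriv gradient (fn is again
a stand-in objective; the numDeriv package must be installed):

    ## Pass numDeriv's grad() to optim() instead of relying on the
    ## optimizer's internal forward-difference approximation.
    library(numDeriv)
    fn <- function(x) sum((x - 1)^2)           # illustrative objective
    gr <- function(x) numDeriv::grad(fn, x)    # Richardson extrapolation
    optim(par = c(0, 0), fn = fn, gr = gr, method = "BFGS")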

Cheers, JN

On 16-11-06 07:07 PM, William Dunlap via R-help wrote:
> Have you tried reparameterizing, using logb (=log(b)) instead of b?
> 
> Bill Dunlap
> TIBCO Software
> wdunlap tibco.com
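> 
> A minimal sketch of that reparameterization (the objective is a toy
> stand-in, not the poster's function):
> 
>     fn_b    <- function(b) -(2 * log(b) - b)     # toy objective in b
>     fn_logb <- function(logb) fn_b(exp(logb))    # same objective in log(b)
>     res <- optim(par = 0, fn = fn_logb, method = "BFGS")
>     exp(res$par)    # back-transform; b = exp(logb) is always > 0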
> 
> On Sun, Nov 6, 2016 at 1:17 PM, Rolf Turner <r.turner at auckland.ac.nz> wrote:
> 
>>
>> I am trying to deal with a maximisation problem in which it is possible
>> for the objective function to (quite legitimately) return the value -Inf,
>> which causes the numerical optimisers that I have tried to fall over.
>>
>> The -Inf values arise from expressions of the form "a * log(b)", with b =
>> 0.  Under the *starting* values of the parameters, a must equal 0
>> whenever b = 0, so we can legitimately say that a * log(b) = 0 in these
>> circumstances.  However, as the maximisation algorithm searches over
>> parameters it is possible for b to take the value 0 for values of
>> a that are strictly positive.  (The values of "a" do not change during
>> this search, although they *do* change between "successive searches".)
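>>
>> A small helper implementing that convention (illustrative only;
>> "xlogy" is not from any package):
>>
>>     xlogy <- function(a, b) ifelse(a == 0, 0, a * log(b))
>>     xlogy(0, 0)   # 0 by convention; plain 0 * log(0) gives NaN in R
>>     xlogy(2, 0)   # -Inf: the case the search can still stumble into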
>>
>> Clearly if one is *maximising* the objective then -Inf is not a value of
>> particular interest, and we should be able to "move away".  But the
>> optimising function just stops.
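>>
>> One of the ad hoc kludges mentioned below would be to clamp -Inf to a
>> large finite penalty so that the optimizer can still compare points;
>> "objective" here is a placeholder for the real function:
>>
>>     safe_fn <- function(par) {
>>       v <- objective(par)             # placeholder for the real objective
>>       if (!is.finite(v)) v <- -1e10   # finite floor instead of -Inf
>>       v
>>     }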
>>
>> It is also clear that "moving away" is not a simple task; you can't
>> estimate a gradient or Hessian at a point where the function value is -Inf.
>>
>> Can anyone suggest a way out of this dilemma, perhaps an optimiser that is
>> equipped to cope with -Inf values in some sneaky way?
>>
>> Various ad hoc kludges spring to mind, but they all seem to be fraught
>> with peril.
>>
>> I have tried changing the value returned by the objective function from
>> "v" to exp(v) --- which maps -Inf to 0, nice and finite.  However,
>> this seemed to flatten out the objective surface too much, and the search
>> stalled at the 0 value, which is the antithesis of optimal.
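>>
>> An illustration of the underflow behind that flattening: exp() hits
>> exactly 0 long before its argument reaches -Inf:
>>
>>     exp(-100)   # 3.720076e-44: already nearly indistinguishable from 0
>>     exp(-800)   # exactly 0 in double precision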
>>
>> The problem arises in a context of applying the EM algorithm where the
>> M-step cannot be carried out explicitly, whence numerical optimisation.
>> I can give more detail if anyone thinks that it could be relevant.
>>
>> I would appreciate advice from younger and wiser heads! :-)
>>
>> cheers,
>>
>> Rolf Turner
>>
>> --
>> Technical Editor ANZJS
>> Department of Statistics
>> University of Auckland
>> Phone: +64-9-373-7599 ext. 88276