[Rd] overhaul of mle

Roger D. Peng rpeng at jhsph.edu
Sat Jun 12 03:40:24 CEST 2004


Taking advantage of lexical scoping is definitely the way to go 
here.  I usually write a function called `makeLogLik' or 
something like that, which returns a function that can be passed 
to an optimizer.  Something like:

makeLogLik <- function(X = Xdefault, Y = Ydefault, Z = Zdefault) {
    function(par1, par2, par3) {
        ## body computes the negative log-likelihood from the
        ## parameters, using the X, Y, Z captured by the closure
        ...
    }
}
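
For concreteness, here is a fleshed-out version of that sketch.
Everything in it (the normal model, the log-sigma
parametrization, the simulated data) is invented for
illustration:

makeLogLik <- function(x) {
    ## return a negative log-likelihood; `x' is captured in the
    ## function's environment by lexical scoping
    function(mu, logsigma) {
        ## sd parametrized on the log scale so the default BFGS
        ## optimizer cannot step into sd <= 0
        -sum(dnorm(x, mean = mu, sd = exp(logsigma), log = TRUE))
    }
}

library(stats4)
set.seed(1)
mll1 <- makeLogLik(rnorm(50, mean = 2, sd = 1))
fit1 <- mle(mll1, start = list(mu = 0, logsigma = 0))
coef(fit1)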

-roger

Peter Dalgaard wrote:

> Ben Bolker <bolker at zoo.ufl.edu> writes:
> 
>  
> 
>>    OK, I want to hear about this.  My normal approach to writing
>>likelihood functions that can be evaluated with more than one data
>>set is essentially
>>
>>mll <- function(par1, par2, par3, X = Xdefault, Y = Ydefault, Z = Zdefault) { ... }
>>
>>where X, Y, Z are the data values that may change from one fitting
>>exercise to the next.  This seems straightforward to me, and I always
>>thought it was the reason why optim() had a ... in its argument list,
>>so that one could easily pass these arguments down.  I have to confess
>>that I don't quite understand how your paradigm with with() works: if
>>mll() is defined as you have it above, "data" is a data frame containing
>>$x and $y, right?  How do I run mle(minuslogl=mll,start=...) for
>>different values of "data" (data1, data2, data3) in this case?  Does
>>it go in the call as mle(minuslogl=mll,start=...,data=...)?  Once I've
>>found my mle, how do I view/access these values when I want to see
>>what data I used to fit mle1, mle2, etc.?
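
Concretely, the pattern Ben describes might look like this, with
the parameters packed into one vector as optim() expects (the
normal model and all names are invented for illustration):

set.seed(1)
data1 <- rnorm(50, mean = 2, sd = 1)
data2 <- rnorm(50, mean = 6, sd = 3)
Xdefault <- data1

## the data ride along as an extra argument with a default
mll <- function(par, X = Xdefault)
    -sum(dnorm(X, mean = par[1], sd = exp(par[2]), log = TRUE))

fit1 <- optim(c(0, 0), mll)              # uses Xdefault
fit2 <- optim(c(0, 0), mll, X = data2)   # X passed via optim's ...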
> 
> 
> You generate different likelihood functions. If you do it using
> lexical scoping, the data is available in the environment(mll). If you
> insist on having one function that can be modified by changing data, you
> can simply assign into that environment.
>  
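
A minimal sketch of both options Peter mentions, using a factory
in the style of Roger's makeLogLik above (all names are
illustrative):

makeLogLik <- function(x)
    function(mu, logsigma)
        -sum(dnorm(x, mean = mu, sd = exp(logsigma), log = TRUE))

set.seed(1)
data1 <- rnorm(50, mean = 2, sd = 1)
data2 <- rnorm(50, mean = 6, sd = 3)

## Option 1: one likelihood function per data set; each function
## carries its own data in its environment
mll1 <- makeLogLik(data1)
mll2 <- makeLogLik(data2)
get("x", envir = environment(mll1))    # recover the data behind mll1

## Option 2: a single function, with the data swapped out by
## assigning into its environment
mll <- makeLogLik(data1)
assign("x", data2, envir = environment(mll))  # mll now sees data2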
> 
>>   I'm willing to change the way I do things (and teach my students
>>differently), but I need to see how ... I don't see how what I've
>>written is an "abuse" of the fixed arguments [I'm willing to believe
>>that it is, but just don't know why]
> 
> 
> "Abuse" is of course a strong word, but...
> 
> The point is that likelihood inference views the likelihood as a
> function of the model parameters *only* and it is useful to stick to
> that concept in the implementation. The fixed arguments were only ever
> introduced to allow profiling by keeping some parameters fixed during
> optimization. 
> 
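
For example, with stats4's mle() the fixed argument holds a
parameter constant during optimization, which is exactly what
profiling needs (the model and names below are invented):

library(stats4)
set.seed(1)
x <- rnorm(50, mean = 2, sd = 1)
mll <- function(mu, logsigma)
    -sum(dnorm(x, mean = mu, sd = exp(logsigma), log = TRUE))

## full fit over both parameters
fit <- mle(mll, start = list(mu = 0, logsigma = 0))

## profiling-style fit: mu held at 2, only logsigma optimized
fit.prof <- mle(mll, start = list(logsigma = 0), fixed = list(mu = 2))
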
> Another aspect is that one of my ultimate goals with this stuff was to
> allow generic operations to be defined on likelihoods (combining,
> integrating, mixing, conditioning...) and that becomes even more
> elusive if you have to deal with data arguments for the component
> likelihoods. Not that I've got very far thinking about the design, but
> I'm rather sure that the "likelihood object" needs to be well-defined
> if it is to have a chance at all.
> 
> 
>>>Hmm.. I see the effect with the current version too. Depending on
>>>temperament, it is the labels rather than the order that is wrong...
>>
>>   Hmmm.  By "current version" do you mean you can still
>>find ways to get wrong answers with my modified version?
>>Can you send me an example?  Can you think of a better way to fix this?
> 
> 
> Strike "too"; I haven't gotten around to checking your code yet.
> Possibly, this requires a bug fix for 1.9.1.
>


