[R] Testing a linear hypothesis after maximum likelihood
Spencer Graves
spencer.graves at pdf.com
Thu Dec 29 20:26:34 CET 2005
1. I try to avoid dogmatism and use whatever seems sufficiently
accurate for the intended purposes and easiest to explain to the
intended audience.
2. I'm not aware of any package that will compute Wald tests directly from
optim(...)$hessian, so I write my own code when I want that (a rough
sketch follows the references in point 3 below).
3. Likelihood ratio tests are known to be more accurate than Wald
tests. Linear regression can be thought of as projection onto a
subspace. Nonlinear least squares and maximum likelihood more generally
involve projection onto a nonlinear manifold, which the fitting
algorithms handle by creating local linear approximations. This
introduces two sources of error: (1) intrinsic curvature of the
manifold and (2) parameter-effects curvature. I mention this because
likelihood ratio procedures are distorted only by the intrinsic
curvature, while Wald procedures are subject to both. Moreover, in
evaluating numerous published applications of nonlinear least squares,
Bates and Watts found that the intrinsic curvature was never much worse
than the parameter-effects curvature and was usually at least an order
of magnitude smaller. See Bates and Watts (1988) Nonlinear Regression
Analysis and Its Applications (Wiley) or Seber and Wild (1989)
Nonlinear Regression (Wiley).
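In outline, the hand-rolled approach in point 2 might look something like
the sketch below; negLogLik() and start are placeholders for your own
negative log-likelihood function and starting values, and the usual
large-sample approximations are assumed.

## Sketch only: 'negLogLik' and 'start' are hypothetical placeholders.
fit <- optim(start, negLogLik, method = "BFGS", hessian = TRUE)

## When the minimized function is -log(likelihood), fit$hessian is the
## observed information matrix, so its inverse estimates the covariance
## matrix of the parameter estimates.
vcovHat <- solve(fit$hessian)

## If the hessian is (nearly) singular, its eigenvalues and eigenvectors
## help show which parameter combinations are poorly determined.
eigen(fit$hessian)$values

## Wald standard errors and approximate 95% confidence intervals.
se <- sqrt(diag(vcovHat))
ci <- cbind(lower = fit$par - 1.96 * se, upper = fit$par + 1.96 * se)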
Bottom line: I routinely use Wald procedures to compute confidence
intervals, because computing them by profiling log(likelihood ratio) is
usually more work than I have time for. However, for testing, when I
have the time, I use likelihood ratio procedures.
spencer graves
Peter Muhlberger wrote:
> On 12/29/05 1:35 PM, "Spencer Graves" <spencer.graves at pdf.com> wrote:
>
>
>> I think the question was appropriate for this list. If you want to
>> do a Wald test, you might consider asking "optim" for "hessian=TRUE".
>> If the function that "optim" minimizes is (-log(likelihood)), then the
>> optional component "hessian" of the output of optim should be the
>> observed information matrix. An inverse of that should then estimate
>> the parameter covariance matrix. I often use that when "nls" dies on
>> me, because "optim" will give me an answer. If the hessian is singular,
>> I can sometimes diagnose the problem by looking at eigenvalues and
>> eigenvectors of the hessian.
>
>
> Nifty, thanks again! Do you construct your own Wald tests out of
> matrices or use something packaged? Or do you just avoid Wald tests
> at all costs :) ?
>
> Peter
>
> ####################
> On 12/29/05 7:04 AM, "Spencer Graves" <spencer.graves at pdf.com> wrote:
>
>
> >> Why can't you use a likelihood ratio? I would write two slightly
> >> different functions, the second of which would use the linear
> >> constraint to eliminate one of the coefficients. Then I'd refer
> >> 2*log(likelihood ratio) to chi-square(1). If I had some question
> >> about the chi-square approximation to the distribution of that
> >> 2*log(likelihood ratio) statistic, I'd use some kind of Monte
> >> Carlo, e.g., MCMC.
> >>
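In R, that likelihood-ratio recipe might look roughly like the sketch
below. The names are placeholders, not real functions: negLogLikFull()
would be your own negative log-likelihood with all coefficients free,
negLogLikRestricted() substitutes the linear constraint to eliminate one
coefficient, and startFull/startRestricted are corresponding starting values.

## Sketch only: negLogLikFull, negLogLikRestricted, startFull and
## startRestricted are hypothetical placeholders for your own code.
fitFull       <- optim(startFull, negLogLikFull)
fitRestricted <- optim(startRestricted, negLogLikRestricted)

## optim() minimizes -log(likelihood), so 2*log(likelihood ratio) is twice
## the difference between the restricted and full minima.
lrStat <- 2 * (fitRestricted$value - fitFull$value)
pValue <- pchisq(lrStat, df = 1, lower.tail = FALSE)   # one linear constraint

## If the chi-square approximation is in doubt, the null distribution of
## lrStat could be checked by simulation, e.g., a parametric bootstrap.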
>
>
> Neat solution, thanks! I didn't see that, having focused my attention on
> finding some way to do a Wald test. I think I was so focused because I
> thought it would be good to have some way of testing hypotheses w/o having
> to rerun my model every time.
>
>
> >> If you'd like more help from this listserve, PLEASE do read the
> >> posting guide! "www.R-project.org/posting-guide.html". Anecdotal
> >> evidence suggests that posts that follow more closely the
> >> suggestions in that guide tend to get more useful replies quicker.
>
>
> Ok, I guess you're hinting that I'm violating the 'do your homework' norm.
> I'm not a statistician (I'm a social scientist) & was thinking about
> alternatives to the likelihood ratio test, so the self-evident solution you
> mention above didn't occur to me. I did spend a long time trying to figure
> out whether there were facilities for Wald tests and whether they might
> work w/ ML output. It wasn't clear what would work & it would have taken even
> more time to try some alternatives out, so I thought I'd just ask the
> list--surely people have tests they typically run after ML.
>
> In hindsight, I guess the question as asked was rather dumb, so my
> apologies. Perhaps I should have asked if anyone uses a built-in Wald
> function after ML? Or perhaps even that question is far too basic for a
> list composed of such capable people.
>
> Anyway, thanks for the insight!
>
> Peter
> #####################################################
>
>
> Peter Muhlberger wrote:
>
>> I'd like to be able to test linear hypotheses after setting up and
>> running a model using optim or perhaps nlm. One hypothesis I need to
>> test is that the average of several coefficients is less than zero, so
>> I don't believe I can use the likelihood ratio test.
>>
>> I can't seem to find a provision anywhere for testing linear
>> combinations of coefficients after maximum likelihood.
>>
>> Cheers & happy holidays,
>>
>> Peter
>>
>
>
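To connect back to the original question (whether the average of several
coefficients is less than zero), a one-sided Wald test of that linear
combination might look roughly like the sketch below; 'fit' and 'vcovHat'
are the hypothetical objects from the optim() sketch near the top of this
message, and 'idx' is a placeholder for whichever coefficients are averaged.

## Sketch only: 'fit' and 'vcovHat' come from the earlier optim() sketch;
## 'idx' (hypothetical) indexes the coefficients whose average is tested.
idx  <- 1:3
cvec <- rep(0, length(fit$par))
cvec[idx] <- 1 / length(idx)            # c'theta = average of the chosen coefficients

est <- sum(cvec * fit$par)              # estimated average
se  <- sqrt(drop(t(cvec) %*% vcovHat %*% cvec))

z <- est / se                           # Wald z statistic
pOneSided <- pnorm(z)                   # small p supports H1: average < 0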
--
Spencer Graves, PhD
Senior Development Engineer
PDF Solutions, Inc.
333 West San Carlos Street Suite 700
San Jose, CA 95110, USA
spencer.graves at pdf.com
www.pdf.com
Tel: 408-938-4420
Fax: 408-280-7915