[R] Non-Linear Regression (Cobb-Douglas and C.E.S)
James Wettenhall
wettenhall at wehi.edu.au
Mon Apr 19 05:14:17 CEST 2004
On Sun, 18 Apr 2004, Mohammad Ehsanul Karim wrote:
> concern (in this case there is no way to linearize it), the Cobb-Douglas
> being just a 'toy problem' to see how the non-linear process works. And I'm
> sorry that I cannot guess approximate parameter values for that CES
> using some "typical" Y, L, K data: that's why it is a problem (doing a grid
> search over an infinite parameter space is indeed time-consuming).
Mohammad,
OK, so you really do want to try nonlinear regression. That's
fine as long as you know that there are a lot more things that
can go wrong than with linear regression. Have you read the
references at the end of the help for nls? (I have to admit I
haven't yet.)
Do you know under what conditions your cost function will be
convex with respect to the parameters you are estimating? The
sum of convex functions is convex. So if every one of your
squared-error terms is convex then the sum will be.
Let's say you are minimizing this cost function:

    Sum_i ( Y_i - f(delta, beta, phi; L_i, K_i) )^2

where f is your C.E.S. function evaluated at each data point
(L_i, K_i).
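To make this concrete, here is a minimal R sketch of that cost
function, assuming one common CES parameterization,
Y = beta * (delta*K^(-phi) + (1-delta)*L^(-phi))^(-1/phi)
(your exact functional form may differ; Y, L and K are whatever
data vectors you have):

    ## CES output under the assumed parameterization
    ces <- function(beta, delta, phi, L, K) {
      beta * (delta * K^(-phi) + (1 - delta) * L^(-phi))^(-1/phi)
    }
    ## sum-of-squared-errors cost, with par = c(beta, delta, phi)
    sse <- function(par, Y, L, K) {
      sum((Y - ces(par[1], par[2], par[3], L, K))^2)
    }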
Can you calculate the Hessian matrix (second derivative matrix)
of the cost function with respect to the parameters, and see
under what conditions it is positive definite? (i.e. under
what conditions is your cost function convex?)
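If deriving the Hessian by hand is painful, you can at least
inspect it numerically: optim() will return a finite-difference
Hessian at its solution. Reusing the sse sketch above:

    ## starting values c(1, 0.5, 1) are made up for illustration
    fit <- optim(c(1, 0.5, 1), sse, Y = Y, L = L, K = K,
                 method = "BFGS", hessian = TRUE)
    eigen(fit$hessian, symmetric = TRUE)$values  # all > 0 => locally convex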
A non-convex cost function is one possible reason why a
nonlinear optimization routine may have trouble converging.
There are some fiddles you can apply if you don't have convexity,
but they don't always work. For example, Newton descent uses the
Hessian to calculate a descent step, and BFGS uses an approximate
inverse Hessian. If the Hessian is not positive definite, you can
cheat by forcing it to be positive definite with a modified
Cholesky factorization, e.g. that of Schnabel and Eskow. This
should guarantee a descent direction.
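Schnabel-Eskow itself is more involved, but a crude stand-in that
conveys the idea is to floor the eigenvalues of the Hessian at a
small positive value before computing the Newton step:

    ## force H to be positive definite by flooring its eigenvalues
    ## (a crude substitute for a modified Cholesky factorization)
    make_pd <- function(H, eps = 1e-8) {
      e <- eigen(H, symmetric = TRUE)
      e$vectors %*% diag(pmax(e$values, eps), nrow(H)) %*% t(e$vectors)
    }
    ## Newton step with the modified Hessian: solve(make_pd(H), -grad)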
Sometimes you don't need the Hessian to be positive definite in
all directions, only within the subspace dictated by the
constraints. For example, the negative(*) Cobb-Douglas utility
function (for two commodities) is not convex over all of R^2, but
it is convex on the subspace where the budget constraint holds
with equality. I'm not talking about regression here, just
"max utility subject to budget constraint".
(*) Negative, because instead of maximizing utility and talking
about concavity, I'd rather minimize negative utility and talk
about convexity.
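As a made-up numeric instance (u(x1,x2) = x1*x2 with arbitrary
prices): the Hessian of -u is indefinite over all of R^2, yet the
quadratic form is positive along any direction lying in the
budget subspace:

    H <- matrix(c(0, -1, -1, 0), 2, 2)  # Hessian of -x1*x2
    eigen(H)$values                     # 1 and -1: indefinite on R^2
    p <- c(1, 2)                        # illustrative prices
    d <- c(p[2], -p[1])                 # direction with sum(p * d) == 0
    t(d) %*% H %*% d                    # 2*p1*p2 > 0: convex on the subspace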
Consider a second-order Taylor approximation about a candidate
minimizing solution x*, where x is the vector of parameters and
g is the cost function (I already used 'f' above for the C.E.S.
function). If there's any chance of standard nonlinear
optimization working, we hope that the gradient g'(x*) is zero.
    g(x) = g(x*) + g'(x*)^T (x - x*)
           + (1/2)(x - x*)^T g''(x*) (x - x*)

(where ^T means transpose)
So with g'(x*) = 0, we have

    g(x) - g(x*) = (1/2)(x - x*)^T g''(x*) (x - x*)
So if we move in a direction x - x* away from the optimal point
x*, we want this quantity to always be positive (strictly convex,
positive definite Hessian), or at least never negative (positive
semi-definite Hessian). Here g''(x*) is the Hessian evaluated at
the optimal solution.
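You can sanity-check this quadratic approximation numerically,
reusing sse and fit from the sketches above (the displacement d
is small and arbitrary):

    x.star <- fit$par
    d <- c(1e-3, -1e-3, 1e-3)                # small step away from x*
    lhs <- sse(x.star + d, Y, L, K) - sse(x.star, Y, L, K)
    rhs <- 0.5 * t(d) %*% fit$hessian %*% d  # (1/2)(x-x*)^T g''(x*) (x-x*)
    c(lhs, rhs)                              # close, and positive if H is p.d.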
There are many ways to test positive definiteness. If all the
eigenvalues are positive, the matrix is positive definite. There
are also ways to test positive definiteness over the subspace
dictated by your constraints, e.g. via a bordered Hessian.
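In R, quick tests are an eigenvalue check and an attempted
Cholesky factorization (chol() fails unless the matrix is
positive definite); for the constrained case you can build the
bordered Hessian directly. Reusing H and p from the Cobb-Douglas
sketch above:

    all(eigen(H, symmetric = TRUE)$values > 0)           # FALSE here
    !inherits(try(chol(H), silent = TRUE), "try-error")  # FALSE: not p.d.
    ## bordered Hessian for one linear constraint p'x = m; with two
    ## variables and one constraint, det < 0 means positive definite
    ## on the constraint subspace
    BH <- rbind(c(0, p), cbind(p, H))
    det(BH)                                              # -4 < 0 here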
Good luck,
James