[R] Confidence interval calculation for intersection of two quadratic lines
Leif Kirschenbaum
Leif.Kirschenbaum at grandisinc.com
Fri Jun 18 20:19:24 CEST 2010
How do I calculate the confidence interval for the value x given by the intersection of two quadratics (i.e. parabolas)?
I fit two quadratics of the form:
y = C1 + B1*x + A1*x^2
y = C2 + B2*x + A2*x^2
to two sets of N1 and N2 points, respectively.
I test whether they intersect; if they do, I calculate the roots of:
0 = (C1 - C2) + (B1 - B2)*x + (A1 - A2)*x^2
to determine where they intersect and choose the proper root.
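For concreteness, here is a minimal R sketch of those steps (set1 and set2 are hypothetical data frames holding the N1 and N2 points, each with columns x and y):

# Fit the two quadratics (set1 holds the N1 points, set2 the N2 points)
fit1 <- lm(y ~ x + I(x^2), data = set1)
fit2 <- lm(y ~ x + I(x^2), data = set2)

# Coefficient differences for 0 = dC + dB*x + dA*x^2
d  <- coef(fit1) - coef(fit2)
dC <- d[1]; dB <- d[2]; dA <- d[3]

disc <- dB^2 - 4 * dA * dC        # curves intersect where disc >= 0
if (disc >= 0) {
  roots <- (-dB + c(-1, 1) * sqrt(disc)) / (2 * dA)
  # pick whichever root lies in the region of interest (e.g. the data range)
}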
I have propagated errors through the quadratic equation so that I may calculate the variance of each root.
Naively I might take the variance of the root, find a Student-t quantile for degrees of freedom = (N1 + N2 - 6), and then calculate
C.I. = root +/- t-quantile * sqrt(variance)
i.e. with the half-width equal to the t-quantile times the standard error (the square root of the variance) of the root.
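A minimal sketch of that naive calculation via the delta method, assuming fit1 and fit2 from the sketch above, independent data sets, and N1 and N2 already defined as the point counts:

# Delta-method variance of the chosen root, treating the two fits as independent
root_fun <- function(theta) {            # theta = (C1, B1, A1, C2, B2, A2)
  dC <- theta[1] - theta[4]
  dB <- theta[2] - theta[5]
  dA <- theta[3] - theta[6]
  (-dB + sqrt(dB^2 - 4 * dA * dC)) / (2 * dA)   # the "+" root; choose as appropriate
}
theta <- c(coef(fit1), coef(fit2))
eps <- 1e-6
g <- sapply(1:6, function(i) {           # forward-difference gradient
  th <- theta; th[i] <- th[i] + eps
  (root_fun(th) - root_fun(theta)) / eps
})
V <- matrix(0, 6, 6)                     # block-diagonal joint covariance
V[1:3, 1:3] <- vcov(fit1)
V[4:6, 4:6] <- vcov(fit2)
se_root <- sqrt(drop(t(g) %*% V %*% g))
ci <- root_fun(theta) + c(-1, 1) * qt(0.975, df = N1 + N2 - 6) * se_root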
However I have several reservations about this approach:
(1) Suppose one quadratic fit devolves to a linear fit (A1 ~ 0)*, so that I have the intersection of a straight line and a quadratic. (I code this situation separately, with different error propagation formulas.) Intuitively, if the intercept of the straight line varies by some dy, the intersection with the parabola moves by a different amount when the line shifts up than when it shifts down, so the confidence interval should be asymmetric. (A toy numeric illustration appears after these reservations.)
(2) The variances of the coefficients A1, B1, C1 depend on the N1 data points (and those of A2, B2, C2 on the N2 points), so intuitively it does not seem accurate to use a Student-t quantile derived from (N1 + N2). Another way to think of it is to imagine constructing a single linear model:
# Predictors split at x = 0: one linear and one quadratic term per branch
x.1  <- x * (x < 0)      # linear term for the x < 0 branch
x.2  <- x * (x >= 0)     # linear term for the x >= 0 branch
x2   <- x^2
x2.1 <- x2 * (x < 0)     # quadratic term for the x < 0 branch
x2.2 <- x2 * (x >= 0)    # quadratic term for the x >= 0 branch
mylm <- lm(y ~ x.1 + x.2 + x2.1 + x2.2)
Here some coefficients depend on the N1 data points (and their noise) while other coefficients depend on the N2 data points (and their noise). (And I don't think lm would give me an answer with such bifurcated degrees of freedom.)
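Coming back to reservation (1), here is the toy numeric illustration mentioned above: intersecting y = x^2 with a horizontal line at height c0 and shifting the line by +/- dy (values hypothetical) moves the intersection by unequal amounts:

c0 <- 1; dy <- 0.2
x0   <- sqrt(c0)        # intersection of y = x^2 with the line y = c0
x_up <- sqrt(c0 + dy)   # line shifted up by dy
x_dn <- sqrt(c0 - dy)   # line shifted down by dy
c(up = x_up - x0, down = x0 - x_dn)
# up ~ 0.095, down ~ 0.106: unequal shifts, so the interval around x0 should be asymmetric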
Sincerely,
Leif S. Kirschenbaum, Ph.D., PMP, CRE
* In reality I will perform both quadratic and linear fits on each point set: I compare the MSEs of the quadratic vs. linear fits on the N1 points to determine the best fit for that set, and do the same for the N2 points. The situation is that I am measuring many instances of an object (~16,000 devices per wafer) whose behavior is not well modelled at room temperature. From inspection of ~4,000 plots of device data I see that there is sometimes a linear dependence and sometimes a dependence with curvature adequately represented by a quadratic fit.
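A minimal sketch of how that per-set comparison might look (set1 as in the earlier sketch; whether to penalize the quadratic's extra parameter, e.g. with an F-test, is a separate decision):

# Compare linear vs. quadratic fits on one point set by residual MSE
fit_lin  <- lm(y ~ x,          data = set1)
fit_quad <- lm(y ~ x + I(x^2), data = set1)
mse <- function(fit) sum(residuals(fit)^2) / df.residual(fit)
best_fit <- if (mse(fit_quad) < mse(fit_lin)) fit_quad else fit_lin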
The zero-temperature device behavior is given by ((y - yoffset)/yscale)^(2/3) + ((x - xcenter)/xscale)^(2/3) = 1, which is darn hard to fit with linear models even for perfect devices, which we do not have. The room-temperature device behavior is so poorly understood that it is only simulated with finite-element models, which do not give me a neat analytic expression such as p^(2/3) + q^(2/3) = 1, and in any event the plots rarely look anything like x^(2/3) behavior.
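For what it's worth, that zero-temperature curve is a scaled, shifted astroid, so it can at least be visualized parametrically (parameter values below are placeholders):

# x = xcenter + xscale*cos(t)^3, y = yoffset + yscale*sin(t)^3 satisfies
# ((y - yoffset)/yscale)^(2/3) + ((x - xcenter)/xscale)^(2/3) = 1 in the first quadrant
t <- seq(0, pi / 2, length.out = 200)
xcenter <- 0; xscale <- 1; yoffset <- 0; yscale <- 1
plot(xcenter + xscale * cos(t)^3, yoffset + yscale * sin(t)^3,
     type = "l", xlab = "x", ylab = "y")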
Background:
I am developing a test system in LabVIEW; however, I prototype the data analysis offline in R: I acquire raw data on the test system, import it into R, try analysis methods, then port the methods back to the test system.
Therefore any R functions more sophisticated than lm will be of limited use; LabVIEW does not have very sophisticated statistical functions (thank goodness it includes the inverse-t). I have already implemented propagation of quadratic-fit variances and written my own linear-fit confidence interval calculations, and now I am extending this to quadratics and to this intersection problem.
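For example, a from-scratch linear-fit confidence interval of the kind described needs only running sums and the inverse-t, which ports readily to LabVIEW (x, y, and the evaluation point x0 below are placeholders):

n   <- length(x)
Sx  <- sum(x);   Sy  <- sum(y)
Sxx <- sum(x^2); Sxy <- sum(x * y)
b <- (n * Sxy - Sx * Sy) / (n * Sxx - Sx^2)   # slope
a <- (Sy - b * Sx) / n                        # intercept
s2 <- sum((y - a - b * x)^2) / (n - 2)        # residual variance
Sxx.c <- Sxx - Sx^2 / n                       # centered sum of squares
se <- sqrt(s2 * (1 / n + (x0 - Sx / n)^2 / Sxx.c))
ci <- (a + b * x0) + c(-1, 1) * qt(0.975, n - 2) * se   # CI for the fitted mean at x0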