Question: LinearFit "variancecovariance" matrix calculation method?

Dear all,

I am trying to reproduce by hand the variance-covariance matrix (and standard error vector) from a simple LinearFit of data to a second-order model of the form a + bx + cx^2, using the inverse of the curvature matrix as described in Chapter 15, "Modeling of Data", of Numerical Recipes, 3rd Edition, and in the paper by Keith H. Burrell, "Error analysis for parameters determined in nonlinear least-squares fits", American Journal of Physics, Vol. 58, No. 2, 160-164 (1990). Unfortunately, I cannot get the same answer, even for simple unweighted examples.
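For concreteness, the kind of call I am comparing against looks like the following sketch (the data are placeholders of my own invention, not the help-page values, and I am assuming the output names variancecovariancematrix and standarderrors are the relevant ones):

    with(Statistics):
    # Placeholder data: a rough quadratic trend.
    X := Vector([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], datatype = float):
    Y := Vector([2.1, 5.9, 12.2, 19.8, 30.1, 42.0], datatype = float):
    LinearFit([1, x, x^2], X, Y, x,
              output = [parametervalues, variancecovariancematrix, standarderrors]);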

The Maple help manual gives a complete example on the page "Statistics[LinearFit] - fit a linear model function to data". By hand, I calculate the curvature matrix as follows (LaTeX notation):

\alpha_{kj} = \sum_{i=0}^{N-1} \frac{X_j(x_i) X_k(x_i)}{\sigma_i^2}

where there are N data points, x is the independent variable, and the sigma_i are the standard errors of the dependent y values (the dependent values themselves do not appear in the equation above). For the example, I set the sigma_i to unity. The basis functions X_j (and X_k) are 1, x, and x^2 for my quadratic fit.
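Written out for this basis, the curvature matrix is then the 3x3 matrix of weighted power sums:

\alpha = \begin{pmatrix}
\sum_i 1/\sigma_i^2 & \sum_i x_i/\sigma_i^2 & \sum_i x_i^2/\sigma_i^2 \\
\sum_i x_i/\sigma_i^2 & \sum_i x_i^2/\sigma_i^2 & \sum_i x_i^3/\sigma_i^2 \\
\sum_i x_i^2/\sigma_i^2 & \sum_i x_i^3/\sigma_i^2 & \sum_i x_i^4/\sigma_i^2
\end{pmatrix}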

I then construct the curvature matrix from these terms and invert it to obtain the covariance matrix, which I expect to match what Maple reports for such a fit as the "variancecovariance matrix"; the square roots of the diagonal entries should then give the "standard error vector". However, my results, whether for the simple Maple example or for an example with weights, do not match.
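Concretely, my hand calculation corresponds to the following sketch (placeholder data again; the names alpha, C, and stderrs are just my own):

    with(LinearAlgebra):
    xdata := [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:     # placeholder x values
    sigma := [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]:     # unit standard errors (unweighted case)
    basis := [x -> 1, x -> x, x -> x^2]:         # basis functions X_j
    N := nops(xdata):  p := nops(basis):
    # curvature matrix: alpha[k,j] = sum_i X_j(x_i)*X_k(x_i)/sigma_i^2
    alpha := Matrix(p, p, (k, j) ->
        add(basis[j](xdata[i])*basis[k](xdata[i])/sigma[i]^2, i = 1 .. N)):
    C := MatrixInverse(alpha):                   # my candidate covariance matrix
    stderrs := [seq(sqrt(C[j, j]), j = 1 .. p)]; # my candidate standard errors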

I suspect that I have either misunderstood the hand-calculation method entirely, or else misunderstood what Maple is computing and presenting in its output.

Any tips would be very much appreciated.

Best regards,

  Gernot Hassenpflug, NICT, Tokyo
