Curve fitting is the practical way of converting data into a best-fit function that can be carried forward into other calculations. The candidate function is treated as a special case of a datum that varies with an independent variable, and what is 'best' is assessed by minimizing the deviation of the data from that local datum. For example, the linear least squares method solves for the best intercept and best slope from the components of the total covariance matrix. However, the uncertainty analysis behind these formulas has relied on the assumption that errors are normally distributed, so that deviations alias as errors for the intercept and slope. This could be addressed by converting the existing numbers and formulas to duals arithmetic. Additionally, the 'degrees of freedom' statement that determines the standard deviation of the fit may prompt a re-derivation of the theory using duals arithmetic. The goal of this project is to obtain new formulas for calculating the best-fit slope and intercept in the presence of errors, and to identify procedures that generalize to higher-dimensional fitting functions such as polynomials.
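To make the duals-arithmetic idea concrete, here is a minimal sketch (class and field names are illustrative, not the project's actual implementation) of a dual number carrying a nominal value and a first-order error part, with the defining rule eps² = 0 so that ordinary arithmetic propagates errors automatically:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # nominal value
    err: float   # first-order error part (coefficient of eps)

    def __add__(self, other):
        return Dual(self.val + other.val, self.err + other.err)

    def __sub__(self, other):
        return Dual(self.val - other.val, self.err - other.err)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.val * other.val,
                    self.val * other.err + self.err * other.val)

x = Dual(2.0, 0.1)   # a datum 2.0 with error 0.1
y = Dual(3.0, 0.2)
p = x * y
print(p.val, p.err)  # value 6.0; error 2.0*0.2 + 0.1*3.0 = 0.7
```

Because the error part rides along through every operation, any formula built from such duals (e.g. the least-squares slope and intercept) emerges with its error already attached.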

The project has progressed from casting each instance as a dual of a scaled point and an error vector, to casting a linear-function datum as a dual, to formulating deviation-of-fit duals and then using matrix-of-duals functions to calculate total covariances. A key ingredient of the work has been establishing a matrix-of-duals toolbox that provides the important operations 'with error', such as matrix inversion. The direct route was to embed duals arithmetic in matrix-determinant calculations; this works but may not be the most efficient approach in all cases. Nevertheless, the matrix-of-duals format may provide a doorway to procedures for higher-dimensional fitting functions.
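The determinant-based inversion 'with error' can be sketched for the 2×2 case as follows (a sketch under assumed conventions: each dual is a `(value, error)` pair, and the function names are illustrative, not the project's toolbox API):

```python
def dmul(a, b):
    # dual multiply: eps**2 = 0, so errors combine by the product rule
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def dsub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def dneg(a):
    return (-a[0], -a[1])

def ddiv(a, b):
    # dual divide: value a0/b0, error from the first-order quotient rule
    v = a[0] / b[0]
    return (v, (a[1] - v * b[1]) / b[0])

def inv2(M):
    """Invert a 2x2 matrix [[a, b], [c, d]] of duals via det = ad - bc."""
    (a, b), (c, d) = M
    det = dsub(dmul(a, d), dmul(b, c))
    return [[ddiv(d, det),       ddiv(dneg(b), det)],
            [ddiv(dneg(c), det), ddiv(a, det)]]

# a symmetric matrix whose diagonal entries carry small errors
M = [[(2.0, 0.01), (1.0, 0.0)],
     [(1.0, 0.0),  (3.0, 0.02)]]
Minv = inv2(M)
print(Minv[0][0])  # both the value and its propagated error
```

Each entry of the inverse arrives as a dual, so the covariance-matrix components feeding the slope and intercept formulas already carry their accumulated errors.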

The quality of the linear fit can now be judged according to errors accumulated from each of the N instances of the (x, y) source data (2N error sources). This format dovetails with verification of the resulting linear fit by providing duals for the intercept and slope. For example, a chosen independent-variable value x can transmit its own error, so that the corresponding evaluation of y from the best fit carries (2N + 1) error contributions.
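The bookkeeping of separate error sources can be sketched by giving each dual an error vector with one slot per independent source (names and the particular slot assignments below are illustrative assumptions, not the project's actual formulas):

```python
import numpy as np

class VDual:
    """A dual whose error part is a vector: one component per error source."""
    def __init__(self, val, err):
        self.val = float(val)
        self.err = np.asarray(err, dtype=float)

    def __add__(self, o):
        return VDual(self.val + o.val, self.err + o.err)

    def __mul__(self, o):
        # first-order product rule; eps_i * eps_j cross terms are dropped
        return VDual(self.val * o.val, self.val * o.err + o.val * self.err)

N = 3                      # number of (x, y) data points
n_sources = 2 * N + 1      # 2N source errors, plus 1 for the chosen x

# Illustrative intercept and slope duals, each sensitive to some sources
a = VDual(1.0, np.zeros(n_sources)); a.err[:N] = 0.05
b = VDual(0.5, np.zeros(n_sources)); b.err[N:2*N] = 0.02
# The chosen independent-variable value transmits its own error
x = VDual(2.0, np.zeros(n_sources)); x.err[2*N] = 0.1

y = a + b * x
print(y.val)                     # best-fit evaluation at x
print(np.count_nonzero(y.err))   # all 2N + 1 slots contribute
```

Keeping the sources in separate slots (rather than lumping them into one scalar) is what lets the evaluated y exhibit its full (2N + 1) error contributions individually.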