This content is licensed under the Creative Commons Attribution/Share-Alike 3.0 (Unported) license. This means you may freely redistribute or modify this content under the same license conditions, and you must attribute the original author by placing a hyperlink to this work on your site. Also, please do not modify any references to the original work (if any) contained in this content.
The calculator below uses the linear least squares method for curve fitting, in other words, to approximate a one-variable function using regression analysis, just like the calculator Function approximation with regression analysis. But, unlike the previous calculator, this one can find an approximating function that is additionally constrained by particular points, which means that the computed curve-fit should pass through these particular points.
Lagrange multipliers are used to find the curve-fit in the presence of constraints. This imposes some limitations on the regression model used: namely, only linear regression models can be applied. That's why, unlike the above-mentioned calculator, this one does not include power and exponential regressions. However, it does include 4th- and 5th-order polynomial regressions. Formulas and a brief theory recap can be found below the calculator, as usual.
Note that if the x-values field is left empty, the calculator assumes that x changes starting from zero with a +1 increment.
Linear least squares (LLS)
Linear least squares (LLS) is the least squares approximation of linear functions to data. The method of least squares is a standard approach in regression analysis for approximating the solution of overdetermined systems (sets of equations in which there are more equations than unknowns) by minimizing the sum of the squares of the residuals of the individual equations.
You can find more information, including formulas, about the least squares approximation at Function approximation with regression analysis.
Here we deal with linear regression models, so the approximating function is a linear combination of parameters that need to be determined. The determined values should, of course, minimize the sum of the squares of the residuals.
Suppose we have a set of data points $(x_i, y_i),\ i = 1, \dots, n$.
Our approximating function is a linear combination of parameters to be determined, for example, a second-order polynomial:

$$f(x) = a_0 + a_1 x + a_2 x^2$$
We can use matrix notation to express the values of this function at the data points:

$$\begin{pmatrix} f(x_1) \\ f(x_2) \\ \vdots \\ f(x_n) \end{pmatrix} = \begin{pmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}$$
Or, in short notation:

$$f = Xa$$
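As a concrete sketch (using NumPy, with made-up sample x-values), the design matrix X for a quadratic model can be built like this:

```python
import numpy as np

# Hypothetical x-values of the data points
x = np.array([0.0, 1.0, 2.0, 3.0])

# Design matrix for f(x) = a0 + a1*x + a2*x^2:
# row i is (1, x_i, x_i^2)
X = np.vander(x, 3, increasing=True)
print(X.shape)  # (4, 3)
```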
Since we are using the least squares approximation, we should minimize the following function:

$$S(a) = \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2$$
or, in matrix form:

$$S(a) = \left\| y - Xa \right\|^2$$
This value is the squared distance between the vector y and the vector Xa. To minimize this distance, Xa should be the projection of y onto the column space of X, and the vector Xa − y should be orthogonal to that space.
This is the case when

$$(Xa - y)^T v = 0,$$

where v is an arbitrary vector in the column space of X. Since any such vector can be written as v = Xw with arbitrary w, the only way to satisfy the condition above is to have

$$X^T (Xa - y) = 0,$$

which yields

$$a = \left( X^T X \right)^{-1} X^T y$$
The calculator uses the formula above in the case of the unconstrained linear least squares method.
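As a minimal sketch of the unconstrained case (with made-up data), the formula a = (XᵀX)⁻¹Xᵀy can be evaluated directly with NumPy and cross-checked against NumPy's own least-squares routine:

```python
import numpy as np

# Hypothetical data points to fit with a quadratic model
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8])

# Design matrix: row i is (1, x_i, x_i^2)
X = np.vander(x, 3, increasing=True)

# Normal equations: (X^T X) a = X^T y
a = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check with NumPy's dedicated least-squares solver
a_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(a)
```

In practice `np.linalg.lstsq` (which uses an SVD-based approach) is preferred over forming XᵀX explicitly, since the normal equations worsen the conditioning of the problem.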
Now let's talk about constraints. These could be:
– curve-fit must pass through particular points (this is supported by the calculator)
– slope of the curve at particular points must be equal to particular values.
So, we need to find the approximating function which, on the one hand, should minimize the sum of the squares,
and, on the other hand, should satisfy the conditions

$$f(x_{c_j}) = y_{c_j}, \quad j = 1, \dots, k,$$

where the $(x_{c_j}, y_{c_j})$ are the given points the curve-fit must pass through,
or, in matrix form,

$$Ca = d$$
This is called a conditional extremum problem, and it is solved by constructing the Lagrangian using Lagrange multipliers.
In our case, the Lagrangian is

$$L(a, \lambda) = \left\| y - Xa \right\|^2 + \lambda^T (Ca - d)$$
and the task is to find its extremum. After some derivations, which I have not listed here, the formula for finding the parameters reduces to the linear system

$$\begin{pmatrix} 2X^T X & C^T \\ C & 0 \end{pmatrix} \begin{pmatrix} a \\ \lambda \end{pmatrix} = \begin{pmatrix} 2X^T y \\ d \end{pmatrix}$$
The calculator uses the formula above in the case of the constrained linear least squares method.
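A minimal sketch of the constrained case, assuming the constraints are collected into a matrix equation Ca = d. Here (with made-up data) a quadratic is forced through the point (0, 0), so C picks out a₀ and d is 0:

```python
import numpy as np

# Hypothetical data; the curve-fit must pass through (0, 0)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.3, 2.9, 7.2, 13.1, 20.8])

X = np.vander(x, 3, increasing=True)  # rows (1, x_i, x_i^2)
C = np.array([[1.0, 0.0, 0.0]])       # f(0) = a0 ...
d = np.array([0.0])                   # ... must equal 0

# Stationarity of the Lagrangian gives the linear system
# [2 X^T X  C^T] [a     ]   [2 X^T y]
# [C        0  ] [lambda] = [d      ]
k, m = C.shape
K = np.block([[2 * X.T @ X, C.T],
              [C, np.zeros((k, k))]])
rhs = np.concatenate([2 * X.T @ y, d])
a = np.linalg.solve(K, rhs)[:m]
print(a)  # a[0] is (numerically) zero, so the fit passes through (0, 0)
```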