Introduction to Least-Squares Fitting
A regression model relates response data to predictor data with one or more coefficients. A fitting method is an algorithm that calculates the model coefficients given a set of input data. Curve Fitting Toolbox™ uses least-squares fitting methods to estimate the coefficients of a regression model.
Curve Fitting Toolbox supports the following least-squares fitting methods:
Linear least-squares
Weighted least-squares
Robust least-squares
Nonlinear least-squares
The type of regression model and the properties of the input data determine which least-squares method is most appropriate for estimating model coefficients.
Calculating Residuals
A residual for a data point is the difference between the value of the observed response and the response estimate returned by the fitted model. The formula for calculating the vector of estimated responses is

$$\hat{y} = f(X, b)$$

where
ŷ is an n-by-1 vector of response estimates.
f is the general form of the regression model.
X is an n-by-m design matrix.
b is an m-by-1 vector of fitted model coefficients.
A least-squares fitting method calculates model coefficients that minimize the sum of squared errors (SSE), which is also called the residual sum of squares. Given a set of n data points, the residual ri for the ith data point is calculated with the formula

$$r_i = y_i - \hat{y}_i$$

where yi is the ith observed response value and ŷi is the ith fitted response value. The SSE is given by

$$\mathrm{SSE} = \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$$
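For illustration, the following sketch computes the residuals and the SSE for a simple polynomial fit. The data here is simulated for the example; the fit function is part of Curve Fitting Toolbox.

    x = (1:10)';                        % predictor data
    y = 2*x + 1 + randn(10,1);          % simulated noisy response data
    fittedModel = fit(x, y, 'poly1');   % first-degree polynomial fit
    yHat = fittedModel(x);              % fitted response values
    r = y - yHat;                       % residuals
    sse = sum(r.^2);                    % sum of squared errors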
Error Assumptions
The difference between the observed and true values for a data point is called the error. Because it cannot be observed directly, the error for a data point is approximated by the data point's residual.
Data fitting techniques typically make two important assumptions about the error in data containing random variations:
The error exists only in the response data, and not in the predictor data.
The errors are random and follow a normal distribution with zero mean and constant variance.
Data fitting techniques assume that errors are normally distributed because the normal distribution often provides an adequate approximation to the distribution of many measured quantities. Although the least-squares fitting method does not require normally distributed errors to calculate parameter estimates, the method works best for data that does not contain a large number of random errors with extreme values. The normal distribution is one of the probability distributions in which extreme random errors are uncommon. However, statistical results such as confidence and prediction bounds do require normally distributed errors for their validity.
If the mean of the residuals is nonzero, check whether the residuals are influenced by the choice of model or predictor variables. For fitting methods other than weighted least squares, Curve Fitting Toolbox additionally assumes that the errors have constant variance across the values of the predictor variables. Residuals that do not have constant variance indicate that the fit might be influenced by poor-quality data.
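As a quick diagnostic for these assumptions, you can inspect the residuals directly. This is a minimal sketch, assuming the x, y, and fittedModel variables from the earlier example:

    r = y - fittedModel(x);   % residuals approximate the unobservable errors
    mean(r)                   % should be close to zero
    plot(x, r, 'o')           % plot residuals against the predictor
    yline(0)                  % look for trends or nonconstant spread about zero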
Linear Least Squares
Curve Fitting Toolbox uses the linear least-squares method to fit a linear model to data. A linear model is defined as an equation that is linear in its coefficients. Use the linear least-squares fitting method when the data contains few extreme values, and the variance of the error is constant across predictor variables.
A linear model of degree m – 1 has the matrix form

$$y = X\beta + \varepsilon$$

where
y is an n-by-1 vector of response data.
β is an m-by-1 vector of unknown coefficients.
X is an n-by-m design matrix containing m – 1 predictor columns. Each predictor variable corresponds to a column in X. The last column in X is a column of ones representing the model's constant term.
ε is an n-by-1 vector of unknown errors.
For example, a first-degree polynomial of the form

$$y = p_1 x + p_2$$

is given by

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}$$
You cannot calculate β directly because ε is unknown. The linear least-squares fitting method approximates β by calculating a vector of coefficients b that minimizes the SSE. Curve Fitting Toolbox calculates b by solving a system of equations called the normal equations. The normal equations are given by the formula

$$\left( X^{T} X \right) b = X^{T} y$$

where Xᵀ is the transpose of the matrix X. The formula for b is then

$$b = \left( X^{T} X \right)^{-1} X^{T} y$$
To solve the system of simultaneous linear equations for the unknown coefficients, use the MATLAB® backslash operator (\). Because inverting XᵀX can lead to unacceptable rounding errors, the backslash operator instead uses QR decomposition with pivoting, which is a numerically stable algorithm. See the MATLAB documentation for more information about the backslash operator and QR decomposition. To calculate the vector of fitted response values ŷ, substitute b into the model formula:

$$\hat{y} = Xb$$
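The following sketch carries out this computation for a first-degree polynomial, building the design matrix explicitly and solving with the backslash operator rather than inverting XᵀX. The data is simulated for the example.

    x = (1:10)';                % predictor data
    y = 2*x + 1 + randn(10,1);  % simulated noisy response data
    X = [x ones(size(x))];      % design matrix: predictor column, then constant term
    b = X \ y;                  % least-squares solution via QR decomposition
    yHat = X * b;               % fitted response values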
For an example of fitting a polynomial model using the linear least-squares fitting method, see the Curve Fitting Toolbox documentation.
Weighted Least Squares
If the response data error does not have constant variance across the values of the predictor data, the fit can be influenced by poor-quality data. The weighted least-squares fitting method uses scaling factors called weights to control the influence of each response value on the calculation of the model coefficients. Use the weighted least-squares fitting method if the weights are known, or if the weights follow a particular form.
The weighted least-squares fitting method introduces weights into the formula for the SSE, which becomes

$$\mathrm{SSE} = \sum_{i=1}^{n} w_i \left( y_i - \hat{y}_i \right)^2$$

where wi are the weights. The weights you supply should transform the response variances to a constant value. If you know the variances σi² of the measurement errors in your data, then the weights are given by

$$w_i = \frac{1}{\sigma_i^2}$$

Alternatively, you can use the residuals to estimate the variances of the measurement errors.
The weighted formula for the SSE yields the following formula for b:

$$b = \left( X^{T} W X \right)^{-1} X^{T} W y$$

where W is a diagonal matrix such that Wii = wi.
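The following sketch shows both routes to a weighted fit: the matrix formula above, and the Weights fit option. It assumes the x, y, and X variables from the earlier sketch, and the per-point error variances here are assumed values chosen for illustration.

    sigma2 = linspace(0.5, 2, numel(x))';  % assumed measurement-error variances
    w = 1 ./ sigma2;                       % weights transform the variances to a constant
    W = diag(w);
    bWeighted = (X'*W*X) \ (X'*W*y);       % weighted least-squares coefficients
    % Equivalent fit using the toolbox:
    weightedModel = fit(x, y, 'poly1', 'Weights', w);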
For an example of fitting a polynomial model using the weighted least-squares fitting method, see the Curve Fitting Toolbox documentation.
Robust Least Squares
Extreme values in the response data are called outliers. Linear least-squares fitting is sensitive to outliers because squaring the residuals magnifies the effects of these data points in the SSE calculation. Use the robust least-squares fitting method if your data contains outliers.
Curve Fitting Toolbox provides the following robust least-squares fitting methods:
Least absolute residuals (LAR) — This method finds a curve that minimizes the absolute residuals rather than the squared differences. Therefore, extreme values have less influence on the fit.
Bisquare weights — This method minimizes a weighted sum of squares, where the weight given to each data point depends on how far the point is from the fitted curve. Points near the fitted curve get full weight. Points farther from the curve get reduced weight. Points that are farther from the curve than expected by random chance get zero weight.
The bisquare weights method is often preferred over LAR because it simultaneously seeks a curve that fits the bulk of the data using the least-squares approach while minimizing the effect of outliers.
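You select either method through the Robust fit option, as in this minimal sketch (assuming the x and y data from the earlier examples):

    larModel  = fit(x, y, 'poly1', 'Robust', 'LAR');      % least absolute residuals
    bisqModel = fit(x, y, 'poly1', 'Robust', 'Bisquare'); % bisquare weights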
Robust bisquare weights fitting uses the iteratively reweighted least-squares algorithm, which follows these steps (a simplified sketch of the procedure appears after the list):
Fit the model by weighted least squares. For the first iteration, the algorithm uses weights equal to one unless you specify the weights.
Calculate the adjusted residuals and standardize them. The adjusted residuals are given by

$$r_{\mathrm{adj},i} = \frac{r_i}{\sqrt{1 - h_i}}$$

where hi are leverages that down-weight high-leverage data points, which have a large effect on the least-squares fit. The standardized adjusted residuals are given by

$$u_i = \frac{r_{\mathrm{adj},i}}{K s}$$

where K = 4.685 is a tuning constant, and s is the robust standard deviation given by dividing the median absolute deviation (MAD) of the residuals by 0.6745.
Calculate the robust weights as a function of u. The bisquare weights are given by

$$w_i = \begin{cases} \left( 1 - u_i^2 \right)^2 & \left| u_i \right| < 1 \\ 0 & \left| u_i \right| \geq 1 \end{cases}$$

If the fit converges, exit the iteration process. Otherwise, perform the next iteration of the bisquare weights fitting method by returning to step 1.
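The following sketch illustrates these steps with plain matrix operations for a first-degree polynomial. It is a simplified illustration of the algorithm, not the toolbox's internal implementation, and the data, iteration limit, and convergence tolerance are assumptions made for the example.

    x = (1:20)';
    y = 2*x + 1 + randn(20,1);
    y(5) = 40;                                 % inject an outlier
    X = [x ones(size(x))];
    h = diag(X * ((X'*X) \ X'));               % leverages of the data points
    K = 4.685;                                 % tuning constant
    w = ones(size(y));                         % step 1: initial weights equal to one
    for iter = 1:50
        W = diag(w);
        b = (X'*W*X) \ (X'*W*y);               % step 1: weighted least-squares fit
        r = y - X*b;                           % residuals
        rAdj = r ./ sqrt(1 - h);               % step 2: adjusted residuals
        s = median(abs(r - median(r)))/0.6745; % robust standard deviation from the MAD
        u = rAdj / (K*s);                      % step 2: standardized adjusted residuals
        wNew = (abs(u) < 1) .* (1 - u.^2).^2;  % step 3: bisquare weights
        if max(abs(wNew - w)) < 1e-6           % step 4: check convergence
            break
        end
        w = wNew;
    end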
Instead of minimizing the effects of outliers by using robust least-squares fitting, you can mark data points to be excluded from the fit. See Remove Outliers for more information.
For an example of fitting a polynomial model using the robust least-squares fitting method, see the Curve Fitting Toolbox documentation.
Nonlinear Least Squares
Curve Fitting Toolbox uses the nonlinear least-squares method to fit a nonlinear model to data. A nonlinear model is defined as an equation that is nonlinear in its coefficients, or has a combination of linear and nonlinear coefficients. For example, exponential, Fourier, and Gaussian models are nonlinear.
A nonlinear model has the matrix form

$$y = f(X, \beta) + \varepsilon$$

where
y is an n-by-1 vector of response data.
β is an m-by-1 vector of coefficients.
X is the n-by-m design matrix.
f is a nonlinear function of β and X.
ε is an n-by-1 vector of unknown errors.
In a nonlinear model, unlike a linear model, the approximate coefficients b cannot be calculated using matrix techniques. Curve Fitting Toolbox uses the following iterative approach to calculate the coefficients:
Initialize the coefficient values. For some nonlinear models, the toolbox provides a heuristic approach for calculating initial values. For other models, the coefficients are initialized with random values in the interval [0,1].
Calculate the fitted curve for the current set of coefficients. The fitted response value is given by ŷ = f(X, b) and is calculated using the Jacobian of f(X, b). The Jacobian of f(X, b) is defined as a matrix of partial derivatives taken with respect to the coefficients in b.
Adjust the coefficients using one of these nonlinear least-squares algorithms:
Trust-region — This algorithm is the default. You must use the trust-region algorithm if you specify coefficient constraints. The trust-region algorithm can solve difficult nonlinear problems more efficiently than other algorithms and is an improvement over the popular Levenberg-Marquardt algorithm.
Levenberg-Marquardt — If the trust-region algorithm does not produce a reasonable fit, and you do not have coefficient constraints, use the Levenberg-Marquardt algorithm.
If the fit satisfies the specified convergence criteria, exit the iteration. Otherwise, return to step 2.
Curve Fitting Toolbox supports the use of weights and robust fitting to calculate the SSE for nonlinear models.
The accuracy of a nonlinear model's predictions depends on the type of model, the convergence criteria, the data set, and the initial values assigned to the coefficients. If the default options do not yield a reasonable fit, experiment with different starting values for the model coefficients, nonlinear least-squares algorithms, and convergence criteria. In general, begin by modifying the coefficient starting values, because nonlinear model fits are particularly sensitive to them. See the Curve Fitting Toolbox documentation for more information about modifying the default fit options.
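For example, this sketch fits a single-term exponential model and supplies explicit starting values through the StartPoint option. The data and starting values are assumptions made for the example; the exp1 library model has the form a*exp(b*x).

    x = (0:0.1:2)';
    y = 3*exp(-1.5*x) + 0.05*randn(size(x));             % simulated exponential data
    expModel = fit(x, y, 'exp1', 'StartPoint', [3 -1.5]) % starting values for a and b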
For an example of fitting an exponential model using the nonlinear least-squares fitting method, see the Curve Fitting Toolbox documentation.
References
[1] DuMouchel, W. H., and F. L. O'Brien. “Integrating a Robust Option into a Multiple Regression Computing Environment.” Computer Science and Statistics: Proceedings of the 21st Symposium on the Interface. Alexandria, VA: American Statistical Association, 1989.
[2] Holland, P. W., and R. E. Welsch. “Robust Regression Using Iteratively Reweighted Least-Squares.” Communications in Statistics: Theory and Methods, A6, 1977, pp. 813–827.
See Also
Functions
fit