evaluating goodness of fit
how to evaluate goodness of fit
after fitting data with one or more models, you should evaluate the goodness of fit. a visual examination of the fitted curve displayed in the curve fitter app should be your first step. beyond that, the toolbox provides the graphical and numerical methods described below to assess goodness of fit for both linear and nonlinear parametric fits.
as is common in statistical literature, the term goodness of fit is used here in several senses: a “good fit” might be a model
that your data could reasonably have come from, given the assumptions of least-squares fitting
in which the model coefficients can be estimated with little uncertainty
that explains a high proportion of the variability in your data, and is able to predict new observations with high certainty
a particular application might dictate still other aspects of model fitting that are important to achieving a good fit, such as a simple model that is easy to interpret. the methods described here can help you determine goodness of fit in all these senses.
these methods fall into two types: graphical and numerical. plotting residuals and prediction bounds are graphical methods that aid visual interpretation, while goodness-of-fit statistics and coefficient confidence bounds are numerical measures that aid statistical reasoning.
generally speaking, graphical measures are more beneficial than numerical measures because they allow you to view the entire data set at once, and they can easily display a wide range of relationships between the model and the data. the numerical measures are more narrowly focused on a particular aspect of the data and often try to compress that information into a single number. in practice, depending on your data and analysis requirements, you might need to use both types to determine the best fit.
note that it is possible that none of your fits can be considered suitable for your data, based on these methods. in this case, it might be that you need to select a different model. it is also possible that all the goodness-of-fit measures indicate that a particular fit is suitable. however, if your goal is to extract fitted coefficients that have physical meaning, but your model does not reflect the physics of the data, the resulting coefficients are useless. in this case, understanding what your data represents and how it was measured is just as important as evaluating the goodness of fit.
goodness-of-fit statistics
after using graphical methods to evaluate the goodness of fit, you should examine the goodness-of-fit statistics. curve fitting toolbox™ software supports these goodness-of-fit statistics for parametric models:
the sum of squares due to error (sse)
r-square
adjusted r-square
root mean squared error (rmse)
for the current fit, these statistics are displayed in the results pane in the curve fitter app. for all fits in the current curve-fitting session, you can compare the goodness-of-fit statistics in the table of fits pane.
to examine goodness-of-fit statistics at the command line, either:
in the curve fitter app, export your fit and goodness of fit to the workspace. on the curve fitter tab, in the export section, click export and select export to workspace.
specify the gof output argument with the fit function.
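for example, the following minimal sketch uses the census example data set that ships with matlab; the names curve and gof are illustrative output variables, not required names.

load census                                % example data: cdate (years) and pop (population)
[curve, gof] = fit(cdate, pop, 'poly2');   % second output returns the goodness-of-fit statistics
gof                                        % structure with fields sse, rsquare, dfe, adjrsquare, rmse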
sum of squares due to error
this statistic measures the total deviation of the response values from the fitted response values. it is also called the summed square of residuals and is usually labeled as sse. it is defined as
sse = Σ wi (yi − ŷi)²
where yi is an observed response value, ŷi is the corresponding fitted response value, and wi is the weight applied to that data point.
a value closer to 0 indicates that the model has a smaller random error component, and that the fit will be more useful for prediction.
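as a rough check, you can reproduce sse from the residuals of a fit at the command line. this sketch continues the census example above (curve, cdate, and pop are the illustrative names used there) and assumes an unweighted fit:

yhat  = curve(cdate);     % evaluate the fitted model at the predictor values
resid = pop - yhat;       % residuals: observed minus fitted response values
sse   = sum(resid.^2);    % summed square of residuals; matches gof.sse for an unweighted fit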
r-square
this statistic measures how successful the fit is in explaining the variation of the data. put another way, r-square is the square of the correlation between the response values and the predicted response values. it is also called the square of the multiple correlation coefficient and the coefficient of multiple determination.
r-square is defined as the ratio of the sum of squares of the regression (ssr) to the total sum of squares (sst). ssr is defined as
ssr = Σ wi (ŷi − ȳ)²
sst is also called the sum of squares about the mean, and is defined as
sst = Σ wi (yi − ȳ)²
where ȳ is the mean of the observed response values and sst = ssr + sse. given these definitions, r-square is expressed as
r-square = ssr / sst = 1 − sse / sst
r-square can take on any value between 0 and 1, with a value closer to 1 indicating that a greater proportion of variance is accounted for by the model. for example, an r-square value of 0.8234 means that the fit explains 82.34% of the total variation in the data about the average.
if you increase the number of fitted coefficients in your model, r-square will increase although the fit may not improve in a practical sense. to avoid this situation, you should use the degrees of freedom adjusted r-square statistic described below.
note that it is possible to get a negative r-square for equations that do not contain a constant term. because r-square is defined as the proportion of variance explained by the fit, if the fit is actually worse than just fitting a horizontal line then r-square is negative. in this case, r-square cannot be interpreted as the square of a correlation. such situations indicate that a constant term should be added to the model.
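continuing the same unweighted sketch, r-square can be reproduced from sse and the total sum of squares about the mean:

sst     = sum((pop - mean(pop)).^2);   % total sum of squares about the mean
rsquare = 1 - sse/sst;                 % proportion of variance explained; matches gof.rsquare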
degrees of freedom adjusted r-square
this statistic uses the r-square statistic defined above, and adjusts it based on the residual degrees of freedom. the residual degrees of freedom is defined as the number of response values n minus the number of fitted coefficients m estimated from the response values.
v = n – m
v indicates the number of independent pieces of information involving the n data points that are required to calculate the sum of squares. note that if parameters are bounded and one or more of the estimates are at their bounds, then those estimates are regarded as fixed. the degrees of freedom are increased by the number of such parameters.
the adjusted r-square statistic is generally the best indicator of the fit quality when you compare two models that are nested (that is, a series of models each of which adds additional coefficients to the previous model). it is defined as
adjusted r-square = 1 − sse (n − 1) / (sst · v)
the adjusted r-square statistic can take on any value less than or equal to 1, with a value closer to 1 indicating a better fit. negative values can occur when the model contains terms that do not help to predict the response.
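the same quantities give the adjusted r-square; this sketch again uses the illustrative names from the census example above:

n = numel(pop);                        % number of response values
m = numel(coeffvalues(curve));         % number of fitted coefficients (3 for 'poly2')
v = n - m;                             % residual degrees of freedom; matches gof.dfe
adjrsquare = 1 - sse*(n-1)/(sst*v);    % matches gof.adjrsquare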
root mean squared error
this statistic is also known as the fit standard error and the standard error of the regression. it is an estimate of the standard deviation of the random component in the data, and is defined as
rmse = s = √mse
where mse is the mean square error or the residual mean square
mse = sse / v
just as with sse, an mse value closer to 0 indicates a fit that is more useful for prediction.
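to close out the sketch, rmse follows directly from sse and the residual degrees of freedom v computed above:

mse  = sse/v;        % mean square error (residual mean square)
rmse = sqrt(mse);    % root mean squared error; matches gof.rmse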