Introduction to Feature Selection

This topic provides an introduction to feature selection algorithms and describes the feature selection functions available in Statistics and Machine Learning Toolbox™.

Feature Selection Algorithms

Feature selection reduces the dimensionality of data by selecting only a subset of measured features (predictor variables) to create a model. Feature selection algorithms search for a subset of predictors that optimally models measured responses, subject to constraints such as required or excluded features and the size of the subset. The main benefits of feature selection are to improve prediction performance, provide faster and more cost-effective predictors, and provide a better understanding of the data generation process [1]. Using too many features can degrade prediction performance even when all features are relevant and contain information about the response variable.

You can categorize feature selection algorithms into three types:

  • Filter type feature selection — The filter type feature selection algorithm measures feature importance based on the characteristics of the features, such as feature variance and feature relevance to the response. You select important features as part of a data preprocessing step and then train a model using the selected features. Therefore, filter type feature selection is independent of the training algorithm.

  • Wrapper type feature selection — The wrapper type feature selection algorithm starts training using a subset of features and then adds or removes a feature using a selection criterion. The selection criterion directly measures the change in model performance that results from adding or removing a feature. The algorithm repeats training and improving a model until its stopping criteria are satisfied.

  • Embedded type feature selection — The embedded type feature selection algorithm learns feature importance as part of the model learning process. Once you train a model, you obtain the importance of the features in the trained model. This type of algorithm selects features that work well with a particular learning process.

In addition, you can categorize feature selection algorithms according to whether or not an algorithm ranks features sequentially. The minimum redundancy maximum relevance (MRMR) algorithm and stepwise regression are two examples of sequential feature selection algorithms.
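
For example, the following sketch (assuming the fisheriris sample data set that ships with the toolbox) ranks the predictors with fscmrmr and then trains a classifier on only the top-ranked predictors. Because the ranking does not depend on the classifier, you can reuse the same ranking with any other training function.

    load fisheriris                        % meas: 150-by-4 predictors, species: class labels
    [idx,scores] = fscmrmr(meas,species);  % idx orders predictors from most to least important
    topTwo = meas(:,idx(1:2));             % filter step: keep the two highest-ranked predictors
    mdl = fitcknn(topTwo,species);         % train any classifier on the reduced predictor set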

You can compare the importance of predictor variables visually by creating partial dependence plots (PDP) and individual conditional expectation (ICE) plots. For details, see plotPartialDependence.
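
As a minimal sketch, assuming the carsmall sample data set, you can train a regression model on table data and then plot the partial dependence of the predicted response on one predictor:

    load carsmall
    tbl = table(Weight,Horsepower,MPG);
    mdl = fitrtree(tbl,'MPG');             % regression tree trained on table data
    plotPartialDependence(mdl,'Weight')    % PDP showing how Weight influences the predicted MPG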

For classification problems, after selecting features, you can train two models (for example, a full model and a model trained with a subset of predictors) and compare the accuracies of the models by using functions such as testcholdout, testckfold, or compareHoldout.
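
The sketch below (an illustration assuming the ionosphere sample data set and the fscchi2 ranking function described later in this topic) compares a full model against a reduced model by cross-validated classification error; the hypothesis-test functions mentioned above provide a more formal comparison:

    load ionosphere                                     % X: 351-by-34 predictors, Y: class labels
    [idx,~] = fscchi2(X,Y);                             % rank predictors with a filter method
    rng('default')                                      % reproducible cross-validation partitions
    fullLoss    = kfoldLoss(fitctree(X,Y,'CrossVal','on'))
    reducedLoss = kfoldLoss(fitctree(X(:,idx(1:10)),Y,'CrossVal','on'))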

Feature selection is preferable to feature transformation when the original features and their units are important and the modeling goal is to identify an influential subset. When categorical features are present, and numerical transformations are inappropriate, feature selection becomes the primary means of dimension reduction.

Feature Selection Functions

Statistics and Machine Learning Toolbox offers several functions for feature selection. Choose the appropriate feature selection function based on your problem and the data types of the features.

Filter Type Feature Selection

Each entry below lists the function, the supported problem, and the supported data type, followed by a description.

fscchi2 | Classification | Categorical and continuous features

Examine whether each predictor variable is independent of a response variable by using individual chi-square tests, and then rank features using the p-values of the chi-square test statistics.

For examples, see the fscchi2 function reference page.
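
A minimal sketch, assuming the fisheriris sample data set:

    load fisheriris                       % meas: predictors, species: class labels
    [idx,scores] = fscchi2(meas,species);
    idx                                   % predictor indices, best (smallest p-value) first
    scores                                % importance scores; a larger score means a smaller p-value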

fscmrmr | Classification | Categorical and continuous features

Rank features sequentially using the minimum redundancy maximum relevance (MRMR) algorithm.

For examples, see the fscmrmr function reference page.

fscnca* | Classification | Continuous features

Determine the feature weights by using a diagonal adaptation of neighborhood component analysis (NCA). This algorithm works best for estimating feature importance for distance-based supervised models that use pairwise distances between observations to predict the response.

For details, see the fscnca function reference page.
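
A minimal sketch, again assuming the fisheriris data; the regularization parameter is left at its default here, though in practice you typically tune it (for example, with cross-validation):

    load fisheriris
    mdl = fscnca(meas,species);     % returns a trained NCA feature selection model
    mdl.FeatureWeights              % weights near zero indicate irrelevant predictors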

fsrftest | Regression | Categorical and continuous features

Examine the importance of each predictor individually using an F-test, and then rank features using the p-values of the F-test statistics. Each F-test tests the hypothesis that the response values grouped by predictor variable values are drawn from populations with the same mean against the alternative hypothesis that the population means are not all the same.

For examples, see the fsrftest function reference page.
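
A minimal sketch, assuming the hald sample data set (cement ingredients and heat of hardening):

    load hald                                   % ingredients: 13-by-4 predictors, heat: response
    [idx,scores] = fsrftest(ingredients,heat);
    idx                                         % predictors ordered by importance (smallest p-value first)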

fsrmrmr | Regression | Categorical and continuous features

Rank features sequentially using the minimum redundancy maximum relevance (MRMR) algorithm.

For examples, see the fsrmrmr function reference page.

fsrnca* | Regression | Continuous features

Determine the feature weights by using a diagonal adaptation of neighborhood component analysis (NCA). This algorithm works best for estimating feature importance for distance-based supervised models that use pairwise distances between observations to predict the response.

For details, see the fsrnca function reference page.

fsulaplacian | Unsupervised learning | Continuous features

Rank features using the Laplacian score.

For examples, see the fsulaplacian function reference page.
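
A minimal sketch, assuming the fisheriris measurements and ignoring the class labels, because the ranking is unsupervised:

    load fisheriris
    [idx,scores] = fsulaplacian(meas);   % rank features by Laplacian score; no response needed
    idx                                  % feature indices in order of decreasing importance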

relieff | Classification and regression | Either all categorical or all continuous features

Rank features using the ReliefF algorithm for classification and the RReliefF algorithm for regression. This algorithm works best for estimating feature importance for distance-based supervised models that use pairwise distances between observations to predict the response.

For examples, see the relieff function reference page.
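
A minimal sketch, assuming the ionosphere sample data set and an illustrative choice of 10 nearest neighbors:

    load ionosphere                       % X: continuous predictors, Y: class labels
    [idx,weights] = relieff(X,Y,10);      % 10 nearest neighbors per observation
    idx(1:5)                              % the five most important predictors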

sequentialfs | Classification and regression | Either all categorical or all continuous features

Select features sequentially using a custom criterion. Define a function that measures the characteristics of the data to select features, and pass the function handle to the sequentialfs function. You can specify sequential forward selection or sequential backward selection by using the 'Direction' name-value pair argument. sequentialfs evaluates the criterion using cross-validation.

*You can also consider fscnca and fsrnca as embedded type feature selection functions because they return a trained model object and you can use the object functions predict and loss. However, you typically use these object functions to tune the regularization parameter of the algorithm. After selecting features using the fscnca or fsrnca function as part of a data preprocessing step, you can apply another classification or regression algorithm for your problem.

Wrapper Type Feature Selection

The following entry lists the function, the supported problem, and the supported data type, followed by a description.

sequentialfs | Classification and regression | Either all categorical or all continuous features

Select features sequentially using a custom criterion. Define a function that implements a supervised learning algorithm or a function that measures performance of a learning algorithm, and pass the function handle to the sequentialfs function. You can specify sequential forward selection or sequential backward selection by using the 'Direction' name-value pair argument. sequentialfs evaluates the criterion using cross-validation.

For examples, see the sequentialfs function reference page.
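
A minimal sketch of a wrapper, assuming the fisheriris data and a discriminant analysis learner; the criterion function counts misclassified test observations, which sequentialfs minimizes across cross-validation folds:

    load fisheriris
    rng('default')                                  % reproducible cross-validation partitions
    critfun = @(XT,yT,Xt,yt) ...                    % train on (XT,yT), evaluate on (Xt,yt)
        sum(~strcmp(yt,predict(fitcdiscr(XT,yT),Xt)));
    tf = sequentialfs(critfun,meas,species);        % forward selection by default
    find(tf)                                        % indices of the selected features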

Embedded Type Feature Selection

Each entry below lists the function or property, the supported problem, and the supported data type, followed by a description.

DeltaPredictor property of a ClassificationDiscriminant model object | Linear discriminant analysis classification | Continuous features

Create a linear discriminant analysis classifier by using fitcdiscr. The trained classifier, returned as a ClassificationDiscriminant object, stores the coefficient magnitudes in the DeltaPredictor property. You can use the values in DeltaPredictor as measures of predictor importance. This classifier uses the two regularization parameters Gamma and Delta to identify and remove redundant predictors. You can obtain appropriate values for these parameters by using the cvshrink function or the 'OptimizeHyperparameters' name-value pair argument.

For examples, see the fitcdiscr function reference page.
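
A minimal sketch, assuming the fisheriris data and illustrative (not tuned) regularization values:

    load fisheriris
    mdl = fitcdiscr(meas,species,'Gamma',0.5,'Delta',0.01);  % illustrative regularization values
    mdl.DeltaPredictor    % coefficient magnitudes; predictors with values below Delta are effectively removed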

fitcecoc with templateLinear | Linear classification for multiclass learning with high-dimensional data | Continuous features

Train a linear classification model by using fitcecoc and linear binary learners defined by templateLinear. Specify 'Regularization' of templateLinear as 'lasso' to use lasso regularization.

To find a good lasso-penalty strength, evaluate models trained with different strength values by using kfoldLoss. You can also evaluate models by using other cross-validation object functions, such as kfoldEdge or kfoldMargin.

fitclinear | Linear classification for binary learning with high-dimensional data | Continuous features

Train a linear classification model by using fitclinear. Specify 'Regularization' of fitclinear as 'lasso' to use lasso regularization.

To find a good lasso-penalty strength, evaluate models trained with different strength values by using AUC values: compute the cross-validated posterior class probabilities by using kfoldPredict, and compute the AUC values by using perfcurve. You can also evaluate models by using other object functions, such as kfoldLoss, kfoldEdge, or kfoldMargin.
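
A minimal sketch, assuming the ionosphere data and an illustrative (not tuned) lasso-penalty strength:

    load ionosphere
    mdl = fitclinear(X,Y,'Learner','logistic', ...
        'Regularization','lasso','Lambda',0.05);   % illustrative penalty strength, not tuned
    find(mdl.Beta)                                 % predictors with nonzero coefficients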

fitrgp | Regression | Categorical and continuous features

Train a Gaussian process regression (GPR) model by using fitrgp. Set the 'KernelFunction' name-value pair argument to use automatic relevance determination (ARD). Available options are 'ardsquaredexponential', 'ardexponential', 'ardmatern32', 'ardmatern52', and 'ardrationalquadratic'. Find the predictor weights by taking the exponential of the negative learned length scales, stored in the KernelInformation property.

For examples, see the fitrgp function reference page.
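
A minimal sketch, assuming the hald sample data set; a small learned length scale corresponds to a large weight, that is, a relevant predictor:

    load hald
    mdl = fitrgp(ingredients,heat,'KernelFunction','ardsquaredexponential');
    lengthScales = mdl.KernelInformation.KernelParameters(1:end-1);  % last entry is the signal standard deviation
    weights = exp(-lengthScales)                                     % larger weight = more relevant predictor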

fitrlinear | Linear regression with high-dimensional data | Continuous features

Train a linear regression model by using fitrlinear. Specify 'Regularization' of fitrlinear as 'lasso' to use lasso regularization.

For examples, see the fitrlinear function reference page.

lasso | Linear regression | Continuous features

Train a linear regression model with lasso regularization by using lasso. You can specify the weight of lasso versus ridge optimization by using the 'Alpha' name-value pair argument.

For examples, see the lasso function reference page.
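
A minimal sketch, assuming the hald sample data set and 10-fold cross-validation to pick the penalty:

    load hald                                                  % ingredients: predictors, heat: response
    rng('default')                                             % reproducible cross-validation folds
    [B,FitInfo] = lasso(ingredients,heat,'Alpha',1,'CV',10);   % Alpha = 1 is pure lasso
    idxLambda = FitInfo.Index1SE;                              % largest lambda within one SE of the minimum CV error
    find(B(:,idxLambda))                                       % predictors with nonzero coefficients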

lassoglm | Generalized linear regression | Continuous features

Train a generalized linear regression model with lasso regularization by using lassoglm. You can specify the weight of lasso versus ridge optimization by using the 'Alpha' name-value pair argument.

For details, see the lassoglm function reference page.

oobPermutedPredictorImportance** of ClassificationBaggedEnsemble | Classification with an ensemble of bagged decision trees (for example, random forest) | Categorical and continuous features

Train a bagged classification ensemble with tree learners by using fitcensemble and specifying 'Method' as 'Bag'. Then, use oobPermutedPredictorImportance to compute out-of-bag predictor importance estimates by permutation. The function measures how influential the predictor variables in the model are at predicting the response.

For examples, see the oobPermutedPredictorImportance function reference page.
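
A minimal sketch, assuming the ionosphere data; the tree template follows the advice in the ** footnote at the end of this table:

    load ionosphere
    t = templateTree('PredictorSelection','interaction-curvature', ...
        'NumVariablesToSample','all');                       % see the ** footnote
    ens = fitcensemble(X,Y,'Method','Bag','Learners',t);     % bagged classification ensemble
    imp = oobPermutedPredictorImportance(ens);               % out-of-bag permutation importance
    [~,ranked] = sort(imp,'descend');
    ranked(1:5)                                              % five most influential predictors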

oobPermutedPredictorImportance** of RegressionBaggedEnsemble | Regression with an ensemble of bagged decision trees (for example, random forest) | Categorical and continuous features

Train a bagged regression ensemble with tree learners by using fitrensemble and specifying 'Method' as 'Bag'. Then, use oobPermutedPredictorImportance to compute out-of-bag predictor importance estimates by permutation. The function measures how influential the predictor variables in the model are at predicting the response.

For examples, see the oobPermutedPredictorImportance function reference page.

predictorImportance** of ClassificationEnsemble | Classification with an ensemble of decision trees | Categorical and continuous features

Train a classification ensemble with tree learners by using fitcensemble. Then, use predictorImportance to compute estimates of predictor importance for the ensemble by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes.

For examples, see the predictorImportance function reference page.
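
A minimal sketch, assuming the fisheriris data and the default boosting method of fitcensemble:

    load fisheriris
    ens = fitcensemble(meas,species);    % boosted classification ensemble with tree learners
    imp = predictorImportance(ens)       % one importance estimate per predictor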

predictorImportance** of ClassificationTree | Classification with a decision tree | Categorical and continuous features

Train a classification tree by using fitctree. Then, use predictorImportance to compute estimates of predictor importance for the tree by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes.

For examples, see the predictorImportance function reference page.

predictorImportance** of RegressionEnsemble | Regression with an ensemble of decision trees | Categorical and continuous features

Train a regression ensemble with tree learners by using fitrensemble. Then, use predictorImportance to compute estimates of predictor importance for the ensemble by summing changes in the risk due to splits on every predictor and dividing the sum by the number of branch nodes.

For examples, see the predictorImportance function reference page.

predictorImportance** of RegressionTree | Regression with a decision tree | Categorical and continuous features

Train a regression tree by using fitrtree. Then, use predictorImportance to compute estimates of predictor importance for the tree by summing changes in the mean squared error (MSE) due to splits on every predictor and dividing the sum by the number of branch nodes.

For examples, see the predictorImportance function reference page.

stepwiseglm*** | Generalized linear regression | Categorical and continuous features

Fit a generalized linear regression model using stepwise regression by using stepwiseglm. Alternatively, you can fit a generalized linear regression model by using fitglm and then adjust the model by using step. Stepwise regression is a systematic method for adding and removing terms from the model based on their statistical significance in explaining the response variable.

For details, see the stepwiseglm function reference page.

stepwiselm*** | Linear regression | Categorical and continuous features

Fit a linear regression model using stepwise regression by using stepwiselm. Alternatively, you can fit a linear regression model by using fitlm and then adjust the model by using step. Stepwise regression is a systematic method for adding and removing terms from the model based on their statistical significance in explaining the response variable.

For details, see the stepwiselm function reference page.
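
A minimal sketch, assuming the hald sample data set; starting from a constant model, terms are added or removed based on their statistical significance:

    load hald
    tbl = array2table(ingredients,'VariableNames',{'x1','x2','x3','x4'});
    tbl.heat = heat;                                      % the last table variable is the response
    mdl = stepwiselm(tbl,'constant','Upper','linear');    % stepwise selection of linear terms
    mdl.Formula                                           % terms retained in the final model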

**For a tree-based algorithm, specify 'PredictorSelection' as 'interaction-curvature' to use the interaction test for selecting the best split predictor. The interaction test is useful in identifying important variables in the presence of many irrelevant variables. Also, if the training data includes many predictors, then specify 'NumVariablesToSample' as 'all' for training. Otherwise, the software might not select some predictors, underestimating their importance. For details, see fitctree, fitrtree, and templateTree.

***stepwiseglm and stepwiselm are not wrapper type functions because you cannot use them as a wrapper for another training function. However, these two functions use the wrapper type algorithm to find important features.

References

[1] Guyon, Isabelle, and A. Elisseeff. "An Introduction to Variable and Feature Selection." Journal of Machine Learning Research. Vol. 3, 2003, pp. 1157–1182.

See Also

rankfeatures (Bioinformatics Toolbox)
