Introduction to Feature Selection
This topic provides an introduction to feature selection algorithms and describes the feature selection functions available in Statistics and Machine Learning Toolbox™.
Feature Selection Algorithms
Feature selection reduces the dimensionality of data by selecting only a subset of measured features (predictor variables) to create a model. Feature selection algorithms search for a subset of predictors that optimally models measured responses, subject to constraints such as required or excluded features and the size of the subset. The main benefits of feature selection are to improve prediction performance, provide faster and more cost-effective predictors, and provide a better understanding of the data generation process [1]. Using too many features can degrade prediction performance even when all features are relevant and contain information about the response variable.
You can categorize feature selection algorithms into three types:
Filter type feature selection — The filter type feature selection algorithm measures feature importance based on the characteristics of the features, such as feature variance and feature relevance to the response. You select important features as part of a data preprocessing step and then train a model using the selected features. Therefore, filter type feature selection is independent of the training algorithm.
Wrapper type feature selection — The wrapper type feature selection algorithm starts training using a subset of features and then adds or removes a feature using a selection criterion. The selection criterion directly measures the change in model performance that results from adding or removing a feature. The algorithm repeats training and improving a model until its stopping criteria are satisfied.
Embedded type feature selection — The embedded type feature selection algorithm learns feature importance as part of the model learning process. Once you train a model, you obtain the importance of the features in the trained model. This type of algorithm selects features that work well with a particular learning process.
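For illustration, the following sketch (not taken from the original text) shows the filter type workflow: a ranking function scores the predictors as a preprocessing step, and a separate learner is then trained on the top-ranked subset. The use of fscmrmr, the fisheriris data set, and the cutoff of two predictors are illustrative choices only.

```matlab
% A minimal filter-type sketch: rank predictors with the MRMR algorithm and
% train a model on the top-ranked subset.
load fisheriris                 % example data set shipped with the toolbox
X = meas;                       % 150-by-4 numeric predictor matrix
y = species;                    % class labels (cell array of character vectors)

[idx, scores] = fscmrmr(X, y);  % idx lists predictors from most to least important

topFeatures = idx(1:2);         % keeping only two predictors is an arbitrary cutoff
mdl = fitctree(X(:, topFeatures), y);   % any learner can follow the filter step
```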
In addition, you can categorize feature selection algorithms according to whether or not an algorithm ranks features sequentially. The minimum redundancy maximum relevance (MRMR) algorithm and stepwise regression are two examples of sequential feature selection algorithms.
You can compare the importance of predictor variables visually by creating partial dependence plots (PDP) and individual conditional expectation (ICE) plots. For details, see plotPartialDependence.
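For example, the following sketch plots the partial dependence of the response on one predictor and then overlays ICE curves by using the 'Conditional' name-value argument of plotPartialDependence. The carsmall data set and the regression tree learner are illustrative choices only.

```matlab
% A sketch of PDP and ICE plots, assuming the carsmall example data set.
load carsmall
Tbl = table(Weight, Horsepower, MPG);
mdl = fitrtree(Tbl, 'MPG');              % regression tree for MPG

plotPartialDependence(mdl, 'Weight');    % partial dependence of MPG on Weight

% Individual conditional expectation (ICE) curves for the same predictor.
figure
plotPartialDependence(mdl, 'Weight', 'Conditional', 'centered');
```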
For classification problems, after selecting features, you can train two models (for example, a full model and a model trained with a subset of predictors) and compare the accuracies of the models by using the compareHoldout, testcholdout, or testckfold functions.
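The following hedged sketch outlines such a comparison. It assumes a predictor matrix X, a label vector y, and an index vector selectedIdx produced by a feature selection step; the ensemble learner, the holdout fraction, and the use of testcholdout are illustrative choices.

```matlab
% Compare a full classifier against one trained on selected features only.
cv = cvpartition(y, 'HoldOut', 0.3);          % illustrative 70/30 split
XTrain = X(training(cv), :);   yTrain = y(training(cv));
XTest  = X(test(cv), :);       yTest  = y(test(cv));

fullMdl    = fitcensemble(XTrain, yTrain);                   % full model
reducedMdl = fitcensemble(XTrain(:, selectedIdx), yTrain);   % selected features only

% Test whether the two holdout accuracies differ significantly.
yHatFull    = predict(fullMdl, XTest);
yHatReduced = predict(reducedMdl, XTest(:, selectedIdx));
[h, p] = testcholdout(yHatFull, yHatReduced, yTest);
```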
Feature selection is preferable to feature transformation when the original features and their units are important and the modeling goal is to identify an influential subset. When categorical features are present and numerical transformations are inappropriate, feature selection becomes the primary means of dimension reduction.
Feature Selection Functions
Statistics and Machine Learning Toolbox offers several functions for feature selection. Choose the appropriate feature selection function based on your problem and the data types of the features.
Filter Type Feature Selection
Function | Supported Problem | Supported Data Type | Description
---|---|---|---
fscchi2 | Classification | Categorical and continuous features | Examine whether each predictor variable is independent of the response variable by using individual chi-square tests, and then rank features using the p-values of the chi-square test statistics. For examples, see the function reference page.
fscmrmr | Classification | Categorical and continuous features | Rank features sequentially using the minimum redundancy maximum relevance (MRMR) algorithm. For examples, see the function reference page.
fscnca* | Classification | Continuous features | Determine the feature weights by using a diagonal adaptation of neighborhood component analysis (NCA). This algorithm works best for estimating feature importance for distance-based supervised models that use pairwise distances between observations to predict the response. For details, see the function reference page.
fsrftest | Regression | Categorical and continuous features | Examine the importance of each predictor individually using an F-test, and then rank features using the p-values of the F-test statistics. Each F-test tests the hypothesis that the response values grouped by predictor variable values are drawn from populations with the same mean, against the alternative hypothesis that the population means are not all the same. For examples, see the function reference page.
fsrmrmr | Regression | Categorical and continuous features | Rank features sequentially using the minimum redundancy maximum relevance (MRMR) algorithm. For examples, see the function reference page.
fsrnca* | Regression | Continuous features | Determine the feature weights by using a diagonal adaptation of neighborhood component analysis (NCA). This algorithm works best for estimating feature importance for distance-based supervised models that use pairwise distances between observations to predict the response. For details, see the function reference page.
fsulaplacian | Unsupervised learning | Continuous features | Rank features using the Laplacian score. For examples, see the function reference page.
relieff | Classification and regression | Either all categorical or all continuous features | Rank features using the ReliefF algorithm for classification and the RReliefF algorithm for regression. This algorithm works best for estimating feature importance for distance-based supervised models that use pairwise distances between observations to predict the response. For examples, see the function reference page.
sequentialfs | Classification and regression | Either all categorical or all continuous features | Select features sequentially using a custom criterion. Define a function that measures the characteristics of the data to select features, and pass the function handle to the sequentialfs function. For examples, see the function reference page.
*You can also consider fscnca and fsrnca as embedded type feature selection functions because they return a trained model object, and you can use the object functions predict and loss. However, you typically use these object functions to tune the regularization parameter of the algorithm. After selecting features using the fscnca or fsrnca function as part of a data preprocessing step, you can apply another classification or regression algorithm for your problem.
Wrapper Type Feature Selection
Function | Supported Problem | Supported Data Type | Description
---|---|---|---
sequentialfs | Classification and regression | Either all categorical or all continuous features | Select features sequentially using a custom criterion. Define a function that implements a supervised learning algorithm or a function that measures the performance of a learning algorithm, and pass the function handle to the sequentialfs function. For examples, see the function reference page.
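The following sketch shows a typical wrapper type call to sequentialfs. The criterion function, which retrains a classification tree on each candidate feature subset and returns the number of holdout misclassifications, is an illustrative choice; the sketch assumes a numeric predictor matrix X and a label vector y.

```matlab
% Criterion: misclassification count of a tree retrained on each candidate subset.
critfun = @(XTrain, yTrain, XTest, yTest) ...
    loss(fitctree(XTrain, yTrain), XTest, yTest, 'LossFun', 'classiferror') ...
    * numel(yTest);

% sequentialfs adds features one at a time (forward selection by default),
% cross-validating the criterion, until adding a feature no longer helps.
opts = statset('Display', 'iter');
inModel = sequentialfs(critfun, X, y, 'cv', 5, 'options', opts);
selectedIdx = find(inModel);   % indices of the selected features
```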
Embedded Type Feature Selection
Function | Supported Problem | Supported Data Type | Description
---|---|---|---
DeltaPredictor property of a ClassificationDiscriminant model object | Linear discriminant analysis classification | Continuous features | Create a linear discriminant analysis classifier by using fitcdiscr, and then use the values in the DeltaPredictor property of the trained model as measures of predictor importance.
fitcecoc with templateLinear | Linear classification for multiclass learning with high-dimensional data | Continuous features | Train a linear classification model by using fitcecoc with linear binary learners defined by templateLinear, specifying a lasso penalty. For an example that determines a good lasso-penalty strength by evaluating models with different strength values using cross-validation, see the function reference page.
fitclinear | Linear classification for binary learning with high-dimensional data | Continuous features | Train a linear classification model by using fitclinear, specifying a lasso penalty. For an example, see the function reference page. The example determines a good lasso-penalty strength by evaluating models with different strength values, computing cross-validated posterior class probabilities and the corresponding AUC values.
fitrgp | Regression | Categorical and continuous features | Train a Gaussian process regression (GPR) model by using fitrgp with an automatic relevance determination (ARD) kernel; the learned kernel length scales indicate how relevant each predictor is. For examples, see the function reference page.
fitrlinear | Linear regression with high-dimensional data | Continuous features | Train a linear regression model with lasso regularization by using fitrlinear. For examples, see the function reference page.
lasso | Linear regression | Continuous features | Train a linear regression model with lasso regularization by using lasso. For examples, see the function reference page.
lassoglm | Generalized linear regression | Continuous features | Train a generalized linear regression model with regularization by using lassoglm. For details, see the function reference page.
oobPermutedPredictorImportance** of ClassificationBaggedEnsemble | Classification with an ensemble of bagged decision trees (for example, random forest) | Categorical and continuous features | Train a bagged classification ensemble with tree learners by using fitcensemble, specifying 'Method' as 'Bag'. Then, use oobPermutedPredictorImportance to compute out-of-bag predictor importance estimates for the ensemble. For examples, see the function reference page.
oobPermutedPredictorImportance** of RegressionBaggedEnsemble | Regression with an ensemble of bagged decision trees (for example, random forest) | Categorical and continuous features | Train a bagged regression ensemble with tree learners by using fitrensemble, specifying 'Method' as 'Bag'. Then, use oobPermutedPredictorImportance to compute out-of-bag predictor importance estimates for the ensemble. For examples, see the function reference page.
predictorImportance** of ClassificationEnsemble | Classification with an ensemble of decision trees | Categorical and continuous features | Train a classification ensemble with tree learners by using fitcensemble. Then, use predictorImportance to compute estimates of predictor importance for the ensemble. For examples, see the function reference page.
predictorImportance** of ClassificationTree | Classification with a decision tree | Categorical and continuous features | Train a classification tree by using fitctree. Then, use predictorImportance to compute estimates of predictor importance for the tree. For examples, see the function reference page.
predictorImportance** of RegressionEnsemble | Regression with an ensemble of decision trees | Categorical and continuous features | Train a regression ensemble with tree learners by using fitrensemble. Then, use predictorImportance to compute estimates of predictor importance for the ensemble. For examples, see the function reference page.
predictorImportance** of RegressionTree | Regression with a decision tree | Categorical and continuous features | Train a regression tree by using fitrtree. Then, use predictorImportance to compute estimates of predictor importance for the tree. For examples, see the function reference page.
stepwiseglm*** | Generalized linear regression | Categorical and continuous features | Fit a generalized linear regression model using stepwise regression by using stepwiseglm. For details, see the function reference page.
stepwiselm*** | Linear regression | Categorical and continuous features | Fit a linear regression model using stepwise regression by using stepwiselm. For details, see the function reference page.
**For a tree-based algorithm, specify 'PredictorSelection' as 'interaction-curvature' to use the interaction test for selecting the best split predictor. The interaction test is useful for identifying important variables in the presence of many irrelevant variables. Also, if the training data includes many predictors, specify 'NumVariablesToSample' as 'all' for training. Otherwise, the software might not select some predictors and can underestimate their importance. For details, see fitctree, fitrtree, and templateTree.
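A short sketch of this recommendation follows, assuming a hypothetical table Tbl with response variable 'Y'; the bagged regression ensemble is an illustrative choice of learner.

```matlab
% Turn on the interaction test and sample all predictors at each split.
t = templateTree('PredictorSelection', 'interaction-curvature', ...
                 'NumVariablesToSample', 'all');
ens = fitrensemble(Tbl, 'Y', 'Method', 'Bag', 'Learners', t);

% Out-of-bag permuted importance from the bagged ensemble (embedded type).
imp = oobPermutedPredictorImportance(ens);
bar(imp)   % one bar per predictor
```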
***stepwiseglm and stepwiselm are not wrapper type functions because you cannot use them as a wrapper for another training function. However, these two functions use the wrapper type algorithm to find important features.
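As a minimal illustration of the stepwise approach, the following sketch assumes a hypothetical table Tbl whose last variable is the response (the default response choice for stepwiselm); the model specification and verbosity level are illustrative.

```matlab
% Start from a constant model and add or remove linear terms based on
% their statistical significance.
mdl = stepwiselm(Tbl, 'constant', 'Upper', 'linear', 'Verbose', 2);

disp(mdl.Formula)   % terms remaining in the formula are the selected predictors
```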
References
[1] Guyon, Isabelle, and A. Elisseeff. "An Introduction to Variable and Feature Selection." Journal of Machine Learning Research. Vol. 3, 2003, pp. 1157–1182.
See Also