
Model Building and Assessment

Feature selection, feature engineering, model selection, hyperparameter optimization, cross-validation, predictive performance assessment, and classification accuracy comparison tests

When you build a high-quality, predictive classification model, it is important to select the right features (or predictors) and to tune the hyperparameters (model parameters that are not estimated during fitting).

Feature selection and hyperparameter tuning can yield multiple models. You can compare the k-fold misclassification rates, receiver operating characteristic (ROC) curves, or confusion matrices of the models. You can also perform a statistical test to detect whether one classification model significantly outperforms another.
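As a minimal sketch of such a statistical comparison, the following uses testckfold to test whether two candidate model types differ significantly in accuracy; the data set and the two learner templates are illustrative assumptions:

```matlab
% Sketch: compare two classifier types on the same data (example data assumed).
load ionosphere                      % predictors X, class labels Y
rng("default")                       % for reproducibility
t1 = templateTree;                   % candidate 1: decision tree
t2 = templateKNN;                    % candidate 2: k-nearest neighbors
% Repeated cross-validated test of the null hypothesis that the two
% models have equal predictive accuracy.
[h, p] = testckfold(t1, t2, X, X, Y)
```

Here h = 1 would indicate rejecting the null hypothesis of equal accuracy at the default significance level, and p is the corresponding p-value.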

To engineer new features before training a classification model, use gencfeatures.

To build and assess classification models interactively, use the Classification Learner app.

To automatically select a model with tuned hyperparameters, use fitcauto. This function tries a selection of classification model types with different hyperparameter values and returns a final model that is expected to perform well on new data. Use fitcauto when you are unsure which classifier types best suit your data.
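A minimal sketch of this workflow, assuming the ionosphere example data set and an illustrative evaluation budget:

```matlab
% Sketch: automated model selection with fitcauto (example data assumed).
load ionosphere                      % predictors X, class labels Y
rng("default")                       % for reproducibility
Mdl = fitcauto(X, Y, ...
    "HyperparameterOptimizationOptions", ...
    struct("MaxObjectiveEvaluations", 30, "ShowPlots", false));
% Mdl is the model type and hyperparameter set that fitcauto expects to
% generalize best; use it like any trained classifier:
label = predict(Mdl, X(1,:));
```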

To tune the hyperparameters of a specific model, select hyperparameter values and cross-validate the model using those values. For example, to tune an SVM model, choose a set of box constraints and kernel scales, and then cross-validate a model for each pair of values. Certain Statistics and Machine Learning Toolbox™ classification functions offer automatic hyperparameter tuning through Bayesian optimization, grid search, or random search. bayesopt, the main function for implementing Bayesian optimization, is flexible enough for many other applications as well. See Bayesian Optimization Workflow.
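The grid-search variant described above can be sketched as follows; the candidate values and the data set are illustrative assumptions:

```matlab
% Sketch: grid search over SVM hyperparameters with 5-fold cross-validation.
load ionosphere                            % example data: predictors X, labels Y
rng("default")                             % for reproducibility
boxVals = [0.1 1 10];                      % candidate box constraints (assumed)
ksVals  = [0.5 1 2];                       % candidate kernel scales (assumed)
cvLoss  = zeros(numel(boxVals), numel(ksVals));
for i = 1:numel(boxVals)
    for j = 1:numel(ksVals)
        CVMdl = fitcsvm(X, Y, "KFold", 5, ...
            "BoxConstraint", boxVals(i), "KernelScale", ksVals(j));
        cvLoss(i,j) = kfoldLoss(CVMdl);    % 5-fold misclassification rate
    end
end
[minLoss, idx] = min(cvLoss(:));           % best (box constraint, kernel scale)
[iBest, jBest] = ind2sub(size(cvLoss), idx);
```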

To interpret a classification model, you can use lime, shapley, and plotPartialDependence.
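A short sketch of all three interpretation tools on one trained model; the data set, query point, and number of important predictors are illustrative assumptions:

```matlab
% Sketch: interpret a simple classifier (example data and query point assumed).
load fisheriris
Mdl = fitctree(meas, species);
queryPoint = meas(1,:);
% LIME: fit a simple interpretable model around the query point.
explainerL = lime(Mdl);
explainerL = fit(explainerL, queryPoint, 3);   % use 3 important predictors
plot(explainerL)
% Shapley values: per-predictor contributions to the predicted score.
explainerS = shapley(Mdl);
explainerS = fit(explainerS, queryPoint);
plot(explainerS)
% Partial dependence of the "setosa" class score on the first predictor.
plotPartialDependence(Mdl, 1, "setosa")
```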

Apps

Classification Learner — Train models to classify data using supervised machine learning

Functions

fscchi2 — Univariate feature ranking for classification using chi-square tests
fscmrmr — Rank features for classification using minimum redundancy maximum relevance (MRMR) algorithm
fscnca — Feature selection using neighborhood component analysis for classification
oobPermutedPredictorImportance — Predictor importance estimates by permutation of out-of-bag predictor observations for random forest of classification trees
predictorImportance — Estimates of predictor importance for classification tree
predictorImportance — Estimates of predictor importance for classification ensemble of decision trees
sequentialfs — Sequential feature selection using custom criterion
relieff — Rank importance of predictors using ReliefF or RReliefF algorithm
gencfeatures — Perform automated feature engineering for classification
describe — Describe generated features
transform — Transform new data using generated features
fitcauto — Automatically select classification model with optimized hyperparameters
bayesopt — Select optimal machine learning hyperparameters using Bayesian optimization
hyperparameters — Variable descriptions for optimizing a fit function
optimizableVariable — Variable description for bayesopt or other optimizers
crossval — Estimate loss using cross-validation
cvpartition — Partition data for cross-validation
repartition — Repartition data for cross-validation
test — Test indices for cross-validation
training — Training indices for cross-validation
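The cvpartition/training/test workflow above can be sketched as follows; the data set and classifier are illustrative assumptions:

```matlab
% Sketch: use a stratified 5-fold partition manually (example data assumed).
load fisheriris
rng("default")                                 % for reproducibility
c = cvpartition(species, "KFold", 5);          % stratified by class label
idxTrain = training(c, 1);                     % logical index, fold-1 training set
idxTest  = test(c, 1);                         % logical index, fold-1 test set
Mdl = fitctree(meas(idxTrain,:), species(idxTrain));
foldErr = loss(Mdl, meas(idxTest,:), species(idxTest));   % fold-1 error
```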

Local Interpretable Model-Agnostic Explanations (LIME)

lime — Local interpretable model-agnostic explanations (LIME)
fit — Fit simple model of local interpretable model-agnostic explanations (LIME)
plot — Plot results of local interpretable model-agnostic explanations (LIME)

Shapley Values

shapley — Shapley values
fit — Compute Shapley values for query point
plot — Plot Shapley values

Partial Dependence

partialDependence — Compute partial dependence
plotPartialDependence — Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots

Confusion Matrix

confusionchart — Create confusion matrix chart for classification problem
confusionmat — Compute confusion matrix for classification problem

Receiver Operating Characteristic (ROC) Curve

rocmetrics — Receiver operating characteristic (ROC) curve and performance metrics for binary and multiclass classifiers
addMetrics — Compute additional classification performance metrics
average — Compute performance metrics for average receiver operating characteristic (ROC) curve in multiclass problem
plot — Plot receiver operating characteristic (ROC) curves and other performance curves
perfcurve — Receiver operating characteristic (ROC) curve or other performance curve for classifier output
testcholdout — Compare predictive accuracies of two classification models
testckfold — Compare accuracies of two classification models by repeated cross-validation
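A sketch of the rocmetrics workflow on out-of-fold scores; the data set and cross-validated model are illustrative assumptions (rocmetrics was introduced in R2022a):

```matlab
% Sketch: ROC curves from cross-validated class scores (example data assumed).
load fisheriris
rng("default")                                       % for reproducibility
CVMdl = fitctree(meas, species, "CrossVal", "on");   % 10-fold by default
[~, scores] = kfoldPredict(CVMdl);                   % out-of-fold class scores
rocObj = rocmetrics(species, scores, CVMdl.ClassNames);
plot(rocObj)                                         % one-vs-all ROC per class
rocObj.AUC                                           % area under each curve
```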

Objects

FeatureSelectionNCAClassification — Feature selection for classification using neighborhood component analysis (NCA)
FeatureTransformer — Generated feature transformations
BayesianOptimization — Bayesian optimization results

Properties

ConfusionMatrixChart Properties — Confusion matrix chart appearance and behavior
ROCCurve Properties — Receiver operating characteristic (ROC) curve appearance and behavior

Topics

Classification Learner

  • Train Classification Models in Classification Learner App
    Workflow for training, comparing, and improving classification models, including automated, manual, and parallel training.

  • Compare model accuracy values, visualize results by plotting class predictions, and check performance per class in the confusion matrix.

  • Identify useful predictors using plots or feature ranking algorithms, select features to include, and transform features using PCA in Classification Learner.

Feature Selection

  • Introduction to Feature Selection
    Learn about feature selection algorithms and explore the functions available for feature selection.

  • This topic introduces sequential feature selection and provides an example that selects features sequentially using a custom criterion and the sequentialfs function.

  • Neighborhood component analysis (NCA) is a nonparametric method for selecting features with the goal of maximizing prediction accuracy of regression and classification algorithms.

  • This example shows how to tune the regularization parameter in fscnca using cross-validation.

  • Make a more robust and simpler model by removing predictors without compromising the predictive power of the model.
  • Select Features for Classifying High-Dimensional Data
    This example shows how to select features for classifying high-dimensional data. Specifically, it shows how to perform sequential feature selection, one of the most popular feature selection algorithms. It also shows how to evaluate the classification performance of the selected features using holdout validation and cross-validation.
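Sequential feature selection with a custom criterion, as introduced above, can be sketched as follows; the classifier and data set are illustrative assumptions:

```matlab
% Sketch: sequential forward selection with a custom criterion
% (discriminant classifier and example data assumed).
load fisheriris
rng("default")                                     % for reproducibility
% Criterion: number of misclassifications of a classifier trained on the
% candidate feature subset and evaluated on the held-out fold.
fun = @(XT, yT, Xt, yt) sum(~strcmp(yt, predict(fitcdiscr(XT, yT), Xt)));
[tf, history] = sequentialfs(fun, meas, species, "cv", 5);
selected = find(tf)                                % indices of chosen features
```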

Feature Engineering


  • Use gencfeatures to engineer new features before training a classification model. Before making predictions on new data, apply the same feature transformations to the new data set.
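A minimal sketch of this workflow; the patients data set, the number of requested features, and the downstream classifier are illustrative assumptions:

```matlab
% Sketch: automated feature engineering with gencfeatures (example data assumed).
load patients
Tbl = table(Age, Diastolic, Height, Weight, Systolic, Smoker);
[T, NewTbl] = gencfeatures(Tbl, "Smoker", 10);   % generate 10 features
describe(T)                                      % summarize the transformations
Mdl = fitctree(NewTbl, "Smoker");                % train on engineered features
% For new data, apply the identical transformations before predicting:
% NewTbl2 = transform(T, Tbl2);                  % Tbl2: hypothetical new table
```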

Automated Model Selection

Hyperparameter Optimization

  • Bayesian Optimization Workflow
    Perform Bayesian optimization using a fit function or by calling bayesopt directly.

  • Create variables for Bayesian optimization.

  • Create the objective function for Bayesian optimization.

  • Set different types of constraints for Bayesian optimization.

  • Minimize cross-validation loss using Bayesian optimization.

  • Minimize cross-validation loss by using the OptimizeHyperparameters name-value argument in a fit function.

  • Visually monitor a Bayesian optimization.

  • Monitor a Bayesian optimization.

  • Bayesian Optimization Algorithm
    Understand the underlying algorithms for Bayesian optimization.

  • How Bayesian optimization works in parallel.
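A direct call to bayesopt, covering the variable-creation and objective-function steps above in one sketch; the variable ranges, evaluation budget, and data set are illustrative assumptions:

```matlab
% Sketch: tune an SVM by calling bayesopt directly (example data assumed).
load ionosphere
rng("default")                         % for reproducibility
% Optimizable variables with assumed log-scaled ranges.
box = optimizableVariable("box", [1e-2, 1e2], "Transform", "log");
ks  = optimizableVariable("ks",  [1e-1, 1e1], "Transform", "log");
% Objective: 5-fold misclassification rate for a given hyperparameter pair.
minfcn = @(p) kfoldLoss(fitcsvm(X, Y, "KFold", 5, ...
    "BoxConstraint", p.box, "KernelScale", p.ks));
results = bayesopt(minfcn, [box, ks], ...
    "MaxObjectiveEvaluations", 20, "Verbose", 0);
best = bestPoint(results)              % best hyperparameter pair found
```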

Model Interpretation

Cross-Validation

Classification Performance Evaluation


  • Use rocmetrics to examine the performance of a classification algorithm on a test data set.

  • Learn how the perfcurve function computes a receiver operating characteristic (ROC) curve.