Model Building and Assessment
Feature selection, feature engineering, model selection, hyperparameter optimization, cross-validation, predictive performance evaluation, and classification accuracy comparison tests
When building a high-quality predictive classification model, it is important to select the right features (or predictors) and to tune the hyperparameters (model parameters that are not fit to the data).
Feature selection and hyperparameter tuning can yield multiple candidate models. You can compare the k-fold misclassification rates, receiver operating characteristic (ROC) curves, or confusion matrices among the models. You can also perform a statistical test to determine whether one classification model significantly outperforms another.
To engineer new features before training a classification model, use gencfeatures.
To build and assess classification models interactively, use the Classification Learner app.
To automatically select a model with tuned hyperparameters, use fitcauto. This function tries a selection of classification model types with different hyperparameter values and returns a final model that is expected to perform well on new data. Use fitcauto when you are unsure which classifier types best suit your data.
To tune the hyperparameters of a specific model, select the hyperparameter values and cross-validate the model using those values. For example, to tune an SVM model, choose a set of box constraints and kernel scales, and then cross-validate a model for each pair of values. Some Statistics and Machine Learning Toolbox™ classification functions offer automatic hyperparameter tuning through Bayesian optimization, grid search, or random search. bayesopt, the main function for implementing Bayesian optimization, is also flexible enough for many other applications. See Bayesian Optimization Workflow.
To interpret a classification model, you can use lime, shapley, and plotPartialDependence.
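The manual tuning loop described above can be sketched as follows. This is a minimal illustration, assuming the built-in ionosphere sample data; the candidate grids are arbitrary example values, not recommendations:

```matlab
% Sketch: tune an SVM by cross-validating over a grid of
% box constraints and kernel scales.
load ionosphere                      % predictor matrix X, labels Y
boxVals   = logspace(-2, 2, 5);      % candidate box constraints
scaleVals = logspace(-2, 2, 5);      % candidate kernel scales
cvLoss = zeros(numel(boxVals), numel(scaleVals));
for i = 1:numel(boxVals)
    for j = 1:numel(scaleVals)
        mdl = fitcsvm(X, Y, 'KernelFunction', 'rbf', ...
            'BoxConstraint', boxVals(i), 'KernelScale', scaleVals(j));
        cvmdl = crossval(mdl, 'KFold', 5);   % 5-fold partitioned model
        cvLoss(i,j) = kfoldLoss(cvmdl);      % misclassification rate
    end
end
[~, idx] = min(cvLoss(:));
[iBest, jBest] = ind2sub(size(cvLoss), idx);
fprintf('Best box constraint %.3g, kernel scale %.3g\n', ...
    boxVals(iBest), scaleVals(jBest));
```

The best pair can then be used to train a final model on all of the training data.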
Apps
Classification Learner | Train models to classify data using supervised machine learning
Functions
Objects
Properties
Confusion matrix chart appearance and behavior
Receiver operating characteristic (ROC) curve appearance and behavior
Topics
Classification Learner
- Train Classification Models in Classification Learner App
Workflow for training, comparing, and improving classification models, including automated, manual, and parallel training.
Compare model accuracy scores, visualize results by plotting class predictions, and check per-class performance in the confusion matrix.
Identify useful predictors using plots or feature ranking algorithms, select features to include, and transform features using PCA in Classification Learner.
Feature Selection
- Introduction to Feature Selection
Learn about feature selection algorithms and explore the functions available for feature selection.
This topic introduces sequential feature selection and provides an example that selects features sequentially using a custom criterion and the sequentialfs function.
Neighborhood component analysis (NCA) is a nonparametric method for selecting features with the goal of maximizing the prediction accuracy of regression and classification algorithms.
This example shows how to tune the regularization parameter in fscnca using cross-validation.
Make a simpler, more robust model by removing predictors without compromising its predictive power. - Select Features for Classifying High-Dimensional Data
This example shows how to select features for classifying high-dimensional data. Specifically, it shows how to perform sequential feature selection, one of the most popular feature selection algorithms. It also shows how to use holdout validation and cross-validation to assess the performance of the selected features.
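A sequential feature selection run with a custom misclassification-count criterion can be sketched like this. It is a minimal illustration assuming the built-in ionosphere data; the tree learner is an arbitrary choice for the criterion function:

```matlab
% Sketch: sequential feature selection with a custom criterion.
load ionosphere                      % predictor matrix X, labels Y
rng(1)                               % reproducible CV partitions
% Criterion: train on (XT,yT), count misclassifications on (Xt,yt).
critfun = @(XT, yT, Xt, yt) ...
    sum(~strcmp(yt, predict(fitctree(XT, yT), Xt)));
opts = statset('Display', 'iter');
inmodel = sequentialfs(critfun, X, Y, 'cv', 10, 'options', opts);
selected = find(inmodel)             % indices of the selected features
```

sequentialfs adds features one at a time, keeping each addition only if it lowers the cross-validated criterion.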
Feature Engineering
Use gencfeatures to engineer new features before training a classification model. Before making predictions on new data, apply the same feature transformations to the new data set.
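The engineer-then-reapply workflow can be sketched as follows. This is an illustration assuming the built-in fisheriris data and hypothetical short variable names; gencfeatures requires R2021a or later:

```matlab
% Sketch: generate engineered features, train, then transform new data.
load fisheriris
Tbl = array2table(meas, 'VariableNames', {'SL','SW','PL','PW'});
Tbl.Species = species;
cv = cvpartition(Tbl.Species, 'Holdout', 0.3);
trainTbl = Tbl(training(cv), :);
testTbl  = Tbl(test(cv), :);
% Request 10 engineered features that predict Species.
[Transformer, newTrainTbl] = gencfeatures(trainTbl, 'Species', 10);
mdl = fitctree(newTrainTbl, 'Species');
% Apply the SAME transformations to the held-out data before predicting.
newTestTbl = transform(Transformer, testTbl);
testLoss = loss(mdl, newTestTbl, 'Species')
```

Reusing the returned Transformer guarantees that training and test data pass through identical feature transformations.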
Automated Model Selection
- Automated Classifier Selection with Bayesian and ASHA Optimization
Use fitcauto to automatically try a selection of classification model types with different hyperparameter values, given training predictor and response data.
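A minimal fitcauto call might look like this, assuming the built-in ionosphere data; the evaluation budget is an arbitrary example value, and the call runs an optimization, so it takes a while:

```matlab
% Sketch: let fitcauto search over model types and hyperparameters.
load ionosphere                      % predictor matrix X, labels Y
rng('default')                       % reproducible search
Mdl = fitcauto(X, Y, ...
    'HyperparameterOptimizationOptions', ...
    struct('MaxObjectiveEvaluations', 20, 'ShowPlots', false));
% Mdl is the model type / hyperparameter combination that minimized
% the cross-validation loss during the search.
resubLoss(Mdl)                       % resubstitution error of the winner
```

Because the search covers many learner types, fitcauto is useful when you have no strong prior about which classifier family fits the data.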
Hyperparameter Optimization
- Bayesian Optimization Workflow
Perform Bayesian optimization using a fit function or by calling bayesopt directly.
Create variables for Bayesian optimization.
Create the objective function for Bayesian optimization.
Set different types of constraints for Bayesian optimization.
Minimize cross-validation loss using Bayesian optimization.
Minimize cross-validation loss using the OptimizeHyperparameters name-value argument in a fit function.
Visually monitor a Bayesian optimization.
Monitor a Bayesian optimization. - Bayesian Optimization Algorithm
Understand the underlying algorithms for Bayesian optimization.
How Bayesian optimization works in parallel.
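The two entry points above (a fit function versus calling bayesopt directly) can be sketched side by side. This is an illustration assuming the built-in ionosphere data; the variable ranges and evaluation budgets are arbitrary example values:

```matlab
% Sketch: Bayesian optimization two ways.
load ionosphere                      % predictor matrix X, labels Y
rng('default')
% (1) Let the fit function tune itself.
mdl1 = fitcsvm(X, Y, 'OptimizeHyperparameters', 'auto', ...
    'HyperparameterOptimizationOptions', ...
    struct('MaxObjectiveEvaluations', 15, 'ShowPlots', false));
% (2) Call bayesopt directly with optimizable variables and a
%     cross-validation objective.
box   = optimizableVariable('box',   [1e-3, 1e3], 'Transform', 'log');
scale = optimizableVariable('scale', [1e-3, 1e3], 'Transform', 'log');
objfun = @(p) kfoldLoss(crossval(fitcsvm(X, Y, ...
    'BoxConstraint', p.box, 'KernelScale', p.scale), 'KFold', 5));
results = bayesopt(objfun, [box, scale], ...
    'MaxObjectiveEvaluations', 15, 'PlotFcn', []);
bestPoint(results)                   % best hyperparameter pair found
```

Option (1) is the simplest path; option (2) gives full control over the variables, objective, and constraints.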
Model Interpretation
- Interpret Machine Learning Models
Explain model predictions using the lime and shapley objects and the plotPartialDependence function. - Shapley Values for Machine Learning Model
Compute Shapley values for a machine learning model using the interventional algorithm or the conditional algorithm.
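A minimal interpretation pass might look like this, assuming a tree trained on the built-in fisheriris data; the query point and class label are arbitrary example choices:

```matlab
% Sketch: interpret a trained classifier.
load fisheriris
mdl = fitctree(meas, species, ...
    'PredictorNames', {'SL','SW','PL','PW'});
% Shapley values explain a single prediction at a query point.
explainer = shapley(mdl);
explainer = fit(explainer, meas(1,:));
plot(explainer)                      % per-feature contribution bar chart
% Partial dependence shows the average effect of one predictor
% on the score for one class.
figure
plotPartialDependence(mdl, 'PL', 'setosa')
```

Shapley values are local (one observation at a time), while partial dependence summarizes a predictor's average effect over the whole data set.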
Cross-Validation
- Implement Cross-Validation Using Parallel Computing
Speed up cross-validation using parallel computing.
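A parallel cross-validation run can be sketched as below. This is an illustration assuming the built-in ionosphere data; it requires Parallel Computing Toolbox, and the tree learner in the prediction function is an arbitrary example:

```matlab
% Sketch: 10-fold cross-validation with folds evaluated in parallel.
load ionosphere                      % predictor matrix X, labels Y
pool = parpool;                      % start parallel workers
opts = statset('UseParallel', true);
% Prediction function: train on (XT,yT), predict labels for Xt.
predfun = @(XT, yT, Xt) predict(fitctree(XT, yT), Xt);
err = crossval('mcr', X, Y, 'Predfun', predfun, ...
    'KFold', 10, 'Options', opts)    % misclassification rate
delete(pool)
```

Each fold is independent, so distributing folds across workers gives a near-linear speedup for expensive learners.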
Classification Performance Evaluation
Use rocmetrics to examine the performance of a classification algorithm on a test data set.
Learn how the perfcurve function computes a receiver operating characteristic (ROC) curve.