Hyperparameter Optimization in Classification Learner App



After you choose a particular type of model to train, for example a decision tree or a support vector machine (SVM), you can tune your model by selecting different advanced options. For example, you can change the maximum number of splits for a decision tree or the box constraint of an SVM. Some of these options are internal parameters of the model, or hyperparameters, that can strongly affect its performance. Instead of selecting these options manually, you can use hyperparameter optimization within the Classification Learner app to automate the selection of hyperparameter values. For a given model type, the app tries different combinations of hyperparameter values by using an optimization scheme that seeks to minimize the model classification error, and returns a model with the optimized hyperparameters. You can use the resulting model as you would any other trained model.

Note

Because hyperparameter optimization can lead to an overfitted model, the recommended approach is to create a separate test set before importing your data into the Classification Learner app. After you train your optimizable model, you can see how it performs on your test set. For an example, see .

To perform hyperparameter optimization in Classification Learner, follow these steps:

  1. Choose a model type and decide which hyperparameters to optimize. See Select Hyperparameters to Optimize.

    Note

    Hyperparameter optimization is not supported for logistic regression, efficiently trained linear, or kernel approximation models.

  2. (Optional) Specify how the optimization is performed. For more information, see Optimization Options.

  3. Train your model. Use the Minimum Classification Error Plot to track the optimization results.

  4. Inspect your trained model. See Optimization Results.
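The app-based workflow above has a command-line counterpart. As a minimal sketch, assuming a predictor matrix X and binary class labels Y already exist in the workspace, a fit function such as fitcsvm can run the same Bayesian optimization through its OptimizeHyperparameters name-value argument:

```matlab
% Sketch: command-line equivalent of training an optimizable SVM.
% X (predictors) and Y (binary class labels) are assumed to exist.
rng("default")  % for reproducibility of the optimization
Mdl = fitcsvm(X, Y, ...
    "OptimizeHyperparameters","auto", ...   % tune box constraint and kernel scale
    "HyperparameterOptimizationOptions", ...
    struct("MaxObjectiveEvaluations",30));  % same default as the app's Iterations

% The optimization details are stored in a BayesianOptimization object.
results = Mdl.HyperparameterOptimizationResults;
```

The resulting Mdl is a trained classifier with the optimized hyperparameter values, analogous to the model the app returns.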

Select Hyperparameters to Optimize

In the Classification Learner app, in the Models section of the Classification Learner tab, click the arrow to open the gallery. The gallery includes optimizable models that you can train using hyperparameter optimization.

After you select an optimizable model, you can choose which of its hyperparameters you want to optimize. In the model Summary tab, in the Model Hyperparameters section, select the Optimize check boxes for the hyperparameters that you want to optimize. Under Values, specify the fixed values for the hyperparameters that you do not want to optimize or that are not optimizable.

This table describes the hyperparameters that you can optimize for each type of model and the search range of each hyperparameter. It also includes the additional hyperparameters for which you can specify fixed values.

Model | Optimizable Hyperparameters | Additional Hyperparameters | Notes
Optimizable Tree

Optimizable hyperparameters:

  • Maximum Number of Splits – The software searches among integers log-scaled in the range [1,max(2,n–1)], where n is the number of observations.

  • Split Criterion – The software searches among Gini's diversity index, Twoing rule, and Maximum deviance reduction.

Additional hyperparameters:

  • Surrogate Decision Splits

  • Maximum Surrogates per Node

For more information, see .
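At the command line, the corresponding fitctree call can restrict the search to the same two hyperparameters. A sketch, assuming X and Y hold the training data:

```matlab
% Optimize only the maximum number of splits and the split criterion,
% mirroring the Optimizable Tree preset (X and Y are assumed to exist).
Mdl = fitctree(X, Y, ...
    "OptimizeHyperparameters",{'MaxNumSplits','SplitCriterion'});
```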

Optimizable Discriminant

Optimizable hyperparameters:

  • Discriminant Type – The software searches among Linear, Quadratic, Diagonal linear, and Diagonal quadratic.

Notes:

  • The Discriminant Type optimizable hyperparameter combines the preset model types (Linear Discriminant and Quadratic Discriminant) with the Covariance Structure advanced option of the preset models.

For more information, see .

Optimizable Naive Bayes

Optimizable hyperparameters:

  • Distribution Names – The software searches between Gaussian and Kernel.

  • Kernel Type – The software searches among Gaussian, Box, Epanechnikov, and Triangle.

Additional hyperparameters:

  • Support

Notes:

  • The Gaussian value of the Distribution Names optimizable hyperparameter specifies a Gaussian Naive Bayes model. Similarly, the Kernel Distribution Names value specifies a Kernel Naive Bayes model.

For more information, see .

Optimizable SVM

Optimizable hyperparameters:

  • Kernel Function – The software searches among Gaussian, Linear, Quadratic, and Cubic.

  • Box Constraint Level – The software searches among positive values log-scaled in the range [0.001,1000].

  • Kernel Scale – The software searches among positive values log-scaled in the range [0.001,1000].

  • Multiclass Method – The software searches between One-vs-One and One-vs-All.

  • Standardize Data – The software searches between Yes and No.

Notes:

  • The Kernel Scale optimizable hyperparameter combines the Kernel Scale Mode and Manual Kernel Scale advanced options of the preset SVM models.

  • You can optimize the Kernel Scale optimizable hyperparameter only when the Kernel Function value is Gaussian. Unless you specify a Kernel Scale value by clearing the Optimize check box, the app uses the Manual value of 1 by default when the Kernel Function has a value other than Gaussian.

For more information, see .

Optimizable KNN

Optimizable hyperparameters:

  • Number of Neighbors – The software searches among integers log-scaled in the range [1,max(2,round(n/2))], where n is the number of observations.

  • Distance Metric – The software searches among:

    • Euclidean

    • City block

    • Chebyshev

    • Minkowski (cubic)

    • Mahalanobis

    • Cosine

    • Correlation

    • Spearman

    • Hamming

    • Jaccard

  • Distance Weight – The software searches among Equal, Inverse, and Squared inverse.

  • Standardize Data – The software searches between Yes and No.

For more information, see .
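A comparable command-line sketch for the kNN hyperparameters above, again assuming X and Y hold the training data:

```matlab
% Optimize the number of neighbors, distance metric, distance weight,
% and standardization flag, as in the Optimizable KNN preset.
Mdl = fitcknn(X, Y, ...
    "OptimizeHyperparameters", ...
    {'NumNeighbors','Distance','DistanceWeight','Standardize'});
```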

Optimizable Ensemble

Optimizable hyperparameters:

  • Ensemble Method – The software searches among AdaBoost, RUSBoost, LogitBoost, GentleBoost, and Bag.

  • Maximum Number of Splits – The software searches among integers log-scaled in the range [1,max(2,n–1)], where n is the number of observations.

  • Number of Learners – The software searches among integers log-scaled in the range [10,500].

  • Learning Rate – The software searches among real values log-scaled in the range [0.001,1].

  • Number of Predictors to Sample – The software searches among integers in the range [1,max(2,p)], where p is the number of predictor variables.

Additional hyperparameters:

  • Learner Type

Notes:

  • The AdaBoost, LogitBoost, and GentleBoost values of the Ensemble Method optimizable hyperparameter specify a Boosted Trees model. Similarly, the RUSBoost Ensemble Method value specifies a RUSBoosted Trees model, and the Bag Ensemble Method value specifies a Bagged Trees model.

  • The LogitBoost and GentleBoost values are available only for binary classification.

  • You can optimize the Number of Predictors to Sample optimizable hyperparameter only when the Ensemble Method value is Bag. Unless you specify a Number of Predictors to Sample value by clearing the Optimize check box, the app uses the default value of Select All when the Ensemble Method has a value other than Bag.

For more information, see .
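The ensemble search has a similarly compact command-line sketch, assuming X and Y hold the training data:

```matlab
% "auto" for fitcensemble searches over the ensemble method, the number
% of learners, the learning rate, and a tree complexity parameter.
Mdl = fitcensemble(X, Y, "OptimizeHyperparameters","auto");
```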

Optimizable Neural Network

Optimizable hyperparameters:

  • Number of Fully Connected Layers – The software searches among 1, 2, and 3 fully connected layers.

  • First Layer Size – The software searches among integers log-scaled in the range [1,300].

  • Second Layer Size – The software searches among integers log-scaled in the range [1,300].

  • Third Layer Size – The software searches among integers log-scaled in the range [1,300].

  • Activation – The software searches among ReLU, Tanh, None, and Sigmoid.

  • Regularization Strength (Lambda) – The software searches among real values log-scaled in the range [1e-5/n,1e5/n], where n is the number of observations.

  • Standardize Data – The software searches between Yes and No.

Additional hyperparameters:

  • Iteration Limit

For more information, see .

Optimization Options

By default, the Classification Learner app performs hyperparameter tuning by using Bayesian optimization. The goal of Bayesian optimization, and optimization in general, is to find a point that minimizes an objective function. In the context of hyperparameter tuning in the app, a point is a set of hyperparameter values, and the objective function is the loss function, or the classification error. For more information on the basics of Bayesian optimization, see Bayesian Optimization Workflow.

You can specify how the hyperparameter tuning is performed. For example, you can change the optimization method to grid search or limit the training time. On the Classification Learner tab, in the Options section, click Optimizer. The app opens a dialog box in which you can select optimization options.

After making your selections, click Save and Apply. Your selections affect all draft optimizable models in the Models pane and are applied to new optimizable models that you create using the gallery in the Models section of the Classification Learner tab.

To specify optimization options for a single optimizable model, open and edit the model summary before training the model. Click the model in the Models pane. The model Summary tab includes an editable Optimizer section.

This table describes the available optimization options and their default values.

Option | Description
Optimizer

The Optimizer values are:

  • Bayesopt (default) – Use Bayesian optimization. Internally, the app calls the bayesopt function.

  • Grid search – Use grid search with the number of values per dimension determined by the Number of grid divisions value. The app searches in a random order, using uniform sampling without replacement from the grid.

  • Random search – Search at random among points, where the number of points corresponds to the Iterations value.

Acquisition function

When the app performs Bayesian optimization for hyperparameter tuning, it uses the acquisition function to determine the next set of hyperparameter values to try.

The Acquisition function values are:

  • Expected improvement per second plus (default)

  • Expected improvement

  • Expected improvement plus

  • Expected improvement per second

  • Lower confidence bound

  • Probability of improvement

For details on how these acquisition functions work in the context of Bayesian optimization, see Acquisition Function Types.

Iterations

Each iteration corresponds to a combination of hyperparameter values that the app tries. When you use Bayesian optimization or random search, specify a positive integer that sets the number of iterations. The default value is 30.

When you use grid search, the app ignores the Iterations value and evaluates the loss at every point in the entire grid. You can set a training time limit to stop the optimization process prematurely.

Training time limit

To set a training time limit, select this option and set the Maximum training time in seconds option. By default, the app does not have a training time limit.

Maximum training time in seconds

Set the training time limit in seconds as a positive real number. The default value is 300. The run time can exceed the training time limit because this limit does not interrupt an iteration evaluation.

Number of grid divisions

When you use grid search, set a positive integer as the number of values the app tries for each numeric hyperparameter. The app ignores this value for categorical hyperparameters. The default value is 10.
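These options map onto the HyperparameterOptimizationOptions structure accepted by the fit functions. A sketch of the mapping, assuming X and Y hold the training data:

```matlab
% Map the app's optimizer options onto the HyperparameterOptimizationOptions
% structure of a fit function (X and Y are assumed to exist in the workspace).
opts = struct( ...
    "Optimizer","bayesopt", ...                % or "gridsearch", "randomsearch"
    "AcquisitionFunctionName","expected-improvement-plus", ...
    "MaxObjectiveEvaluations",30, ...          % corresponds to Iterations
    "MaxTime",300);                            % training time limit in seconds
% For grid search, set "Optimizer","gridsearch" and "NumGridDivisions",10.
Mdl = fitctree(X, Y, ...
    "OptimizeHyperparameters","auto", ...
    "HyperparameterOptimizationOptions",opts);
```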

Minimum Classification Error Plot

After specifying which model hyperparameters to optimize and setting any additional optimization options (optional), train your optimizable model. On the Classification Learner tab, in the Train section, click Train All and select Train Selected. The app creates a Minimum Classification Error Plot that it updates as the optimization runs.

Minimum classification error plot of an optimizable SVM model

The Minimum Classification Error Plot displays the following information:

  • Estimated minimum classification error – Each light blue point corresponds to an estimate of the minimum classification error computed by the optimization process when considering all the sets of hyperparameter values tried so far, including the current iteration.

    The estimate is based on an upper confidence interval of the current classification error objective model, as mentioned in the Bestpoint hyperparameters description.

    If you use grid search or random search to perform hyperparameter optimization, the app does not display these light blue points.

  • Observed minimum classification error – Each dark blue point corresponds to the observed minimum classification error computed so far by the optimization process. For example, at the third iteration, the dark blue point corresponds to the minimum of the classification error observed in the first, second, and third iterations.

  • Bestpoint hyperparameters – The red square indicates the iteration that corresponds to the optimized hyperparameters. You can find the values of the optimized hyperparameters listed in the upper right of the plot under Optimization Results.

    The optimized hyperparameters do not always provide the observed minimum classification error. When the app performs hyperparameter tuning by using Bayesian optimization (see Optimization Options for a brief introduction), it chooses the set of hyperparameter values that minimizes an upper confidence interval of the classification error objective model, rather than the set that minimizes the classification error. For more information, see the "Criterion","min-visited-upper-confidence-interval" name-value argument of bestPoint.

  • Minimum error hyperparameters – The yellow point indicates the iteration that corresponds to the hyperparameters that yield the observed minimum classification error.

    For more information, see the "Criterion","min-observed" name-value argument of bestPoint.

    If you use grid search to perform hyperparameter optimization, the Bestpoint hyperparameters and the Minimum error hyperparameters are the same.

Missing points in the plot correspond to NaN minimum classification error values.
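The two criteria behind the red square and the yellow point correspond to the Criterion values of the bestPoint function. A sketch, assuming results is a BayesianOptimization object returned by a fit function's optimization (for example, Mdl.HyperparameterOptimizationResults):

```matlab
% Bestpoint hyperparameters: minimize an upper confidence interval of the
% classification error objective model.
[xBest, estErr] = bestPoint(results, ...
    "Criterion","min-visited-upper-confidence-interval");

% Minimum error hyperparameters: the point with the smallest observed error.
[xMinObs, obsErr] = bestPoint(results, "Criterion","min-observed");
```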

Optimization Results

When the app finishes tuning model hyperparameters, it returns a model trained with the optimized hyperparameter values (Bestpoint hyperparameters). The model metrics, displayed plots, and exported model correspond to this trained model with fixed hyperparameter values.

To inspect the optimization results of a trained optimizable model, select the model in the Models pane and look at the model Summary tab.

Summary tab of an optimizable SVM model

The model Summary tab includes these sections:

  • Training Results – Shows the performance of the optimizable model. See .

  • Model Hyperparameters – Displays the type of optimizable model and lists any fixed hyperparameter values.

    • Optimized Hyperparameters – Lists the values of the optimized hyperparameters.

    • Hyperparameter Search Range – Displays the search ranges for the optimized hyperparameters.

  • Optimizer – Shows the selected optimizer options.

When you perform hyperparameter tuning using Bayesian optimization and you export the resulting trained optimizable model to the workspace as a structure, the structure includes a BayesianOptimization object in the HyperparameterOptimizationResult field. The object contains the results of the optimization performed in the app.
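For example, if you export the model to the workspace as a structure named trainedModel (an assumed name for illustration), you can inspect the stored optimization results; the property names below are those of the BayesianOptimization class:

```matlab
% Retrieve the optimization record from the exported structure.
results = trainedModel.HyperparameterOptimizationResult;  % BayesianOptimization object

results.MinObjective              % observed minimum classification error
results.XAtMinEstimatedObjective  % hyperparameters at the estimated minimum
plot(results, @plotMinObjective)  % redraw the minimum classification error trace
```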

When you generate MATLAB® code from a trained optimizable model, the generated code uses the fixed and optimized hyperparameter values of the model to train on new data. The generated code does not include the optimization process. For information on how to perform Bayesian optimization when you use a fit function, see Bayesian Optimization Using a Fit Function.
