fitcnet
Train neural network classification model
Since R2021a
Syntax

Mdl = fitcnet(Tbl,ResponseVarName)
Mdl = fitcnet(___,Name,Value)

Description
Use fitcnet to train a feedforward, fully connected neural network for classification. The first fully connected layer of the neural network has a connection from the network input (predictor data), and each subsequent layer has a connection from the previous layer. Each fully connected layer multiplies the input by a weight matrix and then adds a bias vector. An activation function follows each fully connected layer. The final fully connected layer and the subsequent softmax activation function produce the network's output, namely classification scores (posterior probabilities) and predicted labels. For more information, see Neural Network Structure.
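For example, for a network with one fully connected layer (the default), the output is computed essentially as follows. This is a minimal sketch; Mdl stands for a trained model and x for a column vector of predictor values, both hypothetical here.

z1 = Mdl.LayerWeights{1}*x + Mdl.LayerBiases{1};        % first fully connected layer
a1 = max(z1,0);                                         % default ReLU activation
z2 = Mdl.LayerWeights{end}*a1 + Mdl.LayerBiases{end};   % final fully connected layer
scores = exp(z2)/sum(exp(z2))                           % softmax: posterior probabilities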
Mdl = fitcnet(Tbl,ResponseVarName) returns a neural network classification model Mdl trained using the predictors in the table Tbl and the class labels in the ResponseVarName table variable.
Mdl = fitcnet(___,Name,Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can adjust the number of outputs and the activation functions for the fully connected layers by specifying the LayerSizes and Activations name-value arguments.
Examples

Train Neural Network Classifier

Train a neural network classifier, and assess the performance of the classifier on a test set.

Read the sample file CreditRating_Historical.dat into a table. The predictor data consists of financial ratios and industry sector information for a list of corporate customers. The response variable consists of credit ratings assigned by a rating agency. Preview the first few rows of the data set.
creditrating = readtable("CreditRating_Historical.dat");
head(creditrating)
      ID      WC_TA     RE_TA     EBIT_TA    MVE_BVTD    S_TA     Industry    Rating 
    _____    ______    ______    _______    ________    _____    ________    _______

    62394     0.013     0.104     0.036       0.447     0.142       3        {'BB' }
    48608     0.232     0.335     0.062       1.969     0.281       8        {'A'  }
    42444     0.311     0.367     0.074       1.935     0.366       1        {'A'  }
    48631     0.194     0.263     0.062       1.017     0.228       4        {'BBB'}
    43768     0.121     0.413     0.057       3.647     0.466      12        {'AAA'}
    39255    -0.117    -0.799     0.01        0.179     0.082       4        {'CCC'}
    62236     0.087     0.158     0.049       0.816     0.324       2        {'BBB'}
    39354     0.005     0.181     0.034       2.597     0.388       7        {'AA' }
Because each value in the ID variable is a unique customer ID, that is, length(unique(creditrating.ID)) is equal to the number of observations in creditrating, the ID variable is a poor predictor. Remove the ID variable from the table, and convert the Industry variable to a categorical variable.

creditrating = removevars(creditrating,"ID");
creditrating.Industry = categorical(creditrating.Industry);
Convert the Rating response variable to an ordinal categorical variable.

creditrating.Rating = categorical(creditrating.Rating, ...
    ["AAA","AA","A","BBB","BB","B","CCC"],"Ordinal",true);
Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use cvpartition to partition the data.

rng("default") % For reproducibility of the partition
c = cvpartition(creditrating.Rating,"Holdout",0.20);
trainingIndices = training(c); % Indices for the training set
testIndices = test(c); % Indices for the test set
creditTrain = creditrating(trainingIndices,:);
creditTest = creditrating(testIndices,:);
Train a neural network classifier by passing the training data creditTrain to the fitcnet function.

Mdl = fitcnet(creditTrain,"Rating")

Mdl = 
  ClassificationNeuralNetwork
           PredictorNames: {'WC_TA'  'RE_TA'  'EBIT_TA'  'MVE_BVTD'  'S_TA'  'Industry'}
             ResponseName: 'Rating'
    CategoricalPredictors: 6
               ClassNames: [AAA    AA    A    BBB    BB    B    CCC]
           ScoreTransform: 'none'
          NumObservations: 3146
               LayerSizes: 10
              Activations: 'relu'
    OutputLayerActivation: 'softmax'
                   Solver: 'LBFGS'
          ConvergenceInfo: [1×1 struct]
          TrainingHistory: [1000×7 table]

  Properties, Methods
Mdl is a trained ClassificationNeuralNetwork classifier. You can use dot notation to access the properties of Mdl. For example, you can specify Mdl.TrainingHistory to get more information about the training history of the neural network model.
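For instance, a quick look at this record (a sketch; TrainingHistory is a table, so table functions such as head apply):

head(Mdl.TrainingHistory)   % per-iteration record: Iteration, TrainingLoss, and so on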
Evaluate the performance of the classifier on the test set by computing the test set classification accuracy. Visualize the results by using a confusion matrix.

testAccuracy = 1 - loss(Mdl,creditTest,"Rating", ...
    "LossFun","classiferror")

testAccuracy = 0.7964

confusionchart(creditTest.Rating,predict(Mdl,creditTest))
Specify Neural Network Classifier Architecture

Specify the structure of a neural network classifier, including the size of the fully connected layers.

Load the ionosphere data set, which includes radar signal data. X contains the predictor data, and Y is the response variable, whose values represent either good ("g") or bad ("b") radar signals.
load ionosphere
Separate the data into training data (XTrain and YTrain) and test data (XTest and YTest) by using a stratified holdout partition. Reserve approximately 30% of the observations for testing, and use the rest of the observations for training.

rng("default") % For reproducibility of the partition
cvp = cvpartition(Y,"Holdout",0.3);
XTrain = X(training(cvp),:);
YTrain = Y(training(cvp));
XTest = X(test(cvp),:);
YTest = Y(test(cvp));
Train a neural network classifier. Specify to have 35 outputs in the first fully connected layer and 20 outputs in the second fully connected layer. By default, both layers use a rectified linear unit (ReLU) activation function. You can change the activation functions for the fully connected layers by using the Activations name-value argument.

Mdl = fitcnet(XTrain,YTrain, ...
    "LayerSizes",[35 20])

Mdl = 
  ClassificationNeuralNetwork
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
          NumObservations: 246
               LayerSizes: [35 20]
              Activations: 'relu'
    OutputLayerActivation: 'softmax'
                   Solver: 'LBFGS'
          ConvergenceInfo: [1×1 struct]
          TrainingHistory: [47×7 table]

  Properties, Methods
Access the weights and biases for the fully connected layers of the trained classifier by using the LayerWeights and LayerBiases properties of Mdl. The first two elements of each property correspond to the values for the first two fully connected layers, and the third element corresponds to the values for the final fully connected layer with a softmax activation function for classification. For example, display the weights and biases for the second fully connected layer.

Mdl.LayerWeights{2}
ans = 20×35
0.0481 0.2501 -0.1535 -0.0934 0.0760 -0.0579 -0.2465 1.0411 0.3712 -1.2007 1.1162 0.4296 0.4045 0.5005 0.8839 0.4624 -0.3154 0.3454 -0.0487 0.2648 0.0732 0.5773 0.4286 0.0881 0.9468 0.2981 0.5534 1.0518 -0.0224 0.6894 0.5527 0.7045 -0.6124 0.2145 -0.0790
-0.9489 -1.8343 0.5510 -0.5751 -0.8726 0.8815 0.0203 -1.6379 2.0315 1.7599 -1.4153 -1.4335 -1.1638 -0.1715 1.1439 -0.7661 1.1230 -1.1982 -0.5409 -0.5821 -0.0627 -0.7038 -0.0817 -1.5773 -1.4671 0.2053 -0.7931 -1.6201 -0.1737 -0.7762 -0.3063 -0.8771 1.5134 -0.4611 -0.0649
-0.1910 0.0246 -0.3511 0.0097 0.3160 -0.0693 0.2270 -0.0783 -0.1626 -0.3478 0.2765 0.4179 0.0727 -0.0314 -0.1798 -0.0583 0.1375 -0.1876 0.2518 0.2137 0.1497 0.0395 0.2859 -0.0905 0.4325 -0.2012 0.0388 -0.1441 -0.1431 -0.0249 -0.2200 0.0860 -0.2076 0.0132 0.1737
-0.0415 -0.0059 -0.0753 -0.1477 -0.1621 -0.1762 0.2164 0.1710 -0.0610 -0.1402 0.1452 0.2890 0.2872 -0.2616 -0.4204 -0.2831 -0.1901 0.0036 0.0781 -0.0826 0.1588 -0.2782 0.2510 -0.1069 -0.2692 0.2306 0.2521 0.0306 0.2524 -0.4218 0.2478 0.2343 -0.1031 0.1037 0.1598
1.1848 1.6142 -0.1352 0.5774 0.5491 0.0103 0.0209 0.7219 -0.8643 -0.5578 1.3595 1.5385 1.0015 0.7416 -0.4342 0.2279 0.5667 1.1589 0.7100 0.1823 0.4171 0.7051 0.0794 1.3267 1.2659 0.3197 0.3947 0.3436 -0.1415 0.6607 1.0071 0.7726 -0.2840 0.8801 0.0848
0.2486 -0.2920 -0.0004 0.2806 0.2987 -0.2709 0.1473 -0.2580 -0.0499 -0.0755 0.2000 0.1535 -0.0285 -0.0520 -0.2523 -0.2505 -0.0437 -0.2323 0.2023 0.2061 -0.1365 0.0744 0.0344 -0.2891 0.2341 -0.1556 0.1459 0.2533 -0.0583 0.0243 -0.2949 -0.1530 0.1546 -0.0340 -0.1562
-0.0516 0.0640 0.1824 -0.0675 -0.2065 -0.0052 -0.1682 -0.1520 0.0060 0.0450 0.0813 -0.0234 0.0657 0.3219 -0.1871 0.0658 -0.2103 0.0060 -0.2831 -0.1811 -0.0988 0.2378 -0.0761 0.1714 -0.1596 -0.0011 0.0609 0.4003 0.3687 -0.2879 0.0910 0.0604 -0.2222 -0.2735 -0.1155
-0.6192 -0.7804 -0.0506 -0.4205 -0.2584 -0.2020 -0.0008 0.0534 1.0185 -0.0307 -0.0539 -0.2020 0.0368 -0.1847 0.0886 -0.4086 -0.4648 -0.3785 0.1542 -0.5176 -0.3207 0.1893 -0.0313 -0.5297 -0.1261 -0.2749 -0.6152 -0.5914 -0.3089 0.2432 -0.3955 -0.1711 0.1710 -0.4477 0.0718
0.5049 -0.1362 -0.2218 0.1637 -0.1282 -0.1008 0.1445 0.4527 -0.4887 0.0503 0.1453 0.1316 -0.3311 -0.1081 -0.7699 0.4062 -0.1105 -0.0855 0.0630 -0.1469 -0.2533 0.3976 0.0418 0.5294 0.3982 0.1027 -0.0973 -0.1282 0.2491 0.0425 0.0533 0.1578 -0.8403 -0.0535 -0.0048
1.1109 -0.0466 0.4044 0.6366 0.1863 0.5660 0.2839 0.8793 -0.5497 0.0057 0.3468 0.0980 0.3364 0.4669 0.1466 0.7883 -0.1743 0.4444 0.4535 0.1521 0.7476 0.2246 0.4473 0.2829 0.8881 0.4666 0.6334 0.3105 0.9571 0.2808 0.6483 0.1180 -0.4558 1.2486 0.2453
⋮
Mdl.LayerBiases{2}
ans = 20×1
0.6147
0.1891
-0.2767
-0.2977
1.3655
0.0347
0.1509
-0.4839
-0.3960
0.9248
⋮
The final fully connected layer has two outputs, one for each class in the response variable. The number of layer outputs corresponds to the first dimension of the layer weights and layer biases.

size(Mdl.LayerWeights{end})

ans = 1×2

     2    20

size(Mdl.LayerBiases{end})

ans = 1×2

     2     1
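As a check on this structure, you can propagate one test observation through the stored weights by hand. This sketch assumes the ReLU hidden-layer activations reported in the model display; its output should reproduce the scores returned as the second output of predict(Mdl,XTest(1,:)).

a = XTest(1,:)';   % one observation as a column vector
for k = 1:numel(Mdl.LayerWeights)-1
    a = max(Mdl.LayerWeights{k}*a + Mdl.LayerBiases{k},0);   % ReLU layers
end
z = Mdl.LayerWeights{end}*a + Mdl.LayerBiases{end};   % final fully connected layer
scores = exp(z)/sum(exp(z))   % softmax gives the posterior probabilities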
To estimate the performance of the trained classifier, compute the test set classification error for Mdl.

testError = loss(Mdl,XTest,YTest, ...
    "LossFun","classiferror")

testError = 0.0774

accuracy = 1 - testError

accuracy = 0.9226

Mdl accurately classifies approximately 92% of the observations in the test set.
Stop Neural Network Training Early Using Validation Data

At each iteration of the training process, compute the validation loss of the neural network. Stop the training process early if the validation loss reaches a reasonable minimum.

Load the patients data set. Create a table from the data set. Each row corresponds to one patient, and each column corresponds to a diagnostic variable. Use the Smoker variable as the response variable, and the rest of the variables as predictors.
load patients
tbl = table(Diastolic,Systolic,Gender,Height,Weight,Age,Smoker);
Separate the data into a training set tblTrain and a validation set tblValidation by using a stratified holdout partition. The software reserves approximately 30% of the observations for the validation data set and uses the rest of the observations for the training data set.

rng("default") % For reproducibility of the partition
c = cvpartition(tbl.Smoker,"Holdout",0.30);
trainingIndices = training(c);
validationIndices = test(c);
tblTrain = tbl(trainingIndices,:);
tblValidation = tbl(validationIndices,:);
Train a neural network classifier by using the training set. Specify the Smoker column of tblTrain as the response variable. Evaluate the model at each iteration by using the validation set. Specify to display the training information at each iteration by using the Verbose name-value argument. By default, the training process ends early if the validation cross-entropy loss is greater than or equal to the minimum validation cross-entropy loss computed so far, six times in a row. To change the number of times the validation loss is allowed to be greater than or equal to the minimum, specify the ValidationPatience name-value argument.

Mdl = fitcnet(tblTrain,"Smoker", ...
    "ValidationData",tblValidation, ...
    "Verbose",1);
|==========================================================================================|
| Iteration | Train Loss | Gradient | Step | Iteration Time (sec) | Validation Loss | Validation Checks |
|==========================================================================================|
|  1 | 2.602935 | 26.866935 | 0.262009 | 0.051823 | 2.793048 | 0 |
|  2 | 1.470816 | 42.594723 | 0.058323 | 0.014575 | 1.247046 | 0 |
|  3 | 1.299292 | 25.854432 | 0.034910 | 0.005318 | 1.507857 | 1 |
|  4 | 0.710465 | 11.629107 | 0.013616 | 0.006658 | 0.889157 | 0 |
|  5 | 0.647783 |  2.561740 | 0.005753 | 0.017113 | 0.766728 | 0 |
|  6 | 0.645541 |  0.681579 | 0.001000 | 0.001492 | 0.776072 | 1 |
|  7 | 0.639611 |  1.544692 | 0.007013 | 0.003282 | 0.776320 | 2 |
|  8 | 0.604189 |  5.045676 | 0.064190 | 0.001400 | 0.744919 | 0 |
|  9 | 0.565364 |  5.851552 | 0.068845 | 0.000701 | 0.694226 | 0 |
| 10 | 0.391994 |  8.377717 | 0.560480 | 0.001128 | 0.425466 | 0 |
| 11 | 0.383843 |  0.630246 | 0.110270 | 0.002463 | 0.428487 | 1 |
| 12 | 0.369289 |  2.404750 | 0.084395 | 0.001113 | 0.405728 | 0 |
| 13 | 0.357839 |  6.220679 | 0.199197 | 0.001086 | 0.378480 | 0 |
| 14 | 0.344974 |  2.752717 | 0.029013 | 0.001361 | 0.367279 | 0 |
| 15 | 0.333747 |  0.711398 | 0.074513 | 0.003426 | 0.348499 | 0 |
| 16 | 0.327763 |  0.804818 | 0.122178 | 0.000920 | 0.330237 | 0 |
| 17 | 0.327702 |  0.778169 | 0.009810 | 0.000796 | 0.329095 | 0 |
| 18 | 0.327277 |  0.020615 | 0.004377 | 0.000755 | 0.329141 | 1 |
| 19 | 0.327273 |  0.010018 | 0.003313 | 0.001056 | 0.328773 | 0 |
| 20 | 0.327268 |  0.019497 | 0.000805 | 0.001192 | 0.328831 | 1 |
| 21 | 0.327228 |  0.113983 | 0.005397 | 0.000600 | 0.329085 | 2 |
| 22 | 0.327138 |  0.240166 | 0.012159 | 0.000572 | 0.329406 | 3 |
| 23 | 0.326865 |  0.428912 | 0.036841 | 0.000787 | 0.329952 | 4 |
| 24 | 0.325797 |  0.255227 | 0.139585 | 0.000781 | 0.331246 | 5 |
| 25 | 0.325181 |  0.758050 | 0.135868 | 0.001576 | 0.332035 | 6 |
|==========================================================================================|
Create a plot that compares the training cross-entropy loss and the validation cross-entropy loss at each iteration. By default, fitcnet stores the loss information inside the TrainingHistory property of the object Mdl. You can access this information by using dot notation.

iteration = Mdl.TrainingHistory.Iteration;
trainLosses = Mdl.TrainingHistory.TrainingLoss;
valLosses = Mdl.TrainingHistory.ValidationLoss;

plot(iteration,trainLosses,iteration,valLosses)
legend(["Training","Validation"])
xlabel("Iteration")
ylabel("Cross-Entropy Loss")
Check the iteration that corresponds to the minimum validation loss. The final returned model Mdl is the model trained at this iteration.

[~,minIdx] = min(valLosses);
iteration(minIdx)
ans = 19
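To loosen or tighten this stopping rule, pass a different patience value. A minimal sketch, reusing the same training and validation tables (MdlPatient is a hypothetical name):

% Allow the validation loss to miss the running minimum up to 10 times
% in a row (instead of the default 6) before stopping.
MdlPatient = fitcnet(tblTrain,"Smoker", ...
    "ValidationData",tblValidation, ...
    "ValidationPatience",10);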
Find Good Regularization Strength for Neural Network Using Cross-Validation

Assess the cross-validation loss of neural network models with different regularization strengths, and choose the regularization strength corresponding to the best performing model.

Read the sample file CreditRating_Historical.dat into a table. The predictor data consists of financial ratios and industry sector information for a list of corporate customers. The response variable consists of credit ratings assigned by a rating agency. Preview the first few rows of the data set.

creditrating = readtable("CreditRating_Historical.dat");
head(creditrating)
      ID      WC_TA     RE_TA     EBIT_TA    MVE_BVTD    S_TA     Industry    Rating 
    _____    ______    ______    _______    ________    _____    ________    _______

    62394     0.013     0.104     0.036       0.447     0.142       3        {'BB' }
    48608     0.232     0.335     0.062       1.969     0.281       8        {'A'  }
    42444     0.311     0.367     0.074       1.935     0.366       1        {'A'  }
    48631     0.194     0.263     0.062       1.017     0.228       4        {'BBB'}
    43768     0.121     0.413     0.057       3.647     0.466      12        {'AAA'}
    39255    -0.117    -0.799     0.01        0.179     0.082       4        {'CCC'}
    62236     0.087     0.158     0.049       0.816     0.324       2        {'BBB'}
    39354     0.005     0.181     0.034       2.597     0.388       7        {'AA' }
Because each value in the ID variable is a unique customer ID, that is, length(unique(creditrating.ID)) is equal to the number of observations in creditrating, the ID variable is a poor predictor. Remove the ID variable from the table, and convert the Industry variable to a categorical variable.

creditrating = removevars(creditrating,"ID");
creditrating.Industry = categorical(creditrating.Industry);
Convert the Rating response variable to an ordinal categorical variable.

creditrating.Rating = categorical(creditrating.Rating, ...
    ["AAA","AA","A","BBB","BB","B","CCC"],"Ordinal",true);
Create a cvpartition object for stratified 5-fold cross-validation. cvp partitions the data into five folds, where each fold has roughly the same proportions of different credit ratings. Set the random seed to the default value for reproducibility of the partition.

rng("default")
cvp = cvpartition(creditrating.Rating,"KFold",5);
Compute the cross-validation classification error for neural network classifiers with different regularization strengths. Try regularization strengths on the order of 1/n, where n is the number of observations. Specify to standardize the data before training the neural network models.
1/size(creditrating,1)
ans = 2.5432e-04
lambda = (0:0.5:5)*1e-4;
cvloss = zeros(length(lambda),1);

for i = 1:length(lambda)
    cvMdl = fitcnet(creditrating,"Rating","Lambda",lambda(i), ...
        "CVPartition",cvp,"Standardize",true);
    cvloss(i) = kfoldLoss(cvMdl,"LossFun","classiferror");
end
Plot the results. Find the regularization strength corresponding to the lowest cross-validation classification error.

plot(lambda,cvloss)
xlabel("Regularization Strength")
ylabel("Cross-Validation Loss")

[~,idx] = min(cvloss);
bestLambda = lambda(idx)

bestLambda = 1.0000e-04
Train a neural network classifier using the bestLambda regularization strength.

Mdl = fitcnet(creditrating,"Rating","Lambda",bestLambda, ...
    "Standardize",true)

Mdl = 
  ClassificationNeuralNetwork
           PredictorNames: {'WC_TA'  'RE_TA'  'EBIT_TA'  'MVE_BVTD'  'S_TA'  'Industry'}
             ResponseName: 'Rating'
    CategoricalPredictors: 6
               ClassNames: [AAA    AA    A    BBB    BB    B    CCC]
           ScoreTransform: 'none'
          NumObservations: 3932
               LayerSizes: 10
              Activations: 'relu'
    OutputLayerActivation: 'softmax'
                   Solver: 'LBFGS'
          ConvergenceInfo: [1×1 struct]
          TrainingHistory: [1000×7 table]

  Properties, Methods
Improve Neural Network Classifier Using OptimizeHyperparameters

Train a neural network classifier using the OptimizeHyperparameters argument to improve the resulting classifier. Using this argument causes fitcnet to minimize cross-validation loss over some problem hyperparameters by using Bayesian optimization.

Read the sample file CreditRating_Historical.dat into a table. The predictor data consists of financial ratios and industry sector information for a list of corporate customers. The response variable consists of credit ratings assigned by a rating agency. Preview the first few rows of the data set.

creditrating = readtable("CreditRating_Historical.dat");
head(creditrating)
ans=8×8 table
      ID      WC_TA     RE_TA     EBIT_TA    MVE_BVTD    S_TA     Industry    Rating 
    _____    ______    ______    _______    ________    _____    ________    _______

    62394     0.013     0.104     0.036       0.447     0.142       3        {'BB' }
    48608     0.232     0.335     0.062       1.969     0.281       8        {'A'  }
    42444     0.311     0.367     0.074       1.935     0.366       1        {'A'  }
    48631     0.194     0.263     0.062       1.017     0.228       4        {'BBB'}
    43768     0.121     0.413     0.057       3.647     0.466      12        {'AAA'}
    39255    -0.117    -0.799     0.01        0.179     0.082       4        {'CCC'}
    62236     0.087     0.158     0.049       0.816     0.324       2        {'BBB'}
    39354     0.005     0.181     0.034       2.597     0.388       7        {'AA' }
Because each value in the ID variable is a unique customer ID, that is, length(unique(creditrating.ID)) is equal to the number of observations in creditrating, the ID variable is a poor predictor. Remove the ID variable from the table, and convert the Industry variable to a categorical variable.

creditrating = removevars(creditrating,"ID");
creditrating.Industry = categorical(creditrating.Industry);
Convert the Rating response variable to an ordinal categorical variable.

creditrating.Rating = categorical(creditrating.Rating, ...
    ["AAA","AA","A","BBB","BB","B","CCC"],"Ordinal",true);
Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use cvpartition to partition the data.

rng("default") % For reproducibility of the partition
c = cvpartition(creditrating.Rating,"Holdout",0.20);
trainingIndices = training(c); % Indices for the training set
testIndices = test(c); % Indices for the test set
creditTrain = creditrating(trainingIndices,:);
creditTest = creditrating(testIndices,:);
Train a neural network classifier by passing the training data creditTrain to the fitcnet function, and include the OptimizeHyperparameters argument. For reproducibility, set the AcquisitionFunctionName to "expected-improvement-plus" in a HyperparameterOptimizationOptions structure. To attempt to get a better solution, set the number of optimization steps to 100 instead of the default 30. fitcnet performs Bayesian optimization by default. To use grid search or random search, set the Optimizer field in HyperparameterOptimizationOptions.

rng("default") % For reproducibility
Mdl = fitcnet(creditTrain,"Rating","OptimizeHyperparameters","auto", ...
    "HyperparameterOptimizationOptions", ...
    struct("AcquisitionFunctionName","expected-improvement-plus", ...
    "MaxObjectiveEvaluations",100))
|============================================================================================================================================|
| Iter | Eval result | Objective | Objective runtime | BestSoFar (observed) | BestSoFar (estim.) | Activations | Standardize | Lambda | LayerSizes |
|============================================================================================================================================|
|   1 | Best   | 0.55944 | 0.85659 | 0.55944 | 0.55944 | none    | true  | 0.05834 | 3 |
|   2 | Best   | 0.21488 | 10.56 | 0.21488 | 0.22858 | relu    | true  | 5.0811e-08 | [1 25] |
|   3 | Accept | 0.74189 | 0.38301 | 0.21488 | 0.21522 | sigmoid | true  | 0.57986 | 126 |
|   4 | Accept | 0.4501 | 0.55193 | 0.21488 | 0.21509 | tanh    | false | 0.018683 | 10 |
|   5 | Accept | 0.43071 | 6.8079 | 0.21488 | 0.21508 | relu    | true  | 3.3991e-06 | [2 1 4] |
|   6 | Accept | 0.21678 | 30.867 | 0.21488 | 0.21585 | relu    | true  | 6.8351e-09 | [2 179] |
|   7 | Accept | 0.27686 | 22.333 | 0.21488 | 0.21584 | relu    | true  | 1.3422e-06 | [78 4 2] |
|   8 | Accept | 0.24571 | 13.56 | 0.21488 | 0.21583 | tanh    | false | 1.8747e-06 | [10 3 19] |
|   9 | Best   | 0.21297 | 39.621 | 0.21297 | 0.21299 | tanh    | false | 0.00052 | [1 61 64] |
|  10 | Accept | 0.74189 | 0.82366 | 0.21297 | 0.21299 | tanh    | false | 0.15325 | [47 148 271] |
|  11 | Accept | 0.74189 | 0.28355 | 0.21297 | 0.21302 | relu    | false | 0.091971 | [3 2 64] |
|  12 | Accept | 0.22123 | 29.531 | 0.21297 | 0.21307 | tanh    | false | 1.7719e-06 | [3 64 38] |
|  13 | Accept | 0.74189 | 0.52092 | 0.21297 | 0.213 | tanh    | false | 0.51268 | [233 146 6] |
|  14 | Accept | 0.30197 | 46.694 | 0.21297 | 0.213 | relu    | true  | 3.4968e-08 | [295 17] |
|  15 | Accept | 0.2136 | 21.808 | 0.21297 | 0.21302 | tanh    | false | 4.2565e-05 | [1 61] |
|  16 | Accept | 0.21519 | 27.504 | 0.21297 | 0.21378 | tanh    | false | 3.562e-05 | [1 2 91] |
|  17 | Accept | 0.2136 | 7.4304 | 0.21297 | 0.21379 | relu    | true  | 3.1901e-09 | 1 |
|  18 | Accept | 0.22028 | 31.251 | 0.21297 | 0.21296 | tanh    | false | 6.7097e-05 | [3 144] |
|  19 | Accept | 0.21615 | 36.667 | 0.21297 | 0.21399 | tanh    | false | 7.8065e-08 | [1 197 4] |
|  20 | Accept | 0.2651 | 27.152 | 0.21297 | 0.21401 | tanh    | false | 3.3248e-09 | [6 112] |
|  21 | Accept | 0.29339 | 19.958 | 0.21297 | 0.21399 | relu    | true  | 4.2341e-09 | [27 10 54] |
|  22 | Accept | 0.25556 | 115.66 | 0.21297 | 0.21295 | tanh    | false | 3.3922e-09 | [277 228 2] |
|  23 | Accept | 0.2136 | 7.7187 | 0.21297 | 0.21294 | tanh    | false | 3.9912e-07 | 1 |
|  24 | Accept | 0.2918 | 47.115 | 0.21297 | 0.21294 | tanh    | false | 3.9317e-08 | [154 20 55] |
|  25 | Accept | 0.22123 | 40.451 | 0.21297 | 0.21293 | tanh    | false | 0.00066511 | [273 7] |
|  26 | Accept | 0.21456 | 8.1443 | 0.21297 | 0.21294 | tanh    | true  | 1.745e-08 | [1 2] |
|  27 | Accept | 0.28417 | 121.37 | 0.21297 | 0.21294 | tanh    | true  | 3.3445e-07 | [271 239 132] |
|  28 | Accept | 0.31882 | 34.873 | 0.21297 | 0.21294 | tanh    | true  | 3.2546e-09 | 259 |
|  29 | Accept | 0.21329 | 7.056 | 0.21297 | 0.21294 | tanh    | true  | 1.4764e-07 | 1 |
|  30 | Accept | 0.21488 | 7.9763 | 0.21297 | 0.21293 | tanh    | true  | 4.2304e-05 | [1 3] |
|  31 | Accept | 0.28862 | 36.1 | 0.21297 | 0.21293 | tanh    | true  | 0.0026476 | [1 12 193] |
|  32 | Accept | 0.23872 | 43.329 | 0.21297 | 0.21293 | tanh    | true  | 0.00012483 | 291 |
|  33 | Accept | 0.21551 | 9.2561 | 0.21297 | 0.21293 | tanh    | true  | 3.5356e-06 | [1 9] |
|  34 | Accept | 0.74189 | 0.38512 | 0.21297 | 0.21293 | tanh    | true  | 5.226 | 284 |
|  35 | Accept | 0.2136 | 7.8087 | 0.21297 | 0.21293 | sigmoid | false | 2.953e-08 | 1 |
|  36 | Accept | 0.21742 | 6.1235 | 0.21297 | 0.21293 | sigmoid | false | 1.2958e-06 | 2 |
|  37 | Accept | 0.2918 | 72.069 | 0.21297 | 0.21303 | sigmoid | false | 1.2858e-07 | [298 128] |
|  38 | Accept | 0.74189 | 4.0814 | 0.21297 | 0.21293 | sigmoid | false | 0.00049631 | [1 56 285] |
|  39 | Accept | 0.21424 | 8.8157 | 0.21297 | 0.21293 | sigmoid | false | 2.3823e-07 | [1 2] |
|  40 | Accept | 0.21488 | 11.584 | 0.21297 | 0.21293 | sigmoid | false | 3.231e-09 | [1 34] |
|  41 | Accept | 0.21488 | 8.5467 | 0.21297 | 0.21293 | none    | false | 3.9919e-09 | [1 1] |
|  42 | Accept | 0.2206 | 17.637 | 0.21297 | 0.21301 | none    | false | 1.4528e-07 | 103 |
|  43 | Accept | 0.21964 | 49.16 | 0.21297 | 0.21293 | none    | false | 4.0062e-09 | [289 77] |
|  44 | Accept | 0.21551 | 8.4409 | 0.21297 | 0.21293 | none    | false | 1.8166e-05 | [1 7 2] |
|  45 | Accept | 0.25302 | 6.8665 | 0.21297 | 0.21293 | none    | false | 0.00093672 | [273 5 1] |
|  46 | Accept | 0.21901 | 70.44 | 0.21297 | 0.21293 | none    | false | 1.0943e-05 | [285 133 97] |
|  47 | Accept | 0.74189 | 0.19575 | 0.21297 | 0.213 | none    | false | 0.33807 | [1 93] |
|  48 | Accept | 0.21615 | 33.742 | 0.21297 | 0.21292 | none    | false | 3.1207e-08 | [2 3 290] |
|  49 | Accept | 0.21837 | 21.618 | 0.21297 | 0.213 | none    | false | 0.00010795 | [239 5] |
|  50 | Accept | 0.21519 | 5.9516 | 0.21297 | 0.21292 | none    | false | 1.0462e-06 | 1 |
|  51 | Accept | 0.21488 | 13.421 | 0.21297 | 0.21292 | none    | true  | 3.2351e-09 | [66 1] |
|  52 | Accept | 0.21519 | 7.0643 | 0.21297 | 0.21292 | none    | true  | 1.3037e-07 | [1 2] |
|  53 | Accept | 0.22028 | 33.638 | 0.21297 | 0.213 | none    | true  | 4.9681e-08 | [272 17 4] |
|  54 | Accept | 0.21488 | 2.7953 | 0.21297 | 0.21292 | none    | true  | 1.1517e-08 | [1 18 2] |
|  55 | Accept | 0.2206 | 33.822 | 0.21297 | 0.21292 | none    | true  | 5.4074e-06 | [287 4 11] |
|  56 | Accept | 0.22441 | 28.892 | 0.21297 | 0.213 | sigmoid | true  | 3.1871e-09 | [1 141 5] |
|  57 | Accept | 0.28544 | 49.046 | 0.21297 | 0.213 | sigmoid | true  | 1.5445e-07 | [271 8 47] |
|  58 | Accept | 0.31151 | 42.681 | 0.21297 | 0.213 | sigmoid | true  | 3.1992e-09 | 269 |
|  59 | Accept | 0.29371 | 58.27 | 0.21297 | 0.213 | relu    | false | 3.3691e-09 | [241 91] |
|  60 | Accept | 0.74189 | 0.4131 | 0.21297 | 0.21301 | relu    | true  | 30.931 | [232 6] |
|  61 | Accept | 0.24348 | 9.6687 | 0.21297 | 0.21291 | sigmoid | true  | 5.2088e-08 | [1 4 1] |
|  62 | Accept | 0.64844 | 2.7232 | 0.21297 | 0.21301 | relu    | false | 3.6858e-07 | [1 21 1] |
|  63 | Accept | 0.21456 | 32.99 | 0.21297 | 0.21291 | none    | true  | 3.6582e-06 | [1 80 188] |
|  64 | Best   | 0.21265 | 18.62 | 0.21265 | 0.21267 | sigmoid | true  | 9.6673e-06 | [1 75] |
|  65 | Accept | 0.226 | 11.419 | 0.21265 | 0.21268 | sigmoid | true  | 1.5077e-06 | [1 24 1] |
|  66 | Accept | 0.23331 | 102.48 | 0.21265 | 0.21268 | sigmoid | true  | 1.5026e-05 | [287 214 74] |
|  67 | Accept | 0.2206 | 30.992 | 0.21265 | 0.21267 | none    | true  | 7.5629e-07 | [34 2 264] |
|  68 | Accept | 0.21869 | 4.3461 | 0.21265 | 0.21268 | none    | true  | 6.758e-05 | [1 1 1] |
|  69 | Accept | 0.21869 | 51.008 | 0.21265 | 0.21268 | none    | true  | 6.1541e-05 | [175 23 253] |
|  70 | Accept | 0.21519 | 46.352 | 0.21265 | 0.21267 | sigmoid | false | 5.8406e-07 | [1 12 288] |
|  71 | Accept | 0.74189 | 0.35284 | 0.21265 | 0.21268 | sigmoid | false | 31.7 | [151 36] |
|  72 | Accept | 0.29625 | 5.4205 | 0.21265 | 0.21268 | sigmoid | true  | 0.00015423 | [1 35] |
|  73 | Accept | 0.21647 | 2.6142 | 0.21265 | 0.21268 | none    | false | 0.00024113 | [1 35] |
|  74 | Accept | 0.21901 | 76.616 | 0.21265 | 0.2127 | none    | true  | 2.0906e-05 | [6 235 284] |
|  75 | Accept | 0.2171 | 32.606 | 0.21265 | 0.21268 | none    | false | 0.00010157 | [6 5 298] |
|  76 | Accept | 0.21996 | 9.2912 | 0.21265 | 0.21268 | tanh    | true  | 0.00023083 | [1 13] |
|  77 | Accept | 0.74189 | 0.32671 | 0.21265 | 0.21269 | none    | true  | 31.208 | 222 |
|  78 | Accept | 0.21519 | 35.616 | 0.21265 | 0.21269 | tanh    | false | 4.4635e-06 | [1 7 151] |
|  79 | Accept | 0.21392 | 9.7813 | 0.21265 | 0.21269 | relu    | true  | 1.5577e-08 | [1 21] |
|  80 | Accept | 0.21488 | 21.138 | 0.21265 | 0.21269 | none    | false | 2.1706e-07 | [1 185] |
|  81 | Accept | 0.21424 | 69.272 | 0.21265 | 0.21118 | tanh    | false | 5.8903e-07 | [1 230 101] |
|  82 | Accept | 0.21488 | 27.59 | 0.21265 | 0.21113 | none    | true  | 9.4233e-09 | [222 2] |
|  83 | Accept | 0.21933 | 52.768 | 0.21265 | 0.21112 | none    | false | 1.0916e-06 | [274 12 211] |
|  84 | Accept | 0.21456 | 43.454 | 0.21265 | 0.21106 | tanh    | true  | 4.2988e-08 | [1 4 247] |
|  85 | Accept | 0.21488 | 9.6532 | 0.21265 | 0.21103 | tanh    | true  | 3.2433e-09 | [1 4 2] |
|  86 | Accept | 0.21424 | 7.4065 | 0.21265 | 0.21104 | tanh    | true  | 6.8749e-07 | 1 |
|  87 | Accept | 0.25366 | 47.819 | 0.21265 | 0.21106 | sigmoid | false | 3.6866e-09 | [292 20] |
|  88 | Accept | 0.2225 | 13.107 | 0.21265 | 0.21108 | none    | true  | 0.00035663 | [235 12] |
|  89 | Accept | 0.21805 | 1.9952 | 0.21265 | 0.21114 | none    | true  | 0.00036004 | [1 2] |
|  90 | Accept | 0.74189 | 0.96416 | 0.21265 | 0.21112 | relu    | false | 30.55 | [275 169 155] |
|  91 | Accept | 0.21488 | 5.7708 | 0.21265 | 0.21119 | none    | true  | 3.2456e-09 | [1 238 31] |
|  92 | Accept | 0.21392 | 31.018 | 0.21265 | 0.21122 | sigmoid | false | 9.3344e-09 | [1 185] |
|  93 | Accept | 0.21488 | 8.0701 | 0.21265 | 0.21236 | relu    | true  | 6.5865e-09 | 1 |
|  94 | Accept | 0.34298 | 1.3016 | 0.21265 | 0.21267 | tanh    | false | 0.00020571 | 1 |
|  95 | Accept | 0.29784 | 87.985 | 0.21265 | 0.21269 | tanh    | false | 2.0857e-05 | [15 297 124] |
|  96 | Accept | 0.33153 | 30.766 | 0.21265 | 0.21302 | tanh    | false | 0.00021639 | [4 135 1] |
|  97 | Accept | 0.21519 | 20.949 | 0.21265 | 0.21299 | tanh    | true  | 2.1898e-05 | [1 9 57] |
|  98 | Accept | 0.21996 | 51.698 | 0.21265 | 0.21389 | none    | false | 3.8536e-05 | [270 139] |
|  99 | Best   | 0.21202 | 49.605 | 0.21202 | 0.21386 | none    | false | 1.7719e-08 | [280 59 2] |
| 100 | Accept | 0.21488 | 3.0963 | 0.21202 | 0.21383 | none    | false | 1.9173e-08 | 1 |
|============================================================================================================================================|

__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 100 reached.
Total function evaluations: 100
Total elapsed time: 2577.3756 seconds
Total objective function evaluation time: 2526.3743

Best observed feasible point:
    Activations    Standardize      Lambda       LayerSizes
    ___________    ___________    __________    ____________
       none           false       1.7719e-08    280  59  2

Observed objective function value = 0.21202
Estimated objective function value = 0.21541
Function evaluation time = 49.6049

Best estimated feasible point (according to models):
    Activations    Standardize      Lambda      LayerSizes
    ___________    ___________    __________    __________
       none           false       0.00010157    6  5  298

Estimated objective function value = 0.21383
Estimated function evaluation time = 32.5882
Mdl = 
  ClassificationNeuralNetwork
                       PredictorNames: {'WC_TA'  'RE_TA'  'EBIT_TA'  'MVE_BVTD'  'S_TA'  'Industry'}
                         ResponseName: 'Rating'
                CategoricalPredictors: 6
                           ClassNames: [AAA    AA    A    BBB    BB    B    CCC]
                       ScoreTransform: 'none'
                      NumObservations: 3146
    HyperparameterOptimizationResults: [1×1 BayesianOptimization]
                           LayerSizes: [6 5 298]
                          Activations: 'none'
                OutputLayerActivation: 'softmax'
                               Solver: 'LBFGS'
                      ConvergenceInfo: [1×1 struct]
                      TrainingHistory: [1000×7 table]

  Properties, Methods
Mdl is a trained ClassificationNeuralNetwork classifier. The model corresponds to the best estimated feasible point, as opposed to the best observed feasible point. (For details on this distinction, see bestPoint.) You can use dot notation to access the properties of Mdl. For example, you can specify Mdl.HyperparameterOptimizationResults to get more information about the optimization of the neural network model.
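For instance, a quick look at the stored optimization record (a sketch; bestPoint is a function for BayesianOptimization objects):

results = Mdl.HyperparameterOptimizationResults;
bestPoint(results)   % best point according to bestPoint's default criterion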
Find the classification accuracy of the model on the test data set. Visualize the results by using a confusion matrix.

modelAccuracy = 1 - loss(Mdl,creditTest,"Rating", ...
    "LossFun","classiferror")

modelAccuracy = 0.8041

confusionchart(creditTest.Rating,predict(Mdl,creditTest))
The model has all predicted classes within one unit of the true classes, meaning all predictions are off by no more than one rating.
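Because Rating is an ordinal categorical variable, you can check this claim numerically by comparing the underlying category indices of the predicted and true labels (a sketch; double applied to a categorical array returns category indices):

predictedRatings = predict(Mdl,creditTest);
max(abs(double(predictedRatings) - double(creditTest.Rating)))   % at most 1 if the claim holds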
Customize Neural Network Classifier Optimization

Train a neural network classifier using the OptimizeHyperparameters argument to improve the resulting classification accuracy. Use the hyperparameters function to specify larger-than-default values for the number of layers used and the layer size range.

Read the sample file CreditRating_Historical.dat into a table. The predictor data consists of financial ratios and industry sector information for a list of corporate customers. The response variable consists of credit ratings assigned by a rating agency.

creditrating = readtable("CreditRating_Historical.dat");
Because each value in the ID variable is a unique customer ID, that is, length(unique(creditrating.ID)) is equal to the number of observations in creditrating, the ID variable is a poor predictor. Remove the ID variable from the table, and convert the Industry variable to a categorical variable.

creditrating = removevars(creditrating,"ID");
creditrating.Industry = categorical(creditrating.Industry);
Convert the Rating response variable to an ordinal categorical variable.

creditrating.Rating = categorical(creditrating.Rating, ...
    ["AAA","AA","A","BBB","BB","B","CCC"],"Ordinal",true);
Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use cvpartition to partition the data.

rng("default") % For reproducibility of the partition
c = cvpartition(creditrating.Rating,"Holdout",0.20);
trainingIndices = training(c); % Indices for the training set
testIndices = test(c); % Indices for the test set
creditTrain = creditrating(trainingIndices,:);
creditTest = creditrating(testIndices,:);
List the hyperparameters available for this problem of fitting the Rating response.

params = hyperparameters("fitcnet",creditTrain,"Rating");
for ii = 1:length(params)
    disp(ii);disp(params(ii))
end
1
  optimizableVariable with properties:

         Name: 'NumLayers'
        Range: [1 3]
         Type: 'integer'
    Transform: 'none'
     Optimize: 1

2
  optimizableVariable with properties:

         Name: 'Activations'
        Range: {'relu'  'tanh'  'sigmoid'  'none'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 1

3
  optimizableVariable with properties:

         Name: 'Standardize'
        Range: {'true'  'false'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 1

4
  optimizableVariable with properties:

         Name: 'Lambda'
        Range: [3.1786e-09 31.7864]
         Type: 'real'
    Transform: 'log'
     Optimize: 1

5
  optimizableVariable with properties:

         Name: 'LayerWeightsInitializer'
        Range: {'glorot'  'he'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 0

6
  optimizableVariable with properties:

         Name: 'LayerBiasesInitializer'
        Range: {'zeros'  'ones'}
         Type: 'categorical'
    Transform: 'none'
     Optimize: 0

7
  optimizableVariable with properties:

         Name: 'Layer_1_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 1

8
  optimizableVariable with properties:

         Name: 'Layer_2_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 1

9
  optimizableVariable with properties:

         Name: 'Layer_3_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 1

10
  optimizableVariable with properties:

         Name: 'Layer_4_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 0

11
  optimizableVariable with properties:

         Name: 'Layer_5_Size'
        Range: [1 300]
         Type: 'integer'
    Transform: 'log'
     Optimize: 0
To try more layers than the default of 1 through 3, set the range of NumLayers (optimizable variable 1) to its maximum allowable size, [1 5]. Also, set Layer_4_Size and Layer_5_Size (optimizable variables 10 and 11, respectively) to be optimized.

params(1).Range = [1 5];
params(10).Optimize = true;
params(11).Optimize = true;
Set the range of all layer sizes (optimizable variables 7 through 11) to [1 400] instead of the default [1 300].

for ii = 7:11
    params(ii).Range = [1 400];
end
Train a neural network classifier by passing the training data creditTrain to the fitcnet function, and include the OptimizeHyperparameters argument set to params. For reproducibility, set the AcquisitionFunctionName to "expected-improvement-plus" in a HyperparameterOptimizationOptions structure. To attempt to get a better solution, set the number of optimization steps to 100 instead of the default 30.

rng("default") % For reproducibility
Mdl = fitcnet(creditTrain,"Rating","OptimizeHyperparameters",params, ...
    "HyperparameterOptimizationOptions", ...
    struct("AcquisitionFunctionName","expected-improvement-plus", ...
    "MaxObjectiveEvaluations",100))
|============================================================================================================================================|
| Iter | Eval result | Objective | Objective runtime | BestSoFar (observed) | BestSoFar (estim.) | Activations | Standardize | Lambda | LayerSizes |
|============================================================================================================================================|
|   1 | Best   | 0.74189 | 2.2062 | 0.74189 | 0.74189 | sigmoid | true  | 0.68961 | [104 1 5 3 1] |
|   2 | Best   | 0.2225 | 70.081 | 0.2225 | 0.24316 | relu    | true  | 0.00058564 | [38 208 162] |
|   3 | Accept | 0.63891 | 13.086 | 0.2225 | 0.22698 | sigmoid | true  | 1.9768e-06 | [1 25 1 287 7] |
|   4 | Best   | 0.21933 | 33.886 | 0.21933 | 0.22307 | none    | false | 1.3353e-06 | 320 |
|   5 | Accept | 0.74189 | 0.27024 | 0.21933 | 0.21936 | relu    | true  | 2.7056 | [1 2 1] |
|   6 | Accept | 0.29148 | 96.764 | 0.21933 | 0.21936 | relu    | true  | 1.0503e-06 | [301 31 400] |
|   7 | Accept | 0.6869 | 4.2153 | 0.21933 | 0.21936 | relu    | true  | 0.0113 | [97 5 56] |
|   8 | Accept | 0.74189 | 0.28736 | 0.21933 | 0.21936 | relu    | true  | 0.053563 | [2 92 1] |
|   9 | Accept | 0.25238 | 74.737 | 0.21933 | 0.2221 | relu    | true  | 0.00010812 | [8 137 232] |
|  10 | Accept | 0.29784 | 213.19 | 0.21933 | 0.21936 | relu    | true  | 2.3488e-07 | [30 397 364] |
|  11 | Accept | 0.74189 | 0.27991 | 0.21933 | 0.21936 | none    | true  | 10.18 | 204 |
|  12 | Best   | 0.21392 | 35.925 | 0.21392 | 0.21395 | none    | false | 3.4691e-06 | [7 355 2] |
|  13 | Accept | 0.74189 | 0.82149 | 0.21392 | 0.21395 | none    | false | 31.657 | [193 53 5 90 355] |
|  14 | Accept | 0.21488 | 45.397 | 0.21392 | 0.21443 | none    | false | 8.607e-06 | [126 80 2 86 2] |
|  15 | Accept | 0.2349 | 60.527 | 0.21392 | 0.21443 | relu    | false | 9.4208e-06 | [38 6 379 4] |
|  16 | Accept | 0.21901 | 46.638 | 0.21392 | 0.21443 | relu    | false | 0.0018197 | [6 20 205 30 51] |
|  17 | Accept | 0.22282 | 68.41 | 0.21392 | 0.21443 | relu    | false | 1.2196e-07 | [5 3 91 45 163] |
|  18 | Accept | 0.74189 | 1.5076 | 0.21392 | 0.21387 | relu    | false | 10.565 | [394 397 39] |
|  19 | Accept | 0.24348 | 57.89 | 0.21392 | 0.21442 | relu    | false | 2.7033e-08 | [52 49 195 11 2] |
|  20 | Accept | 0.21933 | 54.865 | 0.21392 | 0.21411 | relu    | false | 5.3281e-09 | [4 26 276 4] |
|  21 | Accept | 0.21583 | 101.52 | 0.21392 | 0.21413 | relu    | false | 0.00095213 | [98 25 120 70 321] |
|  22 | Accept | 0.74189 | 1.1203 | 0.21392 | 0.21413 | tanh    | false | 10.324 | [5 19 325 100 286] |
|  23 | Accept | 0.2225 | 76.344 | 0.21392 | 0.21413 | tanh    | true  | 3.1717e-07 | [4 3 400] |
|  24 | Accept | 0.21996 | 39.348 | 0.21392 | 0.21412 | tanh    | true  | 6.0973e-06 | [6 3 202 2] |
|  25 | Accept | 0.74189 | 0.70734 | 0.21392 | 0.21389 | tanh    | true  | 0.47944 | [91 21 276 10 202] |
|  26 | Accept | 0.6424 | 7.8651 | 0.21392 | 0.21391 | relu    | true  | 4.153e-06 | [27 1 208 1 20] |
|  27 | Accept | 0.23808 | 124.09 | 0.21392 | 0.21391 | relu    | false | 4.7143e-07 | [116 111 327 4 9] |
|  28 | Accept | 0.21869 | 59.477 | 0.21392 | 0.21394 | none    | false | 0.00020517 | [213 245 1 45 6] |
|  29 | Accept | 0.74189 | 0.84795 | 0.21392 | 0.21394 | tanh    | true  | 0.066046 | [2 222 63] |
|  30 | Accept | 0.23013 | 44.975 | 0.21392 | 0.21394 | tanh    | true  | 1.6445e-07 | [184 1 32 21] |
|  31 | Accept | 0.21583 | 30.499 | 0.21392 | 0.214 | none    | false | 8.3607e-09 | [172 13 1] |
|  32 | Accept | 0.29021 | 162.91 | 0.21392 | 0.2114 | relu    | true  | 0.0054118 | [79 385 325] |
|  33 | Accept | 0.22028 | 7.3966 | 0.21392 | 0.21435 | none    | false | 6.2688e-07 | [5 13] |
|  34 | Accept | 0.21488 | 4.797 | 0.21392 | 0.21359 | none    | false | 2.5162e-08 | [1 1 17] |
|  35 | Accept | 0.21805 | 10.065 | 0.21392 | 0.21515 | relu    | false | 3.3182e-05 | [6 5 3 13] |
|  36 | Accept | 0.23268 | 9.1618 | 0.21392 | 0.21493 | relu    | false | 3.9676e-09 | [36 4] |
|  37 | Accept | 0.21519 | 44.065 | 0.21392 | 0.21394 | none    | false | 2.1955e-07 | [16 34 350 4 31] |
|  38 | Accept | 0.33249 | 26.542 | 0.21392 | 0.21231 | relu    | false | 0.0010092 | [24 1 207] |
|  39 | Accept | 0.21583 | 21.537 | 0.21392 | 0.21394 | relu    | false | 2.5221e-05 | [1 95] |
|  40 | Accept | 0.22123 | 89.369 | 0.21392 | 0.21394 | relu    | true  | 0.0002332 | [5 392 160] |
|  41 | Accept | 0.28894 | 229.82 | 0.21392 | 0.21393 | relu    | true  | 5.2515e-05 | [153 394 315] |
|  42 | Accept | 0.22123 | 166.4 | 0.21392 | 0.21393 | none    | false | 4.1509e-09 | [235 399 62 148] |
|  43 | Accept | 0.27654 | 19.776 | 0.21392 | 0.21392 | relu    | false | 1.1969e-06 | [75 18] |
|  44 | Accept | 0.2705 | 91.89 | 0.21392 | 0.21393 | relu    | false | 3.9338e-09 | [78 387 42 65] |
|  45 | Accept | 0.21678 | 159.34 | 0.21392 | 0.21396 | none    | false | 3.3979e-05 | [2 350 376 2] |
|  46 | Accept | 0.21678 | 5.3698 | 0.21392 | 0.21396 | none    | false | 0.00019489 | [10 4] |
|  47 | Best   | 0.2136 | 40.323 | 0.2136 | 0.21359 | none    | false | 5.8608e-08 | [21 382 2] |
|  48 | Accept | 0.22918 | 18.359 | 0.2136 | 0.21359 | relu    | true  | 3.1819e-09 | [3 71] |
|  49 | Accept | 0.27591 | 81.573 | 0.2136 | 0.21359 | relu    | false | 8.1967e-06 | [55 388 56] |
|  50 | Accept | 0.29593 | 10.722 | 0.2136 | 0.21359 | tanh    | true  | 2.5573e-06 | 28 |
|  51 | Accept | 0.31532 | 81.712 | 0.2136 | 0.21361 | tanh    | true  | 1.7419e-06 | [216 24 25 62 94] |
|  52 | Accept | 0.21869 | 46.876 | 0.2136 | 0.21361 | relu    | false | 3.3288e-09 | [25 1 310] |
|  53 | Accept | 0.21837 | 44.823 | 0.2136 | 0.21359 | none    | false | 1.3416e-05 | [2 2 386 33] |
|  54 | Accept | 0.23872 | 86.465 | 0.2136 | 0.21359 | tanh    | true  | 3.1991e-09 | [9 2 233 13 297] |
|  55 | Accept | 0.21742 | 22.42 | 0.2136 | 0.21359 | none    | false | 0.00017978 | [346 36] |
|  56 | Accept | 0.3506 | 53.374 | 0.2136 | 0.2136 | relu    | false | 8.9375e-08 | [213 1 22 222] |
|  57 | Accept | 0.21583 | 47.939 | 0.2136 | 0.2136 | relu    | false | 4.0858e-09 | [1 20 75 7 160] |
|  58 | Accept | 0.25048 | 63.899 | 0.2136 | 0.2136 | relu    | false | 1.8367e-05 | [133 18 5 8 265] |
|  59 | Accept | 0.21392 | 24.587 | 0.2136 | 0.2136 | relu    | false | 0.00025743 | [4 49 78] |
|  60 | Accept | 0.21996 | 57.638 | 0.2136 | 0.21361 | none    | false | 6.077e-09 | [18 2 199 34 291] |
|  61 | Accept | 0.21837 | 52.847 | 0.2136 | 0.21359 | none    | false | 4.7921e-05 | [53 3 5 33 388] |
|  62 | Accept | 0.22028 | 46.08 | 0.2136 | 0.21359 | none    | false | 4.2742e-09 | [206 87 9 20 39] |
|  63 | Accept | 0.21774 | 15.034 | 0.2136 | 0.21359 | none    | false | 1.0053e-07 | [68 3] |
|  64 | Accept | 0.23554 | 68.289 | 0.2136 | 0.21359 | relu    | true  | 3.3518e-09 | [3 389 60] |
|  65 | Accept | 0.22759 | 2.6688 | 0.2136 | 0.2136 | none    | false | 0.00079006 | 64 |
|  66 | Accept | 0.22187 | 55.67 | 0.2136 | 0.2136 | relu    | false | 4.3532e-07 | [1 11 383] |
|  67 | Accept | 0.21805 | 113.63 | 0.2136 | 0.21359 | relu    | false | 3.3578e-09 | [4 4 384 244] |
|  68 | Accept | 0.21742 | 39.749 | 0.2136 | 0.21359 | relu    | false | 0.00042226 | [27 7 13 237] |
|  69 | Accept | 0.29911 | 22.327 | 0.2136 | 0.2136 | sigmoid | false | 3.1977e-09 | [66 31] |
|  70 | Accept | 0.28544 | 17.354 | 0.2136 | 0.21359 | sigmoid | false | 2.1618e-07 | 59 |
|  71 | Accept | 0.4342 | 17.862 | 0.2136 | 0.2136 | sigmoid | false | 1.1526e-05 | [53 28 9 27 2] |
|  72 | Accept | 0.24793 | 41.903 | 0.2136 | 0.21359 | sigmoid | false | 3.2532e-09 | 280 |
|  73 | Accept | 0.74189 | 0.24831 | 0.2136 | 0.21359 | sigmoid | false | 29.321 | [58 1 5 3] |
|  74 | Accept | 0.21805 | 11.378 | 0.2136 | 0.21359 | relu    | false | 5.0967e-08 | [1 5 42] |
|  75 | Accept | 0.21964 | 16.802 | 0.2136 | 0.2136 | none    | true  | 3.3747e-09 | [56 273] |
|  76 | Accept | 0.21488 | 1.4504 | 0.2136 | 0.21359 | none    | true  | 3.6101e-09 | [1 19] |
|  77 | Accept | 0.21456 | 9.5126 | 0.2136 | 0.2136 | none    | true  | 1.8426e-07 | [1 76 2] |
|  78 | Accept | 0.21488 | 25.866 | 0.2136 | 0.21359 | none    | true  | 1.9217e-07 | [1 3 322 5] |
|  79 | Accept | 0.21996 | 7.2836 | 0.2136 | 0.20963 | none    | true  | 3.5146e-09 | 182 |
|  80 | Accept | 0.21996 | 26.22 | 0.2136 | 0.20986 | none    | true  | 1.9249e-08 | [51 79 345] |
|  81 | Accept | 0.21996 | 16.72 | 0.2136 | 0.20976 | none    | true  | 5.6038e-08 | [269 6] |
|  82 | Accept | 0.21837 | 67.424 | 0.2136 | 0.21359 | none    | true  | 2.2486e-05 | [15 334 161] |
|  83 | Accept | 0.21901 | 52.193 | 0.2136 | 0.2136 | none    | true  | 2.325e-07 | [43 397 22 5 4] |
|  84 | Accept | 0.2136 | 25.949 | 0.2136 | 0.20893 | none    | true  | 1.4375e-05 | [3 23 161] |
|  85 | Accept | 0.22568 | 9.2788 | 0.2136 | 0.21359 | relu    | false | 0.00036954 | [1 25] |
|  86 | Accept | 0.22123 | 9.0294 | 0.2136 | 0.2139 | none    | true  | 8.9433e-06 | 63 |
|  87 | Accept | 0.21551 | 73.231 | 0.2136 | 0.20857 | relu    | false | 0.00013186 | [1 10 235 79 56] |
|  88 | Accept | 0.21996 | 45.161 | 0.2136 | 0.21359 | none    | true  | 4.6415e-06 | [274 61] |
|  89 | Accept | 0.24253 | 35.809 | 0.2136 | 0.21359 | none    | true  | 0.0043392 | [105 351 3 2 244] |
|  90 | Accept | 0.21392 | 26.066 | 0.2136 | 0.21359 | none    | true  | 0.0004037 | [68 57 5 189] |
|  91 | Accept | 0.24634 | 8.1577 | 0.2136 | 0.21359 | tanh    | false | 3.2373e-09 | 11 |
|  92 | Accept | 0.23713 | 60.74 | 0.2136 | 0.2136 | tanh    | false | 3.2168e-09 | [7 32 316 6] |
|  93 | Accept | 0.23331 | 46.265 | 0.2136 | 0.2136 | tanh    | false | 2.7471e-07 | [7 6 6 255] |
|  94 | Accept | 0.22791 | 238.99 | 0.2136 | 0.2136 | tanh    | false | 2.4117e-07 | [2 386 364 66] |
|  95 | Accept | 0.30769 | 66.556 | 0.2136 | 0.2136 | relu    | true  | 3.2605e-09 | [380 72] |
|  96 | Accept | 0.30038 | 70.252 | 0.2136 | 0.2136 | tanh    | false | 9.629e-08 | [346 55] |
|  97 | Accept | 0.2136 | 240.45 | 0.2136 | 0.21358 | tanh    | false | 3.0728e-08 | [1 9 319 337 168] |
|  98 | Accept | 0.21488 | 8.1832 | 0.2136 | 0.21358 | none    | false | 4.8562e-09 | [1 108] |
|  99 | Accept | 0.31945 | 33.121 | 0.2136 | 0.20612 | relu    | false | 5.058e-07 | [1 214 6 2 13] |
| 100 | Accept | 0.23299 | 79.247 | 0.2136 | 0.2058 | tanh    | false | 1.4126e-07 | [204 1 298 3] |
|============================================================================================================================================|

__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 100 reached.
Total function evaluations: 100
Total elapsed time: 4964.939 seconds
Total objective function evaluation time: 4901.9365

Best observed feasible point:
    Activations    Standardize      Lambda      LayerSizes
    ___________    ___________    __________    __________
       none           false       5.8608e-08    21  382  2

Observed objective function value = 0.2136
Estimated objective function value = 0.21443
Function evaluation time = 40.3226

Best estimated feasible point (according to models):
    Activations    Standardize      Lambda      LayerSizes
    ___________    ___________    __________    __________
       relu           false       0.00025743    4  49  78

Estimated objective function value = 0.2058
Estimated function evaluation time = 25.2207
Mdl = 
  ClassificationNeuralNetwork
                       PredictorNames: {'WC_TA'  'RE_TA'  'EBIT_TA'  'MVE_BVTD'  'S_TA'  'Industry'}
                         ResponseName: 'Rating'
                CategoricalPredictors: 6
                           ClassNames: [AAA    AA    A    BBB    BB    B    CCC]
                       ScoreTransform: 'none'
                      NumObservations: 3146
    HyperparameterOptimizationResults: [1×1 BayesianOptimization]
                           LayerSizes: [4 49 78]
                          Activations: 'relu'
                OutputLayerActivation: 'softmax'
                               Solver: 'LBFGS'
                      ConvergenceInfo: [1×1 struct]
                      TrainingHistory: [1000×7 table]

  Properties, Methods
Find the classification accuracy of the model on the test data set. Visualize the results by using a confusion matrix.

testAccuracy = 1 - loss(Mdl,creditTest,"Rating", ...
    "LossFun","classiferror")

testAccuracy = 0.8117

confusionchart(creditTest.Rating,predict(Mdl,creditTest))

The model has all predicted classes within one unit of the true classes, meaning all predictions are off by no more than one rating.
Input Arguments

Tbl — Sample data
table

Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain one additional column for the response variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.
If Tbl contains the response variable, and you want to use all remaining variables in Tbl as predictors, then specify the response variable by using ResponseVarName.

If Tbl contains the response variable, and you want to use only a subset of the remaining variables in Tbl as predictors, then specify a formula by using formula.

If Tbl does not contain the response variable, then specify a response variable by using Y. The length of the response variable and the number of rows in Tbl must be equal.
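For reference, the three patterns above look like this (a sketch with a hypothetical table Tbl, response variable Y, and predictors x1 and x2):

Mdl = fitcnet(Tbl,"Y");          % response in Tbl; all other variables are predictors
Mdl = fitcnet(Tbl,"Y~x1+x2");    % response in Tbl; predictors chosen by a formula
Mdl = fitcnet(Tbl,Y);            % response supplied separately as a vector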
ResponseVarName — Response variable name
name of variable in Tbl

Response variable name, specified as the name of a variable in Tbl. You must specify ResponseVarName as a character vector or string scalar.
For example, if the response variable Y is stored as Tbl.Y, then specify it as "Y". Otherwise, the software treats all columns of Tbl, including Y, as predictors when training the model.
The response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. If Y is a character array, then each element of the response variable must correspond to one row of the array.

A good practice is to specify the order of the classes by using the ClassNames name-value argument.

Data Types: char | string
formula — Explanatory model of response variable and subset of predictor variables
character vector | string scalar

Explanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form "Y~x1+x2+x3". In this form, Y represents the response variable, and x1, x2, and x3 represent the predictor variables.
To specify a subset of variables in Tbl as predictors for training the model, use a formula. If you specify a formula, then the software does not use any variables in Tbl that do not appear in formula.
The variable names in the formula must be both variable names in Tbl (Tbl.Properties.VariableNames) and valid MATLAB® identifiers. You can verify the variable names in Tbl by using the isvarname function. If the variable names are not valid, then you can convert them by using the matlab.lang.makeValidName function.
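For instance, a sketch that validates and, if necessary, repairs the variable names of a hypothetical table Tbl before writing a formula:

names = Tbl.Properties.VariableNames;
valid = cellfun(@isvarname,names);                          % check each name
names(~valid) = matlab.lang.makeValidName(names(~valid));   % repair invalid names
Tbl.Properties.VariableNames = names;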
Data Types: char | string
Y — Class labels
numeric vector | categorical vector | logical vector | character array | string array | cell array of character vectors

Class labels used to train the model, specified as a numeric, categorical, or logical vector; a character or string array; or a cell array of character vectors.
If Y is a character array, then each element of the class labels must correspond to one row of the array.

The length of Y must be equal to the number of rows in Tbl or X.

A good practice is to specify the class order by using the ClassNames name-value argument.
Data Types: single | double | categorical | logical | char | string | cell
X — Predictor data
numeric matrix

Predictor data used to train the model, specified as a numeric matrix. By default, the software treats each row of X as one observation, and each column as one predictor.

The length of Y and the number of observations in X must be equal.

To specify the names of the predictors in the order of their appearance in X, use the PredictorNames name-value argument.
Note

If you orient your predictor matrix so that observations correspond to columns and specify 'ObservationsIn','columns', then you might experience a significant reduction in computation time.
data types: single
| double
note
the software treats nan, empty character vector (''), empty string (""), <missing>, and <undefined> elements as missing values, and removes observations with any of these characteristics:
missing value in the response variable (for example, y or validationdata{2})
at least one missing value in a predictor observation (for example, row in x or validationdata{1})
nan value or 0 weight (for example, value in weights or validationdata{3})
class label with 0 prior probability (value in prior)
name-value arguments
specify optional pairs of arguments as name1=value1,...,namen=valuen, where name is the argument name and value is the corresponding value. name-value arguments must appear after other arguments, but the order of the pairs does not matter.
before r2021a, use commas to separate each name and value, and enclose name in quotes.
example: fitcnet(x,y,'layersizes',[10 10],'activations',["relu","tanh"]) specifies to create a neural network with two fully connected layers, each with 10 outputs. the first layer uses a rectified linear unit (relu) activation function, and the second uses a hyperbolic tangent activation function.
layersizes — sizes of fully connected layers
10 (default) | positive integer vector
sizes of the fully connected layers in the neural network model, specified as a positive integer vector. the ith element of layersizes is the number of outputs in the ith fully connected layer of the neural network model.
layersizes does not include the size of the final fully connected layer that uses a softmax activation function. for more information, see neural network structure.
example: 'layersizes',[100 25 10]
activations — activation functions for fully connected layers
'relu' (default) | 'tanh' | 'sigmoid' | 'none' | string array | cell array of character vectors
activation functions for the fully connected layers of the neural network model, specified as a character vector, string scalar, string array, or cell array of character vectors with values from this table.
value | description |
---|---|
'relu' | rectified linear unit (relu) function — performs a threshold operation on each element of the input, where any value less than zero is set to zero, that is, f(x) = max(x, 0) |
'tanh' | hyperbolic tangent (tanh) function — applies the tanh function to each input element |
'sigmoid' | sigmoid function — performs the following operation on each input element: f(x) = 1/(1 + e^(−x)) |
'none' | identity function — returns each input element without performing any transformation, that is, f(x) = x |
if you specify one activation function only, then activations is the activation function for every fully connected layer of the neural network model, excluding the final fully connected layer. the activation function for the final fully connected layer is always softmax (see neural network structure).
if you specify an array of activation functions, then the ith element of activations is the activation function for the ith layer of the neural network model.
example: 'activations','sigmoid'
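a minimal sketch of these two arguments together, using the built-in fisheriris data set (the layer sizes and activation choices here are illustrative, not recommendations):

load fisheriris                        % predictors meas, class labels species
mdl = fitcnet(meas,species, ...
    'LayerSizes',[20 10], ...          % two fully connected layers with 20 and 10 outputs
    'Activations',["relu","tanh"]);    % one activation per fully connected layer
trainerror = resubLoss(mdl)            % resubstitution misclassification rate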
layerweightsinitializer — function to initialize fully connected layer weights
'glorot' (default) | 'he'
function to initialize the fully connected layer weights, specified as 'glorot' or 'he'.
value | description |
---|---|
'glorot' | initialize the weights with the glorot initializer [1] (also known as the xavier initializer). for each layer, the glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(i + o), where i is the input size and o is the output size for the layer. |
'he' | initialize the weights with the he initializer [2]. for each layer, the he initializer samples from a normal distribution with zero mean and variance 2/i, where i is the input size for the layer. |
example: 'layerweightsinitializer','he'
layerbiasesinitializer — type of initial fully connected layer biases
'zeros' (default) | 'ones'
type of initial fully connected layer biases, specified as 'zeros' or 'ones'.
if you specify the value 'zeros', then each fully connected layer has an initial bias of 0.
if you specify the value 'ones', then each fully connected layer has an initial bias of 1.
example: 'layerbiasesinitializer','ones'
data types: char | string
observationsin — predictor data observation dimension
'rows' (default) | 'columns'
predictor data observation dimension, specified as 'rows' or 'columns'.
note
if you orient your predictor matrix so that observations correspond to columns and specify 'observationsin','columns', then you might experience a significant reduction in computation time. you cannot specify 'observationsin','columns' for predictor data in a table.
example: 'observationsin','columns'
data types: char | string
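a minimal sketch of the columns orientation, again on fisheriris (the speedup matters most for data sets much larger than this one):

load fisheriris
xcols = meas';                                   % 4-by-150: one observation per column
mdl = fitcnet(xcols,species,'ObservationsIn','columns');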
lambda — regularization term strength
0 (default) | nonnegative scalar
regularization term strength, specified as a nonnegative scalar. the software composes the objective function for minimization from the cross-entropy loss function and the ridge (l2) penalty term.
example: 'lambda',1e-4
data types: single
| double
standardize — flag to standardize predictor data
false or 0 (default) | true or 1
flag to standardize the predictor data, specified as a numeric or logical 0 (false) or 1 (true). if you set standardize to true, then the software centers and scales each numeric predictor variable by the corresponding column mean and standard deviation. the software does not standardize the categorical predictors.
example: 'standardize',true
data types: single | double | logical
verbose — verbosity level
0 (default) | 1
verbosity level, specified as 0 or 1. the 'verbose' name-value argument controls the amount of diagnostic information that fitcnet displays at the command line.
value | description |
---|---|
0 | fitcnet does not display diagnostic information. |
1 | fitcnet periodically displays diagnostic information. |
by default, storehistory is set to true and fitcnet stores the diagnostic information inside of mdl. use mdl.traininghistory to access the diagnostic information.
example: 'verbose',1
data types: single | double
verbosefrequency — frequency of verbose printing
1 (default) | positive integer scalar
frequency of verbose printing, which is the number of iterations between printing to the command window, specified as a positive integer scalar. a value of 1 indicates to print diagnostic information at every iteration.
note
to use this name-value argument, set verbose to 1.
example: 'verbosefrequency',5
data types: single | double
storehistory — flag to store training history
true or 1 (default) | false or 0
flag to store the training history, specified as a numeric or logical 0 (false) or 1 (true). if storehistory is set to true, then the software stores diagnostic information inside of mdl, which you can access by using mdl.traininghistory.
example: 'storehistory',false
data types: single | double | logical
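a minimal sketch showing verbose output and the stored history (the exact columns of the history table depend on the training options you use):

load fisheriris
mdl = fitcnet(meas,species,'Verbose',1,'VerboseFrequency',10);
head(mdl.TrainingHistory)              % per-iteration loss, gradient, and step diagnostics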
initialstepsize — initial step size
[] (default) | positive scalar | 'auto'
initial step size, specified as a positive scalar or 'auto'. by default, fitcnet does not use the initial step size to determine the initial hessian approximation used in training the model (see training solver). however, if you specify an initial step size s0, then the initial inverse-hessian approximation is (s0/‖∇ℒ0‖∞)·I, where ∇ℒ0 is the initial gradient vector and I is the identity matrix.
to have fitcnet determine an initial step size automatically, specify the value as 'auto'. in this case, the function determines the initial step size by using s0 = 0.5‖η0‖∞ + 0.1, where s0 is the initial step vector and η0 is the vector of unconstrained initial weights and biases.
example: 'initialstepsize','auto'
data types: single | double | char | string
iterationlimit — maximum number of training iterations
1e3 (default) | positive integer scalar
maximum number of training iterations, specified as a positive integer scalar.
the software returns a trained model regardless of whether the training routine successfully converges. mdl.convergenceinfo contains convergence information.
example: 'iterationlimit',1e8
data types: single | double
gradienttolerance — relative gradient tolerance
1e-6 (default) | nonnegative scalar
relative gradient tolerance, specified as a nonnegative scalar.
let ℒt be the loss function at training iteration t, ∇ℒt be the gradient of the loss function with respect to the weights and biases at iteration t, and ∇ℒ0 be the gradient of the loss function at an initial point. if max|∇ℒt| ≤ a·gradienttolerance, where a = max(1, min|ℒt|, max|∇ℒ0|), then the training process terminates.
example: 'gradienttolerance',1e-5
data types: single | double
losstolerance — loss tolerance
1e-6 (default) | nonnegative scalar
loss tolerance, specified as a nonnegative scalar.
if the function loss at some iteration is smaller than losstolerance, then the training process terminates.
example: 'losstolerance',1e-8
data types: single | double
steptolerance — step size tolerance
1e-6 (default) | nonnegative scalar
step size tolerance, specified as a nonnegative scalar.
if the step size at some iteration is smaller than steptolerance, then the training process terminates.
example: 'steptolerance',1e-4
data types: single | double
validationdata — validation data for training convergence detection
cell array | table
validation data for training convergence detection, specified as a cell array or table.
during the training process, the software periodically estimates the validation loss by using validationdata. if the validation loss increases more than validationpatience times in a row, then the software terminates the training.
you can specify validationdata as a table if you use a table tbl of predictor data that contains the response variable. in this case, validationdata must contain the same predictors and response contained in tbl. the software does not apply weights to observations, even if tbl contains a vector of weights. to specify weights, you must specify validationdata as a cell array.
if you specify validationdata as a cell array, then it must have the following format:
validationdata{1} must have the same data type and orientation as the predictor data. that is, if you use a predictor matrix x, then validationdata{1} must be an m-by-p or p-by-m matrix of predictor data that has the same orientation as x. the predictor variables in the training data x and validationdata{1} must correspond. similarly, if you use a predictor table tbl of predictor data, then validationdata{1} must be a table containing the same predictor variables contained in tbl. the number of observations in validationdata{1} and the predictor data can vary.
validationdata{2} must match the data type and format of the response variable, either y or responsevarname. if validationdata{2} is an array of class labels, then it must have the same number of elements as the number of observations in validationdata{1}. the set of all distinct labels of validationdata{2} must be a subset of all distinct labels of y. if validationdata{1} is a table, then validationdata{2} can be the name of the response variable in the table. if you want to use the same responsevarname or formula, you can specify validationdata{2} as [].
optionally, you can specify validationdata{3} as an m-dimensional numeric vector of observation weights or the name of a variable in the table validationdata{1} that contains observation weights. the software normalizes the weights with the validation data so that they sum to 1.
if you specify validationdata and want to display the validation loss at the command line, set verbose to 1.
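a minimal sketch of early stopping with a cell-array validation set, using a holdout split of fisheriris (the patience value is illustrative):

load fisheriris
rng("default")                                   % reproducible split
c = cvpartition(species,'Holdout',0.3);
xtrain = meas(training(c),:);  ytrain = species(training(c));
xval   = meas(test(c),:);      yval   = species(test(c));
mdl = fitcnet(xtrain,ytrain, ...
    'ValidationData',{xval,yval}, ...            % validationdata{1} and validationdata{2}
    'ValidationPatience',5,'Verbose',1);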
validationfrequency — number of iterations between validation evaluations
1 (default) | positive integer scalar
number of iterations between validation evaluations, specified as a positive integer scalar. a value of 1 indicates to evaluate validation metrics at every iteration.
note
to use this name-value argument, you must specify validationdata.
example: 'validationfrequency',5
data types: single | double
validationpatience — stopping condition for validation evaluations
6 (default) | nonnegative integer scalar
stopping condition for validation evaluations, specified as a nonnegative integer scalar. the training process stops if the validation loss is greater than or equal to the minimum validation loss computed so far, validationpatience times in a row. you can check the mdl.traininghistory table to see the running total of times that the validation loss is greater than or equal to the minimum (validation checks).
example: 'validationpatience',10
data types: single | double
categoricalpredictors — categorical predictors list
vector of positive integers | logical vector | character matrix | string array | cell array of character vectors | 'all'
categorical predictors list, specified as one of the values in this table. the descriptions assume that the predictor data has observations in rows and predictors in columns.
value | description |
---|---|
vector of positive integers | each entry in the vector is an index value indicating that the corresponding predictor is categorical. the index values are between 1 and p, where p is the number of predictors used to train the model. if fitcnet uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. |
logical vector | a true entry means that the corresponding predictor is categorical. the length of the vector is p. |
character matrix | each row of the matrix is the name of a predictor variable. the names must match the entries in predictornames. pad the names with extra blanks so each row of the character matrix has the same length. |
string array or cell array of character vectors | each element in the array is the name of a predictor variable. the names must match the entries in predictornames. |
"all" | all predictors are categorical. |
by default, if the predictor data is in a table (tbl), fitcnet assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. if the predictor data is a matrix (x), fitcnet assumes that all predictors are continuous. to identify any other predictors as categorical predictors, specify them by using the categoricalpredictors name-value argument.
for the identified categorical predictors, fitcnet creates dummy variables using two different schemes, depending on whether a categorical variable is unordered or ordered. for an unordered categorical variable, fitcnet creates one dummy variable for each level of the categorical variable. for an ordered categorical variable, fitcnet creates one less dummy variable than the number of categories. for details, see automatic creation of dummy variables.
example: 'categoricalpredictors','all'
data types: single | double | logical | char | string | cell
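a minimal sketch with synthetic matrix data (the predictors, labels, and index here are all hypothetical):

rng("default")
x = [randn(100,2), randi(4,100,1)];    % two continuous predictors, one integer-coded categorical
y = randi(2,100,1);                    % hypothetical binary labels
mdl = fitcnet(x,y,'CategoricalPredictors',3);   % dummy variables are created for column 3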
classnames — names of classes to use for training
categorical array | character array | string array | logical vector | numeric vector | cell array of character vectors
names of classes to use for training, specified as a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. classnames must have the same data type as the response variable in tbl or y.
if classnames is a character array, then each element must correspond to one row of the array.
use classnames to:
specify the order of the classes during training.
specify the order of any input or output argument dimension that corresponds to the class order. for example, use classnames to specify the order of the dimensions of cost or the column order of classification scores returned by predict.
select a subset of classes for training. for example, suppose that the set of all distinct class names in y is ["a","b","c"]. to train the model using observations from classes "a" and "c" only, specify "classnames",["a","c"].
the default value for classnames is the set of all distinct class names in the response variable in tbl or y.
example: "classnames",["b","g"]
data types: categorical | char | string | logical | single | double | cell
cost — misclassification cost
square matrix | structure array
since r2023a
misclassification cost, specified as a square matrix or structure array.
if you specify a square matrix cost and the true class of an observation is i, then cost(i,j) is the cost of classifying a point into class j. that is, rows correspond to the true classes, and columns correspond to the predicted classes. to specify the class order for the corresponding rows and columns of cost, also set the classnames name-value argument.
if you specify a structure s, then it must have two fields:
s.classnames, which contains the class names as a variable of the same data type as y
s.classificationcosts, which contains the cost matrix with rows and columns ordered as in s.classnames
the default value for cost is ones(k) – eye(k), where k is the number of distinct classes.
example: "cost",[0 1; 2 0]
data types: single | double | struct
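a minimal sketch of the structure form with hypothetical string labels (the cost values are illustrative):

rng("default")
y = repmat(["b";"g"],10,1);            % hypothetical labels; data type must match s.ClassNames
x = randn(20,2);
s = struct('ClassNames',["b","g"], ...
    'ClassificationCosts',[0 1; 2 0]); % true "b" predicted "g" costs 1; true "g" predicted "b" costs 2
mdl = fitcnet(x,y,'Cost',s);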
predictornames — predictor variable names
string array of unique names | cell array of unique character vectors
predictor variable names, specified as a string array of unique names or cell array of unique character vectors. the functionality of 'predictornames' depends on the way you supply the training data.
if you supply x and y, then you can use 'predictornames' to assign names to the predictor variables in x.
the order of the names in predictornames must correspond to the predictor order in x. assuming that x has the default orientation, with observations in rows and predictors in columns, predictornames{1} is the name of x(:,1), predictornames{2} is the name of x(:,2), and so on. also, size(x,2) and numel(predictornames) must be equal.
by default, predictornames is {'x1','x2',...}.
if you supply tbl, then you can use 'predictornames' to choose which predictor variables to use in training. that is, fitcnet uses only the predictor variables in predictornames and the response variable during training.
predictornames must be a subset of tbl.properties.variablenames and cannot include the name of the response variable.
by default, predictornames contains the names of all predictor variables.
a good practice is to specify the predictors for training using either 'predictornames' or formula, but not both.
example: 'predictornames',{'sepallength','sepalwidth','petallength','petalwidth'}
data types: string | cell
prior — prior class probabilities
"empirical" (default) | "uniform" | numeric vector | structure array
since r2023a
prior class probabilities, specified as a value in this table.
value | description |
---|---|
"empirical" | the class prior probabilities are the class relative frequencies in y. |
"uniform" | all class prior probabilities are equal to 1/k, where k is the number of classes. |
numeric vector | each element is a class prior probability. order the elements according to mdl.classnames or specify the order using the classnames name-value argument. the software normalizes the elements to sum to 1. |
structure | a structure s with two fields: s.classnames, which contains the class names, and s.classprobs, which contains the corresponding prior probabilities. the software normalizes the elements of s.classprobs to sum to 1. |
example: "prior",struct("classnames",["b","g"],"classprobs",1:2)
data types: single | double | char | string | struct
responsename — response variable name
"y" (default) | character vector | string scalar
response variable name, specified as a character vector or string scalar.
if you supply y, then you can use responsename to specify a name for the response variable.
if you supply responsevarname or formula, then you cannot use responsename.
example: "responsename","response"
data types: char | string
scoretransform — score transformation
"none" (default) | "doublelogit" | "invlogit" | "ismax" | "logit" | function handle | ...
score transformation, specified as a character vector, string scalar, or function handle.
this table summarizes the available character vectors and string scalars.
value | description |
---|---|
"doublelogit" | 1/(1 + e^(–2x)) |
"invlogit" | log(x / (1 – x)) |
"ismax" | sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0 |
"logit" | 1/(1 + e^(–x)) |
"none" or "identity" | x (no transformation) |
"sign" | –1 for x < 0; 0 for x = 0; 1 for x > 0 |
"symmetric" | 2x – 1 |
"symmetricismax" | sets the score for the class with the largest score to 1, and sets the scores for all other classes to –1 |
"symmetriclogit" | 2/(1 + e^(–x)) – 1 |
for a matlab function or a function you define, use its function handle for the score transform. the function handle must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).
example: "scoretransform","logit"
data types: char | string | function_handle
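a minimal sketch of a function handle transform; this particular handle reproduces the built-in "symmetric" option:

load fisheriris
mytransform = @(s) 2*s - 1;            % must accept a score matrix and return one of the same size
mdl = fitcnet(meas,species,'ScoreTransform',mytransform);
[labels,scores] = predict(mdl,meas(1:5,:));   % transformed scores lie in the range [-1,1]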
weights — observation weights
nonnegative numeric vector | name of variable in tbl
observation weights, specified as a nonnegative numeric vector or the name of a variable in tbl. the software weights each observation in x or tbl with the corresponding value in weights. the length of weights must equal the number of observations in x or tbl.
if you specify the input data as a table tbl, then weights can be the name of a variable in tbl that contains a numeric vector. in this case, you must specify weights as a character vector or string scalar. for example, if the weights vector w is stored as tbl.w, then specify it as 'w'. otherwise, the software treats all columns of tbl, including w, as predictors or the response variable when training the model.
by default, weights is ones(n,1), where n is the number of observations in x or tbl.
the software normalizes weights to sum to the value of the prior probability in the respective class.
data types: single | double | char | string
note
you cannot use any cross-validation name-value argument together with the 'optimizehyperparameters' name-value argument. you can modify the cross-validation for 'optimizehyperparameters' only by using the 'hyperparameteroptimizationoptions' name-value argument.
crossval — flag to train cross-validated classifier
'off' (default) | 'on'
flag to train a cross-validated classifier, specified as 'on' or 'off'.
if you specify 'on', then the software trains a cross-validated classifier with 10 folds.
you can override this cross-validation setting using the cvpartition, holdout, kfold, or leaveout name-value argument. you can use only one cross-validation name-value argument at a time to create a cross-validated model.
alternatively, cross-validate later by passing mdl to crossval.
example: 'crossval','on'
data types: char | string
cvpartition — cross-validation partition
[] (default) | cvpartition partition object
cross-validation partition, specified as a cvpartition partition object created by cvpartition. the partition object specifies the type of cross-validation and the indexing for the training and validation sets.
to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.
example: suppose you create a random partition for 5-fold cross-validation on 500 observations by using cvp = cvpartition(500,'kfold',5). then, you can specify the cross-validated model by using 'cvpartition',cvp.
holdout — fraction of data for holdout validation
scalar value in the range (0,1)
fraction of the data used for holdout validation, specified as a scalar value in the range (0,1). if you specify 'holdout',p, then the software completes these steps:
randomly select and reserve p*100% of the data as validation data, and train the model using the rest of the data.
store the compact, trained model in the trained property of the cross-validated model.
to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.
example: 'holdout',0.1
data types: double | single
kfold — number of folds
10 (default) | positive integer value greater than 1
number of folds to use in a cross-validated model, specified as a positive integer value greater than 1. if you specify 'kfold',k, then the software completes these steps:
randomly partition the data into k sets.
for each set, reserve the set as validation data, and train the model using the other k – 1 sets.
store the k compact, trained models in a k-by-1 cell vector in the trained property of the cross-validated model.
to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.
example: 'kfold',5
data types: single | double
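a minimal sketch of 5-fold cross-validation and the resulting generalization estimate:

load fisheriris
rng("default")                         % reproducible folds
cvmdl = fitcnet(meas,species,'KFold',5);      % returns a cross-validated model
cvloss = kfoldLoss(cvmdl)              % misclassification rate averaged over the folds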
leaveout — leave-one-out cross-validation flag
'off' (default) | 'on'
leave-one-out cross-validation flag, specified as 'on' or 'off'. if you specify 'leaveout','on', then for each of the n observations (where n is the number of observations, excluding missing observations, specified in the numobservations property of the model), the software completes these steps:
reserve the one observation as validation data, and train the model using the other n – 1 observations.
store the n compact, trained models in an n-by-1 cell vector in the trained property of the cross-validated model.
to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.
example: 'leaveout','on'
optimizehyperparameters — parameters to optimize
'none' (default) | 'auto' | 'all' | string array or cell array of eligible parameter names | vector of optimizablevariable objects
parameters to optimize, specified as one of the following:
'none' — do not optimize.
'auto' — use {'activations','lambda','layersizes','standardize'}.
'all' — optimize all eligible parameters.
string array or cell array of eligible parameter names.
vector of optimizablevariable objects, typically the output of the hyperparameters function.
the optimization attempts to minimize the cross-validation loss (error) for fitcnet by varying the parameters. for information about cross-validation loss (although in a different context), see the cross-validation documentation. to control the cross-validation type and other aspects of the optimization, use the hyperparameteroptimizationoptions name-value argument.
note
the values of 'optimizehyperparameters' override any values you specify using other name-value arguments. for example, setting 'optimizehyperparameters' to 'auto' causes fitcnet to optimize hyperparameters corresponding to the 'auto' option and to ignore any specified values for the hyperparameters.
the eligible parameters for fitcnet are:
activations — fitcnet optimizes activations over the set {'relu','tanh','sigmoid','none'}.
lambda — fitcnet optimizes lambda over continuous values in the range [1e-5,1e5]/numobservations, where the value is chosen uniformly in the log transformed range.
layerbiasesinitializer — fitcnet optimizes layerbiasesinitializer over the two values {'zeros','ones'}.
layerweightsinitializer — fitcnet optimizes layerweightsinitializer over the two values {'glorot','he'}.
layersizes — fitcnet optimizes over the three values 1, 2, and 3 fully connected layers, excluding the final fully connected layer. fitcnet optimizes each fully connected layer separately over 1 through 300 sizes in the layer, sampled on a logarithmic scale.
note
when you use the layersizes argument, the iterative display shows the size of each relevant layer. for example, if the current number of fully connected layers is 3, and the three layers are of sizes 10, 79, and 44 respectively, the iterative display shows layersizes for that iteration as [10 79 44].
note
to access up to five fully connected layers or a different range of sizes in a layer, use the hyperparameters function to select the optimizable parameters and ranges.
standardize — fitcnet optimizes standardize over the two values {true,false}.
set nondefault parameters by passing a vector of optimizablevariable objects that have nondefault values. as an example, this code sets the range of numlayers to [1 5] and optimizes layer_4_size and layer_5_size:
load fisheriris
params = hyperparameters('fitcnet',meas,species);
params(1).Range = [1 5];
params(10).Optimize = true;
params(11).Optimize = true;
pass params as the value of optimizehyperparameters. for an example using nondefault parameters, see customize neural network classifier optimization.
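a sketch of the call that consumes params from the code above (the evaluation budget is illustrative, and the run can take a while):

mdl = fitcnet(meas,species,'OptimizeHyperparameters',params, ...
    'HyperparameterOptimizationOptions', ...
    struct('MaxObjectiveEvaluations',20));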
by default, the iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. for the optimization and plots, the objective function is the misclassification rate. to control the iterative display, set the verbose field of the 'hyperparameteroptimizationoptions' name-value argument. to control the plots, set the showplots field of the 'hyperparameteroptimizationoptions' name-value argument.
for an example, see improve neural network classifier using optimizehyperparameters.
example: 'optimizehyperparameters','auto'
hyperparameteroptimizationoptions — options for optimization
structure
options for optimization, specified as a structure. this argument modifies the effect of the optimizehyperparameters name-value argument. all fields in the structure are optional.
field name | values | default |
---|---|---|
optimizer | 'bayesopt' (bayesian optimization), 'gridsearch' (grid search with a uniform grid), or 'randomsearch' (random search). | 'bayesopt' |
acquisitionfunctionname | acquisition function for bayesian optimization, such as 'expected-improvement-per-second-plus', 'expected-improvement', 'expected-improvement-plus', 'expected-improvement-per-second', 'lower-confidence-bound', or 'probability-of-improvement'. acquisition functions whose names include per-second do not yield reproducible results, because the optimization depends on the run time of the objective function. | 'expected-improvement-per-second-plus' |
maxobjectiveevaluations | maximum number of objective function evaluations. | 30 for 'bayesopt' and 'randomsearch', and the entire grid for 'gridsearch' |
maxtime | time limit, specified as a positive real scalar. the time limit is in seconds, as measured by tic and toc. the run time can exceed maxtime because maxtime does not interrupt function evaluations. | inf |
numgriddivisions | for 'gridsearch', the number of values in each dimension. the value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. this field is ignored for categorical variables. | 10 |
showplots | logical value indicating whether to show plots. if true, this field plots the best observed objective function value against the iteration number. if you use bayesian optimization (optimizer is 'bayesopt'), then this field also plots the best estimated objective function value. the best observed objective function values and best estimated objective function values correspond to the values in the bestsofar (observed) and bestsofar (estim.) columns of the iterative display, respectively. you can find these values in the properties objectiveminimumtrace and estimatedobjectiveminimumtrace of mdl.hyperparameteroptimizationresults. if the problem includes one or two optimization parameters for bayesian optimization, then showplots also plots a model of the objective function against the parameters. | true |
saveintermediateresults | logical value indicating whether to save results when optimizer is 'bayesopt'. if true, this field overwrites a workspace variable named 'bayesoptresults' at each iteration. the variable is a bayesianoptimization object. | false |
verbose | display at the command line: 0 (no iterative display), 1 (iterative display), or 2 (iterative display with extra information). | 1 |
useparallel | logical value indicating whether to run bayesian optimization in parallel, which requires parallel computing toolbox™. due to the nonreproducibility of parallel timing, parallel bayesian optimization does not necessarily yield reproducible results. | false |
repartition | logical value indicating whether to repartition the cross-validation at every iteration. if false, the optimizer uses a single partition for the optimization. a value of true usually gives the most robust results because this setting takes partitioning noise into account. | false |
use no more than one of the following three options. | | |
cvpartition | a cvpartition object, as created by cvpartition | 'kfold',5 if you do not specify a cross-validation field |
holdout | a scalar in the range (0,1) representing the holdout fraction | |
kfold | an integer greater than 1 | |
example: 'hyperparameteroptimizationoptions',struct('maxobjectiveevaluations',60)
data types: struct
output arguments
mdl — trained neural network classifier
classificationneuralnetwork object | classificationpartitionedmodel object
trained neural network classifier, returned as a classificationneuralnetwork or classificationpartitionedmodel object.
if you set any of the name-value arguments crossval, cvpartition, holdout, kfold, or leaveout, then mdl is a classificationpartitionedmodel object. otherwise, mdl is a classificationneuralnetwork model.
to reference properties of mdl, use dot notation.
more about
neural network structure
the default neural network classifier has the following layer structure.
structure | description |
---|---|
input | this layer corresponds to the predictor data in tbl or x. |
first fully connected layer | this layer has 10 outputs by default. you can change the layer sizes by using the layersizes name-value argument. |
relu activation function | fitcnet applies this activation function to the first fully connected layer by default. you can change the activation function by using the activations name-value argument. |
final fully connected layer | this layer has k outputs, where k is the number of classes in the response variable. |
softmax function (for both binary and multiclass classification) | fitcnet applies this activation function to the final fully connected layer. the results correspond to the predicted classification scores (or posterior probabilities). |
output | this layer corresponds to the predicted class labels. |
for an example that shows how a neural network classifier with this layer structure returns predictions, see the predict reference page for classificationneuralnetwork models.
tips
always try to standardize the numeric predictors (see standardize). standardization makes predictors insensitive to the scales on which they are measured.
after training a model, you can generate c/c++ code that predicts labels for new data. generating c/c++ code requires matlab coder™. for details, see introduction to code generation.
algorithms
training solver
fitcnet uses a limited-memory broyden-fletcher-goldfarb-shanno quasi-newton algorithm (lbfgs) [3] as its loss function minimization technique, where the software minimizes the cross-entropy loss. the lbfgs solver uses a standard line-search method with an approximation to the hessian.
cost, prior, and weights
if you specify the cost, prior, and weights name-value arguments, the output model object stores the specified values in the cost, prior, and w properties, respectively. the cost property stores the user-specified cost matrix as is. the prior and w properties store the prior probabilities and observation weights, respectively, after normalization. for details, see misclassification cost matrix, prior probabilities, and observation weights.
the software uses the cost property for prediction, but not training. therefore, cost is not read-only; you can change the property value by using dot notation after creating the trained model.
references
[1] glorot, xavier, and yoshua bengio. “understanding the difficulty of training deep feedforward neural networks.” in proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. 2010.
[2] he, kaiming, xiangyu zhang, shaoqing ren, and jian sun. “delving deep into rectifiers: surpassing human-level performance on imagenet classification.” in proceedings of the ieee international conference on computer vision, pp. 1026–1034. 2015.
[3] nocedal, j. and s. j. wright. numerical optimization, 2nd ed., new york: springer, 2006.
extended capabilities
automatic parallel support
accelerate code by automatically running computation in parallel using parallel computing toolbox™.
to perform parallel hyperparameter optimization, use the 'hyperparameteroptimizationoptions',struct('useparallel',true) name-value argument in the call to the fitcnet function.
for more information on parallel hyperparameter optimization, see parallel bayesian optimization.
for general information about parallel computing, see run matlab functions with automatic parallel support (parallel computing toolbox).
version history
introduced in r2021a
r2023a: neural network classifiers support misclassification costs and prior probabilities
fitcnet supports misclassification costs and prior probabilities for neural network classifiers. specify the cost and prior name-value arguments when you create a model. alternatively, you can specify misclassification costs after training a model by using dot notation to change the cost property value of the model.
mdl.Cost = [0 2; 1 0];