ClassificationKernel
Gaussian kernel classification model using random feature expansion
Description
ClassificationKernel is a trained model object for a binary Gaussian kernel classification model using random feature expansion. ClassificationKernel is more practical for big data applications that have large training sets, but it can also be applied to smaller data sets that fit in memory.
Unlike other classification models, and for economical memory usage, ClassificationKernel model objects do not store the training data. However, they do store information such as the number of dimensions of the expanded space, the kernel scale parameter, prior class probabilities, and the regularization strength.
You can use trained ClassificationKernel models to continue training using the training data and to predict labels or classification scores for new data. For details, see resume and predict.
Creation
Create a ClassificationKernel object using the fitckernel function. This function maps data in a low-dimensional space into a high-dimensional space, then fits a linear model in the high-dimensional space by minimizing the regularized objective function. The linear model in the high-dimensional space is equivalent to the model with a Gaussian kernel in the low-dimensional space. Available linear classification models include regularized support vector machine (SVM) and logistic regression models.
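For example, a minimal creation sketch, assuming a numeric predictor matrix X and a class label vector Y are in the workspace:

Mdl = fitckernel(X,Y,'Learner','logistic'); % 'svm' (default) or 'logistic'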
Properties

Kernel Classification Properties
Learner — Linear classification model type
'logistic' | 'svm'

Linear classification model type, specified as 'logistic' or 'svm'.
In the following table,

- x is an observation (row vector) from p predictor variables.
- T(x) is a transformation of an observation (row vector) for feature expansion. T(x) maps x in ℝ^p to a high-dimensional space (ℝ^m).
- β is a vector of m coefficients.
- b is the scalar bias.

The raw classification score is f(x) = T(x)β + b.

Value | Algorithm | Loss Function | FittedLoss Value
---|---|---|---
'svm' | Support vector machine | Hinge: ℓ[y,f(x)] = max[0, 1 − yf(x)] | 'hinge'
'logistic' | Logistic regression | Deviance (logistic): ℓ[y,f(x)] = log{1 + exp[−yf(x)]} | 'logit'
NumExpansionDimensions — Number of dimensions of expanded space
positive integer

Number of dimensions of the expanded space, specified as a positive integer.

Data Types: single | double
KernelScale — Kernel scale parameter
positive scalar

Kernel scale parameter, specified as a positive scalar.

Data Types: char | single | double
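For example, a sketch of controlling the feature expansion at training time, assuming predictor data X and labels Y; passing 'auto' lets the software select the kernel scale with a heuristic procedure:

Mdl = fitckernel(X,Y,'NumExpansionDimensions',2^12,'KernelScale','auto');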
BoxConstraint — Box constraint
positive scalar

Box constraint, specified as a positive scalar.

Data Types: double | single
Lambda — Regularization term strength
nonnegative scalar

Regularization term strength, specified as a nonnegative scalar.

Data Types: single | double
FittedLoss — Loss function used to fit linear model
'hinge' | 'logit'

This property is read-only.

Loss function used to fit the linear model, specified as 'hinge' or 'logit'.

Value | Algorithm | Loss Function | Learner Value
---|---|---|---
'hinge' | Support vector machine | Hinge: ℓ[y,f(x)] = max[0, 1 − yf(x)] | 'svm'
'logit' | Logistic regression | Deviance (logistic): ℓ[y,f(x)] = log{1 + exp[−yf(x)]} | 'logistic'
Regularization — Complexity penalty type
'ridge (L2)'

Complexity penalty type, which is always 'ridge (L2)'.

The software composes the objective function for minimization from the sum of the average loss function (see FittedLoss) and the regularization term, a ridge (L2) penalty. The ridge (L2) penalty is

(λ/2) ∑ βj², summed over the expanded dimensions j = 1, …, m,

where λ specifies the regularization term strength (see Lambda). The software excludes the bias term (β0) from the regularization penalty.
Other Classification Properties
CategoricalPredictors — Indices of categorical predictors
vector of positive integers | []

Categorical predictor indices, specified as a vector of positive integers. CategoricalPredictors contains index values indicating that the corresponding predictors are categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If none of the predictors are categorical, then this property is empty ([]).

Data Types: single | double
ClassNames — Unique class labels
categorical array | character array | logical vector | numeric vector | cell array of character vectors

Unique class labels used in training, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. ClassNames has the same data type as the class labels Y. (The software treats string arrays as cell arrays of character vectors.)

ClassNames also determines the class order.

Data Types: categorical | char | logical | single | double | cell
Cost — Misclassification costs
square numeric matrix

This property is read-only.

Misclassification costs, specified as a square numeric matrix. Cost has K rows and columns, where K is the number of classes. Cost(i,j) is the cost of classifying a point into class j if its true class is i. The order of the rows and columns of Cost corresponds to the order of the classes in ClassNames.

Data Types: double
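For example, a sketch of training with a nondefault cost matrix for the two ionosphere classes 'b' and 'g', assuming fitckernel is called with its 'Cost' name-value argument:

C = [0 2; 1 0];                 % Cost(1,2) = 2: misclassifying 'b' as 'g' costs double
Mdl = fitckernel(X,Y,'Cost',C);
Mdl.Cost                        % The stored user-specified cost matrix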
ModelParameters — Parameters used for training model
structure

Parameters used for training the ClassificationKernel model, specified as a structure.

Access fields of ModelParameters using dot notation. For example, access the relative tolerance on the linear coefficients and the bias term by using Mdl.ModelParameters.BetaTolerance.

Data Types: struct
PredictorNames — Predictor names
cell array of character vectors

Predictor names in order of their appearance in the predictor data, specified as a cell array of character vectors. The length of PredictorNames is equal to the number of columns used as predictor variables in the training data X or Tbl.

Data Types: cell
ExpandedPredictorNames — Expanded predictor names
cell array of character vectors

Expanded predictor names, specified as a cell array of character vectors. If the model uses encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.

Data Types: cell
Prior — Prior class probabilities
numeric vector

This property is read-only.

Prior class probabilities, specified as a numeric vector. Prior has as many elements as classes in ClassNames, and the order of the elements corresponds to the elements of ClassNames.

Data Types: double
ResponseName — Response variable name
character vector

Response variable name, specified as a character vector.

Data Types: char
ScoreTransform — Score transformation function to apply to predicted scores
'doublelogit' | 'invlogit' | 'ismax' | 'logit' | 'none' | function handle | ...

Score transformation function to apply to predicted scores, specified as a function name or function handle.

For kernel classification models and before the score transformation, the predicted classification score for the observation x (row vector) is f(x) = T(x)β + b, where:

- T(x) is a transformation of an observation for feature expansion.
- β is the estimated column vector of coefficients.
- b is the estimated scalar bias.
To change the score transformation function to function, for example, use dot notation.

For a built-in function, enter this code and replace function with a value from the table.

Mdl.ScoreTransform = 'function';

Value | Description
---|---
"doublelogit" | 1/(1 + e^(−2x))
"invlogit" | log(x / (1 − x))
"ismax" | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0
"logit" | 1/(1 + e^(−x))
"none" or "identity" | x (no transformation)
"sign" | −1 for x < 0; 0 for x = 0; 1 for x > 0
"symmetric" | 2x − 1
"symmetricismax" | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to −1
"symmetriclogit" | 2/(1 + e^(−x)) − 1

For a MATLAB® function, or a function that you define, enter its function handle.

Mdl.ScoreTransform = @function;

function must accept a matrix of the original scores for each class, and then return a matrix of the same size representing the transformed scores for each class.

Data Types: char | function_handle
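For example, assuming a trained model Mdl, both styles look like this (the anonymous function is a hypothetical custom transform):

Mdl.ScoreTransform = 'logit';      % Built-in transform, selected by name
Mdl.ScoreTransform = @(s)2*s - 1;  % Custom handle: must map an n-by-K score matrix
                                   % to an n-by-K matrix of transformed scores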
Object Functions

edge | Classification edge for Gaussian kernel classification model
incrementalLearner | Convert kernel model for binary classification to incremental learner
lime | Local interpretable model-agnostic explanations (LIME)
loss | Classification loss for Gaussian kernel classification model
margin | Classification margins for Gaussian kernel classification model
partialDependence | Compute partial dependence
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
predict | Predict labels for Gaussian kernel classification model
resume | Resume training of Gaussian kernel classification model
shapley | Shapley values
Examples

Train Kernel Classification Model
Train a binary kernel classification model using SVM.

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere
[n,p] = size(X)

n = 351
p = 34

resp = unique(Y)

resp = 2x1 cell
    {'b'}
    {'g'}
Train a binary kernel classification model that identifies whether the radar return is bad ('b') or good ('g'). Extract a fit summary to determine how well the optimization algorithm fits the model to the data.

rng('default') % For reproducibility
[Mdl,FitInfo] = fitckernel(X,Y)

Mdl = 
  ClassificationKernel
              ResponseName: 'Y'
                ClassNames: {'b'  'g'}
                   Learner: 'svm'
    NumExpansionDimensions: 2048
               KernelScale: 1
                    Lambda: 0.0028
             BoxConstraint: 1

  Properties, Methods

FitInfo = struct with fields:
                  Solver: 'LBFGS-fast'
            LossFunction: 'hinge'
                  Lambda: 0.0028
           BetaTolerance: 1.0000e-04
       GradientTolerance: 1.0000e-06
          ObjectiveValue: 0.2604
       GradientMagnitude: 0.0028
    RelativeChangeInBeta: 8.2512e-05
                 FitTime: 0.0954
                 History: []
Mdl is a ClassificationKernel model. To inspect the in-sample classification error, you can pass Mdl and the training data or new data to the loss function. Or, you can pass Mdl and new predictor data to the predict function to predict class labels for new observations. You can also pass Mdl and the training data to the resume function to continue training.

FitInfo is a structure array containing optimization information. Use FitInfo to determine whether optimization termination measurements are satisfactory.
For better accuracy, you can increase the maximum number of optimization iterations ('IterationLimit') and decrease the tolerance values ('BetaTolerance' and 'GradientTolerance') by using the name-value pair arguments of fitckernel. Doing so can improve measures like ObjectiveValue and RelativeChangeInBeta in FitInfo. You can also optimize model parameters by using the 'OptimizeHyperparameters' name-value pair argument.
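For example, a sketch of a tighter fit; the iteration limit and tolerance values here are illustrative, not recommendations:

[Mdl2,FitInfo2] = fitckernel(X,Y,'IterationLimit',500, ...
    'BetaTolerance',1e-6,'GradientTolerance',1e-8);
FitInfo2.ObjectiveValue % Compare with FitInfo.ObjectiveValue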
Predict Class Labels and Resume Training
Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere

Partition the data set into training and test sets. Specify a 20% holdout sample for the test set.

rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.20);
trainingInds = training(Partition); % Indices for the training set
XTrain = X(trainingInds,:);
YTrain = Y(trainingInds);
testInds = test(Partition); % Indices for the test set
XTest = X(testInds,:);
YTest = Y(testInds);
Train a binary kernel classification model that identifies whether the radar return is bad ('b') or good ('g').

Mdl = fitckernel(XTrain,YTrain,'IterationLimit',5,'Verbose',1);

|=================================================================================================================|
| Solver |  Pass  |  Iteration  |   Objective   |      Step     |    Gradient    |    Relative    | sum(beta~=0) |
|        |        |             |               |               |   magnitude    | change in Beta |              |
|=================================================================================================================|
|  LBFGS |      1 |           0 |  1.000000e+00 |  0.000000e+00 |   2.811388e-01 |                |            0 |
|  LBFGS |      1 |           1 |  7.585395e-01 |  4.000000e+00 |   3.594306e-01 |   1.000000e+00 |         2048 |
|  LBFGS |      1 |           2 |  7.160994e-01 |  1.000000e+00 |   2.028470e-01 |   6.923988e-01 |         2048 |
|  LBFGS |      1 |           3 |  6.825272e-01 |  1.000000e+00 |   2.846975e-02 |   2.388909e-01 |         2048 |
|  LBFGS |      1 |           4 |  6.699435e-01 |  1.000000e+00 |   1.779359e-02 |   1.325304e-01 |         2048 |
|  LBFGS |      1 |           5 |  6.535619e-01 |  1.000000e+00 |   2.669039e-01 |   4.112952e-01 |         2048 |
|=================================================================================================================|
Mdl is a ClassificationKernel model.

Predict the test-set labels, construct a confusion matrix for the test set, and estimate the classification error for the test set.

label = predict(Mdl,XTest);
ConfusionTest = confusionchart(YTest,label);

L = loss(Mdl,XTest,YTest)

L = 0.3594

Mdl misclassifies all bad radar returns as good returns.
Continue training by using resume. This function continues training with the same options used for training Mdl.

UpdatedMdl = resume(Mdl,XTrain,YTrain);
|=================================================================================================================|
| Solver |  Pass  |  Iteration  |   Objective   |      Step     |    Gradient    |    Relative    | sum(beta~=0) |
|        |        |             |               |               |   magnitude    | change in Beta |              |
|=================================================================================================================|
|  LBFGS |      1 |           0 |  6.535619e-01 |  0.000000e+00 |   2.669039e-01 |                |         2048 |
|  LBFGS |      1 |           1 |  6.132547e-01 |  1.000000e+00 |   6.355537e-03 |   1.522092e-01 |         2048 |
|  LBFGS |      1 |           2 |  5.938316e-01 |  4.000000e+00 |   3.202847e-02 |   1.498036e-01 |         2048 |
|  LBFGS |      1 |           3 |  4.169274e-01 |  1.000000e+00 |   1.530249e-01 |   7.234253e-01 |         2048 |
|  LBFGS |      1 |           4 |  3.679212e-01 |  5.000000e-01 |   2.740214e-01 |   2.495886e-01 |         2048 |
|  LBFGS |      1 |           5 |  3.332261e-01 |  1.000000e+00 |   1.423488e-02 |   9.558680e-02 |         2048 |
[... iterations 6 through 113 omitted ...]
|  LBFGS |      1 |         114 |  2.452817e-01 |  1.000000e+00 |   7.117438e-03 |   4.251642e-04 |         2048 |
|  LBFGS |      1 |         115 |  2.452741e-01 |  1.000000e+00 |   1.779359e-02 |   9.018440e-04 |         2048 |
|  LBFGS |      1 |         116 |  2.452691e-01 |  1.000000e+00 |   2.135231e-02 |   9.941716e-05 |         2048 |
|=================================================================================================================|
Predict the test-set labels, construct a confusion matrix for the test set, and estimate the classification error for the test set.

UpdatedLabel = predict(UpdatedMdl,XTest);
UpdatedConfusionTest = confusionchart(YTest,UpdatedLabel);

UpdatedL = loss(UpdatedMdl,XTest,YTest)

UpdatedL = 0.1284

The classification error decreases after resume updates the classification model with more iterations.
Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

Usage notes and limitations:

The predict function supports code generation.

For more information, see Introduction to Code Generation.
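As a sketch of the typical workflow, you can save the trained model with saveLearnerForCoder, load it inside an entry-point function with loadLearnerForCoder, and then generate code for that function. The file name 'KernelMdl' and the function name predictKernel are illustrative placeholders.

saveLearnerForCoder(Mdl,'KernelMdl'); % Save the trained model to KernelMdl.mat

% In its own file, predictKernel.m:
function label = predictKernel(X) %#codegen
% Load the saved kernel model and predict labels for X
Mdl = loadLearnerForCoder('KernelMdl');
label = predict(Mdl,X);
end

codegen predictKernel -args {X} % Generate code; X supplies the input type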
Version History

Introduced in R2017b

R2023a: Generate C/C++ code for prediction

You can generate C/C++ code for the predict function.
R2022a: Cost property stores the user-specified cost matrix

Starting in R2022a, the Cost property stores the user-specified cost matrix, so that you can compute the observed misclassification cost using the specified cost value. The software stores normalized prior probabilities (Prior) that do not reflect the penalties described in the cost matrix. To compute the observed misclassification cost, specify the LossFun name-value argument as "classifcost" when you call the loss function.
Note that model training has not changed and, therefore, the decision boundaries between classes have not changed.

For training, the fitting function updates the specified prior probabilities by incorporating the penalties described in the specified cost matrix, and then normalizes the prior probabilities and observation weights. This behavior has not changed. In previous releases, the software stored the default cost matrix in the Cost property and stored the prior probabilities used for training in the Prior property. Starting in R2022a, the software stores the user-specified cost matrix without modification, and stores normalized prior probabilities that do not reflect the cost penalties.

Some object functions use the Cost and Prior properties:
- The loss function uses the cost matrix stored in the Cost property if you specify the LossFun name-value argument as "classifcost" or "mincost".
- The loss and edge functions use the prior probabilities stored in the Prior property to normalize the observation weights of the input data.
If you specify a nondefault cost matrix when you train a classification model, the object functions return a different value compared to previous releases.

If you want the software to handle the cost matrix, prior probabilities, and observation weights as in previous releases, adjust the prior probabilities and observation weights for the nondefault cost matrix. Then, when you train a classification model, specify the adjusted prior probabilities and observation weights by using the Prior and Weights name-value arguments, respectively, and use the default cost matrix.
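For example, a sketch of computing the observed misclassification cost, assuming a trained model Mdl and held-out data XTest and YTest:

L = loss(Mdl,XTest,YTest,'LossFun','classifcost') % Uses the matrix in Mdl.Cost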
See Also

fitckernel | predict | resume