
CompactClassificationSVM

Compact support vector machine (SVM) for one-class and binary classification

Description

CompactClassificationSVM is a compact version of the support vector machine (SVM) classifier. The compact classifier does not include the data used for training the SVM classifier. Therefore, you cannot perform some tasks, such as cross-validation, using the compact classifier. Use a compact SVM classifier for tasks such as predicting the labels of new data.

Creation

Create a CompactClassificationSVM model from a full, trained ClassificationSVM classifier by using the compact function.

Properties

SVM Properties

This property is read-only.

Trained classifier coefficients, specified as an s-by-1 numeric vector. s is the number of support vectors in the trained classifier, sum(Mdl.IsSupportVector).

Alpha contains the trained classifier coefficients from the dual problem, that is, the estimated Lagrange multipliers. If you remove duplicates by using the RemoveDuplicates name-value pair argument of fitcsvm, then for a given set of duplicate observations that are support vectors, Alpha contains one coefficient corresponding to the entire set. That is, MATLAB® attributes a nonzero coefficient to one observation from the set of duplicates and a coefficient of 0 to all other duplicate observations in the set.

Data Types: single | double

This property is read-only.

Linear predictor coefficients, specified as a numeric vector. The length of Beta is equal to the number of predictors used to train the model.

MATLAB expands categorical variables in the predictor data using full dummy encoding. That is, MATLAB creates one dummy variable for each level of each categorical variable. Beta stores one value for each predictor variable, including the dummy variables. For example, if there are three predictors, one of which is a categorical variable with three levels, then Beta is a numeric vector containing five values.
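The five-coefficient example above can be sketched as follows. This is an illustrative sketch with hypothetical, randomly generated data; the variable and column names are assumptions, not part of the documented API.

```matlab
% Illustrative sketch (hypothetical data): two numeric predictors plus one
% categorical predictor with three levels, trained with a linear kernel.
rng(0)  % for reproducibility
tbl = table(randn(100,1), randn(100,1), ...
    categorical(randi(3,100,1)), randi([0 1],100,1), ...
    'VariableNames', {'x1','x2','c1','y'});
Mdl = fitcsvm(tbl, 'y', 'KernelFunction', 'linear');
numel(Mdl.Beta)  % 2 numeric predictors + 3 dummy variables = 5 coefficients
```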

If KernelParameters.Function is 'linear', then the classification score for the observation x is

f(x) = (x/s)'β + b.

Mdl stores β, b, and s in the properties Beta, Bias, and KernelParameters.Scale, respectively.

To estimate classification scores manually, you must first apply any transformations to the predictor data that were applied during training. Specifically, if you specify 'Standardize',true when using fitcsvm, then you must standardize the predictor data manually by using the mean Mdl.Mu and standard deviation Mdl.Sigma, and then divide the result by the kernel scale in Mdl.KernelParameters.Scale.

All SVM functions, such as predict, apply any required transformation before estimation.

If KernelParameters.Function is not 'linear', then Beta is empty ([]).

Data Types: single | double
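The manual computation described above can be sketched as follows, using the ionosphere data set that ships with the toolbox. The handling of zero-variance columns (dividing by 1 instead of 0) is an assumption for this sketch, and the comparison against predict is illustrative rather than a documented guarantee.

```matlab
% Reproduce linear-kernel scores by hand for a standardized model, then
% compare with the positive-class scores returned by predict.
load ionosphere  % X: 351-by-34 predictors, Y: labels 'b'/'g'
Mdl = fitcsvm(X, Y, 'KernelFunction', 'linear', 'Standardize', true);
sig = Mdl.Sigma;
sig(sig == 0) = 1;                        % assumption: constant columns are not scaled
Xs = (X - Mdl.Mu) ./ sig;                 % apply the training standardization
f  = (Xs / Mdl.KernelParameters.Scale) * Mdl.Beta + Mdl.Bias;
[~, scores] = predict(Mdl, X);
maxGap = max(abs(f - scores(:,2)))        % discrepancy vs. predict (positive class)
```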

This property is read-only.

Bias term, specified as a scalar.

Data Types: single | double

This property is read-only.

Kernel parameters, specified as a structure array. The kernel parameters property contains the fields listed in this table.

Field       Description
Function    Kernel function used to compute the elements of the Gram matrix. For details, see 'KernelFunction'.
Scale       Kernel scale parameter used to scale all elements of the predictor data on which the model is trained. For details, see 'KernelScale'.

To display the values of KernelParameters, use dot notation. For example, Mdl.KernelParameters.Scale displays the kernel scale parameter value.

The software accepts KernelParameters as inputs and does not modify them.

Data Types: struct

This property is read-only.

Support vector class labels, specified as an s-by-1 numeric vector. s is the number of support vectors in the trained classifier, sum(Mdl.IsSupportVector).

A value of 1 in SupportVectorLabels indicates that the corresponding support vector is in the positive class (ClassNames{2}). A value of –1 indicates that the corresponding support vector is in the negative class (ClassNames{1}).

If you remove duplicates by using the RemoveDuplicates name-value pair argument of fitcsvm, then for a given set of duplicate observations that are support vectors, SupportVectorLabels contains one unique support vector label.

Data Types: single | double

This property is read-only.

Support vectors in the trained classifier, specified as an s-by-p numeric matrix. s is the number of support vectors in the trained classifier, sum(Mdl.IsSupportVector), and p is the number of predictor variables in the predictor data.

SupportVectors contains rows of the predictor data X that MATLAB considers to be support vectors. If you specify 'Standardize',true when training the SVM classifier using fitcsvm, then SupportVectors contains the standardized rows of X.

If you remove duplicates by using the RemoveDuplicates name-value pair argument of fitcsvm, then for a given set of duplicate observations that are support vectors, SupportVectors contains one unique support vector.

Data Types: single | double
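A small sketch relating SupportVectors to the training data. Without standardization, the stored rows should correspond to the rows of X flagged by IsSupportVector, as described above; the comparison here is illustrative.

```matlab
% Retrieve support vectors from a full model trained without standardization
% and compare them with the rows of X flagged by IsSupportVector.
load ionosphere
Mdl = fitcsvm(X, Y);            % 'Standardize' defaults to false
sv  = Mdl.SupportVectors;       % s-by-34 matrix of (unstandardized) rows of X
same = isequal(sv, X(Mdl.IsSupportVector, :))
```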

Other Classification Properties

This property is read-only.

Categorical predictor indices, specified as a vector of positive integers. CategoricalPredictors contains index values indicating that the corresponding predictors are categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If none of the predictors are categorical, then this property is empty ([]).

Data Types: double

This property is read-only.

Unique class labels used in training, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. ClassNames has the same data type as the class labels Y. (The software treats string arrays as cell arrays of character vectors.) ClassNames also determines the class order.

Data Types: single | double | logical | char | cell | categorical

This property is read-only.

Misclassification cost, specified as a numeric square matrix.

  • For two-class learning, the Cost property stores the misclassification cost matrix specified by the Cost name-value argument of the fitting function. The rows correspond to the true class and the columns correspond to the predicted class. That is, Cost(i,j) is the cost of classifying a point into class j if its true class is i. The order of the rows and columns of Cost corresponds to the order of the classes in ClassNames.

  • For one-class learning, Cost = 0.

Data Types: double
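For instance, a two-class cost matrix can be supplied at training time. The penalty values in this sketch are arbitrary choices for illustration.

```matlab
% Make predicting 'g' for a true 'b' twice as costly as the reverse.
% Cost(i,j) is the cost of predicting class j when the true class is i,
% with rows and columns ordered as in ClassNames ({'b','g'} here).
load ionosphere
C = [0 2; 1 0];
Mdl = fitcsvm(X, Y, 'ClassNames', {'b','g'}, 'Cost', C);
Mdl.Cost    % stored misclassification cost matrix
```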

This property is read-only.

Expanded predictor names, specified as a cell array of character vectors.

If the model uses dummy variable encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.

Data Types: cell

This property is read-only.

Predictor means, specified as a numeric vector. If you specify 'Standardize',1 or 'Standardize',true when you train an SVM classifier using fitcsvm, the length of Mu is equal to the number of predictors.

MATLAB expands categorical variables in the predictor data using dummy variables. Mu stores one value for each predictor variable, including the dummy variables. However, MATLAB does not standardize the columns that contain categorical variables.

If you set 'Standardize',false when you train the SVM classifier using fitcsvm, then Mu is an empty vector ([]).

Data Types: single | double

This property is read-only.

Predictor variable names, specified as a cell array of character vectors. The order of the elements in PredictorNames corresponds to the order in which the predictor names appear in the training data.

Data Types: cell

This property is read-only.

Prior probabilities for each class, specified as a numeric vector.

For two-class learning, if you specify a cost matrix, then the software updates the prior probabilities by incorporating the penalties described in the cost matrix.

  • For two-class learning, the software normalizes the prior probabilities specified by the Prior name-value argument of the fitting function so that the probabilities sum to 1. The Prior property stores the normalized prior probabilities. The order of the elements of Prior corresponds to the elements of Mdl.ClassNames.

  • For one-class learning, Prior = 1.

Data Types: single | double

Score transformation, specified as a character vector or function handle. ScoreTransform represents a built-in transformation function or a function handle for transforming predicted classification scores.

To change the score transformation function to function, for example, use dot notation.

  • For a built-in function, enter a character vector.

    Mdl.ScoreTransform = 'function';

    This table describes the available built-in functions.

    Value                   Description
    'doublelogit'           1/(1 + e^(–2x))
    'invlogit'              log(x / (1 – x))
    'ismax'                 Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0
    'logit'                 1/(1 + e^(–x))
    'none' or 'identity'    x (no transformation)
    'sign'                  –1 for x < 0; 0 for x = 0; 1 for x > 0
    'symmetric'             2x – 1
    'symmetricismax'        Sets the score for the class with the largest score to 1, and sets the scores for all other classes to –1
    'symmetriclogit'        2/(1 + e^(–x)) – 1

  • For a MATLAB function or a function that you define, enter its function handle.

    Mdl.ScoreTransform = @function;

    function must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

Data Types: char | function_handle
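A sketch of setting a custom transform through a function handle. The logistic map used here is an arbitrary illustrative choice, not a recommended default.

```matlab
% Replace the default 'none' transform with an elementwise logistic map.
% The handle takes the score matrix and returns a matrix of the same size.
load ionosphere
Mdl = fitcsvm(X, Y, 'ClassNames', {'b','g'});
Mdl.ScoreTransform = @(s) 1 ./ (1 + exp(-s));
[labels, scores] = predict(Mdl, X(1:5,:));  % transformed scores lie in (0,1)
```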

This property is read-only.

Predictor standard deviations, specified as a numeric vector.

If you specify 'Standardize',true when you train the SVM classifier using fitcsvm, the length of Sigma is equal to the number of predictor variables.

MATLAB expands categorical variables in the predictor data using dummy variables. Sigma stores one value for each predictor variable, including the dummy variables. However, MATLAB does not standardize the columns that contain categorical variables.

If you set 'Standardize',false when you train the SVM classifier using fitcsvm, then Sigma is an empty vector ([]).

Data Types: single | double

Object Functions

compareHoldout           Compare accuracies of two classification models using new data
discardSupportVectors    Discard support vectors for linear support vector machine (SVM) classifier
edge                     Find classification edge for support vector machine (SVM) classifier
fitSVMPosterior          Fit posterior probabilities for compact support vector machine (SVM) classifier
gather                   Gather properties of Statistics and Machine Learning Toolbox object from GPU
incrementalLearner       Convert binary classification support vector machine (SVM) model to incremental learner
lime                     Local interpretable model-agnostic explanations (LIME)
loss                     Find classification error for support vector machine (SVM) classifier
margin                   Find classification margins for support vector machine (SVM) classifier
partialDependence        Compute partial dependence
plotPartialDependence    Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
predict                  Classify observations using support vector machine (SVM) classifier
shapley                  Shapley values
update                   Update model parameters for code generation

Examples

Reduce the size of a full support vector machine (SVM) classifier by removing the training data. Full SVM classifiers (that is, ClassificationSVM classifiers) hold the training data. To improve efficiency, use a smaller classifier.

Load the ionosphere data set.

load ionosphere

Train an SVM classifier. Standardize the predictor data and specify the order of the classes.

SVMModel = fitcsvm(X,Y,'Standardize',true,...
    'ClassNames',{'b','g'})
SVMModel = 
  ClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
          NumObservations: 351
                    Alpha: [90x1 double]
                     Bias: -0.1343
         KernelParameters: [1x1 struct]
                       Mu: [0.8917 0 0.6413 0.0444 0.6011 0.1159 0.5501 0.1194 0.5118 0.1813 0.4762 0.1550 0.4008 0.0934 0.3442 0.0711 0.3819 -0.0036 0.3594 -0.0240 0.3367 0.0083 0.3625 -0.0574 0.3961 -0.0712 0.5416 -0.0695 0.3784 -0.0279 0.3525 ... ]
                    Sigma: [0.3112 0 0.4977 0.4414 0.5199 0.4608 0.4927 0.5207 0.5071 0.4839 0.5635 0.4948 0.6222 0.4949 0.6528 0.4584 0.6180 0.4968 0.6263 0.5191 0.6098 0.5182 0.6038 0.5275 0.5785 0.5085 0.5162 0.5500 0.5759 0.5080 0.5715 0.5136 ... ]
           BoxConstraints: [351x1 double]
          ConvergenceInfo: [1x1 struct]
          IsSupportVector: [351x1 logical]
                   Solver: 'SMO'
  Properties, Methods

SVMModel is a ClassificationSVM classifier.

Reduce the size of the SVM classifier.

CompactSVMModel = compact(SVMModel)
CompactSVMModel = 
  CompactClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
                    Alpha: [90x1 double]
                     Bias: -0.1343
         KernelParameters: [1x1 struct]
                       Mu: [0.8917 0 0.6413 0.0444 0.6011 0.1159 0.5501 0.1194 0.5118 0.1813 0.4762 0.1550 0.4008 0.0934 0.3442 0.0711 0.3819 -0.0036 0.3594 -0.0240 0.3367 0.0083 0.3625 -0.0574 0.3961 -0.0712 0.5416 -0.0695 0.3784 -0.0279 0.3525 ... ]
                    Sigma: [0.3112 0 0.4977 0.4414 0.5199 0.4608 0.4927 0.5207 0.5071 0.4839 0.5635 0.4948 0.6222 0.4949 0.6528 0.4584 0.6180 0.4968 0.6263 0.5191 0.6098 0.5182 0.6038 0.5275 0.5785 0.5085 0.5162 0.5500 0.5759 0.5080 0.5715 0.5136 ... ]
           SupportVectors: [90x34 double]
      SupportVectorLabels: [90x1 double]
  Properties, Methods

CompactSVMModel is a CompactClassificationSVM classifier.

Display the amount of memory used by each classifier.

whos('SVMModel','CompactSVMModel')
  Name                 Size             Bytes  Class                                                 Attributes
  CompactSVMModel      1x1              31058  classreg.learning.classif.CompactClassificationSVM              
  SVMModel             1x1             141148  ClassificationSVM                                               

The full SVM classifier (SVMModel) is more than four times larger than the compact SVM classifier (CompactSVMModel).

To label new observations efficiently, you can remove SVMModel from the MATLAB® workspace, and then pass CompactSVMModel and new predictor values to predict.

To further reduce the size of the compact SVM classifier, use the discardSupportVectors function to discard support vectors.

Load the ionosphere data set.

load ionosphere

Train and cross-validate an SVM classifier. Standardize the predictor data and specify the order of the classes.

rng(1);  % For reproducibility
CVSVMModel = fitcsvm(X,Y,'Standardize',true,...
    'ClassNames',{'b','g'},'CrossVal','on')
CVSVMModel = 
  ClassificationPartitionedModel
    CrossValidatedModel: 'SVM'
         PredictorNames: {'x1'  'x2'  'x3'  'x4'  'x5'  'x6'  'x7'  'x8'  'x9'  'x10'  'x11'  'x12'  'x13'  'x14'  'x15'  'x16'  'x17'  'x18'  'x19'  'x20'  'x21'  'x22'  'x23'  'x24'  'x25'  'x26'  'x27'  'x28'  'x29'  'x30'  'x31'  'x32'  'x33'  'x34'}
           ResponseName: 'Y'
        NumObservations: 351
                  KFold: 10
              Partition: [1x1 cvpartition]
             ClassNames: {'b'  'g'}
         ScoreTransform: 'none'
  Properties, Methods

CVSVMModel is a ClassificationPartitionedModel cross-validated SVM classifier. By default, the software implements 10-fold cross-validation.

Alternatively, you can cross-validate a trained ClassificationSVM classifier by passing it to crossval.

Inspect one of the trained folds using dot notation.

CVSVMModel.Trained{1}
ans = 
  CompactClassificationSVM
             ResponseName: 'Y'
    CategoricalPredictors: []
               ClassNames: {'b'  'g'}
           ScoreTransform: 'none'
                    Alpha: [78x1 double]
                     Bias: -0.2210
         KernelParameters: [1x1 struct]
                       Mu: [0.8888 0 0.6320 0.0406 0.5931 0.1205 0.5361 0.1286 0.5083 0.1879 0.4779 0.1567 0.3924 0.0875 0.3360 0.0789 0.3839 9.6066e-05 0.3562 -0.0308 0.3398 -0.0073 0.3590 -0.0628 0.4064 -0.0664 0.5535 -0.0749 0.3835 -0.0295 ... ]
                    Sigma: [0.3149 0 0.5033 0.4441 0.5255 0.4663 0.4987 0.5205 0.5040 0.4780 0.5649 0.4896 0.6293 0.4924 0.6606 0.4535 0.6133 0.4878 0.6250 0.5140 0.6075 0.5150 0.6068 0.5222 0.5729 0.5103 0.5061 0.5478 0.5712 0.5032 0.5639 0.5062 ... ]
           SupportVectors: [78x34 double]
      SupportVectorLabels: [78x1 double]
  Properties, Methods

Each fold is a CompactClassificationSVM classifier trained on 90% of the data.

Estimate the generalization error.

genError = kfoldLoss(CVSVMModel)
genError = 0.1168

On average, the generalization error is approximately 12%.



Version History

Introduced in R2014a

