ClassificationSVM
Support vector machine (SVM) for one-class and binary classification
Description
ClassificationSVM is a support vector machine (SVM) classifier for one-class and two-class learning. Trained ClassificationSVM classifiers store the training data, parameter values, prior probabilities, support vectors, and algorithmic implementation information. Use these classifiers to perform tasks such as fitting a score-to-posterior-probability transformation function (see fitPosterior) and predicting labels for new data (see predict).
Creation
Create a ClassificationSVM object by using fitcsvm.
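For example, the following minimal sketch (using the fisheriris sample data set that ships with the toolbox) trains a two-class SVM classifier:

    load fisheriris
    inds = ~strcmp(species,'setosa');  % keep the two classes versicolor and virginica
    X = meas(inds,3:4);                % petal length and petal width
    y = species(inds);
    Mdl = fitcsvm(X,y)                 % returns a trained ClassificationSVM object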
Properties
Object Functions
compact | Reduce size of machine learning model
compareHoldout | Compare accuracies of two classification models using new data
crossval | Cross-validate machine learning model
discardSupportVectors | Discard support vectors for linear support vector machine (SVM) classifier
edge | Find classification edge for support vector machine (SVM) classifier
fitPosterior | Fit posterior probabilities for support vector machine (SVM) classifier
gather | Gather properties of Statistics and Machine Learning Toolbox object from GPU
incrementalLearner | Convert binary classification support vector machine (SVM) model to incremental learner
lime | Local interpretable model-agnostic explanations (LIME)
loss | Find classification error for support vector machine (SVM) classifier
margin | Find classification margins for support vector machine (SVM) classifier
partialDependence | Compute partial dependence
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
predict | Classify observations using support vector machine (SVM) classifier
resubEdge | Resubstitution classification edge
resubLoss | Resubstitution classification loss
resubMargin | Resubstitution classification margin
resubPredict | Classify training data using trained classifier
resume | Resume training support vector machine (SVM) classifier
shapley | Shapley values
testckfold | Compare accuracies of two classification models by repeated cross-validation
Examples

More About

Algorithms
For the mathematical formulation of the SVM binary classification algorithm, see Support Vector Machines for Binary Classification.
NaN, <missing>, empty character vector (''), empty string (""), and <undefined> values indicate missing values. fitcsvm removes entire rows of data corresponding to a missing response. When computing total weights (see the following items), fitcsvm ignores any weight corresponding to an observation with at least one missing predictor. This action can lead to unbalanced prior probabilities in balanced-class problems. Consequently, observation box constraints might not equal BoxConstraint.

If you specify the Cost, Prior, and Weights name-value arguments, the output model object stores the specified values in the Cost, Prior, and W properties, respectively. The Cost property stores the user-specified cost matrix (C) without modification. The Prior and W properties store the prior probabilities and observation weights, respectively, after normalization. For model training, the software updates the prior probabilities and observation weights to incorporate the penalties described in the cost matrix. For details, see Misclassification Cost Matrix, Prior Probabilities, and Observation Weights.

Note that the Cost and Prior name-value arguments are used for two-class learning. For one-class learning, the Cost and Prior properties store 0 and 1, respectively.
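As an illustration, the following sketch (using the ionosphere sample data set) specifies a cost matrix and prior probabilities, and then inspects the corresponding properties:

    load ionosphere                   % X: 351-by-34 predictors, Y: labels 'b'/'g'
    C = [0 2; 1 0];                   % user-specified misclassification costs
    Mdl = fitcsvm(X,Y,'Cost',C,'Prior',[0.4 0.6]);
    Mdl.Cost                          % stored without modification: [0 2; 1 0]
    Mdl.Prior                         % prior probabilities after normalization
    numel(Mdl.W)                      % one normalized weight per observation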
For two-class learning, fitcsvm assigns a box constraint to each observation in the training data. The formula for the box constraint of observation j is

$C_j = n C_0 w_j^*$,

where n is the training sample size, C0 is the initial box constraint (see the BoxConstraint name-value argument), and wj* is the observation weight adjusted by Cost and Prior for observation j. For details about the observation weights, see Misclassification Cost Matrix, Prior Probabilities, and Observation Weights.
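For instance, with unit weights and the default empirical prior, each adjusted weight wj* is 1/n, so every box constraint should reduce to C0. A quick illustrative check (not part of the original page):

    load ionosphere
    Mdl = fitcsvm(X,Y,'BoxConstraint',10);  % C0 = 10
    % With unit weights and the default empirical prior, w_j* = 1/n for all j,
    % so C_j = n*C0*(1/n) = C0 for every observation:
    unique(Mdl.BoxConstraints)              % expected: 10 (up to floating point)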
If you specify Standardize as true and set the Cost, Prior, or Weights name-value argument, then fitcsvm standardizes the predictors using their corresponding weighted means and weighted standard deviations. That is, fitcsvm standardizes predictor j (xj) using

$x_j^* = \dfrac{x_j - \mu_j^*}{\sigma_j^*}$,

where xjk is observation k (row) of predictor j (column), and

$\mu_j^* = \dfrac{1}{\sum_k w_k^*} \sum_k w_k^* x_{jk}$,

$\left(\sigma_j^*\right)^2 = \dfrac{v_1}{v_1^2 - v_2} \sum_k w_k^* \left(x_{jk} - \mu_j^*\right)^2$, with $v_1 = \sum_k w_k^*$ and $v_2 = \sum_k \left(w_k^*\right)^2$.
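The trained model stores these statistics in its Mu and Sigma properties. A sketch, assuming the default cost matrix and empirical prior (under which the adjusted weights reduce to the normalized Weights values):

    load ionosphere
    w = 1 + rand(numel(Y),1);         % arbitrary positive observation weights
    Mdl = fitcsvm(X,Y,'Standardize',true,'Weights',w);
    % Compare the stored weighted mean of predictor 5 with a direct
    % computation; under the stated assumptions these should agree:
    [Mdl.Mu(5), sum(w.*X(:,5))/sum(w)]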
Assume that p is the proportion of outliers that you expect in the training data, and that you set 'OutlierFraction',p.

For one-class learning, the software trains the bias term such that 100p% of the observations in the training data have negative scores.

The software implements robust learning for two-class learning. In other words, the software attempts to remove 100p% of the observations when the optimization algorithm converges. The removed observations correspond to gradients that are large in magnitude.
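For example, the following one-class sketch (treating all iris measurements as a single class) checks that roughly 100p% of the resubstitution scores are negative:

    load fisheriris
    p = 0.05;
    Mdl = fitcsvm(meas,ones(size(meas,1),1), ...
        'OutlierFraction',p,'KernelScale','auto');
    [~,score] = resubPredict(Mdl);
    mean(score(:,1) < 0)              % close to p = 0.05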
If your predictor data contains categorical variables, then the software generally uses full dummy encoding for these variables. The software creates one dummy variable for each level of each categorical variable.
The PredictorNames property stores one element for each of the original predictor variable names. For example, assume that there are three predictors, one of which is a categorical variable with three levels. Then PredictorNames is a 1-by-3 cell array of character vectors containing the original names of the predictor variables.

The ExpandedPredictorNames property stores one element for each of the predictor variables, including the dummy variables. For example, assume that there are three predictors, one of which is a categorical variable with three levels. Then ExpandedPredictorNames is a 1-by-5 cell array of character vectors containing the names of the predictor variables and the new dummy variables.

Similarly, the Beta property stores one beta coefficient for each predictor, including the dummy variables.

The SupportVectors property stores the predictor values for the support vectors, including the dummy variables. For example, assume that there are m support vectors and three predictors, one of which is a categorical variable with three levels. Then SupportVectors is an m-by-5 matrix.

The X property stores the training data as originally input and does not include the dummy variables. When the input is a table, X contains only the columns used as predictors.
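The following sketch illustrates these sizes with a hypothetical table (the variable names x1, x2, x3, and y are made up) in which x3 is a categorical predictor with three levels:

    rng(1)                            % for reproducibility
    n = 100;
    tbl = table(randn(n,1), randn(n,1), ...
        categorical(randi(3,n,1),1:3,{'low','med','high'}), ...
        categorical(randi(2,n,1),1:2,{'no','yes'}), ...
        'VariableNames',{'x1','x2','x3','y'});
    Mdl = fitcsvm(tbl,'y');
    Mdl.PredictorNames                % 1-by-3: {'x1','x2','x3'}
    Mdl.ExpandedPredictorNames        % 1-by-5: x1, x2, and one dummy per level of x3
    size(Mdl.SupportVectors)          % m-by-5, where m is the number of support vectors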
For predictors specified in a table, if any of the variables contain ordered (ordinal) categories, the software uses ordinal encoding for these variables. For a variable with k ordered levels, the software creates k – 1 dummy variables. The jth dummy variable is –1 for levels up to j, and +1 for levels j + 1 through k.
The names of the dummy variables stored in the ExpandedPredictorNames property indicate the first level with the value +1. The software stores k – 1 additional predictor names for the dummy variables, including the names of levels 2, 3, ..., k.
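A sketch of the ordinal case (again with made-up variable names), where a predictor with k = 3 ordered levels yields k – 1 = 2 dummy variables:

    rng(1)
    n = 100;
    sz = categorical(randi(3,n,1),1:3,{'small','medium','large'},'Ordinal',true);
    tbl = table(randn(n,1), sz, categorical(randi(2,n,1),1:2,{'no','yes'}), ...
        'VariableNames',{'x1','x2','y'});
    Mdl = fitcsvm(tbl,'y');
    Mdl.ExpandedPredictorNames        % 1-by-3: x1 plus two dummy variables for x2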
All solvers implement L1 soft-margin minimization.
For one-class learning, the software estimates the Lagrange multipliers, α1,...,αn, such that

$\sum_{j=1}^{n} \alpha_j = n\nu$,

where ν is the value of the Nu name-value argument.
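You can check this relationship directly on a trained one-class model; a sketch (the sum should hold up to solver tolerance):

    load fisheriris
    nu = 0.25;
    n = size(meas,1);
    Mdl = fitcsvm(meas,ones(n,1),'Nu',nu,'KernelScale','auto');
    sum(Mdl.Alpha)                    % approximately n*nu = 37.5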