fitcnb - train multiclass naive bayes model

description

mdl = fitcnb(tbl,responsevarname) returns a multiclass naive bayes model (mdl), trained by the predictors in table tbl and class labels in the variable tbl.responsevarname.

mdl = fitcnb(tbl,formula) returns a multiclass naive bayes model (mdl), trained by the predictors in table tbl. formula is an explanatory model of the response and a subset of predictor variables in tbl used to fit mdl.

mdl = fitcnb(tbl,y) returns a multiclass naive bayes model (mdl), trained by the predictors in the table tbl and class labels in the array y.

mdl = fitcnb(x,y) returns a multiclass naive bayes model (mdl), trained by predictors x and class labels y.

mdl = fitcnb(___,name,value) returns a naive bayes classifier with additional options specified by one or more name,value pair arguments, using any of the previous syntaxes. for example, you can specify a distribution to model the data, prior probabilities for the classes, or the kernel smoothing window bandwidth.

examples

load fisher's iris data set.

load fisheriris
x = meas(:,3:4);
y = species;
tabulate(y)
       value    count   percent
      setosa       50     33.33%
  versicolor       50     33.33%
   virginica       50     33.33%

the software can classify data with more than two classes using naive bayes methods.

train a naive bayes classifier. it is good practice to specify the class order.

mdl = fitcnb(x,y,'classnames',{'setosa','versicolor','virginica'})
mdl = 
  classificationnaivebayes
              responsename: 'y'
     categoricalpredictors: []
                classnames: {'setosa'  'versicolor'  'virginica'}
            scoretransform: 'none'
           numobservations: 150
         distributionnames: {'normal'  'normal'}
    distributionparameters: {3x2 cell}
  properties, methods

mdl is a trained classificationnaivebayes classifier.

by default, the software models the predictor distribution within each class using a gaussian distribution having some mean and standard deviation. use dot notation to display the parameters of a particular gaussian fit, e.g., display the fit for the first feature within setosa.

setosaindex = strcmp(mdl.classnames,'setosa');
estimates = mdl.distributionparameters{setosaindex,1}
estimates = 2×1
    1.4620
    0.1737

the mean is 1.4620 and the standard deviation is 0.1737.

plot the gaussian contours.

figure
gscatter(x(:,1),x(:,2),y);
h = gca;
cxlim = h.xlim;
cylim = h.ylim;
hold on
params = cell2mat(mdl.distributionparameters); 
mu = params(2*(1:3)-1,1:2); % extract the means
sigma = zeros(2,2,3);
for j = 1:3
    sigma(:,:,j) = diag(params(2*j,:)).^2; % create diagonal covariance matrix
    xlim = mu(j,1) + 4*[-1 1]*sqrt(sigma(1,1,j));
    ylim = mu(j,2) + 4*[-1 1]*sqrt(sigma(2,2,j));
    f = @(x,y) arrayfun(@(x0,y0) mvnpdf([x0 y0],mu(j,:),sigma(:,:,j)),x,y);
    fcontour(f,[xlim ylim]) % draw contours for the multivariate normal distributions 
end
h.xlim = cxlim;
h.ylim = cylim;
title('naive bayes classifier -- fisher''s iris data')
xlabel('petal length (cm)')
ylabel('petal width (cm)')
legend('setosa','versicolor','virginica')
hold off

(figure) naive bayes classifier -- fisher's iris data: scatter plot of petal length versus petal width grouped by species, with gaussian contours for each class.

you can change the default distribution using the name-value pair argument 'distributionnames'. for example, if some predictors are categorical, then you can specify that they are multivariate, multinomial random variables using 'distributionnames','mvmn'.
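
for illustration, here is a minimal sketch (not part of the original example) that appends a discretized copy of petal width to x and models it with 'mvmn' while keeping normal densities for the other predictors; the rounding step exists only to manufacture a categorical-like predictor.

xmix = [x round(x(:,2)*2)/2];   % append a discretized copy of petal width (illustration only)
mdlmix = fitcnb(xmix,y, ...
    'ClassNames',{'setosa','versicolor','virginica'}, ...
    'DistributionNames',{'normal','normal','mvmn'});
mdlmix.DistributionNames        % confirms the per-predictor distributions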

construct a naive bayes classifier for fisher's iris data set. also, specify prior probabilities during training.

load fisher's iris data set.

load fisheriris
x = meas;
y = species;
classnames = {'setosa','versicolor','virginica'}; % class order

x is a numeric matrix that contains four petal measurements for 150 irises. y is a cell array of character vectors that contains the corresponding iris species.

by default, the prior class probability distribution is the relative frequency distribution of the classes in the data set. in this case the prior probability is 33% for each species. however, suppose you know that in the population 50% of the irises are setosa, 20% are versicolor, and 30% are virginica. you can incorporate this information by specifying this distribution as a prior probability during training.

train a naive bayes classifier. specify the class order and prior class probability distribution.

prior = [0.5 0.2 0.3];
mdl = fitcnb(x,y,'classnames',classnames,'prior',prior)
mdl = 
  classificationnaivebayes
              responsename: 'y'
     categoricalpredictors: []
                classnames: {'setosa'  'versicolor'  'virginica'}
            scoretransform: 'none'
           numobservations: 150
         distributionnames: {'normal'  'normal'  'normal'  'normal'}
    distributionparameters: {3x4 cell}
  properties, methods

mdl is a trained classificationnaivebayes classifier, and some of its properties appear in the command window. the software treats the predictors as independent given a class, and, by default, fits them using normal distributions.

the naive bayes algorithm does not use the prior class probabilities during training. therefore, you can specify prior class probabilities after training using dot notation. for example, suppose that you want to see the difference in performance between a model that uses the default prior class probabilities and a model that uses a different prior.

create a new naive bayes model based on mdl, and specify that the prior class probability distribution is an empirical class distribution.

defaultpriormdl = mdl;
freqdist = cell2table(tabulate(y));
defaultpriormdl.prior = freqdist{:,3};

the software normalizes the prior class probabilities to sum to 1.

estimate the cross-validation error for both models using 10-fold cross-validation.

rng(1); % for reproducibility
defaultcvmdl = crossval(defaultpriormdl);
defaultloss = kfoldloss(defaultcvmdl)
defaultloss = 0.0533
cvmdl = crossval(mdl);
loss = kfoldloss(cvmdl)
loss = 0.0340

mdl performs better than defaultpriormdl.

load fisher's iris data set.

load fisheriris
x = meas;
y = species;

train a naive bayes classifier using every predictor. it is good practice to specify the class order.

mdl1 = fitcnb(x,y,...
    'classnames',{'setosa','versicolor','virginica'})
mdl1 = 
  classificationnaivebayes
              responsename: 'y'
     categoricalpredictors: []
                classnames: {'setosa'  'versicolor'  'virginica'}
            scoretransform: 'none'
           numobservations: 150
         distributionnames: {'normal'  'normal'  'normal'  'normal'}
    distributionparameters: {3x4 cell}
  properties, methods
mdl1.distributionparameters
ans=3×4 cell array
    {2x1 double}    {2x1 double}    {2x1 double}    {2x1 double}
    {2x1 double}    {2x1 double}    {2x1 double}    {2x1 double}
    {2x1 double}    {2x1 double}    {2x1 double}    {2x1 double}
mdl1.distributionparameters{1,2}
ans = 2×1
    3.4280
    0.3791

by default, the software models the predictor distribution within each class as a gaussian with some mean and standard deviation. there are four predictors and three class levels. each cell in mdl1.distributionparameters corresponds to a numeric vector containing the mean and standard deviation of each distribution, e.g., the mean and standard deviation for setosa iris sepal widths are 3.4280 and 0.3791, respectively.

estimate the confusion matrix for mdl1.

islabels1 = resubpredict(mdl1);
confusionmat1 = confusionchart(y,islabels1);

(figure) confusion matrix chart for mdl1.

element (j, k) of the confusion matrix chart represents the number of observations that the software classifies as k, but are truly in class j according to the data.

retrain the classifier using the gaussian distribution for predictors 1 and 2 (the sepal lengths and widths), and the default normal kernel density for predictors 3 and 4 (the petal lengths and widths).

mdl2 = fitcnb(x,y,...
    'distributionnames',{'normal','normal','kernel','kernel'},...
    'classnames',{'setosa','versicolor','virginica'});
mdl2.distributionparameters{1,2}
ans = 2×1
    3.4280
    0.3791

the software does not train parameters for the kernel density; rather, it chooses an optimal width. however, you can specify a width using the 'width' name-value pair argument.
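
as a hedged sketch, you could retrain the model with a fixed kernel width instead of the automatically chosen one; the value 0.5 below is arbitrary and only for illustration.

mdl3 = fitcnb(x,y, ...
    'DistributionNames',{'normal','normal','kernel','kernel'}, ...
    'ClassNames',{'setosa','versicolor','virginica'}, ...
    'Width',0.5);   % fixed kernel smoothing window width (arbitrary choice)
mdl3.Width          % kernel smoothing window widths stored in the trained model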

estimate the confusion matrix for mdl2.

islabels2 = resubpredict(mdl2);
confusionmat2 = confusionchart(y,islabels2);

(figure) confusion matrix chart for mdl2.

based on the confusion matrices, the two classifiers perform similarly in the training sample.

load fisher's iris data set.

load fisheriris
x = meas;
y = species;
rng(1); % for reproducibility

train and cross-validate a naive bayes classifier using the default options and k-fold cross-validation. it is good practice to specify the class order.

cvmdl1 = fitcnb(x,y,...
    'classnames',{'setosa','versicolor','virginica'},...
    'crossval','on');

by default, the software models the predictor distribution within each class as a gaussian with some mean and standard deviation. cvmdl1 is a classificationpartitionedmodel model.

create a default naive bayes binary classifier template, and train an error-correcting, output codes multiclass model.

t = templatenaivebayes();
cvmdl2 = fitcecoc(x,y,'crossval','on','learners',t);

cvmdl2 is a classificationpartitionedecoc model. you can specify options for the naive bayes binary learners using the same name-value pair arguments as for fitcnb.

compare the out-of-sample k-fold classification error (proportion of misclassified observations).

classerr1 = kfoldloss(cvmdl1,'lossfun','classiferr')
classerr1 = 0.0533
classerr2 = kfoldloss(cvmdl2,'lossfun','classiferr')
classerr2 = 0.0467

cvmdl2 has a lower generalization error.

some spam filters classify an incoming email as spam based on how many times a word or punctuation (called tokens) occurs in an email. the predictors are the frequencies of particular words or punctuations in an email. therefore, the predictors compose multinomial random variables.

this example illustrates classification using naive bayes and multinomial predictors.

create training data

suppose you observed 1000 emails and classified them as spam or not spam. do this by randomly assigning -1 or 1 to y for each email.

n = 1000;                       % sample size
rng(1);                         % for reproducibility
y = randsample([-1 1],n,true);  % random labels

to build the predictor data, suppose that there are five tokens in the vocabulary, and 20 observed tokens per email. generate predictor data from the five tokens by drawing random, multinomial deviates. the relative frequencies for tokens corresponding to spam emails should differ from emails that are not spam.

tokenprobs = [0.2 0.3 0.1 0.15 0.25;...
    0.4 0.1 0.3 0.05 0.15];             % token relative frequencies  
tokensperemail = 20;                    % fixed for convenience
x = zeros(n,5);
x(y == 1,:) = mnrnd(tokensperemail,tokenprobs(1,:),sum(y == 1));
x(y == -1,:) = mnrnd(tokensperemail,tokenprobs(2,:),sum(y == -1));

train the classifier

train a naive bayes classifier. specify that the predictors are multinomial.

mdl = fitcnb(x,y,'distributionnames','mn');

mdl is a trained classificationnaivebayes classifier.

assess the in-sample performance of mdl by estimating the misclassification error.

isgenrate = resubloss(mdl,'lossfun','classiferr')
isgenrate = 0.0200

the in-sample misclassification rate is 2%.

create new data

randomly generate deviates that represent a new batch of emails.

newn = 500;
newy = randsample([-1 1],newn,true);
newx = zeros(newn,5);
newx(newy == 1,:) = mnrnd(tokensperemail,tokenprobs(1,:),...
    sum(newy == 1));
newx(newy == -1,:) = mnrnd(tokensperemail,tokenprobs(2,:),...
    sum(newy == -1));

assess classifier performance

classify the new emails using the trained naive bayes classifier mdl, and determine whether the algorithm generalizes.

oosgenrate = loss(mdl,newx,newy)
oosgenrate = 0.0261

the out-of-sample misclassification rate is 2.6%, indicating that the classifier generalizes fairly well.

this example shows how to use the optimizehyperparameters name-value pair to minimize cross-validation loss in a naive bayes classifier using fitcnb. the example uses fisher's iris data.

load fisher's iris data.

load fisheriris
x = meas;
y = species;
classnames = {'setosa','versicolor','virginica'};

optimize the classification using the 'auto' parameters.

for reproducibility, set the random seed and use the 'expected-improvement-plus' acquisition function.

rng default
mdl = fitcnb(x,y,'classnames',classnames,'optimizehyperparameters','auto',...
    'hyperparameteroptimizationoptions',struct('acquisitionfunctionname',...
    'expected-improvement-plus'))
warning: it is recommended that you first standardize all numeric predictors when optimizing the naive bayes 'width' parameter. ignore this warning if you have done that.
|=====================================================================================================|
| iter | eval   | objective   | objective   | bestsofar   | bestsofar   | distribution-|        width |
|      | result |             | runtime     | (observed)  | (estim.)    | names        |              |
|=====================================================================================================|
|    1 | best   |    0.053333 |     0.87175 |    0.053333 |    0.053333 |       normal |            - |
|    2 | best   |    0.046667 |     0.72209 |    0.046667 |    0.049998 |       kernel |      0.11903 |
|    3 | accept |    0.053333 |     0.24593 |    0.046667 |    0.046667 |       normal |            - |
|    4 | accept |    0.086667 |     0.67245 |    0.046667 |    0.046668 |       kernel |       2.4506 |
|    5 | accept |    0.046667 |     0.79236 |    0.046667 |    0.046663 |       kernel |      0.10449 |
|    6 | accept |    0.073333 |      1.1939 |    0.046667 |    0.046665 |       kernel |     0.025044 |
|    7 | accept |    0.046667 |     0.58352 |    0.046667 |    0.046655 |       kernel |      0.27647 |
|    8 | accept |    0.046667 |     0.82362 |    0.046667 |    0.046647 |       kernel |       0.2031 |
|    9 | accept |        0.06 |     0.77979 |    0.046667 |    0.046658 |       kernel |      0.44271 |
|   10 | accept |    0.046667 |     0.59301 |    0.046667 |    0.046618 |       kernel |       0.2412 |
|   11 | accept |    0.046667 |      1.2124 |    0.046667 |    0.046619 |       kernel |     0.071925 |
|   12 | accept |    0.046667 |     0.75483 |    0.046667 |    0.046612 |       kernel |     0.083459 |
|   13 | accept |    0.046667 |      0.5243 |    0.046667 |    0.046603 |       kernel |      0.15661 |
|   14 | accept |    0.046667 |     0.61396 |    0.046667 |    0.046607 |       kernel |      0.25613 |
|   15 | accept |    0.046667 |     0.50218 |    0.046667 |    0.046606 |       kernel |      0.17776 |
|   16 | accept |    0.046667 |     0.53691 |    0.046667 |    0.046606 |       kernel |      0.13632 |
|   17 | accept |    0.046667 |      1.1292 |    0.046667 |    0.046606 |       kernel |     0.077598 |
|   18 | accept |    0.046667 |     0.70306 |    0.046667 |    0.046626 |       kernel |      0.25646 |
|   19 | accept |    0.046667 |      1.0349 |    0.046667 |    0.046626 |       kernel |     0.093584 |
|   20 | accept |    0.046667 |      2.0454 |    0.046667 |    0.046627 |       kernel |     0.061602 |
|=====================================================================================================|
| iter | eval   | objective   | objective   | bestsofar   | bestsofar   | distribution-|        width |
|      | result |             | runtime     | (observed)  | (estim.)    | names        |              |
|=====================================================================================================|
|   21 | accept |    0.046667 |      2.6574 |    0.046667 |    0.046627 |       kernel |     0.066532 |
|   22 | accept |    0.093333 |      1.7453 |    0.046667 |    0.046618 |       kernel |       5.8968 |
|   23 | accept |    0.046667 |     0.58472 |    0.046667 |    0.046619 |       kernel |     0.067045 |
|   24 | accept |    0.046667 |     0.42848 |    0.046667 |     0.04663 |       kernel |      0.25281 |
|   25 | accept |    0.046667 |     0.63884 |    0.046667 |     0.04663 |       kernel |       0.1473 |
|   26 | accept |    0.046667 |     0.73108 |    0.046667 |    0.046631 |       kernel |      0.17211 |
|   27 | accept |    0.046667 |      0.5766 |    0.046667 |    0.046631 |       kernel |      0.12457 |
|   28 | accept |    0.046667 |     0.67273 |    0.046667 |    0.046631 |       kernel |     0.066659 |
|   29 | accept |    0.046667 |     0.81902 |    0.046667 |    0.046631 |       kernel |       0.1081 |
|   30 | accept |        0.08 |     0.49142 |    0.046667 |    0.046628 |       kernel |       1.1048 |

(figure) minimum objective versus number of function evaluations, showing the observed and estimated minimum objective.

(figure) objective function model over distributionnames and width, showing the observed points, model mean, next point, and model minimum feasible point.

__________________________________________________________
optimization completed.
maxobjectiveevaluations of 30 reached.
total function evaluations: 30
total elapsed time: 61.9326 seconds
total objective function evaluation time: 25.6812
best observed feasible point:
    distributionnames     width 
    _________________    _______
         kernel          0.11903
observed objective function value = 0.046667
estimated objective function value = 0.046667
function evaluation time = 0.72209
best estimated feasible point (according to models):
    distributionnames     width 
    _________________    _______
         kernel          0.25613
estimated objective function value = 0.046628
estimated function evaluation time = 0.61585
mdl = 
  classificationnaivebayes
                         responsename: 'y'
                categoricalpredictors: []
                           classnames: {'setosa'  'versicolor'  'virginica'}
                       scoretransform: 'none'
                      numobservations: 150
    hyperparameteroptimizationresults: [1x1 bayesianoptimization]
                    distributionnames: {'kernel'  'kernel'  'kernel'  'kernel'}
               distributionparameters: {3x4 cell}
                               kernel: {'normal'  'normal'  'normal'  'normal'}
                              support: {'unbounded'  'unbounded'  'unbounded'  'unbounded'}
                                width: [3x4 double]
  properties, methods

input arguments

sample data used to train the model, specified as a table. each row of tbl corresponds to one observation, and each column corresponds to one predictor variable. optionally, tbl can contain one additional column for the response variable. multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

  • if tbl contains the response variable, and you want to use all remaining variables in tbl as predictors, then specify the response variable by using responsevarname.

  • if tbl contains the response variable, and you want to use only a subset of the remaining variables in tbl as predictors, then specify a formula by using formula.

  • if tbl does not contain the response variable, then specify a response variable by using y. the length of the response variable and the number of rows in tbl must be equal.

response variable name, specified as the name of a variable in tbl.

you must specify responsevarname as a character vector or string scalar. for example, if the response variable y is stored as tbl.y, then specify it as "y". otherwise, the software treats all columns of tbl, including y, as predictors when training the model.

the response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. if y is a character array, then each element of the response variable must correspond to one row of the array.

a good practice is to specify the order of the classes by using the classnames name-value argument.

data types: char | string

explanatory model of the response variable and a subset of the predictor variables, specified as a character vector or string scalar in the form "y~x1+x2+x3". in this form, y represents the response variable, and x1, x2, and x3 represent the predictor variables.

to specify a subset of variables in tbl as predictors for training the model, use a formula. if you specify a formula, then the software does not use any variables in tbl that do not appear in formula.

the variable names in the formula must be both variable names in tbl (tbl.properties.variablenames) and valid matlab® identifiers. you can verify the variable names in tbl by using the isvarname function. if the variable names are not valid, then you can convert them by using the matlab.lang.makevalidname function.

data types: char | string
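
a minimal sketch of the formula syntax follows; the table and variable names here are constructed for illustration and are not part of the shipped data set.

load fisheriris
tbl = array2table(meas, ...
    'VariableNames',{'sepallength','sepalwidth','petallength','petalwidth'});
tbl.species = species;   % add the response variable to the table
mdl = fitcnb(tbl,'species ~ petallength + petalwidth');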

class labels to which the naive bayes classifier is trained, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. each element of y defines the class membership of the corresponding row of x. y supports k class levels.

if y is a character array, then each row must correspond to one class label.

the length of y and the number of rows of x must be equivalent.

data types: categorical | char | string | logical | single | double | cell

predictor data, specified as a numeric matrix.

each row of x corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature).

the length of y and the number of rows of x must be equivalent.

data types: double

note:

the software treats nan, empty character vector (''), empty string (""), <missing>, and <undefined> elements as missing data values.

  • if y contains missing values, then the software removes them and the corresponding rows of x.

  • if x contains any rows composed entirely of missing values, then the software removes those rows and the corresponding elements of y.

  • if x contains missing values and you set 'distributionnames','mn', then the software removes those rows of x and the corresponding elements of y.

  • if a predictor is not represented in a class, that is, if all of its values are nan within a class, then the software returns an error.

removing rows of x and corresponding elements of y decreases the effective training or cross-validation sample size.

name-value arguments

specify optional pairs of arguments as name1=value1,...,namen=valuen, where name is the argument name and value is the corresponding value. name-value arguments must appear after other arguments, but the order of the pairs does not matter.

before r2021a, use commas to separate each name and value, and enclose name in quotes.

example: 'distributionnames','mn','prior','uniform','kswidth',0.5 specifies that the data distribution is multinomial, the prior probabilities for all classes are equal, and the kernel smoothing window bandwidth for all classes is 0.5 units.

note

you cannot use any cross-validation name-value argument together with the 'optimizehyperparameters' name-value argument. you can modify the cross-validation for 'optimizehyperparameters' only by using the 'hyperparameteroptimizationoptions' name-value argument.

naive bayes options

data distributions fitcnb uses to model the data, specified as the comma-separated pair consisting of 'distributionnames' and a character vector or string scalar, a string array, or a cell array of character vectors with values from this table.

  • 'kernel' : kernel smoothing density estimate.

  • 'mn' : multinomial distribution. if you specify 'mn', then all features are components of a multinomial distribution. therefore, you cannot include 'mn' as an element of a string array or a cell array of character vectors. for details, see algorithms.

  • 'mvmn' : multivariate multinomial distribution. for details, see algorithms.

  • 'normal' : normal (gaussian) distribution.

if you specify a character vector or string scalar, then the software models all the features using that distribution. if you specify a 1-by-p string array or cell array of character vectors, then the software models feature j using the distribution in element j of the array.

by default, the software sets all predictors specified as categorical predictors (using the categoricalpredictors name-value pair argument) to 'mvmn'. otherwise, the default distribution is 'normal'.

you must specify that at least one predictor has distribution 'kernel' to additionally specify kernel, support, or width.

example: 'distributionnames','mn'

example: 'distributionnames',{'kernel','normal','kernel'}

kernel smoother type, specified as the comma-separated pair consisting of 'kernel' and a character vector or string scalar, a string array, or a cell array of character vectors.

this table summarizes the available kernel smoother types. let $I\{u\}$ denote the indicator function.

  • 'box' : box (uniform) kernel, $f(x) = 0.5\,I\{|x| \le 1\}$

  • 'epanechnikov' : epanechnikov kernel, $f(x) = 0.75\,(1 - x^2)\,I\{|x| \le 1\}$

  • 'normal' : gaussian kernel, $f(x) = \frac{1}{\sqrt{2\pi}}\exp(-0.5x^2)$

  • 'triangle' : triangular kernel, $f(x) = (1 - |x|)\,I\{|x| \le 1\}$

if you specify a 1-by-p string array or cell array, with each element of the array containing any value in the table, then the software trains the classifier using the kernel smoother type in element j for feature j in x. the software ignores elements of kernel not corresponding to a predictor whose distribution is 'kernel'.

you must specify that at least one predictor has distribution 'kernel' to additionally specify kernel, support, or width.

example: 'kernel',{'epanechnikov','normal'}

kernel smoothing density support, specified as the comma-separated pair consisting of 'support' and 'positive', 'unbounded', a string array, a cell array, or a numeric row vector. the software applies the kernel smoothing density to the specified region.

this table summarizes the available options for setting the kernel smoothing density region.

  • 1-by-2 numeric row vector : for example, [l,u], where l and u are the finite lower and upper bounds, respectively, for the density support.

  • 'positive' : the density support is all positive real values.

  • 'unbounded' : the density support is all real values.

if you specify a 1-by-p string array or cell array, with each element in the string array containing any text value in the table and each element in the cell array containing any value in the table, then the software trains the classifier using the kernel support in element j for feature j in x. the software ignores elements not corresponding to a predictor whose distribution is 'kernel'.

you must specify that at least one predictor has distribution 'kernel' to additionally specify kernel, support, or width.

example: 'kssupport',{[-10,20],'unbounded'}

data types: char | string | cell | double

kernel smoothing window width, specified as the comma-separated pair consisting of 'width' and a matrix of numeric values, numeric column vector, numeric row vector, or scalar.

suppose there are k class levels and p predictors. this table summarizes the available options for setting the kernel smoothing window width.

  • k-by-p matrix of numeric values : element (k,j) specifies the width for predictor j in class k.

  • k-by-1 numeric column vector : element k specifies the width for all predictors in class k.

  • 1-by-p numeric row vector : element j specifies the width in all class levels for predictor j.

  • scalar : specifies the bandwidth for all features in all classes.

by default, the software selects a default width automatically for each combination of predictor and class by using a value that is optimal for a gaussian distribution. if you specify width and it contains nans, then the software selects widths for the elements containing nans.

you must specify that at least one predictor has distribution 'kernel' to additionally specify kernel, support, or width.

example: 'width',[nan nan]

data types: double | struct

cross-validation options

cross-validation flag, specified as the comma-separated pair consisting of 'crossval' and 'on' or 'off'.

if you specify 'on', then the software implements 10-fold cross-validation.

to override this cross-validation setting, use one of these name-value pair arguments: cvpartition, holdout, kfold, or leaveout. to create a cross-validated model, you can use one cross-validation name-value pair argument at a time only.

alternatively, cross-validate later by passing mdl to crossval.

example: 'crossval','on'

cross-validation partition, specified as a cvpartition partition object created by cvpartition. the partition object specifies the type of cross-validation and the indexing for the training and validation sets.

to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.

example: suppose you create a random partition for 5-fold cross-validation on 500 observations by using cvp = cvpartition(500,'kfold',5). then, you can specify the cross-validated model by using 'cvpartition',cvp.
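
a minimal sketch of that workflow, using the iris data for concreteness (a stratified partition on the class labels is one reasonable choice):

load fisheriris
cvp = cvpartition(species,'KFold',5);   % stratified 5-fold partition
cvmdl = fitcnb(meas,species,'CVPartition',cvp);
kfoldloss(cvmdl)                        % cross-validated classification error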

fraction of the data used for holdout validation, specified as a scalar value in the range (0,1). if you specify 'holdout',p, then the software completes these steps:

  1. randomly select and reserve p*100% of the data as validation data, and train the model using the rest of the data.

  2. store the compact, trained model in the trained property of the cross-validated model.

to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.

example: 'holdout',0.1

data types: double | single

number of folds to use in a cross-validated model, specified as a positive integer value greater than 1. if you specify 'kfold',k, then the software completes these steps:

  1. randomly partition the data into k sets.

  2. for each set, reserve the set as validation data, and train the model using the other k – 1 sets.

  3. store the k compact, trained models in a k-by-1 cell vector in the trained property of the cross-validated model.

to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.

example: 'kfold',5

data types: single | double

leave-one-out cross-validation flag, specified as 'on' or 'off'. if you specify 'leaveout','on', then for each of the n observations (where n is the number of observations, excluding missing observations, specified in the numobservations property of the model), the software completes these steps:

  1. reserve the one observation as validation data, and train the model using the other n – 1 observations.

  2. store the n compact, trained models in an n-by-1 cell vector in the trained property of the cross-validated model.

to create a cross-validated model, you can specify only one of these four name-value arguments: cvpartition, holdout, kfold, or leaveout.

example: 'leaveout','on'

other classification options

categorical predictors list, specified as one of the values in this table.

  • vector of positive integers : each entry in the vector is an index value indicating that the corresponding predictor is categorical. the index values are between 1 and p, where p is the number of predictors used to train the model. if fitcnb uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. the categoricalpredictors values do not count the response variable, observation weights variable, or any other variables that the function does not use.

  • logical vector : a true entry means that the corresponding predictor is categorical. the length of the vector is p.

  • character matrix : each row of the matrix is the name of a predictor variable. the names must match the entries in predictornames. pad the names with extra blanks so each row of the character matrix has the same length.

  • string array or cell array of character vectors : each element in the array is the name of a predictor variable. the names must match the entries in predictornames.

  • "all" : all predictors are categorical.

by default, if the predictor data is in a table (tbl), fitcnb assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. if the predictor data is a matrix (x), fitcnb assumes that all predictors are continuous. to identify any other predictors as categorical predictors, specify them by using the categoricalpredictors name-value argument.

for the identified categorical predictors, fitcnb uses multivariate multinomial distributions. for details, see distributionnames and algorithms.

example: 'categoricalpredictors','all'

data types: single | double | logical | char | string | cell

names of classes to use for training, specified as a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. classnames must have the same data type as the response variable in tbl or y.

if classnames is a character array, then each element must correspond to one row of the array.

use classnames to:

  • specify the order of the classes during training.

  • specify the order of any input or output argument dimension that corresponds to the class order. for example, use classnames to specify the order of the dimensions of cost or the column order of classification scores returned by predict.

  • select a subset of classes for training. for example, suppose that the set of all distinct class names in y is ["a","b","c"]. to train the model using observations from classes "a" and "c" only, specify "classnames",["a","c"].

the default value for classnames is the set of all distinct class names in the response variable in tbl or y.

example: "classnames",["b","g"]

data types: categorical | char | string | logical | single | double | cell

cost of misclassification of a point, specified as the comma-separated pair consisting of 'cost' and one of the following:

  • square matrix, where cost(i,j) is the cost of classifying a point into class j if its true class is i (i.e., the rows correspond to the true class and the columns correspond to the predicted class). to specify the class order for the corresponding rows and columns of cost, additionally specify the classnames name-value pair argument.

  • structure s having two fields: s.classnames containing the group names as a variable of the same type as y, and s.classificationcosts containing the cost matrix.

the default is cost(i,j)=1 if i~=j, and cost(i,j)=0 if i=j.

example: 'cost',struct('classnames',{{'b','g'}},'classificationcosts',[0 0.5; 1 0])

data types: single | double | struct
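
a minimal sketch of a square cost matrix for the three iris classes; the penalty values are arbitrary and only for illustration.

load fisheriris
c = ones(3) - eye(3);   % default cost: 1 off the diagonal, 0 on the diagonal
c(3,1:2) = 2;           % make misclassifying a true 'virginica' twice as costly
mdl = fitcnb(meas,species, ...
    'ClassNames',{'setosa','versicolor','virginica'},'Cost',c);
mdl.Cost                % the stored cost matrix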

predictor variable names, specified as a string array of unique names or cell array of unique character vectors. the functionality of predictornames depends on the way you supply the training data.

  • if you supply x and y, then you can use predictornames to assign names to the predictor variables in x.

    • the order of the names in predictornames must correspond to the column order of x. that is, predictornames{1} is the name of x(:,1), predictornames{2} is the name of x(:,2), and so on. also, size(x,2) and numel(predictornames) must be equal.

    • by default, predictornames is {'x1','x2',...}.

  • if you supply tbl, then you can use predictornames to choose which predictor variables to use in training. that is, fitcnb uses only the predictor variables in predictornames and the response variable during training.

    • predictornames must be a subset of tbl.properties.variablenames and cannot include the name of the response variable.

    • by default, predictornames contains the names of all predictor variables.

    • a good practice is to specify the predictors for training using either predictornames or formula, but not both.

example: "predictornames",["sepallength","sepalwidth","petallength","petalwidth"]

data types: string | cell

prior probabilities for each class, specified as the comma-separated pair consisting of 'prior' and a value in this table.

  • 'empirical' : the class prior probabilities are the class relative frequencies in y.

  • 'uniform' : all class prior probabilities are equal to 1/k, where k is the number of classes.

  • numeric vector : each element is a class prior probability. order the elements according to mdl.classnames or specify the order using the classnames name-value pair argument. the software normalizes the elements such that they sum to 1.

  • structure : a structure s with two fields. s.classnames contains the class names as a variable of the same type as y, and s.classprobs contains a vector of corresponding prior probabilities. the software normalizes the elements such that they sum to 1.

if you set values for both weights and prior, the weights are renormalized to add up to the value of the prior probability in the respective class.

example: 'prior','uniform'

data types: char | string | single | double | struct

response variable name, specified as a character vector or string scalar.

  • if you supply y, then you can use responsename to specify a name for the response variable.

  • if you supply responsevarname or formula, then you cannot use responsename.

example: "responsename","response"

data types: char | string

score transformation, specified as a character vector, string scalar, or function handle.

this table summarizes the available character vectors and string scalars.

  • "doublelogit" : 1/(1 + e^(-2x))

  • "invlogit" : log(x / (1 - x))

  • "ismax" : sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0

  • "logit" : 1/(1 + e^(-x))

  • "none" or "identity" : x (no transformation)

  • "sign" : -1 for x < 0, 0 for x = 0, 1 for x > 0

  • "symmetric" : 2x - 1

  • "symmetricismax" : sets the score for the class with the largest score to 1, and sets the scores for all other classes to -1

  • "symmetriclogit" : 2/(1 + e^(-x)) - 1

for a matlab function or a function you define, use its function handle for the score transform. the function handle must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

example: "scoretransform","logit"

data types: char | string | function_handle

observation weights, specified as the comma-separated pair consisting of 'weights' and a numeric vector of positive values or name of a variable in tbl. the software weighs the observations in each row of x or tbl with the corresponding value in weights. the size of weights must equal the number of rows of x or tbl.

if you specify the input data as a table tbl, then weights can be the name of a variable in tbl that contains a numeric vector. in this case, you must specify weights as a character vector or string scalar. for example, if the weights vector w is stored as tbl.w, then specify it as 'w'. otherwise, the software treats all columns of tbl, including w, as predictors or the response when training the model.

the software normalizes weights to sum up to the value of the prior probability in the respective class.

by default, weights is ones(n,1), where n is the number of observations in x or tbl.

data types: double | single | char | string
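
a minimal sketch of observation weights passed as a numeric vector; the upweighting of one class here is arbitrary and only for illustration.

load fisheriris
w = ones(numel(species),1);
w(strcmp(species,'setosa')) = 2;   % give 'setosa' observations twice the weight
mdl = fitcnb(meas,species,'Weights',w);
mdl.W(1:5)                         % normalized observation weights stored in the model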

hyperparameter optimization

parameters to optimize, specified as the comma-separated pair consisting of 'optimizehyperparameters' and one of the following:

  • 'none' — do not optimize.

  • 'auto' — use {'distributionnames','width'}.

  • 'all' — optimize all eligible parameters.

  • string array or cell array of eligible parameter names.

  • vector of optimizablevariable objects, typically the output of hyperparameters.

the optimization attempts to minimize the cross-validation loss (error) for fitcnb by varying the parameters. to control the cross-validation type and other aspects of the optimization, use the hyperparameteroptimizationoptions name-value pair.

note

the values of 'optimizehyperparameters' override any values you specify using other name-value arguments. for example, setting 'optimizehyperparameters' to 'auto' causes fitcnb to optimize hyperparameters corresponding to the 'auto' option and to ignore any specified values for the hyperparameters.

the eligible parameters for fitcnb are:

  • distributionnames : fitcnb searches among 'normal' and 'kernel'.

  • width : fitcnb searches among real values, by default log-scaled in the range [minpredictordiff/4,max(maxpredictorrange,minpredictordiff)].

  • kernel : fitcnb searches among 'normal', 'box', 'epanechnikov', and 'triangle'.

set nondefault parameters by passing a vector of optimizablevariable objects that have nondefault values. for example,

load fisheriris
params = hyperparameters('fitcnb',meas,species);
params(2).range = [1e-2,1e2];

pass params as the value of optimizehyperparameters.

by default, the iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. for the optimization and plots, the objective function is the misclassification rate. to control the iterative display, set the verbose field of the 'hyperparameteroptimizationoptions' name-value argument. to control the plots, set the showplots field of the 'hyperparameteroptimizationoptions' name-value argument.

for an example, see optimize naive bayes classifier.

example: 'auto'

options for optimization, specified as a structure. this argument modifies the effect of the optimizehyperparameters name-value argument. all fields in the structure are optional.

the structure has the following fields.

optimizer
  • 'bayesopt' — use bayesian optimization. internally, this setting calls bayesopt.
  • 'gridsearch' — use grid search with numgriddivisions values per dimension.
  • 'randomsearch' — search at random among maxobjectiveevaluations points.
'gridsearch' searches in a random order, using uniform sampling without replacement from the grid. after optimization, you can get a table in grid order by using the command sortrows(mdl.hyperparameteroptimizationresults).
default: 'bayesopt'

acquisitionfunctionname
  • 'expected-improvement-per-second-plus'
  • 'expected-improvement'
  • 'expected-improvement-plus'
  • 'expected-improvement-per-second'
  • 'lower-confidence-bound'
  • 'probability-of-improvement'
acquisition functions whose names include per-second do not yield reproducible results because the optimization depends on the runtime of the objective function. acquisition functions whose names include plus modify their behavior when they are overexploiting an area. for more details, see acquisition function types.
default: 'expected-improvement-per-second-plus'

maxobjectiveevaluations
maximum number of objective function evaluations.
default: 30 for 'bayesopt' and 'randomsearch', and the entire grid for 'gridsearch'

maxtime
time limit, specified as a positive real scalar. the time limit is in seconds, as measured by tic and toc. the run time can exceed maxtime because maxtime does not interrupt function evaluations.
default: inf

numgriddivisions
for 'gridsearch', the number of values in each dimension. the value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. this field is ignored for categorical variables.
default: 10

showplots
logical value indicating whether to show plots. if true, this field plots the best observed objective function value against the iteration number. if you use bayesian optimization (optimizer is 'bayesopt'), then this field also plots the best estimated objective function value. the best observed objective function values and best estimated objective function values correspond to the values in the bestsofar (observed) and bestsofar (estim.) columns of the iterative display, respectively. you can find these values in the properties objectiveminimumtrace and estimatedobjectiveminimumtrace of mdl.hyperparameteroptimizationresults. if the problem includes one or two optimization parameters for bayesian optimization, then showplots also plots a model of the objective function against the parameters.
default: true

saveintermediateresults
logical value indicating whether to save results when optimizer is 'bayesopt'. if true, this field overwrites a workspace variable named 'bayesoptresults' at each iteration. the variable is a bayesianoptimization object.
default: false

verbose
display at the command line:
  • 0 — no iterative display
  • 1 — iterative display
  • 2 — iterative display with extra information
for details, see the bayesopt verbose name-value argument.
default: 1

useparallel
logical value indicating whether to run bayesian optimization in parallel, which requires parallel computing toolbox™. due to the nonreproducibility of parallel timing, parallel bayesian optimization does not necessarily yield reproducible results.
default: false

repartition
logical value indicating whether to repartition the cross-validation at every iteration. if this field is false, the optimizer uses a single partition for the optimization. the setting true usually gives the most robust results because it takes partitioning noise into account. however, for good results, true requires at least twice as many function evaluations.
default: false

use no more than one of the following three options. if you do not specify any cross-validation field, the default is 'kfold',5.

  • cvpartition : a cvpartition object, as created by cvpartition.

  • holdout : a scalar in the range (0,1) representing the holdout fraction.

  • kfold : an integer greater than 1.

example: 'hyperparameteroptimizationoptions',struct('maxobjectiveevaluations',60)

data types: struct

output arguments

trained naive bayes classification model, returned as a model object or a cross-validated model object.

if you set any of the name-value pair arguments kfold, holdout, crossval, or cvpartition, then mdl is a classificationpartitionedmodel cross-validated model object. otherwise, mdl is a classificationnaivebayes model object.

to reference properties of mdl, use dot notation. for example, to access the estimated distribution parameters, enter mdl.distributionparameters.

more about

bag-of-tokens model

in the bag-of-tokens model, the value of predictor j is the nonnegative number of occurrences of token j in the observation. the number of categories (bins) in the multinomial model is the number of distinct tokens (number of predictors).

naive bayes

naive bayes is a classification algorithm that applies density estimation to the data.

the algorithm leverages bayes theorem, and (naively) assumes that the predictors are conditionally independent, given the class. although the assumption is usually violated in practice, naive bayes classifiers tend to yield posterior distributions that are robust to biased class density estimates, particularly where the posterior is 0.5 (the decision boundary) [1].

naive bayes classifiers assign observations to the most probable class (in other words, the maximum a posteriori decision rule). explicitly, the algorithm takes these steps:

  1. estimate the densities of the predictors within each class.

  2. model posterior probabilities according to bayes' rule. that is, for all k = 1,...,K,

    $$\hat{P}(Y = k \mid X_1,\dots,X_P) = \frac{\pi(Y = k)\,\prod_{j=1}^{P} P(X_j \mid Y = k)}{\sum_{k=1}^{K}\pi(Y = k)\,\prod_{j=1}^{P} P(X_j \mid Y = k)},$$

    where:

    • y is the random variable corresponding to the class index of an observation.

    • x1,...,xp are the random predictors of an observation.

    • π(y=k) is the prior probability that a class index is k.

  3. classify an observation by estimating the posterior probability for each class, and then assign the observation to the class yielding the maximum posterior probability.

if the predictors compose a multinomial distribution, then the posterior probability satisfies $\hat{P}(Y = k \mid X_1,\dots,X_P) \propto \pi(Y = k)\,P_{\mathrm{mn}}(X_1,\dots,X_P \mid Y = k)$, where $P_{\mathrm{mn}}(X_1,\dots,X_P \mid Y = k)$ is the probability mass function of a multinomial distribution.
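
to make the decision rule concrete, the following sketch computes the posterior for one observation by hand from fitted normal class-conditional densities and compares it with the scores returned by predict; this is an illustration of the formula above, not the internal implementation.

load fisheriris
mdl = fitcnb(meas,species);
xnew = meas(1,:);                              % one observation
k = numel(mdl.ClassNames);
logpost = log(mdl.Prior(:));                   % start from the log prior of each class
for c = 1:k
    for j = 1:size(meas,2)
        p = mdl.DistributionParameters{c,j};   % [mean; standard deviation]
        logpost(c) = logpost(c) + log(normpdf(xnew(j),p(1),p(2)));
    end
end
post = exp(logpost - max(logpost));
post = post/sum(post)                          % hand-computed posterior probabilities
[~,score] = predict(mdl,xnew)                  % posterior probabilities from predict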

tips

  • for classifying count-based data, such as the bag-of-tokens model, use the multinomial distribution (e.g., set 'distributionnames','mn').

  • after training a model, you can generate c/c++ code that predicts labels for new data. generating c/c++ code requires matlab coder™. for details, see introduction to code generation.

algorithms

  • if predictor variable j has a conditional normal distribution (see the distributionnames name-value argument), the software fits the distribution to the data by computing the class-specific weighted mean and the unbiased estimate of the weighted standard deviation. for each class k:

    • the weighted mean of predictor j is

      $$\bar{x}_{j|k} = \frac{\sum_{\{i:\,y_i = k\}} w_i x_{ij}}{\sum_{\{i:\,y_i = k\}} w_i},$$

      where $w_i$ is the weight for observation i. the software normalizes weights within a class such that they sum to the prior probability for that class.

    • the unbiased estimator of the weighted standard deviation of predictor j is

      $$s_{j|k} = \left[\frac{\sum_{\{i:\,y_i = k\}} w_i \left(x_{ij} - \bar{x}_{j|k}\right)^2}{z_{1|k} - \frac{z_{2|k}}{z_{1|k}}}\right]^{1/2},$$

      where $z_{1|k}$ is the sum of the weights within class k and $z_{2|k}$ is the sum of the squared weights within class k. (a code sketch after this list illustrates these estimates for the default unit weights.)

  • if all predictor variables compose a conditional multinomial distribution (you specify 'distributionnames','mn'), the software fits the distribution using the bag-of-tokens model. the software stores the probability that token j appears in class k in the property distributionparameters{k,j}. using additive smoothing [2], the estimated probability is

    $$P(\text{token } j \mid \text{class } k) = \frac{1 + c_{j|k}}{P + c_k},$$

    where:

    • $c_{j|k} = n_k \frac{\sum_{\{i:\,y_i = k\}} x_{ij} w_i}{\sum_{\{i:\,y_i = k\}} w_i}$, which is the weighted number of occurrences of token j in class k.

    • $n_k$ is the number of observations in class k.

    • $w_i$ is the weight for observation i. the software normalizes weights within a class such that they sum to the prior probability for that class.

    • $c_k = \sum_{j=1}^{P} c_{j|k}$, which is the total weighted number of occurrences of all tokens in class k.

  • if predictor variable j has a conditional multivariate multinomial distribution:

    1. the software collects a list of the unique levels, stores the sorted list in categoricallevels, and considers each level a bin. each predictor/class combination is a separate, independent multinomial random variable.

    2. for each class k, the software counts instances of each categorical level using the list stored in categoricallevels{j}.

    3. the software stores the probability that predictor j, in class k, has level l in the property distributionparameters{k,j}, for all levels in categoricallevels{j}. using additive smoothing [2], the estimated probability is

      $$P(\text{predictor } j = L \mid \text{class } k) = \frac{1 + m_{j|k}(L)}{m_j + m_k},$$

      where:

      • $m_{j|k}(L) = n_k \frac{\sum_{\{i:\,y_i = k\}} I\{x_{ij} = L\} w_i}{\sum_{\{i:\,y_i = k\}} w_i}$, which is the weighted number of observations for which predictor j equals l in class k.

      • $n_k$ is the number of observations in class k.

      • $I\{x_{ij} = L\} = 1$ if $x_{ij} = L$, and 0 otherwise.

      • $w_i$ is the weight for observation i. the software normalizes weights within a class such that they sum to the prior probability for that class.

      • $m_j$ is the number of distinct levels in predictor j.

      • $m_k$ is the weighted number of observations in class k.

  • if you specify the cost, prior, and weights name-value arguments, the output model object stores the specified values in the cost, prior, and w properties, respectively. the cost property stores the user-specified cost matrix as is. the prior and w properties store the prior probabilities and observation weights, respectively, after normalization.

  • the software uses the cost property for prediction, but not training. therefore, cost is not read-only; you can change the property value by using dot notation after creating the trained model.
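
as a check on the normal-distribution description above, the following sketch compares the stored distribution parameters with the ordinary per-class sample mean and standard deviation; with the default unit weights, the weighted formulas reduce to these sample statistics.

load fisheriris
mdl = fitcnb(meas,species);
idx = strcmp(species,mdl.ClassNames{1});   % observations in the first class
[mean(meas(idx,1)); std(meas(idx,1))]      % sample mean and std of predictor 1
mdl.DistributionParameters{1,1}            % stored [mean; std] for class 1, predictor 1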

references

[1] hastie, t., r. tibshirani, and j. friedman. the elements of statistical learning, second edition. ny: springer, 2008.

[2] manning, christopher d., prabhakar raghavan, and hinrich schütze. introduction to information retrieval, ny: cambridge university press, 2008.

version history

introduced in r2014b
