main content

compress machine learning model for memory-凯发k8网页登录

this example shows how to reduce the size of a machine learning model for deployment to memory-limited hardware. to demonstrate the model compression workflow, the example builds models for the acoustic scene classification (asc) task, which classifies environments from the sounds they produce. asc is a generic multiclass classification problem that is foundational for context awareness in devices, robots, and other applications [1].

assume that you want to build a model for hearing aids where the available memory size is 30 kb. first, simplify the multiclass asc task to a binary classification problem, and them perform these steps:

  • reduce the number of features by selecting important features.

  • optimize hyperparameters with coupled constraints, which limit the size of a machine learning model.

  • quantize model parameters.

for more details on optimizing hyperparameters to reduce the memory size, see more about.

load data

load the acousticscenes data set, and display the variables in the data set.

load("acousticscenes.mat")
whos
  name           size               bytes  class          attributes
  xeval        300x286             686400  double                   
  xtest        300x286             686400  double                   
  xtrain      1500x286            3432000  double                   
  yeval        300x1                 2102  categorical              
  ytest        300x1                 2102  categorical              
  ytrain      1500x1                 3302  categorical              

xtrain, xeval, and xtest contain features extracted from the tut acoustic scene data set using wavelet scattering. ytrain, yeval, and ytest contain acoustic scene labels of 15 different types for xtrain, xeval, and xtest, respectively. in this example, you use xtrain and ytrain to train models and xtest and ytest to test the accuracy of the trained models. during the optimization step, you use xeval and yeval as a holdout validation set.

the tut acoustic scene data set provides development data (tut-acoustic-scenes-2017-development [3]) and test data (tut-acoustic-scenes-2017-evaluation [4]). the development data provides a 4-fold cross-validation setup. xtrain and xeval are from the subsets of the training and evaluation sets (respectively) defined by the first fold of the cross-validation setup, and xtest is from the subset of the test data set. the example acoustic scene recognition using late fusion (audio toolbox) describes how you can obtain these variables from a subset of the tut acoustic scene data set.

normalize the data sets.

[xtrain,mu,sigma] = normalize(xtrain);
xeval = normalize(xeval,center=mu,scale=sigma);
xtest = normalize(xtest,center=mu,scale=sigma);

select classification model types

select types of classification models for this example by using the classification learner app.

  1. on the apps tab, open the apps gallery. then, in the machine learning and deep learning group, click classification learner.

  2. on the classification learner tab, in the file section, click new session and select from workspace. in the dialog box, specify ytrain as the response variable, and specify the variables in xtrain as predictors.

  3. in the models section of the app, click all. this option trains all the model presets available for your data set.

  4. in the train section, click train all and select train all.

you can compare trained models based on accuracy scores, visualize results by plotting class predictions, and check performance using the confusion matrix and roc curve. for more details on classification learner, see train classification models in classification learner app.

in this example, you work with these five model types:

  • bilayered neural network

  • linear discriminant

  • random subspace ensemble with discriminant analysis learners

  • linear svm

  • logistic regression

create a variable containing the model names.

mdlnames = ["bilayered nn","linear discriminant", ...
    "subspace discriminant","linear svm","logistic regression"]';

train multiclass classification models

train the five models using fitting functions at the command line, and then reduce the size of the trained models by using the compact function. the compact function discards information that is not necessary for prediction.

svm and logistic regression models support only binary classification. therefore, use the fitcecoc function to train a multiclass classification model with linear svm learners and a multiclass classification model with logistic regression learners. for the logistic regression model, use a templatelinear learner; in this case, you do not use the compact function because fitcecoc returns a compact model object (compactclassificationecoc).

rng("default") % for reproducibility
multimdls = cell(5,1);
% bilayered nn
multimdls{1} = compact(fitcnet(xtrain,ytrain,layersizes=[10 10]));
% linear discriminant
multimdls{2} = compact(fitcdiscr(xtrain,ytrain));
% subspace discriminant
multimdls{3} = compact(fitcensemble(xtrain,ytrain, ...
    method="subspace",learners="discriminant", ...
    numlearningcycles=30,npredtosample=25));
% linear svm
multimdls{4} = compact(fitcecoc(xtrain,ytrain));
% logistic regression
tlinear = templatelinear(learner="logistic");
multimdls{5} = fitcecoc(xtrain,ytrain,learners=tlinear);

specify the output display format as bank to display two digits after the decimal point.

format("bank")

test the models with the test data set using the helper function helpermdlmetrics.this function returns a table of model metrics including the model accuracy as a percentage and the model size in kb. the code for the helpermdlmetrics function appears at the end of this example.

multimdltbl = helpermdlmetrics(multimdls,xtest,ytest);
tbl1 = multimdltbl;
tbl1.properties.rownames = mdlnames;
disp(tbl1)
                             accuracy    model size
                             ________    __________
    bilayered nn              54.33         36.17  
    linear discriminant       53.33       2776.71  
    subspace discriminant     50.67        881.54  
    linear svm                34.33        901.90  
    logistic regression       50.00       1937.67  

the size of each model is more than 30 kb, and the accuracy value is approximately 50% for most models.

simplify problem as binary classification

for the hearing aid application, assume you only want to distinguish background sounds and sounds from specific sources, instead of classifying sounds into the 15 types included in the data set. group the types of sounds into two types (allaround and directional) by using the function.

allaround = ["beach","forest_path","park","office","home", ...
    "library","city_center","residential_area"];
directional = ["train","bus","car","tram","grocery_store", ...
    "metro_station","cafe/restaurant"];
ytrainmapped = mergecats(ytrain,allaround,"allaround");
ytrainmapped = mergecats(ytrainmapped,directional,"directional");
yevalmapped = mergecats(yeval,allaround,"allaround");
yevalmapped = mergecats(yevalmapped,directional,"directional");
ytestmapped = mergecats(ytest,allaround,"allaround");
ytestmapped = mergecats(ytestmapped,directional,"directional");

create a grouped scatter plot of the first two principal components to see whether the binary grouping works.

figure
[~,score] = pca(xtrain);
gscatter(score(:,1),score(:,2),ytrainmapped)
xlabel("first principal component")
ylabel("second principal component")

train binary classification models

train the models for the binary sound labels ytrainmapped. for the linear svm model, reduce the memory size by discarding the support vectors by using the function. the model can still predict new data using the linear predictor coefficients stored in the model property beta. for the logistic regression model, the fitclinear function returns a compact model that does not store the training data.

rng("default")
binarymdls = cell(5,1);
% bilayered nn
binarymdls{1} = compact(fitcnet(xtrain,ytrainmapped,layersizes=[10 10]));
% linear discriminant
binarymdls{2} = compact(fitcdiscr(xtrain,ytrainmapped));
% subspace discriminant
binarymdls{3} = compact(fitcensemble(xtrain,ytrainmapped, ...
    method="subspace",learners="discriminant",numlearningcycles=30,npredtosample=25));
% linear svm
binarymdls{4} = discardsupportvectors(compact(fitcsvm(xtrain,ytrainmapped)));
% logistic regression
binarymdls{5} = fitclinear(xtrain,ytrainmapped,learner="logistic");

test the binary classification models with the test data set ytestmapped.

binarymdltbl = helpermdlmetrics(binarymdls,xtest,ytestmapped);
tbl2 = table(multimdltbl,binarymdltbl);
tbl2.properties.rownames = mdlnames;
tbl2.properties.variablenames = ["multiclass","binary"];
disp(tbl2)
                                   multiclass                  binary        
                             accuracy    model size    accuracy    model size
                             ______________________    ______________________
    bilayered nn              54.33         36.17       99.33         31.89  
    linear discriminant       53.33       2776.71       98.00       1314.90  
    subspace discriminant     50.67        881.54       99.33        552.08  
    linear svm                34.33        901.90       97.00          8.74  
    logistic regression       50.00       1937.67       98.67         18.60  

the trained models accurately classify the acoustic scenes for the binary classification problem. the linear svm and logistic regression models are smaller than 30 kb.

train models with fewer features

you can make machine learning models smaller without losing too much accuracy by building models using only important features. xtrain, xtest, and xeval include 286 features. select 50 features by using the function.

idx = fscmrmr(xtrain,ytrainmapped);
xtrainselected = xtrain(:,idx(1:50));
xevalselected = xeval(:,idx(1:50));
xtestselected = xtest(:,idx(1:50));

train binary classification models using the selected features.

rng("default")
feat50binarymdls = cell(5,1);
% bilayered nn
feat50binarymdls{1} = compact(fitcnet(xtrainselected,ytrainmapped,layersizes=[10 10]));
% linear discriminant
feat50binarymdls{2} = compact(fitcdiscr(xtrainselected,ytrainmapped));
% subspace discriminant
feat50binarymdls{3} = compact(fitcensemble(xtrainselected,ytrainmapped, ...
    method="subspace",learners="discriminant",numlearningcycles=30,npredtosample=25));
% linear svm
feat50binarymdls{4} = discardsupportvectors(compact(fitcsvm(xtrainselected,ytrainmapped)));
% logistic regression
feat50binarymdls{5} = fitclinear(xtrainselected,ytrainmapped,learner="logistic");

test the models with the test data set ytestmapped.

feat50binarymdltbl = helpermdlmetrics(feat50binarymdls,xtestselected,ytestmapped);
tbl3 = table(multimdltbl,binarymdltbl,feat50binarymdltbl);
tbl3.properties.rownames = mdlnames;
tbl3.properties.variablenames = ["multiclass","binary","50 features"];
disp(tbl3)
                                   multiclass                  binary                 50 features      
                             accuracy    model size    accuracy    model size    accuracy    model size
                             ______________________    ______________________    ______________________
    bilayered nn              54.33         36.17       99.33         31.89       90.67         11.38  
    linear discriminant       53.33       2776.71       98.00       1314.90       95.33         51.70  
    subspace discriminant     50.67        881.54       99.33        552.08       91.33        541.91  
    linear svm                34.33        901.90       97.00          8.74       96.33          4.82  
    logistic regression       50.00       1937.67       98.67         18.60       97.00         12.18  

in addition to the linear svm and logistic regression models, the bilayered neural network model is also smaller than 30 kb. however, reducing the number of features causes the accuracy to decrease in the trained models.

restore the default display format.

format("default")

optimize neural network with coupled constraints

find optimal model hyperparameters while limiting the memory use of the models. the constraints depend on the type of machine learning model. for example, you can limit the number of support vectors for an svm model or limit the number of parameters in a neural network model. for more details on bayesian optimization and an example for an svm model, see . this example shows constraint-coupled optimization for a bilayered neural network model.

for constraint-coupled optimization, specify the hyperparameters to optimize and define a customized objective function. then, use the function to find the optimal hyperparameters based on the objective function.

first, get the default hyperparameters of the bilayered neural network model by using the function.

params_bilayerednet = hyperparameters("fitcnet",xtrainselected,ytrainmapped);

modify the first, third, and ninth hyperparameters, which correspond to numlayers, standardize, and layer_3_size, so that they are not optimized. in this way, you can build a bilayered model and use training data without standardization. the training data is already standardized.

params_bilayerednet(1).range = [1 2]; % numlayers
params_bilayerednet(1).optimize = false;
params_bilayerednet(3).optimize = false; % standardize
params_bilayerednet(9).optimize = false; % layer_3_size

use the customized objective function helperoptimizeconstrainedbilayer, which trains a bilayered neural network model using a given set of parameters for the training data set, and returns the loss for the holdout validation set. the code for the helperoptimizeconstrainedbilayer function appears at the end of this example. the function also accepts the upper limit for the number of weight parameters in the model and returns a constraint value. a positive constraint value indicates that the number of parameters is greater than the specified limit.

define a function handle fun that takes the hyperparameters and calls the helperoptimizeconstrainedbilayer function. specify the upper limit for the number of weight parameters as 300.

fun = @(params)helperoptimizeconstrainedbilayer(params,xtrainselected,ytrainmapped,xevalselected,yevalmapped,300);

when you call the bayesopt function, specify the objective function as fun and specify the hyperparameters as params_bilayerednet. also, specify numcoupledconstraints as 1 to indicate that the objective function has one coupled constraint. for reproducibility, set the random seed and use the expected-improvement-plus acquisition function.

rng("default")
resultnn = bayesopt(fun,params_bilayerednet, ...
    acquisitionfunctionname="expected-improvement-plus", ...
    numcoupledconstraints=1);
|==================================================================================================================================================|
| iter | eval   | objective   | objective   | bestsofar   | bestsofar   | constraint1  |  activations |       lambda | layer_1_size | layer_2_size |
|      | result |             | runtime     | (observed)  | (estim.)    |              |              |              |              |              |
|==================================================================================================================================================|
|    1 | infeas |    0.076667 |      3.8313 |         nan |    0.076667 |      2.4e 03 |         none |   7.6806e-06 |           15 |          115 |
|    2 | best   |        0.07 |      1.1425 |        0.07 |    0.070445 |         -196 |         none |    0.0001221 |            2 |            1 |
|    3 | infeas |     0.46667 |     0.15246 |        0.07 |    0.070862 |     1.39e 03 |      sigmoid |       45.438 |           26 |           14 |
|    4 | best   |    0.063333 |      1.3051 |    0.063333 |    0.063353 |        -52.5 |         tanh |   2.6069e-05 |            4 |            8 |
|    5 | accept |     0.11333 |      1.4743 |    0.063333 |    0.063423 |        -58.5 |         relu |   2.2423e-05 |            4 |            7 |
|    6 | accept |        0.07 |      1.1222 |    0.063333 |    0.063344 |         -196 |         none |    0.0001411 |            2 |            1 |
|    7 | infeas |    0.046667 |      1.5327 |    0.063333 |     0.06318 |     1.95e 04 |         tanh |   1.2269e-07 |          300 |           16 |
|    8 | infeas |     0.11333 |       5.227 |    0.063333 |    0.063575 |     9.47e 04 |         tanh |     0.045218 |          298 |          267 |
|    9 | accept |     0.46667 |    0.023516 |    0.063333 |    0.063332 |         -196 |         none |       9.1357 |            2 |            1 |
|   10 | infeas |     0.46667 |    0.025527 |    0.063333 |    0.063332 |     1.42e 03 |         relu |       3.0052 |           30 |            7 |
|   11 | best   |    0.046667 |      2.0311 |    0.046667 |    0.046678 |         -172 |         relu |    6.691e-09 |            2 |            7 |
|   12 | accept |    0.046667 |      1.0284 |    0.046667 |    0.046675 |        -52.5 |         tanh |   6.7859e-09 |            4 |            8 |
|   13 | accept |    0.086667 |      2.4386 |    0.046667 |    0.046686 |         -172 |         relu |   1.1251e-07 |            2 |            7 |
|   14 | accept |     0.46667 |    0.024936 |    0.046667 |     0.04668 |        -58.5 |         tanh |       60.245 |            4 |            7 |
|   15 | best   |        0.03 |      1.0594 |        0.03 |    0.030086 |        -58.5 |         tanh |    0.0011383 |            4 |            7 |
|   16 | infeas |     0.12333 |     0.12629 |        0.03 |     0.03007 |          296 |      sigmoid |    6.766e-09 |           10 |            8 |
|   17 | accept |    0.076667 |     0.71763 |        0.03 |    0.030071 |         -146 |         none |   8.2973e-09 |            3 |            1 |
|   18 | best   |    0.023333 |      1.0659 |    0.023333 |    0.026599 |        -58.5 |         tanh |    0.0009958 |            4 |            7 |
|   19 | accept |    0.026667 |        1.01 |    0.023333 |     0.02661 |        -52.5 |         tanh |    0.0009402 |            4 |            8 |
|   20 | accept |        0.05 |      1.3193 |    0.023333 |    0.026601 |         -226 |      sigmoid |    1.086e-05 |            1 |            8 |
|==================================================================================================================================================|
| iter | eval   | objective   | objective   | bestsofar   | bestsofar   | constraint1  |  activations |       lambda | layer_1_size | layer_2_size |
|      | result |             | runtime     | (observed)  | (estim.)    |              |              |              |              |              |
|==================================================================================================================================================|
|   21 | accept |    0.036667 |      1.0198 |    0.023333 |    0.027248 |         -110 |         tanh |   0.00090677 |            3 |            8 |
|   22 | infeas |    0.053333 |      5.9702 |    0.023333 |    0.027181 |     1.41e 04 |         tanh |   0.00048938 |          283 |            1 |
|   23 | infeas |     0.12333 |     0.37451 |    0.023333 |    0.027429 |     1.71e 04 |         relu |   7.1367e-09 |          238 |           23 |
|   24 | accept |    0.076667 |     0.92349 |    0.023333 |    0.029543 |         -248 |         none |   6.7138e-07 |            1 |            1 |
|   25 | accept |    0.046667 |      1.3113 |    0.023333 |     0.02962 |         -226 |         tanh |   1.1434e-07 |            1 |            8 |
|   26 | accept |    0.043333 |      1.3654 |    0.023333 |    0.029659 |         -168 |      sigmoid |   9.1787e-07 |            2 |            8 |
|   27 | accept |    0.043333 |     0.71783 |    0.023333 |    0.029584 |         -226 |         tanh |    0.0018534 |            1 |            8 |
|   28 | infeas |        0.06 |      3.8672 |    0.023333 |    0.030036 |     1.31e 04 |      sigmoid |   2.3192e-06 |          257 |            2 |
|   29 | accept |    0.066667 |       1.257 |    0.023333 |    0.026647 |         -226 |         tanh |   0.00050488 |            1 |            8 |
|   30 | accept |    0.036667 |     0.70965 |    0.023333 |    0.028015 |        -52.5 |         tanh |    0.0044111 |            4 |            8 |

__________________________________________________________
optimization completed.
maxobjectiveevaluations of 30 reached.
total function evaluations: 30
total elapsed time: 60.5813 seconds
total objective function evaluation time: 44.1746
best observed feasible point:
    activations     lambda      layer_1_size    layer_2_size
    ___________    _________    ____________    ____________
       tanh        0.0009958         4               7      
observed objective function value = 0.023333
estimated objective function value = 0.029092
function evaluation time = 1.0659
observed constraint violations =[ -58.500000 ]
best estimated feasible point (according to models):
    activations     lambda      layer_1_size    layer_2_size
    ___________    _________    ____________    ____________
       tanh        0.0011383         4               7      
estimated objective function value = 0.028015
estimated function evaluation time = 1.0464
estimated constraint violations =[ -58.501089 ]

bayesopt finds optimal hyperparameters that minimize an error in the holdout validation set and satisfy the constraint. extract the best point in the optimization results resultnn by using the bestpoint function.

[optimalparams,criterionvalue1,iteration] = bestpoint(resultnn)
optimalparams=1×4 table
    activations     lambda      layer_1_size    layer_2_size
    ___________    _________    ____________    ____________
       tanh        0.0011383         4               7      
criterionvalue1 = 0.0332
iteration = 15

train the bilayered neural network model with the optimal hyperparameters.

rng("default")
modelnnopt = compact(fitcnet(xtrainselected,ytrainmapped, ...
    activations=char(optimalparams.activations), ...
    layersizes=[optimalparams.layer_1_size optimalparams.layer_2_size], ...
    lambda=optimalparams.lambda));

find the accuracy and size of the trained model.

optimizednnaccuracy = (1-loss(modelnnopt,xtestselected,ytestmapped))*100
optimizednnaccuracy = 93.3333
optimizednnsize = whos("modelnnopt").bytes/1024
optimizednnsize = 8.3555

quantize model parameters with simulink block

you can also reduce the memory footprint of a machine learning model by quantizing model parameters with a simulink block. statistics and machine learning toolbox™ provides various prediction blocks that allows you to import a trained machine learning model into a simulink model. in the prediction blocks, you can specify the data types for some or all model parameters as single-precision, fixed-point, half-precision, and so on. for an example of fixed-point conversion, see .

this example provides the simulink model slexacousticsceneclassificationnnpredictexample.slx, which includes the block. open this model.

simmdlname = 'slexacousticsceneclassificationnnpredictexample'; 
open_system(simmdlname)

double-click the classificationneuralnetwork predict block to open the block parameters dialog box. you can specify the data types for the model parameters in the data types tab. to reduce the memory size, specify the data types for the layers as single. for details on specifying data types, see (simulink).

prepare the input data for the simulink model. convert the predictor data (xtestselected) to single precision by using the single function.

soundinput.time = (0:size(xtestselected,1)-1)';
soundinput.signals(1).values = single(xtestselected);
soundinput.signals(1).dimensions = size(xtestselected,2);

simulate the simulink model and assign the result to the out variable.

out = sim(simmdlname);

find the accuracy of the predict block using the data logged in the (simulink) block.

pred = categorical(out.simout.data,unique(out.simout.data),["allaround","directional"]);
quantizednnaccuracy = sum(pred == ytestmapped)/length(ytestmapped)*100
quantizednnaccuracy = 93.3333

find the size of the quantized model parameters.

p = simulink.mask.get("slexacousticsceneclassificationnnpredictexample/classificationneuralnetwork predict");
vars = p.getworkspacevariables;
blockparams = vars(end).value;
save("params.mat","blockparams")
s = dir("params.mat");
quantizednnsize = s.bytes/1024
quantizednnsize = 2.4951

model compression summary

display the changes in model size and accuracy during the model compression workflow for the bilayered neural network model. in general, the model loses some accuracy as you apply additional model compression schemes.

nnccuracy = [multimdltbl{1,"accuracy"} binarymdltbl{1,"accuracy"} ...
    feat50binarymdltbl{1,"accuracy"} ...
    optimizednnaccuracy quantizednnaccuracy];
nnsize = [multimdltbl{1,"model size"} binarymdltbl{1,"model size"} ...
    feat50binarymdltbl{1,"model size"} ...
    optimizednnsize quantizednnsize];
modeltype = ["multiclass","binary","50 features","optimized","single precision"];
figure
yyaxis left
b = bar(nnsize);
xtips = b.xendpoints;
ytips = b.yendpoints;
labels = string(round(b.ydata,2));
text(xtips,ytips,labels,horizontalalignment="center",verticalalignment="bottom", ...
    color='#0072bd')
ylabel("model size [kb]")
yyaxis right
plot(nnccuracy,"-o")
ylabel("accuracy [%]")
xticklabels(modeltype)
grid on

for the bilayered neural network model, the model size decreases to less than 30 kb after you reduce the number of features. the constrained optimization and converting data to single precision further reduce the model size.

the accuracy of the initial multiclass classification model is lower compared to the other models, because the multiclass model classifies sounds into 15 types. after you simplify the multiclass problem into a binary classification problem, the models accurately classify more than 90% of the test data. reducing the number of features leads to a loss of model accuracy, but the constrained optimization step improves accuracy, and converting data to single precision does not reduce accuracy.

helper functions

the helpermdlmetrics function takes a cell array of trained models (mdls) and test data sets (x and y) and returns a table of model metrics that includes the model accuracy as a percentage and the model size in kb. the helper function uses the whos function to estimate the model size. however, the size returned by the whos function can be larger than the actual model size required in the generated code for deployment. for example, the generated code does not include information that is not needed for prediction. consider a compactclassificationecoc model that uses logistic regression learners. the binary learners in a compactclassificationecoc model object in the matlab workspace contain the modelparameters property. however, the model prepared for deployment in the generated code does not contain this property.

function tbl = helpermdlmetrics(mdls,x,y)
nummdl = length(mdls);
metrics = nan(nummdl,2);
for i = 1 : nummdl
    mdl = mdls{i};
    mdlinfo = whos("mdl");
    metrics(i,:) = [(1-loss(mdl,x,y))*100 mdlinfo.bytes/1024];
end
tbl = array2table(metrics, ...
    variablenames=["accuracy","model size"]);
end

the helperoptimizeconstrainedbilayer function trains a bilayered neural network model using a given set of parameters for the training data, and returns the loss for the holdout validation set. in addition, the function accepts the upper limit (maxsize) for the number of weight parameters in the model and returns a constraint value. a positive constraint value indicates that the number of parameters is greater than the specified limit maxsize.

function [objective,constraint] = helperoptimizeconstrainedbilayer(params,xtrain,ytrain,xeval,yeval,maxsize)
mdl = fitcnet(xtrain,ytrain, ...
    activations=char(params.activations), ...
    layersizes=[params.layer_1_size params.layer_2_size], ...
    lambda=params.lambda);
objective = loss(mdl,xeval,yeval);
numclasses = size(unique(ytrain),1);
sizeest = size(xtrain,2)*params.layer_1_size   ...
    params.layer_1_size*params.layer_2_size   ...
    params.layer_2_size*numclasses;
constraint = sizeest - maxsize - 0.5;
end

more about

for constraint-coupled optimization, you can consider minimizing these hyperparameters to limit the memory use, depending on the type of machine learning model:

  • decision tree — minimum number of leaf node observations (minleafsize) and the maximum number of decision splits (maxnumsplits). a decision tree model has a small memory footprint.

  • linear discriminant and logistic regression — number of features and classes. both a linear discriminant model and a logistic regression model have a small to medium memory footprint.

  • shallow neural network — number of fully connected layers and the number of hidden units in each layer (layersizes). a shallow neural network model has a small to medium memory footprint.

  • k-nearest neighbor — training data size, the number of nearest neighbors (numneighbors), and the maximum number of data points in the leaf node for the kd-tree algorithm (bucketsize). a k-nearest neighbor model has a medium memory footprint.

  • support vector machine (svm) — number of support vectors determined by the box constrains (boxconstraint). an svm has a medium to large memory footprint. for an svm model that uses the linear kernel function, you can reduce the footprint by discarding support vectors from the model using the discardsupportvectors function. the reduced svm model can still predict new data using predictor coefficients (beta property) stored in the model.

  • ensemble — number of learners and the size of each learner determined by numlearningcycles and learners. an ensemble has a medium to large memory footprint.

  • gaussian process regression (regression only) — size of the active set (activesetsize). a gaussian process regression model has a medium to large memory footprint.

several factors determine the memory use of a machine learning model. however, in general, the memory footprint for a decision tree model is small. a linear discriminant model, logistic regression model, and shallow neural network model have a small to medium memory footprint, and a k-nearest neighbor model has a medium memory footprint. an svm, ensemble, and gaussian process model have a medium to large memory footprint. for an svm model that uses the linear kernel function, you can discard support vectors from the model to reduce the footprint by using the discardsupportvectors function. the reduced svm model can still predict new data using predictor coefficients (beta property) stored in the model.

for deployment to memory-limited hardware, a recommended practice is to specify training data using a matrix, not a table. if you specify training data using a table, some model properties, such as predictornames, can take a considerable proportion of the model memory footprint.

references

[1] mesaros, annamaria, toni heittola, and tuomas virtanen. acoustic scene classification: an overview of dcase 2017 challenge entries. in proc. international workshop on acoustic signal enhancement, 2018.

[2] lostanlen, vincent, and joakim anden. binaural scene classification with wavelet scattering. technical report, dcase2016 challenge, 2016.

[3]

[4]

see also

| | |

related topics

网站地图