Classification with Imbalanced Data
This example shows how to perform classification when one class has many more observations than another. You use the RUSBoost algorithm first, because it is designed to handle this case. Another way to handle imbalanced data is to use the name-value pair arguments 'Prior' or 'Cost'. For details, see the documentation on handling imbalanced data or unequal misclassification costs in classification ensembles.
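As a brief illustration of that alternative, here is a minimal sketch on synthetic two-class data; the data, the 'AdaBoostM1' method choice, and the 10x cost value are illustrative assumptions, not part of this example:

rng(1) % For reproducibility of the synthetic data
X = [randn(1000,2); randn(50,2) + 2]; % 1,000 majority and 50 minority observations
Y = [ones(1000,1); 2*ones(50,1)];
costMatrix = [0 1; 10 0]; % costMatrix(i,j) is the cost of predicting class j for true class i
mdl = fitcensemble(X,Y,'Method','AdaBoostM1','Cost',costMatrix);

Here the cost matrix, rather than resampling, pushes the boosted ensemble to pay more attention to the rare class.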
This example uses the "Cover Type" data from the UCI Machine Learning Archive, described in [1]. The data classifies types of forest (ground cover), based on predictors such as elevation, soil type, and distance to water. The data has over 500,000 observations and over 50 predictors, so training and using a classifier is time consuming.
Blackard and Dean [1] describe a neural net classification of this data, reporting a 70.6% classification accuracy. RUSBoost obtains over 81% classification accuracy.
Obtain the Data
Import the data into your workspace. Extract the last data column into a variable named y.
gunzip('https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz')
load covtype.data
y = covtype(:,end);
covtype(:,end) = [];
Examine the Response Data
tabulate(y)
  Value    Count   Percent
      1   211840    36.46%
      2   283301    48.76%
      3    35754     6.15%
      4     2747     0.47%
      5     9493     1.63%
      6    17367     2.99%
      7    20510     3.53%
There are hundreds of thousands of data points. Those of class 4 are less than 0.5% of the total. This imbalance indicates that RUSBoost is an appropriate algorithm.
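To put a number on the imbalance, a quick check (an addition to the original steps) is the ratio of the largest class count to the smallest:

counts = accumarray(y,1); % Per-class counts; the classes are the integers 1 through 7
imbalanceRatio = max(counts)/min(counts) % Roughly 103, from 283301/2747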
Partition the Data for Quality Assessment
Use half the data to fit a classifier, and half to examine the quality of the resulting classifier.
rng(10,'twister') % For reproducibility
part = cvpartition(y,'Holdout',0.5);
istrain = training(part); % Data for fitting
istest = test(part); % Data for quality assessment
tabulate(y(istrain))
  Value    Count   Percent
      1   105919    36.46%
      2   141651    48.76%
      3    17877     6.15%
      4     1374     0.47%
      5     4747     1.63%
      6     8684     2.99%
      7    10254     3.53%
Create the Ensemble
Use deep trees for higher ensemble accuracy. To do so, set the trees to have a maximal number of decision splits of N, where N is the number of observations in the training sample. Set LearnRate to 0.1 in order to achieve higher accuracy as well. The data is large, and, with deep trees, creating the ensemble is time consuming.
N = sum(istrain); % Number of observations in the training sample
t = templateTree('MaxNumSplits',N);
tic
rusTree = fitcensemble(covtype(istrain,:),y(istrain),'Method','RUSBoost', ...
    'NumLearningCycles',1000,'Learners',t,'LearnRate',0.1,'NPrint',100);
Training RUSBoost...
Grown weak learners: 100
Grown weak learners: 200
Grown weak learners: 300
Grown weak learners: 400
Grown weak learners: 500
Grown weak learners: 600
Grown weak learners: 700
Grown weak learners: 800
Grown weak learners: 900
Grown weak learners: 1000
toc
Elapsed time is 242.836734 seconds.
Inspect the Classification Error
Plot the classification error against the number of members in the ensemble.
figure;
tic
plot(loss(rusTree,covtype(istest,:),y(istest),'Mode','cumulative'));
toc
Elapsed time is 164.470086 seconds.
grid on;
xlabel('Number of trees');
ylabel('Test classification error');
The ensemble achieves a classification error of under 20% using 116 or more trees. For 500 or more trees, the classification error decreases at a slower rate.
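To recover the 116 figure programmatically, a small sketch (not in the original steps) reuses the cumulative loss curve:

err = loss(rusTree,covtype(istest,:),y(istest),'Mode','cumulative');
find(err < 0.20,1) % First ensemble size whose test error is under 20%; 116 per the plot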
Examine the confusion matrix for each class as a percentage of the true class.
tic
Yfit = predict(rusTree,covtype(istest,:));
toc
Elapsed time is 132.353489 seconds.
confusionchart(y(istest),Yfit,'Normalization','row-normalized','RowSummary','row-normalized')
All classes except class 2 have over 90% classification accuracy. But class 2 makes up close to half the data, so its errors dominate and the overall accuracy is not as high.
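The same per-class accuracies can be checked numerically; this small addition uses confusionmat instead of the chart:

cm = confusionmat(y(istest),Yfit); % Raw counts; row i holds true class i
perClassAccuracy = diag(cm)./sum(cm,2) % Matches the row-normalized diagonal of the chart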
Compact the Ensemble
The ensemble is large. Remove the stored training data using the compact method.
cmpctRus = compact(rusTree);
sz(1) = whos('rusTree');
sz(2) = whos('cmpctRus');
[sz(1).bytes sz(2).bytes]
ans = 1×2
10^9 ×

    1.6579    0.9423
The compacted ensemble is about half the size of the original.
Remove half the trees from cmpctRus. This action is likely to have a minimal effect on predictive performance, based on the observation that 500 out of 1000 trees give nearly optimal accuracy.
cmpctRus = removeLearners(cmpctRus,[500:1000]);
sz(3) = whos('cmpctRus');
sz(3).bytes
ans = 452868660
The reduced compact ensemble takes about a quarter of the memory of the full ensemble. Its overall loss rate is under 19%:
L = loss(cmpctRus,covtype(istest,:),y(istest))

L = 0.1833
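The quarter figure follows directly from the stored sizes (a small check, not in the original steps):

sz(3).bytes/sz(1).bytes % Roughly 0.27, about a quarter of the full ensemble's memory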
The predictive accuracy on new data might differ, because the ensemble accuracy might be biased. The bias arises because the same data used for assessing the ensemble was used for reducing the ensemble size. To obtain an unbiased estimate of requisite ensemble size, you should use cross validation. However, that procedure is time consuming.
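For reference, a hedged sketch of that cross-validation, reusing the tree template t; the 5 folds and the reduced 500 learning cycles are illustrative choices, not values from this example:

cvModel = fitcensemble(covtype,y,'Method','RUSBoost', ...
    'NumLearningCycles',500,'Learners',t,'LearnRate',0.1,'KFold',5);
cvErr = kfoldLoss(cvModel,'Mode','cumulative'); % Cross-validated error as a function of ensemble size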
References
[1] Blackard, J. A., and D. J. Dean. "Comparative Accuracies of Artificial Neural Networks and Discriminant Analysis in Predicting Forest Cover Types from Cartographic Variables." Computers and Electronics in Agriculture, Vol. 24, Issue 3, 1999, pp. 131-151.
See Also
fitcensemble | cvpartition