signal classification using wavelet-based features and support vector machines
this example shows how to classify human electrocardiogram (ecg) signals using wavelet-based feature extraction and a support vector machine (svm) classifier. the problem of signal classification is simplified by transforming the raw ecg signals into a much smaller set of features that serve in aggregate to differentiate the classes. you must have wavelet toolbox™, signal processing toolbox™, and statistics and machine learning toolbox™ to run this example. the data used in this example are publicly available from physionet.
data description
this example uses ecg data obtained from three groups, or classes, of people: persons with cardiac arrhythmia, persons with congestive heart failure, and persons with normal sinus rhythms. the example uses 162 ecg recordings from three physionet databases: the mit-bih arrhythmia database [3][7], the mit-bih normal sinus rhythm database [3], and the bidmc congestive heart failure database [1][3]. in total, there are 96 recordings from persons with arrhythmia, 30 recordings from persons with congestive heart failure, and 36 recordings from persons with normal sinus rhythms. the goal is to train a classifier to distinguish between arrhythmia (arr), congestive heart failure (chf), and normal sinus rhythm (nsr).
download data
the first step is to download the data from the github repository. to download the data, click code and select download zip. save the file physionet_ecg_data-main.zip in a folder where you have write permission. the instructions for this example assume you have downloaded the file to your temporary directory (tempdir in matlab). modify the subsequent instructions for unzipping and loading the data if you choose to download the data in a folder different from tempdir.
the file physionet_ecg_data-main.zip contains

ecgdata.zip
readme.md

and ecgdata.zip contains

ecgdata.mat
modified_physionet_data.txt
license.txt
ecgdata.mat holds the data used in this example. the .txt file, modified_physionet_data.txt, is required by physionet's copying policy and provides the source attributions for the data as well as a description of the pre-processing steps applied to each ecg recording.
load files
if you followed the download instructions in the previous section, enter the following commands to unzip the two archive files.
unzip(fullfile(tempdir,'physionet_ecg_data-main.zip'),tempdir)
unzip(fullfile(tempdir,'physionet_ecg_data-main','ecgdata.zip'),...
    fullfile(tempdir,'ecgdata'))
after you unzip the ecgdata.zip file, load the data into matlab.
load(fullfile(tempdir,'ecgdata','ecgdata.mat'))
ecgdata is a structure array with two fields: data and labels. data is a 162-by-65536 matrix where each row is an ecg recording sampled at 128 hertz. labels is a 162-by-1 cell array of diagnostic labels, one for each row of data. the three diagnostic categories are: 'arr' (arrhythmia), 'chf' (congestive heart failure), and 'nsr' (normal sinus rhythm).
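as a quick sanity check on the loaded data, you can confirm the matrix dimensions and the per-class record counts (96 arr, 30 chf, and 36 nsr). the following commands are one optional way to do this and are not part of the original workflow.

size(ecgdata.data)                    % 162 by 65536
summary(categorical(ecgdata.labels))  % counts per diagnostic category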
create training and test data
randomly split the data into two sets - training and test data sets. the helper function helperrandomsplit performs the random split. helperrandomsplit accepts the desired split percentage for the training data and ecgdata. the helperrandomsplit function outputs two data sets along with a set of labels for each. each row of traindata and testdata is an ecg signal. each element of trainlabels and testlabels contains the class label for the corresponding row of the data matrices. in this example, we randomly assign 70% of the data in each class to the training set. the remaining 30% is held out for testing (prediction) and is assigned to the test set.
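the shipped helperrandomsplit is not listed in this example. purely as an illustration of the idea, a stratified split that preserves the class proportions might be sketched as follows; the function name sketchrandomsplit and its exact logic are hypothetical, not the toolbox helper. with a 70% split, round(0.7*96) + round(0.7*30) + round(0.7*36) = 67 + 21 + 25 = 113 training records, matching the count reported below.

function [traindata,testdata,trainlabels,testlabels] = sketchrandomsplit(percent_train,ecgdata)
% illustrative sketch only -- not the shipped helperrandomsplit.
% split each class separately so the class proportions are preserved.
labels = ecgdata.labels;
trainidx = [];
for c = unique(labels)'
    idx = find(strcmp(labels,c{1}));            % records in this class
    idx = idx(randperm(numel(idx)));            % shuffle within the class
    ntrain = round(percent_train/100*numel(idx));
    trainidx = [trainidx; idx(1:ntrain)];       %#ok<AGROW>
end
testidx = setdiff((1:numel(labels))',trainidx);
traindata = ecgdata.data(trainidx,:);
testdata = ecgdata.data(testidx,:);
trainlabels = labels(trainidx);
testlabels = labels(testidx);
end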
percent_train = 70;
[traindata,testdata,trainlabels,testlabels] = ...
helperrandomsplit(percent_train,ecgdata);
there are 113 records in the traindata set and 49 records in testdata. by design, the training data contains 69.75% (113/162) of the data. recall that the arr class represents 59.26% of the data (96/162), the chf class represents 18.52% (30/162), and the nsr class represents 22.22% (36/162). examine the percentage of each class in the training and test sets. the percentages in each are consistent with the overall class percentages in the data set.
ctrain = countcats(categorical(trainlabels))./numel(trainlabels).*100
ctrain = 3×1
59.2920
18.5841
22.1239
ctest = countcats(categorical(testlabels))./numel(testlabels).*100
ctest = 3×1
59.1837
18.3673
22.4490
plot samples
plot the first few thousand samples of four randomly selected records from ecgdata. the helper function helperplotrandomrecords does this. helperplotrandomrecords accepts ecgdata and a random seed as input. the initial seed is set at 14 so that at least one record from each class is plotted. you can execute helperplotrandomrecords with ecgdata as the only input argument as many times as you wish to get a sense of the variety of ecg waveforms associated with each class. you can find the source code for this helper function in the supporting functions section at the end of this example.
helperplotrandomrecords(ecgdata,14)
feature extraction
extract the features used for signal classification from each signal. this example uses the following features, extracted over 8 blocks of each signal, with each block approximately one minute in duration (8192 samples):
autoregressive model (ar) coefficients of order 4 [8].
shannon entropy (se) values for the maximal overlap discrete wavelet packet transform (modwpt) at level 4 [5].
multifractal wavelet leader estimates of the second cumulant of the scaling exponents and the range of holder exponents, or singularity spectrum [4].
additionally, multiscale wavelet variance estimates are extracted for each signal over the entire data length [6]. an unbiased estimate of the wavelet variance is used. this requires that only levels with at least one wavelet coefficient unaffected by boundary conditions are used in the variance estimates. for a signal length of 2^16 (65,536) and the 'db2' wavelet this results in 14 levels.
these features were selected based on published research demonstrating their effectiveness in classifying ecg waveforms. this is not intended to be an exhaustive or optimized list of features.
the ar coefficients for each window are estimated using the burg method, arburg. in [8], the authors used model order selection methods to determine that an ar(4) model provided the best fit for ecg waveforms in a similar classification problem. in [5], an information theoretic measure, the shannon entropy, was computed on the terminal nodes of a wavelet packet tree and used with a random forest classifier. here we use the nondecimated wavelet packet transform, modwpt, down to level 4.
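for a single one-minute block, the ar feature computation reduces to a call to arburg. the following is a small illustrative sketch (it mirrors the blockar helper in the supporting functions section, applied to a hypothetical first block of the first training record).

xblock = detrend(traindata(1,1:8192),0);
artmp = arburg(xblock,4);     % 1-by-5 vector: leading 1 plus four ar coefficients
arfeats = artmp(2:end);       % the four coefficients retained as features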
the definition of the shannon entropy for the undecimated wavelet packet transform following [5] is given by

$$SE_j = -\sum_{k=1}^{N} p_{j,k}\,\log p_{j,k}, \qquad p_{j,k} = \frac{w_{j,k}^2}{\sum_{k=1}^{N} w_{j,k}^2}$$

where $N$ is the number of the corresponding coefficients in the j-th node and the $p_{j,k}$ are the normalized squares of the wavelet packet coefficients $w_{j,k}$ in the j-th terminal node.
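as a concrete illustration of this formula, the entropy values for one block can be computed directly from the modwpt coefficients. this sketch mirrors the shannonentropy helper given at the end of the example; the block choice is arbitrary.

xblock = detrend(traindata(1,1:8192),0);
wpt = modwpt(xblock,4);                % 16-by-8192 level-4 packet coefficients
e = sum(wpt.^2,2);                     % energy in each terminal node
pij = wpt.^2./e;                       % normalized squared coefficients
se = -sum(pij.*log(pij+eps),2);        % 16 shannon entropy values for this block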
two fractal measures estimated by wavelet methods are used as features. following [4], we use the width of the singularity spectrum obtained from dwtleader as a measure of the multifractal nature of the ecg signal. we also use the second cumulant of the scaling exponents. the scaling exponents are scale-based exponents describing power-law behavior in the signal at different resolutions. the second cumulant broadly represents the departure of the scaling exponents from linearity.
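the two fractal features for a block come directly from the dwtleader outputs. the short sketch below is equivalent to the leaders helper in the supporting functions section, shown here for one arbitrary block.

xblock = detrend(traindata(1,1:8192),0);
[~,h,cp] = dwtleader(xblock);
holderrange = range(h);   % width of the singularity spectrum
secondcum = cp(2);        % second cumulant of the scaling exponents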
the wavelet variance for the entire signal is obtained using modwtvar. wavelet variance measures variability in a signal by scale, or equivalently, variability in a signal over octave-band frequency intervals.
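for example, for one full-length record the unbiased modwtvar estimate returns one variance per level that has at least one coefficient unaffected by the boundary. with 65,536 samples and the 'db2' wavelet, that is the 14 levels mentioned above; this sketch simply checks the count.

xrec = detrend(traindata(1,:),0);
wvar = modwtvar(modwt(xrec,'db2'),'db2');
numel(wvar)   % 14 wavelet variance estimates for this signal length and wavelet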
in total there are 190 features: 32 ar features (4 coefficients per block), 128 shannon entropy values (16 values per block), 16 fractal estimates (2 per block), and 14 wavelet variance estimates.
the helperextractfeatures function computes these features and concatenates them into a feature vector for each signal. you can find the source code for this helper function in the supporting functions section at the end of this example.
timewindow = 8192;
arorder = 4;
modwptlevel = 4;
[trainfeatures,testfeatures,featureindices] = ...
helperextractfeatures(traindata,testdata,timewindow,arorder,modwptlevel);
trainfeatures and testfeatures are 113-by-190 and 49-by-190 matrices, respectively. each row of these matrices is a feature vector for the corresponding ecg data in traindata and testdata, respectively. in creating feature vectors, the data is reduced from 65536 samples to 190-element vectors. this is a significant reduction, but the goal is not just data reduction. the goal is to reduce the data to a much smaller set of features that captures the differences between the classes so that a classifier can accurately separate the signals. the indices for the features, which make up both trainfeatures and testfeatures, are contained in the structure array featureindices. you can use these indices to explore features by group. as an example, examine the range of holder exponents in the singularity spectra for the first time window. plot the data for the entire data set.
allfeatures = [trainfeatures;testfeatures];
alllabels = [trainlabels;testlabels];
figure
boxplot(allfeatures(:,featureindices.hrfeatures(1)),alllabels,'notch','on')
ylabel('holder exponent range')
title('range of singularity spectrum by group (first time window)')
grid on
you can perform a one-way analysis of variance on this feature and confirm what appears in the boxplot, namely that the arr and nsr groups have a significantly larger range than the chf group.
[p,anovatab,st] = anova1(allfeatures(:,featureindices.hrfeatures(1)),...
alllabels);
c = multcompare(st,'display','off')
c = 3×6
1.0000 2.0000 0.0176 0.1144 0.2112 0.0155
1.0000 3.0000 -0.1591 -0.0687 0.0218 0.1764
2.0000 3.0000 -0.2975 -0.1831 -0.0687 0.0005
as an additional example, consider the difference in variance in the second-lowest frequency (second-largest scale) wavelet subband for the three groups.
boxplot(allfeatures(:,featureindices.wvarfeatures(end-1)),alllabels,'notch','on')
ylabel('wavelet variance')
title('wavelet variance by group')
grid on
if you perform an analysis of variance on this feature, you find that the nsr group has significantly lower variance in this wavelet subband than the arr and chf groups. these examples are just intended to illustrate how individual features serve to separate the classes. while one feature alone is not sufficient, the goal is to obtain a rich enough feature set to enable a classifier to separate all three classes.
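for reference, the analysis of variance mentioned above can be reproduced with the same calls used earlier for the holder exponent feature; the multiple-comparison output is suppressed here.

[p,anovatab,st] = anova1(allfeatures(:,featureindices.wvarfeatures(end-1)),...
    alllabels);
c = multcompare(st,'display','off');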
signal classification
now that the data has been reduced to a feature vector for each signal, the next step is to use these feature vectors for classifying the ecg signals. you can use the classification learner app to quickly evaluate a large number of classifiers. in this example, a multi-class svm with a quadratic kernel is used. two analyses are performed. first we use the entire dataset (training and testing sets) and estimate the misclassification rate and confusion matrix using 5-fold cross-validation.
features = [trainfeatures; testfeatures];
rng(1)
template = templatesvm(...
    'kernelfunction','polynomial',...
    'polynomialorder',2,...
    'kernelscale','auto',...
    'boxconstraint',1,...
    'standardize',true);
model = fitcecoc(...
    features,...
    [trainlabels;testlabels],...
    'learners',template,...
    'coding','onevsone',...
    'classnames',{'arr','chf','nsr'});
kfoldmodel = crossval(model,'kfold',5);
classlabels = kfoldpredict(kfoldmodel);
loss = kfoldloss(kfoldmodel)*100
loss = 8.0247
[confmatcv,grouporder] = confusionmat([trainlabels;testlabels],classlabels);
the 5-fold classification error is 8.02% (91.98% correct). the confusion matrix, confmatcv, shows which records were misclassified, and grouporder gives the ordering of the groups. two of the arr group were misclassified as chf, eight of the chf group were misclassified as arr and one as nsr, and two from the nsr group were misclassified as arr.
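if you prefer a graphical view, the cross-validated confusion matrix can also be displayed with confusionchart; this is an optional step and not part of the original workflow.

figure
confusionchart(confmatcv,grouporder)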
precision, recall, and f1 score
in a classification task, the precision for a class is the number of correct positive results divided by the number of positive results. in other words, of all the records that the classifier assigns a given label, what proportion actually belong to the class. recall is defined as the number of correct labels divided by the number of labels for a given class. specifically, of all the records belonging to a class, what proportion did our classifier label as that class. in judging the accuracy of your machine learning system, you ideally want to do well on both precision and recall. for example, suppose we had a classifier that labeled every single record as arr. then our recall for the arr class would be 1 (100%): all records belonging to the arr class would be labeled arr. however, the precision would be low. because our classifier labeled all records as arr, there would be 66 false positives, for a precision of 96/162, or 0.5926. the f1 score is the harmonic mean of precision and recall and therefore provides a single metric that summarizes the classifier performance in terms of both recall and precision. the following helper function computes the precision, recall, and f1 scores for the three classes. you can see how helperprecisionrecall computes precision, recall, and the f1 score based on the confusion matrix by examining the code in the supporting functions section.
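to make the degenerate all-arr example above concrete, the numbers work out as follows.

precision_arr = 96/162                                          % 0.5926
recall_arr = 96/96                                              % 1
f1_arr = 2*precision_arr*recall_arr/(precision_arr+recall_arr)  % 0.7442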
cvtable = helperprecisionrecall(confmatcv);
you can display the table returned by helperprecisionrecall with the following command.
disp(cvtable)
           precision    recall    f1_score
           _________    ______    ________
    arr      90.385     97.917          94
    chf      91.304         70      79.245
    nsr      97.143     94.444      95.775
both precision and recall are good for the arr and nsr classes, while recall is significantly lower for the chf class.
for the next analysis, we fit a multi-class quadratic svm to the training data only (70%) and then use that model to make predictions on the 30% of the data held out for testing. there are 49 data records in the test set.
model = fitcecoc(...
    trainfeatures,...
    trainlabels,...
    'learners',template,...
    'coding','onevsone',...
    'classnames',{'arr','chf','nsr'});
predlabels = predict(model,testfeatures);
use the following to determine the number of correct predictions and obtain the confusion matrix.
correctpredictions = strcmp(predlabels,testlabels);
testaccuracy = sum(correctpredictions)/length(testlabels)*100
testaccuracy = 97.9592
[confmattest,grouporder] = confusionmat(testlabels,predlabels);
the classification accuracy on the test dataset is approximately 98% and the confusion matrix shows that one chf record was misclassified as nsr.
similar to what was done in the cross-validation analysis, obtain precision, recall, and the f1 scores for the test set.
testtable = helperprecisionrecall(confmattest);
disp(testtable)
           precision    recall    f1_score
           _________    ______    ________
    arr         100        100         100
    chf         100     88.889      94.118
    nsr      91.667        100      95.652
classification on raw data and clustering
two natural questions arise from the previous analysis. is feature extraction necessary in order to achieve good classification results? is a classifier necessary, or can these features separate the groups without a classifier? to address the first question, repeat the cross-validation analysis using the raw time series data. note that the following is a computationally expensive step because the svm is applied to a 162-by-65536 matrix. if you do not wish to run this step yourself, the results are described in the next paragraph.
rawdata = [traindata;testdata];
labels = [trainlabels;testlabels];
rng(1)
template = templatesvm(...
    'kernelfunction','polynomial', ...
    'polynomialorder',2, ...
    'kernelscale','auto', ...
    'boxconstraint',1, ...
    'standardize',true);
model = fitcecoc(...
    rawdata,...
    [trainlabels;testlabels],...
    'learners',template,...
    'coding','onevsone',...
    'classnames',{'arr','chf','nsr'});
kfoldmodel = crossval(model,'kfold',5);
classlabels = kfoldpredict(kfoldmodel);
loss = kfoldloss(kfoldmodel)*100
loss = 33.3333
[confmatcvraw,grouporder] = confusionmat([trainlabels;testlabels],classlabels);
rawtable = helperprecisionrecall(confmatcvraw);
disp(rawtable)
           precision    recall    f1_score
           _________    ______    ________
    arr          64        100      78.049
    chf         100     13.333      23.529
    nsr         100     22.222      36.364
the misclassification rate for the raw time series data is 33.3%. repeating the precision, recall, and f1 score analysis reveals very poor f1 scores for both the chf (23.53) and nsr (36.36) groups. obtain the magnitude discrete fourier transform (dft) coefficients for each signal to perform the analysis in the frequency domain. because the data are real-valued, we can achieve some data reduction using the dft by exploiting the fact that the fourier magnitudes form an even function, so only the first half of the coefficients are needed.
rawdatadft = abs(fft(rawdata,[],2));
rawdatadft = rawdatadft(:,1:2^16/2+1);
rng(1)
template = templatesvm(...
    'kernelfunction','polynomial',...
    'polynomialorder',2,...
    'kernelscale','auto',...
    'boxconstraint',1,...
    'standardize',true);
model = fitcecoc(...
    rawdatadft,...
    [trainlabels;testlabels],...
    'learners',template,...
    'coding','onevsone',...
    'classnames',{'arr','chf','nsr'});
kfoldmodel = crossval(model,'kfold',5);
classlabels = kfoldpredict(kfoldmodel);
loss = kfoldloss(kfoldmodel)*100
loss = 19.1358
[confmatcvdft,grouporder] = confusionmat([trainlabels;testlabels],classlabels);
dfttable = helperprecisionrecall(confmatcvdft);
disp(dfttable)
           precision    recall    f1_score
           _________    ______    ________
    arr      76.423     97.917      85.845
    chf         100     26.667      42.105
    nsr      93.548     80.556      86.567
using the dft magnitudes reduces the misclassification rate to 19.13% but that is still more than twice the error rate obtained with our 190 features. these analyses demonstrate that the classifier has benefited from a careful selection of features.
to answer the question concerning the role of the classifier, attempt to cluster the data using only the feature vectors. use k-means clustering along with the gap statistic to determine both the optimal number of clusters and cluster assignment. allow for the possibility of 1 to 6 clusters for the data.
rng default
eva = evalclusters(features,'kmeans','gap','klist',[1:6]);
eva
eva = 
  gapevaluation with properties:

    numobservations: 162
         inspectedk: [1 2 3 4 5 6]
    criterionvalues: [1.2777 1.3539 1.3644 1.3570 1.3591 1.3752]
           optimalk: 3
the gap statistic indicates that the optimal number of clusters is three. however, if you look at the number of records in each of the three clusters, you see that the k-means clustering based on the feature vectors has done a poor job of separating the three diagnostic categories.
countcats(categorical(eva.optimaly))
ans = 3×1
61
74
27
recall that there are 96 persons in the arr class, 30 in the chf class, and 36 in the nsr class.
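as an optional check (not part of the original analysis), you can cross-tabulate the cluster assignments against the true labels to see how the diagnostic categories are mixed across the three clusters.

% rows are the k-means clusters, columns the diagnostic classes
crosstab(eva.optimaly,alllabels)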
summary
this example used signal processing to extract wavelet features from ecg signals and used those features to classify ecg signals into three classes. not only did the feature extraction result in a significant amount of data reduction, it also captured the differences between the arr, chf, and nsr classes, as demonstrated by the cross-validation results and the performance of the svm classifier on the test set. the example further demonstrated that applying an svm classifier to the raw data resulted in poor performance, as did clustering the feature vectors without using a classifier. neither the classifier nor the features alone were sufficient to separate the classes. however, when feature extraction was used as a data reduction step prior to the use of a classifier, the three classes were well separated.
references
[1] baim ds, colucci ws, monrad es, smith hs, wright rf, lanoue a, gauthier df, ransil bj, grossman w, braunwald e. survival of patients with severe congestive heart failure treated with oral milrinone. j american college of cardiology 1986 mar; 7(3):661-670.
[2] engin, m., 2004. ecg beat classification using neuro-fuzzy network. pattern recognition letters, 25(15), pp.1715-1722.
[3] goldberger al, amaral lan, glass l, hausdorff jm, ivanov pch, mark rg, mietus je, moody gb, peng c-k, stanley he. physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. circulation. vol. 101, no. 23, 13 june 2000, pp. e215-e220. http://circ.ahajournals.org/content/101/23/e215.full
[4] leonarduzzi, r.f., schlotthauer, g., and torres, m.e. 2010. wavelet leader based multifractal analysis of heart rate variability during myocardial ischaemia. engineering in medicine and biology society (embc), 2010 annual international conference of the ieee.
[5] li, t. and zhou, m., 2016. ecg classification using wavelet packet entropy and random forests. entropy, 18(8), p.285.
[6] maharaj, e.a. and alonso, a.m. 2014. discriminant analysis of multivariate time series: application to diagnosis based on ecg signals. computational statistics and data analysis, 70, pp. 67-87.
[7] moody gb, mark rg. the impact of the mit-bih arrhythmia database. ieee eng in med and biol 20(3):45-50 (may-june 2001). (pmid: 11446209)
[8] zhao, q. and zhang, l., 2005. ecg feature extraction and classification using wavelet transform and support vector machines. ieee international conference on neural networks and brain, 2, pp. 1089-1092.
supporting functions
helperplotrandomrecords plots four ecg signals randomly chosen from ecgdata.
function helperplotrandomrecords(ecgdata,randomseed)
% this function is only intended to support the xpwwaveletmlexample. it may
% change or be removed in a future release.

if nargin==2
    rng(randomseed)
end

m = size(ecgdata.data,1);
idxsel = randperm(m,4);
for numplot = 1:4
    subplot(2,2,numplot)
    plot(ecgdata.data(idxsel(numplot),1:3000))
    ylabel('volts')
    if numplot > 2
        xlabel('samples')
    end
    title(ecgdata.labels{idxsel(numplot)})
end
end
helperextractfeatures extracts the wavelet features and ar coefficients for blocks of the data of a specified size. the features are concatenated into feature vectors.
function [trainfeatures,testfeatures,featureindices] = helperextractfeatures(traindata,testdata,t,ar_order,level)
% this function is only in support of xpwwaveletmlexample. it may change or
% be removed in a future release.
trainfeatures = [];
testfeatures = [];

for idx = 1:size(traindata,1)
    x = traindata(idx,:);
    x = detrend(x,0);
    arcoefs = blockar(x,ar_order,t);
    se = shannonentropy(x,t,level);
    [cp,rh] = leaders(x,t);
    wvar = modwtvar(modwt(x,'db2'),'db2');
    trainfeatures = [trainfeatures; arcoefs se cp rh wvar']; %#ok<AGROW>
end

for idx = 1:size(testdata,1)
    x1 = testdata(idx,:);
    x1 = detrend(x1,0);
    arcoefs = blockar(x1,ar_order,t);
    se = shannonentropy(x1,t,level);
    [cp,rh] = leaders(x1,t);
    wvar = modwtvar(modwt(x1,'db2'),'db2');
    testfeatures = [testfeatures; arcoefs se cp rh wvar']; %#ok<AGROW>
end

featureindices = struct();
% 4*8
featureindices.arfeatures = 1:32;
startidx = 33;
endidx = 33+(16*8)-1;
featureindices.sefeatures = startidx:endidx;
startidx = endidx+1;
endidx = startidx+7;
featureindices.cp2features = startidx:endidx;
startidx = endidx+1;
endidx = startidx+7;
featureindices.hrfeatures = startidx:endidx;
startidx = endidx+1;
endidx = startidx+13;
featureindices.wvarfeatures = startidx:endidx;
end

function se = shannonentropy(x,numbuffer,level)
numwindows = numel(x)/numbuffer;
y = buffer(x,numbuffer);
se = zeros(2^level,size(y,2));
for kk = 1:size(y,2)
    wpt = modwpt(y(:,kk),level);
    % sum across time
    e = sum(wpt.^2,2);
    pij = wpt.^2./e;
    % the following is eps(1)
    se(:,kk) = -sum(pij.*log(pij+eps),2);
end
se = reshape(se,2^level*numwindows,1);
se = se';
end

function arcfs = blockar(x,order,numbuffer)
numwindows = numel(x)/numbuffer;
y = buffer(x,numbuffer);
arcfs = zeros(order,size(y,2));
for kk = 1:size(y,2)
    artmp = arburg(y(:,kk),order);
    arcfs(:,kk) = artmp(2:end);
end
arcfs = reshape(arcfs,order*numwindows,1);
arcfs = arcfs';
end

function [cp,rh] = leaders(x,numbuffer)
y = buffer(x,numbuffer);
cp = zeros(1,size(y,2));
rh = zeros(1,size(y,2));
for kk = 1:size(y,2)
    [~,h,cptmp] = dwtleader(y(:,kk));
    cp(kk) = cptmp(2);
    rh(kk) = range(h);
end
end
helperprecisionrecall returns the precision, recall, and f1 scores based on the confusion matrix. outputs the results as a matlab table.
function prtable = helperprecisionrecall(confmat)
% this function is only in support of xpwwaveletmlexample. it may change or
% be removed in a future release.
precisionarr = confmat(1,1)/sum(confmat(:,1))*100;
precisionchf = confmat(2,2)/sum(confmat(:,2))*100;
precisionnsr = confmat(3,3)/sum(confmat(:,3))*100;
recallarr = confmat(1,1)/sum(confmat(1,:))*100;
recallchf = confmat(2,2)/sum(confmat(2,:))*100;
recallnsr = confmat(3,3)/sum(confmat(3,:))*100;
f1arr = 2*precisionarr*recallarr/(precisionarr+recallarr);
f1chf = 2*precisionchf*recallchf/(precisionchf+recallchf);
f1nsr = 2*precisionnsr*recallnsr/(precisionnsr+recallnsr);
% construct a matlab table to display the results.
prtable = array2table([precisionarr recallarr f1arr;...
    precisionchf recallchf f1chf; precisionnsr recallnsr...
    f1nsr],'variablenames',{'precision','recall','f1_score'},'rownames',...
    {'arr','chf','nsr'});
end
see also
apps
- classification learner app (statistics and machine learning toolbox)