(To be removed) Extract VGGish features
Since R2020b
The vggishFeatures function will be removed in a future release. Use vggishEmbeddings instead. For more information, see Compatibility Considerations.
Description
embeddings = vggishFeatures(audioIn,fs) returns VGGish feature embeddings over time for the audio input audioIn with sample rate fs. Columns of the input are treated as individual channels.
embeddings = vggishFeatures(audioIn,fs,Name,Value) specifies options using one or more Name,Value arguments. For example, embeddings = vggishFeatures(audioIn,fs,'ApplyPCA',true) applies a principal component analysis (PCA) transformation to the audio embeddings.
This function requires both Audio Toolbox™ and Deep Learning Toolbox™.
Examples
Download vggishFeatures Functionality
Download and unzip the Audio Toolbox™ model for VGGish.
Type vggishFeatures at the command line. If the Audio Toolbox model for VGGish is not installed, then the function provides a link to the location of the network weights. To download the model, click the link. Unzip the file to a location on the MATLAB path.
Alternatively, execute the following commands to download and unzip the VGGish model to your temporary directory.
downloadFolder = fullfile(tempdir,'VGGishDownload');
loc = websave(downloadFolder,'https://ssd.mathworks.com/supportfiles/audio/vggish.zip');
VGGishLocation = tempdir;
unzip(loc,VGGishLocation)
addpath(fullfile(VGGishLocation,'vggish'))
Extract VGGish Embeddings
Read in an audio file.
[audioIn,fs] = audioread('MainStreetOne-16-16-mono-12secs.wav');
Call the vggishFeatures function with the audio and sample rate to extract VGGish feature embeddings from the audio.
featureVectors = vggishFeatures(audioIn,fs);
The vggishFeatures function returns a matrix of 128-element feature vectors over time.
[numHops,numElementsPerHop,numChannels] = size(featureVectors)
numHops = 23
numElementsPerHop = 128
numChannels = 1
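Each row of the output corresponds to one analysis frame of audio. As a rough check, you can approximate the start time of each frame from the default 50% overlap; the 0.48 s hop below is derived from the 96-spectrum frames and 10 ms spectral hop described in the Algorithms section, and is an approximation rather than a value returned by the function.
% Approximate start time (in seconds) of each embedding frame, assuming the
% default 50% overlap: 96 spectra per frame x 10 ms per spectrum x (1 - 0.5).
hopDuration = 96*0.010*(1 - 50/100);                  % approximately 0.48 s
frameStartTimes = (0:size(featureVectors,1)-1)*hopDuration;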
Increase Time Resolution of VGGish Features
Create a 10-second pink noise signal and then extract VGGish features. The vggishFeatures function extracts features from mel spectrograms with 50% overlap.
fs = 16e3;
dur = 10;
audioIn = pinknoise(dur*fs,1,'single');
features = vggishFeatures(audioIn,fs);
Plot the VGGish features over time.
surf(features,'EdgeColor','none')
view([30 65])
axis tight
xlabel('Feature Index')
ylabel('Frame')
zlabel('Feature Value')
title('VGGish Features')
To increase the resolution of VGGish features over time, specify the percent overlap between mel spectrograms. Plot the results.
overlapPercentage = 75;
features = vggishFeatures(audioIn,fs,'OverlapPercentage',overlapPercentage);
surf(features,'EdgeColor','none')
view([30 65])
axis tight
xlabel('Feature Index')
ylabel('Frame')
zlabel('Feature Value')
title('VGGish Features')
Apply Principal Component Analysis to VGGish Embeddings
Read in an audio file, listen to it, and then extract VGGish features from the audio.
[audioIn,fs] = audioread('Counting-16-44p1-mono-15secs.wav');
sound(audioIn,fs)
features = vggishFeatures(audioIn,fs);
Visualize the VGGish features over time. Many of the individual features are zero-valued and contain no useful information.
surf(features,'EdgeColor','none')
view([90,-90])
axis tight
xlabel('Feature Index')
ylabel('Frame Index')
title('VGGish Features')
You can apply principal component analysis (PCA) to map the feature vectors into a space that emphasizes variation between the embeddings. Call the vggishFeatures function again and specify ApplyPCA as true. Visualize the VGGish features after PCA.
features = vggishFeatures(audioIn,fs,'ApplyPCA',true);
surf(features,'EdgeColor','none')
view([90,-90])
axis tight
xlabel('Feature Index')
ylabel('Frame Index')
title('VGGish Features + PCA')
Use VGGish Embeddings for Deep Learning
Download and unzip the air compressor data set. This data set consists of recordings from air compressors in a healthy state or in one of seven faulty states.
url = 'https://www.mathworks.com/supportfiles/audio/aircompressordataset/aircompressordataset.zip';
downloadFolder = fullfile(tempdir,'aircompressordataset');
datasetLocation = tempdir;
if ~exist(fullfile(tempdir,'aircompressordataset'),'dir')
    loc = websave(downloadFolder,url);
    unzip(loc,fullfile(tempdir,'aircompressordataset'))
end
Create an audioDatastore object to manage the data and split it into training and validation sets.
ads = audioDatastore(downloadFolder,'IncludeSubfolders',true,'LabelSource','foldernames');
[adsTrain,adsValidation] = splitEachLabel(ads,0.8,0.2);
Read an audio file from the datastore and save the sample rate for later use. Reset the datastore to return the read pointer to the beginning of the data set. Listen to the audio signal and plot the signal in the time domain.
[x,fileInfo] = read(adsTrain);
fs = fileInfo.SampleRate;
reset(adsTrain)
sound(x,fs)
figure
t = (0:size(x,1)-1)/fs;
plot(t,x)
xlabel('Time (s)')
title('State = ' + string(fileInfo.Label))
axis tight
Extract VGGish features from the training and validation sets. Transpose the features so that time is along the second dimension, as required for sequence input.
trainFeatures = cell(1,numel(adsTrain.Files));
for idx = 1:numel(adsTrain.Files)
    [audioIn,fileInfo] = read(adsTrain);
    features = vggishFeatures(audioIn,fileInfo.SampleRate);
    trainFeatures{idx} = features';
end
validationFeatures = cell(1,numel(adsValidation.Files));
for idx = 1:numel(adsValidation.Files)
    [audioIn,fileInfo] = read(adsValidation);
    features = vggishFeatures(audioIn,fileInfo.SampleRate);
    validationFeatures{idx} = features';
end
Define a long short-term memory (LSTM) network (Deep Learning Toolbox).
layers = [ ...
    sequenceInputLayer(128)
    lstmLayer(100,'OutputMode','last')
    fullyConnectedLayer(8)
    softmaxLayer
    classificationLayer];
To define training options, use trainingOptions (Deep Learning Toolbox).
miniBatchSize = 64;
validationFrequency = 5*floor(numel(trainFeatures)/miniBatchSize);
options = trainingOptions("adam", ...
    "MaxEpochs",12, ...
    "MiniBatchSize",miniBatchSize, ...
    "Plots","training-progress", ...
    "Shuffle","every-epoch", ...
    "LearnRateSchedule","piecewise", ...
    "LearnRateDropPeriod",6, ...
    "LearnRateDropFactor",0.1, ...
    "ValidationData",{validationFeatures,adsValidation.Labels}, ...
    "ValidationFrequency",validationFrequency, ...
    "Verbose",false);
To train the network, use trainNetwork (Deep Learning Toolbox).
net = trainNetwork(trainFeatures,adsTrain.Labels,layers,options)
net =
  SeriesNetwork with properties:
         Layers: [5×1 nnet.cnn.layer.Layer]
     InputNames: {'sequenceinput'}
    OutputNames: {'classoutput'}
Visualize the confusion matrix for the validation set.
predictedClass = classify(net,validationFeatures);
confusionchart(adsValidation.Labels,predictedClass)
Use VGGish Embeddings for Machine Learning
Download and unzip the air compressor data set [1]. This data set consists of recordings from air compressors in a healthy state or in one of seven faulty states.
url = 'https://www.mathworks.com/supportfiles/audio/aircompressordataset/aircompressordataset.zip';
downloadFolder = fullfile(tempdir,'aircompressordataset');
datasetLocation = tempdir;
if ~exist(fullfile(tempdir,'aircompressordataset'),'dir')
    loc = websave(downloadFolder,url);
    unzip(loc,fullfile(tempdir,'aircompressordataset'))
end
Create an audioDatastore object to manage the data.
ads = audioDatastore(downloadFolder,'IncludeSubfolders',true,'LabelSource','foldernames');
In this example, you classify signals as either healthy or faulty. Combine all of the faulty labels into a single label. Split the datastore into training and validation sets.
labels = ads.Labels;
labels(labels~=categorical("Healthy")) = categorical("Faulty");
ads.Labels = removecats(labels);
[adsTrain,adsValidation] = splitEachLabel(ads,0.8,0.2);
Extract VGGish features from the training set. Each audio file corresponds to multiple VGGish features. Replicate the labels so that they are in one-to-one correspondence with the features.
trainFeatures = [];
trainLabels = [];
for idx = 1:numel(adsTrain.Files)
    [audioIn,fileInfo] = read(adsTrain);
    features = vggishFeatures(audioIn,fileInfo.SampleRate);
    trainFeatures = [trainFeatures;features];
    trainLabels = [trainLabels;repelem(fileInfo.Label,size(features,1))'];
end
Train a cubic support vector machine (SVM) using fitcsvm (Statistics and Machine Learning Toolbox). To explore other classifiers and their performances, use Classification Learner (Statistics and Machine Learning Toolbox).
faultDetector = fitcsvm( ...
    trainFeatures, ...
    trainLabels, ...
    'KernelFunction','polynomial', ...
    'PolynomialOrder',3, ...
    'KernelScale','auto', ...
    'BoxConstraint',1, ...
    'Standardize',true, ...
    'ClassNames',categories(trainLabels));
For each file in the validation set:
1. Extract VGGish features.
2. For each VGGish feature vector in a file, use the trained classifier to predict whether the machine is healthy or faulty.
3. Take the mode of the predictions for each file.
predictions = [];
for idx = 1:numel(adsValidation.Files)
    [audioIn,fileInfo] = read(adsValidation);
    features = vggishFeatures(audioIn,fileInfo.SampleRate);
    predictionsPerFile = categorical(predict(faultDetector,features));
    predictions = [predictions;mode(predictionsPerFile)];
end
Use confusionchart (Statistics and Machine Learning Toolbox) to display the performance of the classifier.
accuracy = sum(predictions==adsValidation.Labels)/numel(adsValidation.Labels);
cc = confusionchart(predictions,adsValidation.Labels);
cc.Title = sprintf('Accuracy = %0.2f %%',accuracy*100);
References
[1] Verma, Nishchal K., Rahul Kumar Sevakula, Sonal Dixit, and Al Salour. 2016. "Intelligent Condition Based Monitoring Using Acoustic Signals for Air Compressors." IEEE Transactions on Reliability 65 (1): 291–309. https://doi.org/10.1109/tr.2015.2459684.
Input Arguments
audioIn — Input signal
column vector | matrix
Input signal, specified as a column vector or matrix. If you specify a matrix, vggishFeatures treats the columns of the matrix as individual audio channels.
The duration of audioIn must be equal to or greater than 0.975 seconds.
Data Types: single | double
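If a signal is shorter than the required minimum duration, you can pad it before calling the function. The sketch below is one possible approach; the trailing zero-padding strategy and the pink noise test signal are choices of this illustration, not requirements of vggishFeatures.
% Hypothetical example: zero-pad a 0.5 s signal to the 0.975 s minimum.
fs = 16e3;
audioIn = pinknoise(round(0.5*fs),1,'single');
minSamples = ceil(0.975*fs);
if size(audioIn,1) < minSamples
    audioIn(end+1:minSamples,:) = 0;   % pad trailing zeros up to the minimum duration
end
embeddings = vggishFeatures(audioIn,fs);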
fs — Sample rate (Hz)
positive scalar
Sample rate of the input signal in Hz, specified as a positive scalar.
Data Types: single | double
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: 'OverlapPercentage',75
OverlapPercentage — Percentage overlap between consecutive audio frames
50 (default) | scalar in the range [0,100)
Percentage overlap between consecutive audio frames, specified as a scalar in the range [0,100).
Data Types: single | double
ApplyPCA — Flag to apply PCA transformation to audio embeddings
false (default) | true
Flag to apply PCA transformation to audio embeddings, specified as either true or false.
Data Types: logical
Output Arguments
embeddings — Compact representation of audio data
L-by-128-by-N array
Compact representation of audio data, returned as an L-by-128-by-N array, where:
L — Number of frames the audio signal is partitioned into. This is determined by OverlapPercentage.
128 — Audio embedding length.
N — Number of channels.
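As a rough guide, you can estimate L from the signal duration and OverlapPercentage using the preprocessing parameters described in the Algorithms section (16 kHz resampling, 25 ms window, 10 ms hop, 96 spectra per frame). The sketch below is an approximation and ignores any edge handling the function may apply; the duration and overlap values are illustrative.
% Estimate the number of embedding frames L for a given duration and overlap.
% Constants follow the Algorithms section; exact edge handling may differ.
dur = 12;                                            % signal duration in seconds (example)
overlapPercentage = 50;                              % default OverlapPercentage
numSamples = round(dur*16e3);                        % samples after resampling to 16 kHz
numSpectra = floor((numSamples - 400)/160) + 1;      % 25 ms window, 10 ms hop
frameHop = round(96*(1 - overlapPercentage/100));    % spectra between frame starts
L = floor((numSpectra - 96)/frameHop) + 1            % 96 spectra per frame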
Algorithms
The vggishFeatures function uses VGGish to extract feature embeddings from audio. The vggishFeatures function preprocesses the audio so that it is in the format required by VGGish and optionally postprocesses the embeddings.
Preprocess
1. Resample audioIn to 16 kHz and cast to single precision.
2. Compute a one-sided short-time Fourier transform using a 25 ms periodic Hann window with a 10 ms hop and a 512-point DFT. The audio is now represented by a 257-by-L array, where 257 is the number of bins in the one-sided spectra, and L depends on the length of the input.
3. Convert the complex spectral values to magnitude and discard phase information.
4. Pass the one-sided magnitude spectrum through a 64-band mel-spaced filter bank, then sum the magnitudes in each band. The audio is now represented by a single 64-by-L mel spectrogram.
5. Convert the mel spectrogram to a log scale.
6. Buffer the mel spectrogram into overlapped segments consisting of 96 spectra each. The audio is now represented by a 96-by-64-by-1-by-K array, where 96 is the number of spectra in the individual mel spectrograms, 64 is the number of mel bands, and the spectrograms are spaced along the fourth dimension for compatibility with the VGGish model. The number of mel spectrograms, K, depends on the input length and OverlapPercentage.
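The following sketch mirrors these preprocessing steps for illustration only. It is not the internal implementation: the example input, the variable names, the filter bank design choices, and the 0.01 log offset are assumptions.
% Illustrative preprocessing sketch (not the internal implementation).
fs0 = 44.1e3;                                 % assumed original sample rate
x = pinknoise(2*fs0,1,'single');              % example two-second input signal

% 1. Resample to 16 kHz and cast to single precision.
[p,q] = rat(16e3/fs0);
x = single(resample(double(x),p,q));
fs = 16e3;

% 2. One-sided STFT: 25 ms periodic Hann window, 10 ms hop, 512-point DFT.
win = hann(round(0.025*fs),'periodic');       % 400-sample window
hop = round(0.010*fs);                        % 160-sample hop
S = spectrogram(x,win,numel(win)-hop,512);    % 257-by-numSpectra complex array

% 3. Convert to magnitude and discard phase.
S = abs(S);

% 4. Apply a 64-band mel-spaced filter bank and sum within each band.
fb = designAuditoryFilterBank(fs,'FrequencyScale','mel', ...
    'FFTLength',512,'NumBands',64);
melSpec = fb*S;                               % 64-by-numSpectra mel spectrogram

% 5. Convert to a log scale (the 0.01 offset is an assumption to avoid log(0)).
melSpec = log(melSpec + 0.01);

% 6. Buffer into overlapped 96-spectrum segments (50% overlap shown).
segLength = 96;
segHop = segLength/2;
numSegments = floor((size(melSpec,2) - segLength)/segHop) + 1;
segments = zeros(segLength,64,1,numSegments,'single');
for k = 1:numSegments
    idx = (k-1)*segHop + (1:segLength);
    segments(:,:,1,k) = melSpec(:,idx).';     % 96-by-64-by-1-by-numSegments array
end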
Feature Extraction
Pass the 96-by-64-by-1-by-K array of mel spectrograms through VGGish to return a K-by-128 matrix. The output from VGGish is the set of feature embeddings corresponding to each 0.975 s frame of audio data.
Postprocess
If ApplyPCA is set to true, the feature embeddings are postprocessed to match the postprocessing of the released AudioSet embeddings. The VGGish model was released with a precomputed principal component analysis (PCA) matrix and mean vector to apply a PCA transformation and whitening during inference. The postprocessing includes applying PCA, whitening, and quantization.
1. Subtract the precomputed 1-by-128 PCA mean vector from the K-by-128 feature matrix, and then premultiply the result by the precomputed 128-by-128 PCA matrix.
2. Clip the transformed and whitened embeddings to between –2 and 2, then quantize the result to values that can be represented by uint8.
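A minimal sketch of this postprocessing is shown below. The pcaMean and pcaMatrix placeholders stand in for the precomputed AudioSet parameters that ship with the VGGish model, the embeddings variable is a random stand-in for raw VGGish output, and the exact uint8 scaling is an assumption; this is not the function's internal code.
% Illustrative postprocessing sketch with placeholder PCA parameters.
embeddings = randn(23,128,'single');          % stand-in for a K-by-128 VGGish output
pcaMean = zeros(1,128,'single');              % placeholder for the precomputed mean vector
pcaMatrix = eye(128,'single');                % placeholder for the precomputed PCA matrix

% 1. Subtract the PCA mean, then apply the PCA (whitening) matrix. Working with
%    row vectors, multiplying by pcaMatrix.' is equivalent to premultiplying the
%    transposed, centered embeddings by pcaMatrix.
postprocessed = (embeddings - pcaMean)*pcaMatrix.';

% 2. Clip to [-2, 2], then quantize to the uint8 range (scaling is an assumption).
postprocessed = min(max(postprocessed,-2),2);
quantized = uint8(round(255*(postprocessed + 2)/4));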
References
[1] Gemmeke, Jort F., Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. "Audio Set: An Ontology and Human-Labeled Dataset for Audio Events." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 776–80. New Orleans, LA: IEEE. https://doi.org/10.1109/icassp.2017.7952261.
[2] Hershey, Shawn, Sourish Chaudhuri, Daniel P. W. Ellis, Jort F. Gemmeke, Aren Jansen, R. Channing Moore, Manoj Plakal, et al. 2017. "CNN Architectures for Large-Scale Audio Classification." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 131–35. New Orleans, LA: IEEE. https://doi.org/10.1109/icassp.2017.7952132.
Extended Capabilities
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
This function fully supports GPU arrays. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2020b
R2022a: vggishFeatures will be removed
The vggishFeatures function will be removed in a future release. Use vggishEmbeddings instead. Existing calls to vggishFeatures continue to run.