
Train Speech Command Recognition Model Using Deep Learning

This example shows how to train a deep learning model that detects the presence of speech commands in audio. The example uses the Speech Commands Dataset [1] to train a convolutional neural network to recognize a set of commands.

To use a pretrained speech command recognition system, see Speech Command Recognition Using Deep Learning (Audio Toolbox).

To run the example quickly, set speedupExample to true. To run the full example as published, set speedupExample to false.

speedupExample = false;

Set the random seed for reproducibility.

rng default

Load Data

This example uses the Google Speech Commands Dataset [1]. Download and unzip the data set.

downloadFolder = matlab.internal.examples.downloadSupportFile("audio","google_speech.zip");
dataFolder = tempdir;
unzip(downloadFolder,dataFolder)
dataset = fullfile(dataFolder,"google_speech");

Augment Data

The network should be able to not only recognize different spoken words but also to detect whether the audio input is silence or background noise.

The supporting function augmentDataset uses the long audio files in the background folder of the Google Speech Commands Dataset to create one-second segments of background noise. The function creates an equal number of background segments from each background noise file and then splits the segments between the train and validation folders.

augmentDataset(dataset)
Progress = 17 (%)
Progress = 33 (%)
Progress = 50 (%)
Progress = 67 (%)
Progress = 83 (%)
Progress = 100 (%)

Create Training Datastore

Create an audioDatastore (Audio Toolbox) that points to the training data set.

ads = audiodatastore(fullfile(dataset,"train"), ...
    includesubfolders=true, ...
    fileextensions=".wav", ...
    labelsource="foldernames");

Specify the words that you want your model to recognize as commands. Label all files that are not commands or background noise as unknown. Labeling words that are not commands as unknown creates a group of words that approximates the distribution of all words other than the commands. The network uses this group to learn the differences between commands and all other words.

To reduce the class imbalance between the known and unknown words and speed up processing, include only a fraction of the unknown words in the training set.

Use subset (Audio Toolbox) to create a datastore that contains only the commands, the background noise, and the subset of unknown words. Count the number of examples belonging to each category (a sketch using countEachLabel follows the code below).

commands = categorical(["yes","no","up","down","left","right","on","off","stop","go"]);
background = categorical("background");
iscommand = ismember(ads.labels,commands);
isbackground = ismember(ads.labels,background);
isunknown = ~(iscommand|isbackground);
includefraction = 0.2; % fraction of unknowns to include.
idx = find(isunknown);
idx = idx(randperm(numel(idx),round((1-includefraction)*sum(isunknown))));
isunknown(idx) = false;
ads.labels(isunknown) = categorical("unknown");
adstrain = subset(ads,iscommand|isunknown|isbackground);
adstrain.labels = removecats(adstrain.labels);
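
To count the number of examples belonging to each category, as mentioned above, you can use countEachLabel (Audio Toolbox), which returns a table of labels and their counts. A minimal sketch:

% Count the number of training examples belonging to each category.
countEachLabel(adsTrain)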

Create Validation Datastore

Create an audioDatastore (Audio Toolbox) that points to the validation data set. Follow the same steps used to create the training datastore.

ads = audiodatastore(fullfile(dataset,"validation"), ...
    includesubfolders=true, ...
    fileextensions=".wav", ...
    labelsource="foldernames");
iscommand = ismember(ads.labels,commands);
isbackground = ismember(ads.labels,background);
isunknown = ~(iscommand|isbackground);
includefraction = 0.2; % fraction of unknowns to include.
idx = find(isunknown);
idx = idx(randperm(numel(idx),round((1-includefraction)*sum(isunknown))));
isunknown(idx) = false;
ads.labels(isunknown) = categorical("unknown");
adsvalidation = subset(ads,iscommand|isunknown|isbackground);
adsvalidation.labels = removecats(adsvalidation.labels);

Visualize the distribution of the training and validation labels.

figure(units="normalized",position=[0.2,0.2,0.5,0.5])
tiledlayout(2,1)
nexttile
histogram(adstrain.labels)
title("training label distribution")
ylabel("number of observations")
grid on
nexttile
histogram(adsvalidation.labels)
title("validation label distribution")
ylabel("number of observations")
grid on

If requested, speed up the example by reducing the data set.

if speedupExample
    numUniqueLabels = numel(unique(adsTrain.Labels)); %#ok
    % Reduce the data set by a factor of 20
    adsTrain = splitEachLabel(adsTrain,round(numel(adsTrain.Files) / numUniqueLabels / 20));
    adsValidation = splitEachLabel(adsValidation,round(numel(adsValidation.Files) / numUniqueLabels / 20));
end

Prepare Data for Training

To prepare the data for efficient training of a convolutional neural network, convert the speech waveforms to auditory-based spectrograms.

To speed up processing, you can distribute the feature extraction across multiple workers. Start a parallel pool if you have access to Parallel Computing Toolbox™.

if canUseParallelPool && ~speedupExample
    useParallel = true;
    gcp;
else
    useParallel = false;
end
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 6).

Extract Features

Define the parameters used to extract auditory spectrograms from the audio input. segmentDuration is the duration of each speech clip in seconds. frameDuration is the duration of each frame for spectrum calculation. hopDuration is the time step between each spectrum. numBands is the number of filters in the auditory spectrogram.

fs = 16e3; % Known sample rate of the data set.
segmentDuration = 1;
frameDuration = 0.025;
hopDuration = 0.010;
fftLength = 512;
numBands = 50;
segmentSamples = round(segmentDuration*fs);
frameSamples = round(frameDuration*fs);
hopSamples = round(hopDuration*fs);
overlapSamples = frameSamples - hopSamples;
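
These parameters imply 98 auditory spectra per one-second segment, which matches the spectrogram size reported later. A quick sanity check (numHopsPerSegment is a name introduced here only for illustration):

% With a 400-sample window, 240-sample overlap, and 160-sample hop, each
% 16000-sample segment yields floor((16000-240)/160) = 98 spectra.
numHopsPerSegment = floor((segmentSamples - overlapSamples)/hopSamples)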

Create an audioFeatureExtractor (Audio Toolbox) object to perform the feature extraction.

afe = audioFeatureExtractor( ...
    SampleRate=fs, ...
    FFTLength=fftLength, ...
    Window=hann(frameSamples,"periodic"), ...
    OverlapLength=overlapSamples, ...
    barkSpectrum=true);
setExtractorParameters(afe,"barkSpectrum",NumBands=numBands,WindowNormalization=false);

Define a series of audioDatastore (Audio Toolbox) transforms that pad the audio to a consistent length, extract the features, and then apply a logarithm.

transform1 = transform(adsTrain,@(x)[zeros(floor((segmentSamples-size(x,1))/2),1);x;zeros(ceil((segmentSamples-size(x,1))/2),1)]);
transform2 = transform(transform1,@(x)extract(afe,x));
transform3 = transform(transform2,@(x){log10(x+1e-6)});
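
Before reading the entire data set, you can sanity-check the transform chain by previewing a single transformed example. preview returns the first observation with all transforms applied, here a 1-by-1 cell containing one log auditory spectrogram:

% Preview one transformed example and inspect its size (expected 98-by-50).
spec = preview(transform3);
size(spec{1})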

Read all the data from the datastore by using the readall (Audio Toolbox) function. As each file is read, it is passed through the transforms before the data is returned.

XTrain = readall(transform3,UseParallel=useParallel);

The output is a numFiles-by-1 cell array. Each element of the cell array corresponds to the auditory spectrogram extracted from one file.

numFiles = numel(XTrain)
numFiles = 28463
[numHops,numBands,numChannels] = size(XTrain{1})
numHops = 98
numBands = 50
numChannels = 1

Convert the cell array to a 4-dimensional array with the auditory spectrograms along the fourth dimension.

XTrain = cat(4,XTrain{:});
[numHops,numBands,numChannels,numFiles] = size(XTrain)
numHops = 98
numBands = 50
numChannels = 1
numFiles = 28463

Perform the feature extraction steps described above on the validation set.

transform1 = transform(adsValidation,@(x)[zeros(floor((segmentSamples-size(x,1))/2),1);x;zeros(ceil((segmentSamples-size(x,1))/2),1)]);
transform2 = transform(transform1,@(x)extract(afe,x));
transform3 = transform(transform2,@(x){log10(x+1e-6)});
XValidation = readall(transform3,UseParallel=useParallel);
XValidation = cat(4,XValidation{:});

Isolate the training and validation target labels for convenience.

TTrain = adsTrain.Labels;
TValidation = adsValidation.Labels;

Visualize Data

Plot the waveforms and auditory spectrograms of a few training samples. Play the corresponding audio clips.

specMin = min(XTrain,[],"all");
specMax = max(XTrain,[],"all");
idx = randperm(numel(adsTrain.Files),3);
figure(Units="normalized",Position=[0.2,0.2,0.6,0.6]);
tiledlayout(2,3)
for ii = 1:3
    [x,fs] = audioread(adsTrain.Files{idx(ii)});
    nexttile(ii)
    plot(x)
    axis tight
    title(string(adsTrain.Labels(idx(ii))))
    
    nexttile(ii+3)
    spect = XTrain(:,:,1,idx(ii))';
    pcolor(spect)
    clim([specMin specMax])
    shading flat
    
    sound(x,fs)
    pause(2)
end

Define Network Architecture

Create a simple network architecture as an array of layers. Use convolutional and batch normalization layers, and downsample the feature maps "spatially" (that is, in time and frequency) using max pooling layers. Add a final max pooling layer that pools the input feature map globally over time. This enforces (approximate) time-translation invariance in the input spectrograms, allowing the network to perform the same classification independent of the exact position of the speech in time. Global pooling also significantly reduces the number of parameters in the final fully connected layer. To reduce the possibility of the network memorizing specific features of the training data, add a small amount of dropout to the input of the last fully connected layer.

The network is small because it has only five convolutional layers with few filters. numF controls the number of filters in the convolutional layers. To increase the accuracy of the network, try increasing the network depth by adding identical blocks of convolutional, batch normalization, and ReLU layers (see the sketch after the layer array below). You can also try increasing the number of convolutional filters by increasing numF.

To give each class equal total weight in the loss, use class weights that are inversely proportional to the number of training examples in each class. When using the Adam optimizer to train the network, the training algorithm is independent of the overall normalization of the class weights.

classes = categories(TTrain);
classWeights = 1./countcats(TTrain);
classWeights = classWeights'/mean(classWeights);
numClasses = numel(classes);
timePoolSize = ceil(numHops/8); % The three stride-2 pooling layers downsample time by a factor of 8.
dropoutProb = 0.2;
numF = 12;
layers = [
    imageInputLayer([numHops,afe.FeatureVectorLength])
    
    convolution2dLayer(3,numF,Padding="same")
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(3,Stride=2,Padding="same")
    
    convolution2dLayer(3,2*numF,Padding="same")
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(3,Stride=2,Padding="same")
    
    convolution2dLayer(3,4*numF,Padding="same")
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(3,Stride=2,Padding="same")
    
    convolution2dLayer(3,4*numF,Padding="same")
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(3,4*numF,Padding="same")
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer([timePoolSize,1])
    dropoutLayer(dropoutProb)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer(Classes=classes,ClassWeights=classWeights)];
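
As suggested above, one way to deepen the network is to insert another identical block of convolution, batch normalization, and ReLU layers before the global time pooling layer. The following is a hypothetical sketch, not part of the published architecture:

% Hypothetical extra block to insert before maxPooling2dLayer([timePoolSize,1]).
% Padding="same" keeps the feature map size unchanged.
extraBlock = [
    convolution2dLayer(3,4*numF,Padding="same")
    batchNormalizationLayer
    reluLayer];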

Specify Training Options

To define parameters for training, use trainingOptions. Use the Adam optimizer with a mini-batch size of 128.

miniBatchSize = 128;
validationFrequency = floor(numel(TTrain)/miniBatchSize);
options = trainingOptions("adam", ...
    InitialLearnRate=3e-4, ...
    MaxEpochs=15, ...
    MiniBatchSize=miniBatchSize, ...
    Shuffle="every-epoch", ...
    Plots="training-progress", ...
    Verbose=false, ...
    ValidationData={XValidation,TValidation}, ...
    ValidationFrequency=validationFrequency);

Train Network

To train the network, use trainNetwork. If you do not have a GPU, then training the network can take a long time.

trainedNet = trainNetwork(XTrain,TTrain,layers,options);

Evaluate Trained Network

To calculate the final accuracy of the network on the training and validation sets, use classify. The network is very accurate on this data set. However, the training, validation, and test data all have similar distributions that do not necessarily reflect real-world environments. This limitation applies in particular to the unknown category, which contains utterances of only a small number of words.

YValidation = classify(trainedNet,XValidation);
validationError = mean(YValidation ~= TValidation);
YTrain = classify(trainedNet,XTrain);
trainError = mean(YTrain ~= TTrain);
disp(["Training error: " + trainError*100 + "%";"Validation error: " + validationError*100 + "%"])
    "Training error: 2.7263%"
    "Validation error: 6.3968%"

To plot the confusion matrix for the validation set, use confusionchart. Display the precision and recall for each class by using column and row summaries.

figure(units="normalized",position=[0.2,0.2,0.5,0.5]);
cm = confusionchart(tvalidation,yvalidation, ...
    title="confusion matrix for validation data", ...
    columnsummary="column-normalized",rowsummary="row-normalized");
sortclasses(cm,[commands,"unknown","background"])

When working on applications with constrained hardware resources, such as mobile applications, it is important to consider the limits on available memory and computational resources. Compute the total size of the network in kilobytes and test its prediction speed when using a CPU. The prediction time is the time for classifying a single input image. If you feed multiple images into the network, these can be classified simultaneously, leading to shorter prediction times per image. When classifying streaming audio, however, the single-image prediction time is the most relevant.

for ii = 1:100
    x = randn([numHops,numBands]);
    predictionTimer = tic;
    [y,probs] = classify(trainedNet,x,ExecutionEnvironment="cpu");
    time(ii) = toc(predictionTimer);
end
disp(["Network size: " + whos("trainedNet").bytes/1024 + " kB"; ...
"Single-image prediction time on CPU: " + mean(time(11:end))*1000 + " ms"])
    "Network size: 292.2842 kB"
    "Single-image prediction time on CPU: 3.7237 ms"

Supporting Functions

Augment Dataset with Background Noise

function augmentDataset(datasetloc)
adsBkg = audioDatastore(fullfile(datasetloc,"background"));
fs = 16e3; % Known sample rate of the data set
segmentDuration = 1;
segmentSamples = round(segmentDuration*fs);
volumeRange = log10([1e-4,1]);
numBkgSegments = 4000;
numBkgFiles = numel(adsBkg.Files);
numSegmentsPerFile = floor(numBkgSegments/numBkgFiles);
fpTrain = fullfile(datasetloc,"train","background");
fpValidation = fullfile(datasetloc,"validation","background");
if ~datasetExists(fpTrain)
    % Create directories
    mkdir(fpTrain)
    mkdir(fpValidation)
    for backgroundFileIndex = 1:numel(adsBkg.Files)
        [bkgFile,fileInfo] = read(adsBkg);
        [~,fn] = fileparts(fileInfo.FileName);
        % Determine the starting index of each segment
        segmentStart = randi(size(bkgFile,1)-segmentSamples,numSegmentsPerFile,1);
        % Determine the gain of each clip
        gain = 10.^((volumeRange(2)-volumeRange(1))*rand(numSegmentsPerFile,1) + volumeRange(1));
        for segmentIdx = 1:numSegmentsPerFile
            % Isolate the randomly chosen segment of data.
            bkgSegment = bkgFile(segmentStart(segmentIdx):segmentStart(segmentIdx)+segmentSamples-1);
            % Scale the segment by the specified gain.
            bkgSegment = bkgSegment*gain(segmentIdx);
            % Clip the audio between -1 and 1.
            bkgSegment = max(min(bkgSegment,1),-1);
            % Create a file name.
            afn = fn + "_segment" + segmentIdx + ".wav";
            % Randomly assign the background segment to either the train or
            % validation set.
            if rand > 0.85 % Assign 15% to validation
                dirToWriteTo = fpValidation;
            else % Assign 85% to train set.
                dirToWriteTo = fpTrain;
            end
            % Write the audio to the file location.
            ffn = fullfile(dirToWriteTo,afn);
            audiowrite(ffn,bkgSegment,fs)
        end
        % Print progress
        fprintf('Progress = %d (%%)\n',round(100*progress(adsBkg)))
    end
end
end


References

[1] Warden P. "Speech Commands: A public dataset for single-word speech recognition," 2017. Available from Google. Copyright Google 2017. The Speech Commands Dataset is licensed under the Creative Commons Attribution 4.0 license.
