
Speaker Diarization Using x-vectors

Speaker diarization is the process of partitioning an audio signal into segments according to speaker identity. It answers the question "who spoke when" without prior knowledge of the speakers and, depending on the application, without prior knowledge of the number of speakers.

Speaker diarization has many applications, including: enhancing speech transcription by structuring text according to active speaker, video captioning, content retrieval (what did Jane say?), and speaker counting (how many speakers were present in the meeting?).
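To make the goal concrete, the sketch below builds a hypothetical "who spoke when" table of the kind a diarization system produces. The times and labels are made up for illustration and are not taken from this example's data.

startTime = [0.0; 2.4; 5.1];            % hypothetical segment start times (s)
endTime   = [2.4; 5.1; 7.8];            % hypothetical segment end times (s)
speaker   = categorical(["A";"B";"A"]); % hypothetical speaker label per segment
disp(table(startTime,endTime,speaker))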

In this example, you perform speaker diarization using a pretrained x-vector system [1] to characterize regions of audio and agglomerative hierarchical clustering (AHC) to group similar regions of audio [2]. To see how the x-vector system was defined and trained, see the Speaker Recognition Using x-vectors example.

Download Pretrained Speaker Diarization System

Download the pretrained speaker diarization system and supporting files. The total size is approximately 22 MB.

downloadFolder = matlab.internal.examples.downloadSupportFile("audio","SpeakerDiarization.zip");
dataFolder = tempdir;
unzip(downloadFolder,dataFolder)
netFolder = fullfile(dataFolder,"SpeakerDiarization");
addpath(netFolder)

Load an audio signal and a table containing ground truth annotations. The signal contains speech from five speakers. Listen to the audio signal and plot its time-domain waveform.

[audioIn,fs] = audioread("exampleconversation.flac");
load("exampleconversationlabels.mat")
audioIn = audioIn./max(abs(audioIn));
sound(audioIn,fs)
t = (0:size(audioIn,1)-1)/fs;
figure(1)
plot(t,audioIn)
xlabel("Time (s)")
ylabel("Amplitude")
axis tight

Extract x-vectors

In this example, you use a pretrained x-vector system based on [1]. To see how the x-vector system was defined and trained, see the Speaker Recognition Using x-vectors example.

Load Pretrained x-vector System

Load the lightweight pretrained x-vector system. The x-vector system consists of:

  • afe - An audioFeatureExtractor object to extract mel frequency cepstral coefficients (MFCCs).

  • factors - A struct containing the mean and standard deviation of MFCCs determined from a representative data set. These factors are used to standardize the MFCCs.

  • dlnet - A trained dlnetwork. The network is used to extract x-vectors from the MFCCs.

  • projMat - A trained projection matrix to reduce the dimensionality of x-vectors.

  • plda - A trained PLDA model for scoring x-vectors.

xvecsys = load("xvectorsystem.mat");
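Display the loaded struct to confirm that it contains the components listed above.

disp(xvecsys)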

Extract Standardized Acoustic Features

Extract standardized MFCC features from the audio data. View the feature distributions to confirm that the standardization factors learned from a separate data set approximately standardize the features derived in this example. A standardized distribution has a mean of 0 and a standard deviation of 1.

features = single((extract(xvecsys.afe,audioIn)-xvecsys.factors.Mean')./xvecsys.factors.STD');
figure(2)
histogram(features)
xlabel("Standardized MFCC")

Extract x-vectors

Each acoustic feature vector represents approximately 0.01 seconds of audio data. Group the features into approximately 2-second segments with 0.1-second hops between segments.

featureVectorHopDur = (numel(xvecsys.afe.Window) - xvecsys.afe.OverlapLength)/xvecsys.afe.SampleRate;
segmentDur = 2;
segmentHopDur = 0.1;
segmentLength = round(segmentDur/featureVectorHopDur);
segmentHop = round(segmentHopDur/featureVectorHopDur);
idx = 1:segmentLength;
featuresSegmented = [];
while idx(end) < size(features,1)
    featuresSegmented = cat(3,featuresSegmented,features(idx,:));
    idx = idx + segmentHop;
end
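As a quick sanity check, with the approximately 0.01-second feature hop noted above, a 2-second segment should span roughly 200 feature vectors and a 0.1-second hop roughly 10 feature vectors.

% Expect segmentLength near 200 and segmentHop near 10 for a ~10 ms feature hop.
[segmentLength,segmentHop]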

Extract x-vectors from each segment. x-vectors correspond to the output from the first fully connected layer in the x-vector model trained in the Speaker Recognition Using x-vectors example. The first fully connected layer is the first segment-level layer, after statistics are calculated for the time-dilated frame-level layers. Visualize the x-vectors over time.

xvecs = zeros(512,size(featuresSegmented,3));
for sample = 1:size(featuresSegmented,3)
    dlX = dlarray(featuresSegmented(:,:,sample),"TCB");
    xvecs(:,sample) = predict(xvecsys.dlnet,dlX,Outputs="fc_1");
end
figure(3)
surf(xvecs',EdgeColor="none")
view([90,-90])
axis([1 size(xvecs,1) 1 size(xvecs,2)])
xlabel("Features")
ylabel("Segment")

Apply the pretrained linear discriminant analysis (LDA) projection matrix to reduce the dimensionality of the x-vectors, and then visualize the x-vectors over time.

x = xvecsys.projMat*xvecs;
figure(4)
surf(x',EdgeColor="none")
view([90,-90])
axis([1 size(x,1) 1 size(x,2)])
xlabel("Features")
ylabel("Segment")

Cluster x-vectors

An x-vector system learns to extract compact representations (x-vectors) of speakers. Cluster the x-vectors to group similar regions of audio using either agglomerative hierarchical clustering (clusterdata (Statistics and Machine Learning Toolbox)) or k-means clustering (kmeans (Statistics and Machine Learning Toolbox)). [2] suggests using agglomerative hierarchical clustering with PLDA scoring as the distance measurement. k-means clustering using a cosine similarity score is also commonly used. Assume prior knowledge of the number of speakers in the audio. Set the maximum number of clusters to the number of known speakers + 1 so that the background is clustered independently.

knownNumberOfSpeakers = numel(unique(groundTruth.Label));
maxclusters = knownNumberOfSpeakers + 1;
clusterMethod = "agglomerative - PLDA scoring";
switch clusterMethod
    case "agglomerative - PLDA scoring"
        T = clusterdata(x',Criterion="distance",Distance=@(a,b)helperPLDAScorer(a,b,xvecsys.plda),Linkage="average",MaxClust=maxclusters);
    case "agglomerative - CSS scoring"
        T = clusterdata(x',Criterion="distance",Distance="cosine",Linkage="average",MaxClust=maxclusters);
    case "kmeans - CSS scoring"
        T = kmeans(x',maxclusters,Distance="cosine");
end
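To inspect how agglomerative clustering merges segments before committing to a cluster count, you can plot a dendrogram of the linkage tree. This optional sketch uses cosine distance for simplicity; the PLDA scorer used above could instead be passed to pdist as a function handle.

% Optional: visualize the cluster hierarchy (Statistics and Machine Learning Toolbox).
D = pdist(x',"cosine");   % pairwise distances between segment x-vectors
Z = linkage(D,"average"); % average-linkage cluster tree
figure
dendrogram(Z)
xlabel("Segment")
ylabel("Cosine Distance")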

Plot the cluster decisions over time.

figure(5)
tiledlayout(2,1)
nexttile
plot(t,audioIn)
axis tight
ylabel("Amplitude")
xlabel("Time (s)")
nexttile
plot(T)
axis tight
ylabel("Cluster Index")
xlabel("Segment")

To isolate the segments of speech corresponding to clusters, map the segments back to audio samples. Plot the results.

mask = zeros(size(audioIn,1),1);
start = round((segmentDur/2)*fs);
segmentHopSamples = round(segmentHopDur*fs);
mask(1:start) = T(1);
start = start + 1;
for ii = 1:numel(T)
    finish = start + segmentHopSamples;
    mask(start:start+segmentHopSamples) = T(ii);
    start = finish + 1;
end
mask(finish:end) = T(end);
figure(6)
tiledlayout(2,1)
nexttile
plot(t,audioIn)
axis tight
nexttile
plot(t,mask)
ylabel("Cluster Index")
axis tight
xlabel("Time (s)")

Use detectSpeech to determine speech regions. Use sigroi2binmask to convert the speech regions to a binary voice activity detection (VAD) mask. Call detectSpeech a second time without any output arguments to plot the detected speech regions.

mergeDuration = 0.5;
vadIdx = detectSpeech(audioIn,fs,MergeDistance=fs*mergeDuration);
vadMask = sigroi2binmask(vadIdx,numel(audioIn));
figure(7)
detectSpeech(audioIn,fs,MergeDistance=fs*mergeDuration)

Apply the VAD mask to the speaker mask and plot the results. A cluster index of 0 indicates a region of no speech.

mask = mask.*vadMask;
figure(8)
tiledlayout(2,1)
nexttile
plot(t,audioIn)
axis tight
nexttile
plot(t,mask)
ylabel("Cluster Index")
axis tight
xlabel("Time (s)")

In this example, you assume each detected speech region belongs to a single speaker. If more than one label is present in a speech region, merge the labels to the most frequently occurring label.

maskLabels = zeros(size(vadIdx,1),1);
for ii = 1:size(vadIdx,1)
    maskLabels(ii) = mode(mask(vadIdx(ii,1):vadIdx(ii,2)),"all");
    mask(vadIdx(ii,1):vadIdx(ii,2)) = maskLabels(ii);
end
figure(9)
tiledlayout(2,1)
nexttile
plot(t,audioIn)
axis tight
nexttile
plot(t,mask)
ylabel("Cluster Index")
axis tight
xlabel("Time (s)")

Count the number of remaining speaker clusters.

uniqueSpeakerClusters = unique(maskLabels);
numSpeakers = numel(uniqueSpeakerClusters)
numSpeakers = 5

Visualize Diarization Results

Create a signalMask object and then plot the speaker clusters. Label the plot with the ground truth labels. The cluster labels are color coded with a key on the right of the plot. The true labels are printed above the plot.

msk = signalMask(table(vadIdx,categorical(maskLabels)));
figure(10)
plotsigroi(msk,audioIn,true)
axis([0 numel(audioIn) -1 1])
trueLabel = groundTruth.Label;
for ii = 1:numel(trueLabel)
    text(vadIdx(ii,1),1.1,trueLabel(ii),FontWeight="bold")
end

Choose a cluster to inspect and then use binmask to isolate the speaker. Plot the isolated speech signal and listen to the speaker cluster.

speakerToInspect = 2;
cutOutSilenceFromAudio = true;
bmsk = binmask(msk,numel(audioIn));
audioToPlay = audioIn;
if cutOutSilenceFromAudio
    audioToPlay(~bmsk(:,speakerToInspect)) = [];
end
sound(audioToPlay,fs)
figure(11)
tiledlayout(2,1)
nexttile
plot(t,audioIn)
axis tight
ylabel("Amplitude")
nexttile
plot(t,audioIn.*bmsk(:,speakerToInspect))
axis tight
xlabel("Time (s)")
ylabel("Amplitude")
title("Speaker Group "+speakerToInspect)

Diarization System Evaluation

The common metric for speaker diarization systems is the diarization error rate (DER). The DER is the sum of the miss rate (classifying speech as non-speech), the false alarm rate (classifying non-speech as speech), and the speaker error rate (confusing one speaker's speech for another).

In this simple example, the miss rate and false alarm rate are trivial problems. You evaluate the speaker error rate only.
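For reference, the sketch below shows the DER arithmetic as a ratio of error duration to total speech duration. The durations are hypothetical, chosen only to illustrate how the components combine.

% Hypothetical durations in seconds, for illustration only.
missDur        = 1.0; % speech classified as non-speech
falseAlarmDur  = 0.5; % non-speech classified as speech
speakerErrDur  = 2.0; % speech attributed to the wrong speaker
totalSpeechDur = 60;  % total ground-truth speech duration
DER = (missDur + falseAlarmDur + speakerErrDur)/totalSpeechDur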

Map each true speaker to the corresponding best-fitting speaker cluster. To determine the speaker error rate, count the number of mismatches between the true speakers and the best-fitting speaker clusters, and then divide by the number of true speaker regions.

uniqueLabels = unique(trueLabel);
guessLabels = maskLabels;
uniqueGuessLabels = unique(guessLabels);
totalNumErrors = 0;
for ii = 1:numel(uniqueLabels)
    isSpeaker = uniqueLabels(ii)==trueLabel;
    minNumErrors = inf;
    
    for jj = 1:numel(uniqueGuessLabels)
        groupCandidate = uniqueGuessLabels(jj)==guessLabels;
        numErrors = nnz(isSpeaker - groupCandidate);
        if numErrors < minNumErrors
            minNumErrors = numErrors;
            bestCandidate = jj;
        end
    end
    uniqueGuessLabels(bestCandidate) = [];
    totalNumErrors = totalNumErrors + minNumErrors;
    if isempty(uniqueGuessLabels)
        break
    end
end
speakerErrorRate = totalNumErrors/numel(trueLabel)
speakerErrorRate = 0

References

[1] Snyder, David, et al. "X-Vectors: Robust DNN Embeddings for Speaker Recognition." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2018, pp. 5329-5333. DOI: 10.1109/ICASSP.2018.8461375.

[2] Sell, G., Snyder, D., McCree, A., Garcia-Romero, D., Villalba, J., Maciejewski, M., Manohar, V., Dehak, N., Povey, D., Watanabe, S., Khudanpur, S. (2018). "Diarization Is Hard: Some Experiences and Lessons Learned for the JHU Team in the Inaugural DIHARD Challenge." Proc. Interspeech 2018, 2808-2812. DOI: 10.21437/Interspeech.2018-1893.
