Compare LDA Solvers
This example shows how to compare latent Dirichlet allocation (LDA) solvers by comparing the goodness of fit and the time taken to fit the model.
Import Text Data
Import a set of abstracts and category labels from math papers using the arXiv API. Specify the number of records to import using the importSize variable.
importSize = 50000;
Create a URL that queries records with set "math" and metadata prefix "arXiv".
url = "https://export.arxiv.org/oai2?verb=ListRecords" + ...
    "&set=math" + ...
    "&metadataPrefix=arXiv";
Extract the abstract text and the resumption token returned by the query URL using the parseArXivRecords function, which is attached to this example as a supporting file. To access this file, open this example as a live script. Note that the arXiv API is rate limited and requires waiting between multiple requests.
[textData,~,resumptionToken] = parseArXivRecords(url);
Iteratively import more chunks of records until the required amount is reached, or there are no more records. To continue importing records from where you left off, use the resumption token from the previous result in the query URL. To adhere to the rate limits imposed by the arXiv API, add a delay of 20 seconds before each query using the pause function.
while numel(textData) < importSize
    if resumptionToken == ""
        break
    end

    url = "https://export.arxiv.org/oai2?verb=ListRecords" + ...
        "&resumptionToken=" + resumptionToken;

    pause(20)
    [textDataNew,labelsNew,resumptionToken] = parseArXivRecords(url);

    textData = [textData; textDataNew];
end
Preprocess Text Data
Set aside 10% of the documents at random for validation.
numDocuments = numel(textData);
cvp = cvpartition(numDocuments,'HoldOut',0.1);
textDataTrain = textData(training(cvp));
textDataValidation = textData(test(cvp));
Tokenize and preprocess the text data using the function preprocessText, which is listed at the end of this example.
documentsTrain = preprocessText(textDataTrain);
documentsValidation = preprocessText(textDataValidation);
Create a bag-of-words model from the training documents. Remove the words that do not appear more than two times in total. Remove any documents containing no words.
bag = bagOfWords(documentsTrain);
bag = removeInfrequentWords(bag,2);
bag = removeEmptyDocuments(bag);
For the validation data, create a bag-of-words model from the validation documents. You do not need to remove any words from the validation data because any words that do not appear in the fitted LDA models are automatically ignored.
validationData = bagOfWords(documentsValidation);
Fit and Compare Models
For each of the LDA solvers, fit a model with 40 topics. To distinguish the solvers when plotting the results on the same axes, specify different line properties for each solver.
numTopics = 40;
solvers = ["cgs" "avb" "cvb0" "savb"];
lineSpecs = ["+-" "*-" "x-" "o-"];
Fit an LDA model using each solver. For each solver, set the initial topic concentration to 1, validate the model once per data pass, and do not fit the topic concentration parameter. Using the data in the FitInfo property of the fitted LDA models, plot the validation perplexity and the time elapsed.
By default, the stochastic solver uses a mini-batch size of 1000 and validates the model every 10 iterations. For this solver, to validate the model once per data pass, set the validation frequency to ceil(numObservations/1000), where numObservations is the number of documents in the training data. For the other solvers, set the validation frequency to 1.
For the iterations in which the stochastic solver does not evaluate the validation perplexity, it reports NaN in the FitInfo property. To plot the validation perplexity, remove the NaNs from the reported values.
numObservations = bag.NumDocuments;

figure
for i = 1:numel(solvers)
    solver = solvers(i);
    lineSpec = lineSpecs(i);

    if solver == "savb"
        numIterationsPerDataPass = ceil(numObservations/1000);
    else
        numIterationsPerDataPass = 1;
    end

    mdl = fitlda(bag,numTopics, ...
        'Solver',solver, ...
        'InitialTopicConcentration',1, ...
        'FitTopicConcentration',false, ...
        'ValidationData',validationData, ...
        'ValidationFrequency',numIterationsPerDataPass, ...
        'Verbose',0);

    history = mdl.FitInfo.History;
    timeElapsed = history.TimeSinceStart;
    validationPerplexity = history.ValidationPerplexity;

    % Remove NaNs.
    idx = isnan(validationPerplexity);
    timeElapsed(idx) = [];
    validationPerplexity(idx) = [];

    plot(timeElapsed,validationPerplexity,lineSpec)
    hold on
end
hold off

xlabel("Time Elapsed (s)")
ylabel("Validation Perplexity")
ylim([0 inf])
legend(solvers)
For the stochastic solver, there is only one data point because this solver passes through the input data once. To specify more data passes, use the 'DataPassLimit' option. For the batch solvers ("cgs", "avb", and "cvb0"), to specify the number of iterations used to fit the models, use the 'IterationLimit' option.
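For instance, here is a minimal sketch of how you might adjust these limits. The values 5 and 200 are illustrative assumptions, not recommendations, and the code reuses the bag, numTopics, and validationData variables defined above.
% Sketch: give the stochastic solver several passes over the data
% (the limit of 5 passes is an assumed value for illustration).
mdlStochastic = fitlda(bag,numTopics, ...
    'Solver',"savb", ...
    'DataPassLimit',5, ...
    'ValidationData',validationData, ...
    'Verbose',0);

% Sketch: cap the number of fitting iterations for a batch solver
% (the limit of 200 iterations is likewise an assumed value).
mdlBatch = fitlda(bag,numTopics, ...
    'Solver',"cvb0", ...
    'IterationLimit',200, ...
    'ValidationData',validationData, ...
    'Verbose',0);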
A lower validation perplexity suggests a better fit. Usually, the "savb" and "cgs" solvers converge quickly to a good fit. The "cvb0" solver might converge to a better fit, but it can take much longer to converge.
For the FitInfo property, the fitlda function estimates the validation perplexity from the document probabilities at the maximum likelihood estimates of the per-document topic probabilities. This is usually quicker to compute, but it can be less accurate than other methods. Alternatively, calculate the validation perplexity using the logp function. This function calculates more accurate values, but it can take longer to run. For more information, see the documentation for logp.
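As a rough sketch of that alternative, assuming a fitted model mdl from the loop above and the validation bag-of-words model, logp returns the per-document log-probabilities and, as a second output, the perplexity of those documents.
% Sketch: compute a more accurate validation perplexity with logp.
% The second output is the perplexity of the validation documents.
[logProbs,validationPerplexity] = logp(mdl,validationData);
validationPerplexity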
Preprocessing Function
The function preprocessText performs the following steps:
1. Tokenize the text using tokenizedDocument.
2. Lemmatize the words using normalizeWords.
3. Erase punctuation using erasePunctuation.
4. Remove a list of stop words (such as "and", "of", and "the") using removeStopWords.
5. Remove words with 2 or fewer characters using removeShortWords.
6. Remove words with 15 or more characters using removeLongWords.
function documents = preprocessText(textData)

% Tokenize the text.
documents = tokenizedDocument(textData);

% Lemmatize the words.
documents = addPartOfSpeechDetails(documents);
documents = normalizeWords(documents,'Style','lemma');

% Erase punctuation.
documents = erasePunctuation(documents);

% Remove a list of stop words.
documents = removeStopWords(documents);

% Remove words with 2 or fewer characters, and words with 15 or greater
% characters.
documents = removeShortWords(documents,2);
documents = removeLongWords(documents,15);

end
See Also
tokenizedDocument | removeStopWords | erasePunctuation | normalizeWords | addPartOfSpeechDetails