Classify Text Data Using Deep Learning
This example shows how to classify text data using a deep learning long short-term memory (LSTM) network.
Text data is naturally sequential. A piece of text is a sequence of words, which might have dependencies between them. To learn and use long-term dependencies to classify sequence data, use an LSTM neural network. An LSTM network is a type of recurrent neural network (RNN) that can learn long-term dependencies between time steps of sequence data.
To input text to an LSTM network, first convert the text data into numeric sequences. You can achieve this using a word encoding, which maps documents to sequences of numeric indices. For better results, also include a word embedding layer in the network. Word embeddings map words in a vocabulary to numeric vectors rather than scalar indices. These embeddings capture semantic details of the words, so that words with similar meanings have similar vectors. They also model relationships between words through vector arithmetic. For example, the relationship "Rome is to Italy as Paris is to France" is described by the equation Italy − Rome + Paris = France.
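If you have a pretrained embedding available, you can check this kind of analogy directly. The sketch below assumes the Text Analytics Toolbox Model for fastText English 16 Billion Token Word Embedding support package is installed; the exact word returned depends on the embedding.

% Requires the fastText support package (assumption for this sketch).
emb = fastTextWordEmbedding;
vec = word2vec(emb,"italy") - word2vec(emb,"rome") + word2vec(emb,"paris");
vec2word(emb,vec)   % typically returns "france"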
There are four steps in training and using the LSTM network in this example:
1. Import and preprocess the data.
2. Convert the words to numeric sequences using a word encoding.
3. Create and train an LSTM network with a word embedding layer.
4. Classify new text data using the trained LSTM network.
Import Data
Import the factory reports data. This data contains labeled textual descriptions of factory events. To import the text data as strings, specify the text type to be 'string'.

filename = "factoryReports.csv";
data = readtable(filename,'TextType','string');
head(data)
ans=8×5 table
                                Description                                         Category          Urgency          Resolution         Cost 
    _____________________________________________________________________    ____________________    ________    ____________________    _____

    "Items are occasionally getting stuck in the scanner spools."            "Mechanical Failure"    "Medium"    "Readjust Machine"         45
    "Loud rattling and banging sounds are coming from assembler pistons."    "Mechanical Failure"    "Medium"    "Readjust Machine"         35
    "There are cuts to the power when starting the plant."                   "Electronic Failure"    "High"      "Full Replacement"      16200
    "Fried capacitors in the assembler."                                     "Electronic Failure"    "High"      "Replace Components"      352
    "Mixer tripped the fuses."                                               "Electronic Failure"    "Low"       "Add to Watch List"        55
    "Burst pipe in the constructing agent is spraying coolant."              "Leak"                  "High"      "Replace Components"      371
    "A fuse is blown in the mixer."                                          "Electronic Failure"    "Low"       "Replace Components"      441
    "Things continue to tumble off of the belt."                             "Mechanical Failure"    "Low"       "Readjust Machine"         38
The goal of this example is to classify events by the label in the Category column. To divide the data into classes, convert these labels to categorical.
data.Category = categorical(data.Category);
View the distribution of the classes in the data using a histogram.
figure
histogram(data.Category);
xlabel("Class")
ylabel("Frequency")
title("Class Distribution")
The next step is to partition the data into a training partition and a held-out partition for validation and testing. Specify the holdout percentage to be 20%.
cvp = cvpartition(data.Category,'Holdout',0.2);
dataTrain = data(training(cvp),:);
dataValidation = data(test(cvp),:);
Extract the text data and labels from the partitioned tables.
textDataTrain = dataTrain.Description;
textDataValidation = dataValidation.Description;
YTrain = dataTrain.Category;
YValidation = dataValidation.Category;
To check that you have imported the data correctly, visualize the training text data using a word cloud.
figure
wordcloud(textDataTrain);
title("Training Data")
Preprocess Text Data
Create a function that tokenizes and preprocesses the text data. The function preprocessText, listed at the end of the example, performs these steps:
1. Tokenize the text using tokenizedDocument.
2. Convert the text to lowercase using lower.
3. Erase the punctuation using erasePunctuation.
Preprocess the training data and the validation data using the preprocessText function.

documentsTrain = preprocessText(textDataTrain);
documentsValidation = preprocessText(textDataValidation);
View the first few preprocessed training documents.
documentsTrain(1:5)
ans = 
  5×1 tokenizedDocument:

     9 tokens: items are occasionally getting stuck in the scanner spools
    10 tokens: loud rattling and banging sounds are coming from assembler pistons
    10 tokens: there are cuts to the power when starting the plant
     5 tokens: fried capacitors in the assembler
     4 tokens: mixer tripped the fuses
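Depending on your data, you might experiment with further preprocessing. For example, a possible variation (not used in this example) is to remove stop words such as "the" and "are" with removeStopWords:

% Optional variation, not part of this example's pipeline.
documentsNoStops = removeStopWords(documentsTrain);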
Convert Documents to Sequences
To input the documents into an LSTM network, use a word encoding to convert the documents into sequences of numeric indices.
To create a word encoding, use the wordEncoding function.

enc = wordEncoding(documentsTrain);
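To get a feel for the encoding, you can inspect its vocabulary size and look up the index assigned to a word. This is an optional check, assuming the word you query appears in the training vocabulary.

enc.NumWords            % number of words in the vocabulary
word2ind(enc,"mixer")   % numeric index assigned to a word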
The next conversion step is to pad and truncate documents so they are all the same length. The trainingOptions function provides options to pad and truncate input sequences automatically. However, these options are not well suited for sequences of word vectors. Instead, pad and truncate the sequences manually. If you left-pad and truncate the sequences of word vectors, then the training might improve.

To pad and truncate the documents, first choose a target length, and then truncate documents that are longer than it and left-pad documents that are shorter than it. For best results, the target length should be short without discarding large amounts of data. To find a suitable target length, view a histogram of the training document lengths.
documentLengths = doclength(documentsTrain);
figure
histogram(documentLengths)
title("Document Lengths")
xlabel("Length")
ylabel("Number of Documents")
Most of the training documents have fewer than 10 tokens. Use this as your target length for truncation and padding.
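Instead of reading the histogram by eye, you could also pick the target length programmatically, for example as the 90th percentile of the document lengths. This is an alternative heuristic, not the method used in this example.

% Alternative heuristic (assumption, not used below): length covering
% 90% of the training documents.
targetLength = ceil(prctile(documentLengths,90))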
Convert the documents to sequences of numeric indices using doc2sequence. To truncate or left-pad the sequences to have length 10, set the 'Length' option to 10.
sequenceLength = 10;
XTrain = doc2sequence(enc,documentsTrain,'Length',sequenceLength);
XTrain(1:5)
ans=5×1 cell array
{1×10 double}
{1×10 double}
{1×10 double}
{1×10 double}
{1×10 double}
Convert the validation documents to sequences using the same options.
XValidation = doc2sequence(enc,documentsValidation,'Length',sequenceLength);
Create and Train LSTM Network
Define the LSTM network architecture. To input sequence data into the network, include a sequence input layer and set the input size to 1. Next, include a word embedding layer of dimension 50 and the same number of words as the word encoding. Next, include an LSTM layer and set the number of hidden units to 80. To use the LSTM layer for a sequence-to-label classification problem, set the output mode to 'last'. Finally, add a fully connected layer with the same size as the number of classes, a softmax layer, and a classification layer.
inputSize = 1;
embeddingDimension = 50;
numHiddenUnits = 80;

numWords = enc.NumWords;
numClasses = numel(categories(YTrain));

layers = [ ...
    sequenceInputLayer(inputSize)
    wordEmbeddingLayer(embeddingDimension,numWords)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
layers = 
  6x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 1 dimensions
     2   ''   Word Embedding Layer    Word embedding layer with 50 dimensions and 423 unique words
     3   ''   LSTM                    LSTM with 80 hidden units
     4   ''   Fully Connected         4 fully connected layer
     5   ''   Softmax                 softmax
     6   ''   Classification Output   crossentropyex
Specify Training Options
Specify the training options:
- Train using the Adam solver.
- Specify a mini-batch size of 16.
- Shuffle the data every epoch.
- Monitor the training progress by setting the 'Plots' option to 'training-progress'.
- Specify the validation data using the 'ValidationData' option.
- Suppress verbose output by setting the 'Verbose' option to false.
By default, trainNetwork uses a GPU if one is available. Otherwise, it uses the CPU. To specify the execution environment manually, use the 'ExecutionEnvironment' name-value pair argument of trainingOptions. Training on a CPU can take significantly longer than training on a GPU. Training with a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).
options = trainingOptions('adam', ...
    'MiniBatchSize',16, ...
    'GradientThreshold',2, ...
    'Shuffle','every-epoch', ...
    'ValidationData',{XValidation,YValidation}, ...
    'Plots','training-progress', ...
    'Verbose',false);
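For example, to force CPU training, you could add the 'ExecutionEnvironment' name-value pair. This variant is a sketch and is not used in this example.

% Variant (not used here): train on the CPU explicitly.
optionsCPU = trainingOptions('adam', ...
    'ExecutionEnvironment','cpu', ...
    'MiniBatchSize',16, ...
    'Shuffle','every-epoch', ...
    'Verbose',false);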
Train the LSTM network using the trainNetwork function.

net = trainNetwork(XTrain,YTrain,layers,options);
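Although the training plot already reports validation metrics, you can also compute the validation accuracy directly from the held-out sequences, as an optional check.

% Optional check: accuracy on the held-out validation set.
YPred = classify(net,XValidation);
accuracy = mean(YPred == YValidation)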
Predict Using New Data
Classify the event type of three new reports. Create a string array containing the new reports.
reportsNew = [ ...
    "Coolant is pooling underneath sorter."
    "Sorter blows fuses at start up."
    "There are some very loud rattling sounds coming from the assembler."];
Preprocess the text data using the same preprocessing steps as the training documents.
documentsNew = preprocessText(reportsNew);
Convert the text data to sequences using doc2sequence with the same options as when creating the training sequences.

XNew = doc2sequence(enc,documentsNew,'Length',sequenceLength);
Classify the new sequences using the trained LSTM network.

labelsNew = classify(net,XNew)
labelsNew = 3×1 categorical
     Leak 
     Electronic Failure 
     Mechanical Failure 
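To see how confident the network is in each prediction, you can also request the class scores from classify, as an optional check.

% Optional check: per-class scores for each new report.
[labelsNew,scores] = classify(net,XNew);
max(scores,[],2)   % highest class probability for each report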
Preprocessing Function
The function preprocessText performs these steps:
1. Tokenize the text using tokenizedDocument.
2. Convert the text to lowercase using lower.
3. Erase the punctuation using erasePunctuation.
function documents = preprocessText(textData)

% Tokenize the text.
documents = tokenizedDocument(textData);

% Convert to lowercase.
documents = lower(documents);

% Erase punctuation.
documents = erasePunctuation(documents);

end
See Also
fastTextWordEmbedding | tokenizedDocument
Related Topics
- Generate Text Using Deep Learning (Deep Learning Toolbox)
- Word-By-Word Text Generation Using Deep Learning
- Create Simple Text Model for Classification
- Analyze Text Data Using Topic Models
- Analyze Text Data Using Multiword Phrases
- Train a Sentiment Classifier
- Deep Learning in MATLAB (Deep Learning Toolbox)