Pride and Prejudice and MATLAB
This example shows how to train a deep learning LSTM network to generate text using character embeddings.
To train a deep learning network for text generation, train a sequence-to-sequence LSTM network to predict the next character in a sequence of characters. To train the network to predict the next character, specify the responses to be the input sequences shifted by one time step.
To use character embeddings, convert each training observation to a sequence of integers, where the integers index into a vocabulary of characters. Include a word embedding layer in the network that learns an embedding of the characters and maps the integers to vectors.
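For example, here is a minimal sketch of this setup for the toy observation "hello" (the end-of-text marker "␃" is introduced properly later in this example): the predictor is the vector of character codes, and the response is the same sequence shifted by one character.

% Minimal sketch: predictors are character codes, responses are the
% characters shifted by one time step, ending with an end-of-text marker.
endOfTextCharacter = compose("\x2403");
characters = 'hello';
X = double(characters)                                              % 104 101 108 108 111
Y = categorical([cellstr(characters(2:end)')' endOfTextCharacter])  % e l l o ␃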
Load Training Data
Read the HTML code from the Project Gutenberg page for Pride and Prejudice, by Jane Austen, and parse it using webread and htmlTree.
url = "https://www.gutenberg.org/files/1342/1342-h/1342-h.htm";
code = webread(url);
tree = htmltree(code);
Extract the paragraphs by finding the p elements. Specify to ignore paragraph elements with class "toc" using the CSS selector ':not(.toc)'.

paragraphs = findElement(tree,'p:not(.toc)');
Extract the text data from the paragraphs using extractHTMLText and remove the empty strings.

textData = extractHTMLText(paragraphs);
textData(textData == "") = [];
Remove strings shorter than 20 characters.

idx = strlength(textData) < 20;
textData(idx) = [];
Visualize the text data in a word cloud.

figure
wordcloud(textData);
title("Pride and Prejudice")
Convert Text Data to Sequences
Convert the text data to sequences of character indices for the predictors and categorical sequences for the responses.
The categorical function treats newline and whitespace entries as undefined. To create categorical elements for these characters, replace them with the special characters "¶" (pilcrow, "\x00b6") and "·" (middle dot, "\x00b7"), respectively. To prevent ambiguity, you must choose special characters that do not appear in the text; the pilcrow and middle dot do not appear in the training data, so they can be used for this purpose.
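As a quick sanity check, you can verify this assumption on the raw text before making the replacement (a sketch; specialCharacters is a local variable introduced here):

% Confirm the pilcrow and middle dot do not already occur in the text.
specialCharacters = compose(["\x00b6" "\x00b7"]);
assert(~any(contains(textData,specialCharacters)), ...
    "Special characters already appear in the text.")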
newlineCharacter = compose("\x00b6");
whitespaceCharacter = compose("\x00b7");
textData = replace(textData,[newline " "],[newlineCharacter whitespaceCharacter]);
Loop over the text data and create a sequence of character indices representing the characters of each observation and a categorical sequence of characters for the responses. To denote the end of each observation, include the special character "␃" (end of text, "\x2403").
endOfTextCharacter = compose("\x2403");
numDocuments = numel(textData);
for i = 1:numDocuments
    characters = textData{i};
    X = double(characters);

    % Create vector of categorical responses with end of text character.
    charactersShifted = [cellstr(characters(2:end)')' endOfTextCharacter];
    Y = categorical(charactersShifted);

    XTrain{i} = X;
    YTrain{i} = Y;
end
During training, by default, the software splits the training data into mini-batches and pads the sequences so that they have the same length. Too much padding can have a negative impact on the network performance.
To prevent the training process from adding too much padding, you can sort the training data by sequence length, and choose a mini-batch size so that sequences in a mini-batch have a similar length.
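To see why sorting helps, here is a toy illustration with made-up lengths (not the book data): the padding added to a mini-batch is miniBatchSize*max(lengths) - sum(lengths), so batching sequences of similar length together wastes far fewer padded elements.

% Toy illustration of padding with and without sorting by length.
lens = [310 5 300 12 280 7];   % example sequence lengths
miniBatchSize = 2;
padUnsorted = 0;
padSorted = 0;
sortedLens = sort(lens);
for k = 1:miniBatchSize:numel(lens)
    batch = lens(k:k+miniBatchSize-1);
    padUnsorted = padUnsorted + miniBatchSize*max(batch) - sum(batch);
    batch = sortedLens(k:k+miniBatchSize-1);
    padSorted = padSorted + miniBatchSize*max(batch) - sum(batch);
end
padUnsorted  % 866 padded time steps without sorting
padSorted    % 280 padded time steps with sorting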
Get the sequence lengths for each observation.

numObservations = numel(XTrain);
for i = 1:numObservations
    sequence = XTrain{i};
    sequenceLengths(i) = size(sequence,2);
end

Sort the data by sequence length.

[~,idx] = sort(sequenceLengths);
XTrain = XTrain(idx);
YTrain = YTrain(idx);
Create and Train LSTM Network
Define the LSTM architecture. Specify a sequence-to-sequence LSTM classification network with 400 hidden units. Set the input size to be the feature dimension of the training data. For sequences of character indices, the feature dimension is 1. Specify a word embedding layer with dimension 200, and specify the number of words (which correspond to characters) to be the highest character value in the input data. Set the output size of the fully connected layer to be the number of categories in the responses. To help prevent overfitting, include a dropout layer after the LSTM layer.
The word embedding layer learns an embedding of characters and maps each character to a 200-dimensional vector.
inputSize = size(XTrain{1},1);
numClasses = numel(categories([YTrain{:}]));
numCharacters = max([textData{:}]);

layers = [
    sequenceInputLayer(inputSize)
    wordEmbeddingLayer(200,numCharacters)
    lstmLayer(400,'OutputMode','sequence')
    dropoutLayer(0.2)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
Specify the training options. Specify to train with a mini-batch size of 32 and initial learn rate 0.01. To prevent the gradients from exploding, set the gradient threshold to 1. To ensure the data remains sorted, set 'Shuffle' to 'never'. To monitor the training progress, set the 'Plots' option to 'training-progress'. To suppress verbose output, set 'Verbose' to false.

options = trainingOptions('adam', ...
    'MiniBatchSize',32, ...
    'InitialLearnRate',0.01, ...
    'GradientThreshold',1, ...
    'Shuffle','never', ...
    'Plots','training-progress', ...
    'Verbose',false);
Train the network.

net = trainNetwork(XTrain,YTrain,layers,options);
Generate New Text
Generate the first character of the text by sampling a character from a probability distribution according to the first characters of the text in the training data. Generate the remaining characters by using the trained LSTM network to predict the next character from the current sequence of generated text. Keep generating characters one-by-one until the network predicts the end-of-text character.
Sample the first character according to the distribution of the first characters in the training data.

initialCharacters = extractBefore(textData,2);
firstCharacter = datasample(initialCharacters,1);
generatedText = firstCharacter;

Convert the first character to a numeric index.

X = double(char(firstCharacter));
For the remaining predictions, sample the next character according to the prediction scores of the network. The prediction scores represent the probability distribution of the next character. Sample the characters from the vocabulary of characters given by the class names of the output layer of the network. Get the vocabulary from the classification layer of the network.

vocabulary = string(net.Layers(end).ClassNames);
Make predictions character by character using predictAndUpdateState. For each prediction, input the index of the previous character. Stop predicting when the network predicts the end-of-text character or when the generated text is 500 characters long. For large collections of data, long sequences, or large networks, predictions on the GPU are usually faster to compute than predictions on the CPU. Otherwise, predictions on the CPU are usually faster to compute. For single time step predictions, use the CPU. To use the CPU for prediction, set the 'ExecutionEnvironment' option of predictAndUpdateState to 'cpu'.
maxLength = 500;
while strlength(generatedText) < maxLength
    % Predict the next character scores.
    [net,characterScores] = predictAndUpdateState(net,X,'ExecutionEnvironment','cpu');

    % Sample the next character.
    newCharacter = datasample(vocabulary,1,'Weights',characterScores);

    % Stop predicting at the end of text.
    if newCharacter == endOfTextCharacter
        break
    end

    % Add the character to the generated text.
    generatedText = generatedText + newCharacter;

    % Get the numeric index of the character.
    X = double(char(newCharacter));
end
Reconstruct the generated text by replacing the special characters with their corresponding whitespace and newline characters.

generatedText = replace(generatedText,[newlineCharacter whitespaceCharacter],[newline " "])
generatedtext = "“i wish mr. darcy, upon latter of my sort sincerely fixed in the regard to relanth. we were to join on the lucases. they are married with him way sir wickham, for the possibility which this two od since to know him one to do now thing, and the opportunity terms as they, and when i read; nor lizzy, who thoughts of the scent; for a look for times, i never went to the advantage of the case; had forcibling himself. they pility and lively believe she was to treat off in situation because, i am exceal"
To generate multiple pieces of text, reset the network state between generations using resetState.

net = resetState(net);
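For example, here is a sketch of generating several independent pieces of text, assuming the prediction loop above has been wrapped in a hypothetical helper function generateOneSample (not a toolbox function):

% Sketch only: generateOneSample stands in for the sampling loop above.
numSamples = 3;
for k = 1:numSamples
    net = resetState(net);   % clear state carried over from the last sample
    disp(generateOneSample(net,textData,vocabulary,endOfTextCharacter))
end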
See Also
tokenizedDocument
Related Topics
- Generate Text Using Deep Learning (Deep Learning Toolbox)
- Word-By-Word Text Generation Using Deep Learning
- Create Simple Text Model for Classification
- Analyze Text Data Using Topic Models
- Analyze Text Data Using Multiword Phrases
- Train a Sentiment Classifier
- Deep Learning in MATLAB (Deep Learning Toolbox)