Prepare Text Data for Analysis
This example shows how to create a function that cleans and preprocesses text data for analysis.
Text data can be large and can contain lots of noise, which negatively affects statistical analysis. For example, text data can contain the following:

Variations in case, for example "new" and "New"
Variations in word forms, for example "walk" and "walking"
Words that add noise, for example stop words such as "the" and "of"
Punctuation and special characters
HTML and XML tags
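Some of this noise can be stripped directly from strings before tokenization. A minimal sketch, assuming a made-up sample string (eraseTags, lower, and erasePunctuation all accept string input; the rest of this example applies such steps to tokenized documents instead):

str = "<p>The Mixer is RATTLING!</p>";
str = eraseTags(str);        % remove HTML and XML tags
str = lower(str);            % normalize case
str = erasePunctuation(str)  % strip punctuation and special characters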
The word clouds at the end of this example illustrate word frequency analysis applied to some raw text data from factory reports, and to a preprocessed version of the same text data.
Load and Extract Text Data

Load the example data. The file factoryReports.csv contains factory reports, including a text description and categorical labels for each event.
filename = "factoryReports.csv";
data = readtable(filename,'TextType','string');
Extract the text data from the field Description, and the label data from the field Category.
textData = data.Description;
labels = data.Category;
textData(1:10)
ans = 10×1 string
    "Items are occasionally getting stuck in the scanner spools."
    "Loud rattling and banging sounds are coming from assembler pistons."
    "There are cuts to the power when starting the plant."
    "Fried capacitors in the assembler."
    "Mixer tripped the fuses."
    "Burst pipe in the constructing agent is spraying coolant."
    "A fuse is blown in the mixer."
    "Things continue to tumble off of the belt."
    "Falling items from the conveyor belt."
    "The scanner reel is split, it will soon begin to curve."
Create Tokenized Documents

Create an array of tokenized documents.
cleanedDocuments = tokenizedDocument(textData);
cleanedDocuments(1:10)
ans = 
  10×1 tokenizedDocument:

    10 tokens: Items are occasionally getting stuck in the scanner spools .
    11 tokens: Loud rattling and banging sounds are coming from assembler pistons .
    11 tokens: There are cuts to the power when starting the plant .
     6 tokens: Fried capacitors in the assembler .
     5 tokens: Mixer tripped the fuses .
    10 tokens: Burst pipe in the constructing agent is spraying coolant .
     8 tokens: A fuse is blown in the mixer .
     9 tokens: Things continue to tumble off of the belt .
     7 tokens: Falling items from the conveyor belt .
    13 tokens: The scanner reel is split , it will soon begin to curve .
To improve lemmatization, add part-of-speech details to the documents using addPartOfSpeechDetails. Use the addPartOfSpeechDetails function before removing stop words and lemmatizing.
cleanedDocuments = addPartOfSpeechDetails(cleanedDocuments);
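As a rough check of why this helps, you can compare lemmatization with and without part-of-speech details on a small throwaway document (an illustrative sketch; the exact lemmas returned can vary by toolbox version):

docs = tokenizedDocument("the plant is running smoothly");
normalizeWords(docs,'Style','lemma')                          % lemmatize without POS details
normalizeWords(addPartOfSpeechDetails(docs),'Style','lemma')  % lemmatize with POS details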
Words like "a", "and", "to", and "the" (known as stop words) can add noise to data. Remove a list of stop words using the removeStopWords function. Use the removeStopWords function before using the normalizeWords function.
cleanedDocuments = removeStopWords(cleanedDocuments);
cleanedDocuments(1:10)
ans = 
  10×1 tokenizedDocument:

    7 tokens: Items occasionally getting stuck scanner spools .
    8 tokens: Loud rattling banging sounds coming assembler pistons .
    5 tokens: cuts power starting plant .
    4 tokens: Fried capacitors assembler .
    4 tokens: Mixer tripped fuses .
    7 tokens: Burst pipe constructing agent spraying coolant .
    4 tokens: fuse blown mixer .
    6 tokens: Things continue tumble off belt .
    5 tokens: Falling items conveyor belt .
    8 tokens: scanner reel split , soon begin curve .
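If you want to inspect the default stop-word list that removeStopWords uses, the stopWords function returns it (a quick side check, not part of the original workflow; the list's exact contents depend on the toolbox version):

sw = stopWords;   % string array of default English stop words
sw(1:5)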
Lemmatize the words using normalizeWords.
cleanedDocuments = normalizeWords(cleanedDocuments,'Style','lemma');
cleanedDocuments(1:10)
ans = 
  10×1 tokenizedDocument:

    7 tokens: items occasionally get stuck scanner spool .
    8 tokens: loud rattle bang sound come assembler piston .
    5 tokens: cut power start plant .
    4 tokens: fry capacitor assembler .
    4 tokens: mixer trip fuse .
    7 tokens: burst pipe constructing agent spray coolant .
    4 tokens: fuse blow mixer .
    6 tokens: thing continue tumble off belt .
    5 tokens: fall item conveyor belt .
    8 tokens: scanner reel split , soon begin curve .
Erase the punctuation from the documents.
cleanedDocuments = erasePunctuation(cleanedDocuments);
cleanedDocuments(1:10)
ans = 
  10×1 tokenizedDocument:

    6 tokens: items occasionally get stuck scanner spool
    7 tokens: loud rattle bang sound come assembler piston
    4 tokens: cut power start plant
    3 tokens: fry capacitor assembler
    3 tokens: mixer trip fuse
    6 tokens: burst pipe constructing agent spray coolant
    3 tokens: fuse blow mixer
    5 tokens: thing continue tumble off belt
    4 tokens: fall item conveyor belt
    6 tokens: scanner reel split soon begin curve
Remove words with 2 or fewer characters, and words with 15 or more characters.
cleanedDocuments = removeShortWords(cleanedDocuments,2);
cleanedDocuments = removeLongWords(cleanedDocuments,15);
cleanedDocuments(1:10)
ans = 
  10×1 tokenizedDocument:

    6 tokens: items occasionally get stuck scanner spool
    7 tokens: loud rattle bang sound come assembler piston
    4 tokens: cut power start plant
    3 tokens: fry capacitor assembler
    3 tokens: mixer trip fuse
    6 tokens: burst pipe constructing agent spray coolant
    3 tokens: fuse blow mixer
    5 tokens: thing continue tumble off belt
    4 tokens: fall item conveyor belt
    6 tokens: scanner reel split soon begin curve
Create Bag-of-Words Model

Create a bag-of-words model.
cleanedBag = bagOfWords(cleanedDocuments)
cleanedBag = 
  bagOfWords with properties:

          Counts: [480×352 double]
      Vocabulary: [1×352 string]
        NumWords: 352
    NumDocuments: 480
Remove words that do not appear more than two times in the bag-of-words model.
cleanedBag = removeInfrequentWords(cleanedBag,2)
cleanedBag = 
  bagOfWords with properties:

          Counts: [480×163 double]
      Vocabulary: [1×163 string]
        NumWords: 163
    NumDocuments: 480
Some preprocessing steps, such as removeInfrequentWords, leave empty documents in the bag-of-words model. To ensure that no empty documents remain in the bag-of-words model after preprocessing, use removeEmptyDocuments as the last step.
Remove empty documents from the bag-of-words model, and the corresponding labels from labels.
[cleanedBag,idx] = removeEmptyDocuments(cleanedBag);
labels(idx) = [];
cleanedBag
cleanedBag = 
  bagOfWords with properties:

          Counts: [480×163 double]
      Vocabulary: [1×163 string]
        NumWords: 163
    NumDocuments: 480
Create a Preprocessing Function

It can be useful to create a function that performs preprocessing, so you can prepare different collections of text data in the same way. For example, you can use a function to preprocess new data using the same steps as the training data.
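For instance, given hypothetical training and test partitions textDataTrain and textDataTest (names assumed here for illustration), the preprocessText function defined at the end of this example keeps both pipelines consistent:

documentsTrain = preprocessText(textDataTrain);
documentsTest = preprocessText(textDataTest);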
Create a function that tokenizes and preprocesses the text data so it can be used for analysis. The function preprocessText performs the following steps:
1. Tokenize the text using tokenizedDocument.
2. Remove a list of stop words (such as "and", "of", and "the") using removeStopWords.
3. Lemmatize the words using normalizeWords.
4. Erase punctuation using erasePunctuation.
5. Remove words with 2 or fewer characters using removeShortWords.
6. Remove words with 15 or more characters using removeLongWords.
Use the example preprocessing function preprocessText to prepare the text data.
newText = "The sorting machine is making lots of loud noises.";
newDocuments = preprocessText(newText)
newDocuments = 
  tokenizedDocument:

   6 tokens: sorting machine make lot loud noise
Compare with Raw Data

Compare the preprocessed data with the raw data.
rawDocuments = tokenizedDocument(textData);
rawBag = bagOfWords(rawDocuments)
rawBag = 
  bagOfWords with properties:

          Counts: [480×555 double]
      Vocabulary: [1×555 string]
        NumWords: 555
    NumDocuments: 480
Calculate the reduction in data.
numWordsCleaned = cleanedBag.NumWords;
numWordsRaw = rawBag.NumWords;
reduction = 1 - numWordsCleaned/numWordsRaw
reduction = 0.7063
Compare the raw data and the cleaned data by visualizing the two bag-of-words models using word clouds.
figure
subplot(1,2,1)
wordcloud(rawBag);
title("Raw Data")
subplot(1,2,2)
wordcloud(cleanedBag);
title("Cleaned Data")
Preprocessing Function
The function preprocessText performs the following steps in order:
1. Tokenize the text using tokenizedDocument.
2. Remove a list of stop words (such as "and", "of", and "the") using removeStopWords.
3. Lemmatize the words using normalizeWords.
4. Erase punctuation using erasePunctuation.
5. Remove words with 2 or fewer characters using removeShortWords.
6. Remove words with 15 or more characters using removeLongWords.
function documents = preprocessText(textData)

% Tokenize the text.
documents = tokenizedDocument(textData);

% Remove a list of stop words then lemmatize the words. To improve
% lemmatization, first use addPartOfSpeechDetails.
documents = addPartOfSpeechDetails(documents);
documents = removeStopWords(documents);
documents = normalizeWords(documents,'Style','lemma');

% Erase punctuation.
documents = erasePunctuation(documents);

% Remove words with 2 or fewer characters, and words with 15 or more
% characters.
documents = removeShortWords(documents,2);
documents = removeLongWords(documents,15);

end
See Also

tokenizedDocument | removeStopWords | erasePunctuation | normalizeWords | addPartOfSpeechDetails