Preprocess and Clean Up Text Data for Analysis
Since R2023a
Description
The Preprocess Text Data Live Editor task helps prepare text data for analysis.
You can use the task to control these processing steps:
HTML cleanup
Tokenization
Adding token details
Word normalization
Changing and removing words
The Preprocess Text Data Live Editor task generates code that performs the selected preprocessing steps, which you can use to create a preprocessing function for your workflows.
Open the Preprocess Text Data Task
To add the Preprocess Text Data task to a live script in the MATLAB® Editor:
On the Live Editor tab, select Task > Preprocess Text Data.
In a code block in the live script, type a relevant keyword, such as preprocess, clean, or text. Select Preprocess Text Data from the suggested command completions.
Examples
Create Simple Preprocessing Function
This example shows how to create a function that cleans and preprocesses text data for analysis using the Preprocess Text Data Live Editor task.
First, load the factory reports data. The data contains textual descriptions of factory failure events.
tbl = readtable("factoryReports.csv")
Open the Preprocess Text Data Live Editor task. To open the task, begin typing the keyword preprocess and select Preprocess Text Data from the suggested command completions. Alternatively, on the Live Editor tab, select Task > Preprocess Text Data.
Preprocess the text using these options:
Select tbl as the input data and select the table variable Description.
Tokenize the text using automatic language detection.
To improve lemmatization, add part-of-speech tags to the token details.
Normalize the words using lemmatization.
Remove words with fewer than 3 characters or more than 14 characters.
Remove stop words.
Erase punctuation.
Display the preprocessed text in a word cloud.
The Preprocess Text Data Live Editor task generates code in your live script. The generated code reflects the options that you select and includes code to generate the display. To see the generated code, click the arrow at the bottom of the task parameter area. The task expands to display the generated code.
By default, the generated code uses preprocessedText as the name of the output variable returned to the MATLAB workspace. To specify a different output variable name, enter a new name in the summary line at the top of the task.
To reuse the same steps in your code, create a function that takes the text data as input and outputs the preprocessed text data. You can include the function at the end of a script or as a separate file. The preprocessTextData function listed at the end of the example uses the code generated by the Preprocess Text Data Live Editor task.
To use the function, specify the table as input to the preprocessTextData function.
documents = preprocessTextData(tbl);
Preprocessing Function
The preprocessTextData function uses the code generated by the Preprocess Text Data Live Editor task. The function takes the table tbl as input and returns the preprocessed text preprocessedText. The function performs these steps:
Extract the text data from the Description variable of the input table.
Tokenize the text using tokenizedDocument.
Add part-of-speech details using addPartOfSpeechDetails.
Lemmatize the words using normalizeWords.
Remove words with 2 or fewer characters using removeShortWords.
Remove words with 15 or more characters using removeLongWords.
Remove stop words (such as "and", "of", and "the") using removeStopWords.
Erase punctuation using erasePunctuation.
function preprocessedText = preprocessTextData(tbl)

%% Preprocess Text
preprocessedText = tbl.Description;

% Tokenize
preprocessedText = tokenizedDocument(preprocessedText);

% Add token details
preprocessedText = addPartOfSpeechDetails(preprocessedText);

% Change and remove words
preprocessedText = normalizeWords(preprocessedText,Style="lemma");
preprocessedText = removeShortWords(preprocessedText,2);
preprocessedText = removeLongWords(preprocessedText,15);
preprocessedText = removeStopWords(preprocessedText,IgnoreCase=false);
preprocessedText = erasePunctuation(preprocessedText);

end
For an example showing a more detailed workflow, see . For next steps in text analytics, you can try creating a classification model or analyzing the data using topic models. For examples, see Create Simple Text Model for Classification and Analyze Text Data Using Topic Models.
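As a rough sketch of such a next step, and not part of the code that the task generates, you could fit a topic model to the preprocessed documents using bagOfWords and fitlda. The number of topics here is an arbitrary illustrative choice.

% Count the words in each preprocessed document and fit an LDA topic model.
bag = bagOfWords(documents);
numTopics = 5;   % illustrative value, not taken from this example
mdl = fitlda(bag,numTopics);

% View the top words of the first topic.
topWords = topkwords(mdl,10,1)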
Parameters
Data — Text to preprocess
workspace variable
Text to preprocess, specified as a MATLAB workspace variable. The variable must be a table, string array, or character vector to appear in the list.
If you select a table, then specify the table variable containing the text data in the second drop-down box that appears.
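As a minimal sketch with illustrative variable names, selecting a table corresponds to reading the text from the chosen table variable, while a string array is used directly:

% Table input: the text comes from the selected table variable.
tbl = readtable("factoryReports.csv");
textData = tbl.Description;

% String array input: the workspace variable is used as-is.
str = ["a short example document";"a second document"];
documents = tokenizedDocument(str);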
Extract HTML text — Extract text data from HTML tags
off (default) | on
Extract text data from HTML tags.
The generated code uses extractHTMLText.
Remove HTML tags — Remove HTML tags
off (default) | on
Remove HTML tags.
The generated code uses eraseTags.
Decode HTML entities — Convert HTML and XML entities into characters
off (default) | on
Convert HTML and XML entities into characters. For example, convert "&amp;" to "&".
The generated code uses decodeHTMLEntities.
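The following sketch shows command-line functions that correspond to these HTML cleanup steps; the exact code that the task generates may differ.

% Illustrative HTML code containing tags and entities.
htmlCode = "<p>Text with &amp; an entity and <b>bold</b> tags.</p>";

% Extract the text data from the HTML tags.
str = extractHTMLText(htmlCode);

% Alternatively, remove the tags and decode the entities in place.
str = eraseTags(htmlCode);
str = decodeHTMLEntities(str);   % converts "&amp;" to "&"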
Language — Text language
automatic (default) | english | german | japanese | korean
Text language, specified as one of these options:
automatic — Automatic language detection
english — English language
german — German language
japanese — Japanese language
korean — Korean language
The generated code uses tokenizedDocument.
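For reference, a minimal sketch of specifying the language when tokenizing at the command line; by default, tokenizedDocument detects the language automatically.

% Automatic language detection.
documents = tokenizedDocument("the quick brown fox jumped over the lazy dog");

% Specify the language explicitly, for example German.
documentsDE = tokenizedDocument("Der schnelle braune Fuchs",Language="de");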
Split — Text splitting mode
none (default) | sentences | paragraphs
Text splitting mode, specified as one of these options:
none — Do not split the input.
sentences — Split the input into sentences. This option supports scalar input only. The generated code uses splitSentences.
paragraphs — Split the input into paragraphs. This option supports scalar input only. The generated code uses splitParagraphs.
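A minimal sketch of splitting scalar text before tokenizing, assuming the splitSentences function named above:

% Split a scalar string into sentences, then tokenize each sentence
% as a separate document.
str = "The quick brown fox jumped. The lazy dog did not notice.";
sentences = splitSentences(str);
documents = tokenizedDocument(sentences);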
Add sentence numbers — Option to add sentence numbers
off (default) | on
Option to add sentence numbers to tokens.
The generated code uses addSentenceDetails.
Add part-of-speech tags — Option to add part-of-speech tags
on (default) | off
Option to add part-of-speech tags to tokens.
The generated code uses addPartOfSpeechDetails.
Detect named entities — Option to detect named entities
off (default) | on
Option to detect named entities in tokens.
The generated code uses addEntityDetails.
Parse dependencies — Option to parse dependencies
off (default) | on
Option to parse dependencies in tokens. This option requires the Text Analytics Toolbox™ Model for UDify Data support package.
The generated code uses addDependencyDetails.
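A command-line sketch of adding token details; the task generates similar calls for the options that you enable, and tokenDetails shows the result.

documents = tokenizedDocument("John drives to work in London every day.");

% Add sentence numbers, part-of-speech tags, and named entities.
documents = addSentenceDetails(documents);
documents = addPartOfSpeechDetails(documents);
documents = addEntityDetails(documents);

% View the accumulated token details as a table.
tdetails = tokenDetails(documents)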
Word normalization — Word normalization
lemma (default) | stem | none
Word normalization, specified as one of these options:
none — Do not normalize words.
lemma — Normalize words using lemmatization. This option outputs text in lowercase.
stem — Normalize words using stemming.
The generated code uses normalizeWords.
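A minimal sketch contrasting the two normalization styles; lemmatization benefits from part-of-speech details added beforehand.

documents = tokenizedDocument("the horses were galloping across the fields");

% Lemmatization (also lowercases the text).
documents = addPartOfSpeechDetails(documents);
lemmas = normalizeWords(documents,Style="lemma");

% Stemming, a simpler rule-based alternative.
stems = normalizeWords(documents,Style="stem");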
Case normalization — Case normalization
none (default) | uppercase | lowercase
Case normalization, specified as one of these options:
none — Do not normalize case. Note: the lemma option of Word normalization converts text to lowercase.
lowercase — Convert text to lowercase. The generated code uses lower.
uppercase — Convert text to uppercase. The generated code uses upper.
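A minimal command-line sketch of the two case options:

documents = tokenizedDocument("The Quick Brown Fox");

% Convert all tokens to lowercase or to uppercase.
documentsLower = lower(documents);
documentsUpper = upper(documents);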
Minimum word length — Minimum word length
3 (default) | positive integer | off
Minimum word length, specified as one of these options:
off — Do not remove short words.
positive integer — Remove words with fewer than the specified number of characters.
The generated code uses removeShortWords.
Maximum word length — Maximum word length
14 (default) | positive integer | off
Maximum word length, specified as one of these options:
off — Do not remove long words.
positive integer — Remove words with more than the specified number of characters.
The generated code uses removeLongWords.
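A sketch of the corresponding calls, matching the default lengths of 3 and 14:

% Remove words with 2 or fewer characters and words with 15 or more characters.
documents = tokenizedDocument("an extraordinarily uncharacteristically short example");
documents = removeShortWords(documents,2);
documents = removeLongWords(documents,15);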
Remove stop words — Option to remove stop words
on (default) | off
Option to remove stop words.
The generated code uses removeStopWords.
Erase punctuation — Option to erase punctuation
on (default) | off
Option to erase punctuation.
The generated code uses erasePunctuation.
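A minimal sketch of these two options at the command line:

documents = tokenizedDocument("a short example of a sentence, with punctuation.");

% Remove stop words such as "a", "of", and "with", then erase punctuation.
documents = removeStopWords(documents);
documents = erasePunctuation(documents);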
Replace words — Source and target words for replacement
pairs of source and target strings
Source and target words for replacement, specified as pairs of source and target strings. To specify multiword phrases (n-grams), use whitespace-separated words.
The generated code uses and .
Remove words — Words to remove
string
Words to remove, specified as strings. To specify multiword phrases (n-grams), use whitespace-separated words.
The generated code uses and .
Remove empty documents — Option to remove empty documents
off (default) | on
Option to remove empty documents.
The generated code uses removeEmptyDocuments.
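As a sketch of the removal options (the function used for word replacement is not shown here), removing custom words with removeWords can leave documents empty, which removeEmptyDocuments then drops:

documents = tokenizedDocument(["an example document" "" "another example"]);

% Remove custom words, then drop any documents that are left empty.
documents = removeWords(documents,["example" "another"]);
documents = removeEmptyDocuments(documents);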
Ignore case — Option to ignore case
off (default) | on
Option to ignore case in the word change and removal options.
Show tokenized text — Option to show tokenized text
off (default) | on
Option to show the tokenized text.
Show token details — Option to show token details
off (default) | on
Option to show token details.
The generated code uses tokenDetails.
Show word cloud — Option to show word cloud
off (default) | on
Option to show a word cloud.
The generated code uses wordcloud.
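A sketch of producing these displays at the command line, assuming a word cloud built from a bag-of-words model:

documents = tokenizedDocument(["an example of a short document" "a second short document"]);
documents = addPartOfSpeechDetails(documents);

% Show the token details and a word cloud of the documents.
tdetails = tokenDetails(documents)
figure
wordcloud(bagOfWords(documents));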
Tips
By default, the Preprocess Text Data task does not automatically run when you modify the task parameters. To have the task run automatically after any change, select the Autorun button at the top-right of the task. If your data set is large, do not enable this option.
Version History
Introduced in R2023a