normalizeWords

Stem or lemmatize words

Description

Use normalizeWords to reduce words to a root form. To lemmatize English words (reduce them to their dictionary forms), set the 'Style' option to 'lemma'.

The function supports English, Japanese, German, and Korean text.
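
For example, a minimal sketch of both behaviors on a toy English document (the input text is taken from the stemming example below):

documents = tokenizedDocument("a strongly worded collection of words");
newDocuments = normalizeWords(documents)                   % stems by default for English
newDocuments = normalizeWords(documents,'Style','lemma')   % lemmatizes instead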

updatedDocuments = normalizeWords(documents) reduces the words in documents to a root form. For English and German text, the function, by default, stems the words using the Porter stemmer. For Japanese and Korean text, the function, by default, lemmatizes the words using the MeCab tokenizer.

updatedWords = normalizeWords(words) reduces each word in the string array words to a root form.

updatedWords = normalizeWords(words,'Language',language) reduces the words and also specifies the word language.

___ = normalizeWords(___,'Style',style) also specifies the normalization style. For example, normalizeWords(documents,'Style','lemma') lemmatizes the words in the input documents.

Examples

Stem the words in a document array using the Porter stemmer.

documents = tokenizedDocument([
    "a strongly worded collection of words"
    "another collection of words"]);
newDocuments = normalizeWords(documents)
newDocuments = 
  2x1 tokenizedDocument:
    6 tokens: a strongli word collect of word
    4 tokens: anoth collect of word

Stem the words in a string array using the Porter stemmer. Each element of the string array must be a single word.

words = ["a" "strongly" "worded" "collection" "of" "words"];
newWords = normalizeWords(words)
newWords = 1x6 string
    "a"    "strongli"    "word"    "collect"    "of"    "word"

Lemmatize the words in a document array.

documents = tokenizedDocument([
    "i am building a house."
    "the building has two floors."]);
newDocuments = normalizeWords(documents,'Style','lemma')
newDocuments = 
  2x1 tokenizedDocument:
    6 tokens: i be build a house .
    6 tokens: the build have two floor .

To improve the lemmatization, first add part-of-speech details to the documents using the addPartOfSpeechDetails function. For example, if the documents contain part-of-speech details, then normalizeWords reduces only the verb "building" and not the noun "building".

documents = addPartOfSpeechDetails(documents);
newDocuments = normalizeWords(documents,'Style','lemma')
newDocuments = 
  2x1 tokenizedDocument:
    6 tokens: i be build a house .
    6 tokens: the building have two floor .

Tokenize Japanese text using the tokenizedDocument function. The function automatically detects Japanese text.

str = [
    "空に星が輝き、瞬いている。"
    "空の星が輝きを増している。"
    "駅までは遠くて、歩けない。"
    "遠くの駅まで歩けない。"];
documents = tokenizedDocument(str);

Lemmatize the tokens using normalizeWords.

documents = normalizeWords(documents)
documents = 
  4x1 tokenizedDocument:
    10 tokens: 空 に 星 が 輝く 、 瞬く て いる 。
    10 tokens: 空 の 星 が 輝き を 増す て いる 。
     9 tokens: 駅 まで は 遠い て 、 歩ける ない 。
     7 tokens: 遠く の 駅 まで 歩ける ない 。

Tokenize German text using the tokenizedDocument function. The function automatically detects German text.

str = [
    "guten morgen. wie geht es dir?"
    "heute wird ein guter tag."];
documents = tokenizedDocument(str);

Stem the tokens using normalizeWords.

documents = normalizeWords(documents)
documents = 
  2x1 tokenizedDocument:
    8 tokens: gut morg . wie geht es dir ?
    6 tokens: heut wird ein gut tag .

Input Arguments

Input documents, specified as a tokenizedDocument array.

Input words, specified as a string vector, character vector, or cell array of character vectors. If you specify words as a character vector, then the function treats the argument as a single word.

Data Types: string | char | cell
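
As a quick sketch of the character vector behavior (the expected output follows from the stemming example above, where "strongly" stems to "strongli"):

newWord = normalizeWords('strongly')   % treated as a single word; returns the char vector 'strongli'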

Normalization style, specified as one of the following:

  • 'stem' – Stem words using the Porter stemmer. This option supports English and German text only. For English and German text, this value is the default.

  • 'lemma' – Extract the dictionary form of each word. This option supports English, Japanese, and Korean text only. If a word is not in the internal dictionary, then the function outputs the word unchanged. For English text, the output is lowercase. For Japanese and Korean text, this value is the default.
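
For example, a minimal sketch contrasting the two styles on English words (the words are chosen for illustration):

words = ["building" "floors"];
stemmed = normalizeWords(words,'Style','stem')       % Porter stems, for example "floors" becomes "floor"
lemmatized = normalizeWords(words,'Style','lemma')   % dictionary forms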

The function only normalizes tokens with type 'letters' and 'other'. For more information on token types, see tokenDetails.

Tip

For English text, to improve the lemmatization of words in documents, first add part-of-speech details using the addPartOfSpeechDetails function.

Word language, specified as one of the following:

  • 'en' – English language

  • 'de' – German language

If you do not specify language, then the software detects the language automatically. To lemmatize Japanese or Korean text, use tokenizedDocument input.

Data Types: char | string
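
For example, to stem German words supplied as a string array (the expected output follows from the German stemming example above):

words = ["guten" "morgen"];
newWords = normalizeWords(words,'Language','de')   % returns ["gut" "morg"]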

Output Arguments

Updated documents, returned as a tokenizedDocument array.

Updated words, returned as a string array, character vector, or cell array of character vectors. words and updatedWords have the same data type.
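
As a minimal sketch of this behavior (input words taken from the stemming example above), a cell array of character vectors comes back as a cell array:

newWords = normalizeWords({'strongly','worded'})   % returns {'strongli','word'}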

Algorithms

Language Details

tokenizedDocument objects contain details about the tokens, including language details. The language details of the input documents determine the behavior of normalizeWords. The tokenizedDocument function, by default, automatically detects the language of the input text. To specify the language details manually, use the 'Language' option of tokenizedDocument. To view the token details, use the tokenDetails function.
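
For example, a short sketch of specifying the language manually and then inspecting the tokens (the input text reuses the German example above):

documents = tokenizedDocument("guten morgen",'Language','de');
documents = normalizeWords(documents);   % stems using the German stemmer
tdetails = tokenDetails(documents)       % the token details include the language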

Version History

Introduced in R2017b