POS Tagging with NLTK

Introduction:

POS tagging: part-of-speech tagging, also described in terms of word classes or lexical categories. The names vary, but they all refer to the same thing: labeling each word with its part of speech.

With the off-the-shelf tools in the NLTK toolkit, POS tagging a text is straightforward:

>>> text = nltk.word_tokenize("And now for something completely different")
>>> nltk.pos_tag(text)
[('And', 'CC'), ('now', 'RB'), ('for', 'IN'), ('something', 'NN'), ('completely', 'RB'), ('different', 'JJ')]
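Note: with recent NLTK releases you have to download the tokenizer and tagger models once before the calls above will work. A minimal setup sketch (the package names are NLTK's current data-package names):

import nltk

# One-time model downloads: 'punkt' backs word_tokenize and
# 'averaged_perceptron_tagger' backs pos_tag in NLTK 3.x.
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

text = nltk.word_tokenize("And now for something completely different")
print(nltk.pos_tag(text))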

The API documentation describes this interface as follows:

Use NLTK's currently recommended part-of-speech tagger to tag the given list of tokens.

I checked the code: pos_tag loads the standard treebank POS tagger. For reference, the Penn Treebank tagset is:

1.   CC     Coordinating conjunction
2.   CD     Cardinal number
3.   DT     Determiner
4.   EX     Existential there
5.   FW     Foreign word
6.   IN     Preposition or subordinating conjunction
7.   JJ     Adjective
8.   JJR    Adjective, comparative
9.   JJS    Adjective, superlative
10.  LS     List item marker
11.  MD     Modal
12.  NN     Noun, singular or mass
13.  NNS    Noun, plural
14.  NNP    Proper noun, singular
15.  NNPS   Proper noun, plural
16.  PDT    Predeterminer
17.  POS    Possessive ending
18.  PRP    Personal pronoun
19.  PRP$   Possessive pronoun
20.  RB     Adverb
21.  RBR    Adverb, comparative
22.  RBS    Adverb, superlative
23.  RP     Particle
24.  SYM    Symbol
25.  TO     to
26.  UH     Interjection
27.  VB     Verb, base form
28.  VBD    Verb, past tense
29.  VBG    Verb, gerund or present participle
30.  VBN    Verb, past participle
31.  VBP    Verb, non-3rd person singular present
32.  VBZ    Verb, 3rd person singular present
33.  WDT    Wh-determiner
34.  WP     Wh-pronoun
35.  WP$    Possessive wh-pronoun
36.  WRB    Wh-adverb

With the tag abbreviations explained above, the tags returned by the interface are easy to interpret.

Some of the corpora shipped with NLTK come with POS annotations, and these can be used as training data. Every tagged corpus exposes a tagged_words() method:

>>> nltk.corpus.brown.tagged_words()
[('The', 'AT'), ('Fulton', 'NP-TL'), ('County', 'NN-TL'), ...]
>>> nltk.corpus.brown.tagged_words(simplify_tags=True)
[('The', 'DET'), ('Fulton', 'N'), ('County', 'N'), ...]
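Note that simplify_tags was removed in NLTK 3; the modern equivalent is the tagset parameter. A sketch of the current call:

import nltk
from nltk.corpus import brown

nltk.download('universal_tagset')   # mapping tables for the Universal tagset
# In NLTK 3.x, tagset='universal' replaces simplify_tags=True and maps the
# Brown tags onto coarse categories such as NOUN, VERB, DET.
print(brown.tagged_words(tagset='universal')[:3])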

Automatic Tagging

Now let's go through the various automatic tagging methods. Because a word's tag depends on its context, taggers work on whole sentences rather than on isolated words. If tagging were done word by word across the whole text, the last word of one sentence could influence the tag of the first word of the next, which makes no sense; working sentence by sentence keeps contextual effects from leaking across sentence boundaries.

We will use the Brown corpus as the running example:

>>> from nltk.corpus import brown
>>> brown_tagged_sents = brown.tagged_sents(categories='news')
>>> brown_sents = brown.sents(categories='news')

This gives us the tagged sentences and the untagged sentences separately: the tagged ones serve as training and evaluation data for the tagging algorithms, and the untagged ones as raw input to tag.

The Default Tagger

The simplest possible tagger assigns the same tag to each token.

>>> raw = 'I do not like green eggs and ham, I do not like them Sam I am!'
>>> tokens = nltk.word_tokenize(raw)
>>> default_tagger = nltk.DefaultTagger('NN')
>>> default_tagger.tag(tokens)
[('I', 'NN'), ('do', 'NN'), ('not', 'NN'), ('like', 'NN'), ('green', 'NN'),
('eggs', 'NN'), ('and', 'NN'), ('ham', 'NN'), (',', 'NN'), ('I', 'NN'),
('do', 'NN'), ('not', 'NN'), ('like', 'NN'), ('them', 'NN'), ('Sam', 'NN'),
('I', 'NN'), ('am', 'NN'), ('!', 'NN')]

This tagger really is as simple as it gets: it tags every token with whatever tag you hand it. It looks useless, but as a backoff it still comes in handy.
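As a quick sanity check, you can measure its accuracy against the tagged Brown sentences; the NLTK book reports roughly 0.13 here, since about an eighth of the tokens in news text really are NN (exact figures vary with NLTK and corpus versions):

import nltk
from nltk.corpus import brown

brown_tagged_sents = brown.tagged_sents(categories='news')
default_tagger = nltk.DefaultTagger('NN')
# evaluate() compares the tagger's output against the gold tags;
# newer NLTK releases rename this method to accuracy().
print(default_tagger.evaluate(brown_tagged_sents))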

The Regular Expression Tagger

The regular expression tagger assigns tags to tokens on the basis of matching patterns.

>>> patterns = [
...     (r'.*ing$', 'VBG'),               # gerunds
...     (r'.*ed$', 'VBD'),                # simple past
...     (r'.*es$', 'VBZ'),                # 3rd singular present
...     (r'.*ould$', 'MD'),               # modals
...     (r'.*\'s$', 'NN$'),               # possessive nouns
...     (r'.*s$', 'NNS'),                 # plural nouns
...     (r'^-?[0-9]+(.[0-9]+)?$', 'CD'),  # cardinal numbers
...     (r'.*', 'NN')                     # nouns (default)
... ]

>>> regexp_tagger = nltk.RegexpTagger(patterns)
>>> regexp_tagger.tag(brown_sents[3])
[('``', 'NN'), ('Only', 'NN'), ('a', 'NN'), ('relative', 'NN'), ('handful', 'NN'),
('of', 'NN'), ('such', 'NN'), ('reports', 'NNS'), ('was', 'NNS'), ('received', 'VBD'),
("''", 'NN'), (',', 'NN'), ('the', 'NN'), ('jury', 'NN'), ('said', 'NN'), (',', 'NN'),
('``', 'NN'), ('considering', 'VBG'), ('the', 'NN'), ('widespread', 'NN'), ...]

This tagger is a step up: you define regular-expression rules, and any token that matches a rule gets the corresponding tag; everything else falls through to the default.
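Its accuracy can be measured the same way; the NLTK book reports about 0.20 for these patterns on the Brown news sentences (again, version-dependent):

# Reusing brown_tagged_sents from above; evaluate() is accuracy() in newer NLTK.
print(regexp_tagger.evaluate(brown_tagged_sents))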

The Lookup Tagger

A lot of high-frequency words do not have the NN tag. Let’s find the hundred most frequent words and store their most likely tag.

This method starts to have some practical value: by counting, over a training corpus, which tag each of the most frequent words most often receives, we can use those statistics to do the tagging.

>>> fd = nltk.FreqDist(brown.words(categories='news'))
>>> cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))
>>> most_freq_words = [w for (w, _) in fd.most_common(100)]
>>> likely_tags = dict((word, cfd[word].max()) for word in most_freq_words)
>>> baseline_tagger = nltk.UnigramTagger(model=likely_tags)

This code takes the top 100 words in the corpus, looks up the tag each of them receives most often, and builds the likely_tags dictionary. (fd.most_common(100) returns the 100 most frequent words; in the old Python 2 NLTK this was written fd.keys()[:100], because FreqDist.keys() used to return words sorted by frequency.)

This dictionary is then passed to UnigramTagger as its model.

A unigram tagger is a 1-gram tagger: a simple tagger that ignores the surrounding context.

The biggest problem with this approach is that we have only specified tags for the top 100 words; what about all the others?

This is where the default tagger from earlier becomes useful:

baseline_tagger = nltk.UnigramTagger(model=likely_tags, backoff=nltk.DefaultTagger('NN'))

This partially solves the problem: words the model does not know are tagged by the default tagger.

The accuracy of this method depends entirely on the size of the model. With only the top 100 words it will not be very accurate, but the accuracy keeps improving as more words are added (see the sketch below).
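To see this effect, you can wrap the lookup-tagger construction in a function and measure accuracy for growing model sizes, along the lines of the evaluation in the NLTK book (figures are approximate and version-dependent):

import nltk
from nltk.corpus import brown

def lookup_accuracy(n):
    """Accuracy of a lookup tagger built from the n most frequent words."""
    fd = nltk.FreqDist(brown.words(categories='news'))
    cfd = nltk.ConditionalFreqDist(brown.tagged_words(categories='news'))
    top_words = [w for (w, _) in fd.most_common(n)]
    likely_tags = dict((w, cfd[w].max()) for w in top_words)
    tagger = nltk.UnigramTagger(model=likely_tags,
                                backoff=nltk.DefaultTagger('NN'))
    # evaluate() is accuracy() in newer NLTK releases.
    return tagger.evaluate(brown.tagged_sents(categories='news'))

for n in (100, 1000, 10000):
    print(n, lookup_accuracy(n))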

N-Gram Tagging

Unigram taggers are based on a simple statistical algorithm: for each token, assign the tag that is most likely for that particular token.

The lookup tagger above is one use of the unigram tagger; here is the more general way to use UnigramTagger:

>>> from nltk.corpus import brown
>>> brown_tagged_sents = brown.tagged_sents(categories='news')
>>> brown_sents = brown.sents(categories='news')
>>> unigram_tagger = nltk.UnigramTagger(brown_tagged_sents) # training
>>> unigram_tagger.tag(brown_sents[2007])
[('Various', 'JJ'), ('of', 'IN'), ('the', 'AT'), ('apartments', 'NNS'),
('are', 'BER'), ('of', 'IN'), ('the', 'AT'), ('terrace', 'NN'), ('type', 'NN'),
(',', ','), ('being', 'BEG'), ('on', 'IN'), ('the', 'AT'), ('ground', 'NN'),
('floor', 'NN'), ('so', 'QL'), ('that', 'CS'), ('entrance', 'NN'), ('is', 'BEZ'),
('direct', 'JJ'), ('.', '.')]

That is, you can train a unigram tagger on an already-tagged corpus. To see how well it generalizes, hold out part of the corpus for testing, as in the sketch below.
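A minimal sketch of the standard 90/10 split from the NLTK book (it also defines the train_sents used by the bigram example below; the book reports around 0.81 accuracy for the unigram tagger on the held-out 10%):

size = int(len(brown_tagged_sents) * 0.9)
train_sents = brown_tagged_sents[:size]   # 90% for training
test_sents = brown_tagged_sents[size:]    # 10% held out for evaluation

unigram_tagger = nltk.UnigramTagger(train_sents)
print(unigram_tagger.evaluate(test_sents))   # accuracy() in newer NLTK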

An n-gram tagger is a generalization of a unigram tagger whose context is the current word together with the part-of-speech tags of the n-1 preceding tokens.

An n-gram tagger does consider context: it looks at the tags of the preceding n-1 words when tagging the current word.

As an example, take the bigram tagger, the n=2 special case of the n-gram tagger:

>>> bigram_tagger = nltk.BigramTagger(train_sents)
>>> bigram_tagger.tag(brown_sents[2007])

There is a problem with this: if a word's context in the sentence being tagged never appeared in the training set, the bigram tagger cannot tag it, even when the word itself did appear in training (and since the following word's context then contains an unknown tag, the rest of the sentence fails too). Once again, backoff solves this:

>>> t0 = nltk.DefaultTagger('NN')
>>> t1 = nltk.UnigramTagger(train_sents, backoff=t0)
>>> t2 = nltk.BigramTagger(train_sents, backoff=t1)
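Evaluating the combined tagger on the held-out sentences shows the benefit of chaining backoffs (the NLTK book reports roughly 0.84 here, versus about 0.81 for the unigram tagger alone):

print(t2.evaluate(test_sents))   # bigram -> unigram -> 'NN' backoff chain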

Transformation-Based Tagging

The problems with n-gram taggers are that the model takes up a fairly large amount of space, and that the context considered is only the tags of the preceding words, never the words themselves.

The tagger introduced next handles both problems well: it stores rules instead of a model, which saves a great deal of space, and the rules are not limited to tags; they may also refer to the words themselves.

Brill tagging is a kind of transformation-based learning, named after its inventor. The general idea is very simple: guess the tag of each word, then go back and fix the mistakes.

The example below shows the idea behind Brill tagging:

(1) replace NN with VB when the previous word is TO;

(2) replace TO with IN when the next tag is NNS.

Phrase    to   increase  grants  to   states  for  vocational  rehabilitation
Unigram   TO   NN        NNS     TO   NNS     IN   JJ          NN
Rule 1         VB
Rule 2                           IN
Output    TO   VB        NNS     IN   NNS     IN   JJ          NN

Step one runs the unigram tagger over all the words; many of these guesses may be inaccurate.

The rules are then applied to correct the tags that step one guessed wrong, yielding a much more accurate tagging.

So how are these rules generated? The answer: automatically, during the training phase.

During its training phase, the tagger guesses values for T1, T2, and C, to create thousands of candidate rules. Each rule is scored according to its net benefit: the number of incorrect tags that it corrects, less the number of correct tags it incorrectly modifies.

In other words, during training the tagger first creates thousands of candidate rules; since these can be generated by simple statistics, some of them will be inaccurate. Each rule is then used to fix mistakes, and the result is compared against the correct tags: the number of tags it fixes minus the number it breaks is the rule's score. High-scoring rules are kept, low-scoring rules discarded. Here are some example rules:

NN -> VB if the tag of the preceding word is 'TO'
NN -> VBD if the tag of the following word is 'DT'
NN -> VBD if the tag of the preceding word is 'NNS'
NN -> NNP if the tag of words i-2...i-1 is '-NONE-'
NN -> NNP if the tag of the following word is 'NNP'
NN -> NNP if the text of words i-2...i-1 is 'like'
NN -> VBN if the text of the following word is '*-1'
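NLTK 3 ships a transformation-based trainer in nltk.tag.brill_trainer. A minimal sketch, assuming the t1 unigram tagger and the train_sents/test_sents split from the n-gram section, and using fntbl37(), one of the rule-template sets bundled with NLTK:

from nltk.tag import brill, brill_trainer

templates = brill.fntbl37()   # a standard set of rule templates
trainer = brill_trainer.BrillTaggerTrainer(t1, templates, trace=0)
brill_tagger = trainer.train(train_sents, max_rules=10)

print(brill_tagger.evaluate(test_sents))   # accuracy() in newer NLTK
for rule in brill_tagger.rules():          # the learned correction rules
    print(rule)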


This article is reposted from 博客园 (cnblogs); original publication date: 2011-07-04.
