NLP | Likely Word Tags

nltk.probability.FreqDist is used to find the most common words by counting word frequencies in the treebank corpus. A ConditionalFreqDist is then created for the tagged words, counting the frequency of every tag for every word. These counts are used to construct a model whose keys are the frequent words and whose value for each word is its most frequent tag.

Code #1 : Creating the function
Python3
# Loading Libraries
from nltk.probability import FreqDist, ConditionalFreqDist

# Making function
def word_tag_model(words, tagged_words, limit = 200):
    fd = FreqDist(words)
    cfd = ConditionalFreqDist(tagged_words)
    most_freq = (word for word, count in fd.most_common(limit))

    return dict((word, cfd[word].max()) for word in most_freq)
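To make the returned structure concrete, here is a small illustrative check (not part of the original article) that builds the model with a very low limit and prints it. The words and tags shown in the comment are only indicative and depend on the corpus.

from nltk.corpus import treebank

# Build a tiny model over just the 5 most common treebank words
sample_model = word_tag_model(treebank.words(), treebank.tagged_words(), limit = 5)
print(sample_model)
# Might print something like: {',': ',', 'the': 'DT', '.': '.', 'of': 'IN', 'to': 'TO'}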
Code #2 : Using the function with UnigramTagger
Python3
# loading libraries
from tag_util import word_tag_model
from nltk.corpus import treebank
from nltk.tag import UnigramTagger

# initializing training and testing sets
train_data = treebank.tagged_sents()[:3000]
test_data = treebank.tagged_sents()[3000:]

# Initializing the model
model = word_tag_model(treebank.words(), treebank.tagged_words())

# Initializing the UnigramTagger
tag = UnigramTagger(model = model)

print("Accuracy : ", tag.evaluate(test_data))
Output :
Accuracy : 0.559680552557738
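Code #3 and Code #4 below call a backoff_tagger helper imported from the local tag_util module (the same module that provides word_tag_model above); it is not part of NLTK itself and is not shown in this article. A minimal sketch of such a helper, which chains sequential taggers so that each new tagger backs off to the previous one, could look like this:

# tag_util.py (sketch) : chain tagger classes into a backoff sequence
def backoff_tagger(train_sents, tagger_classes, backoff = None):
    for cls in tagger_classes:
        backoff = cls(train_sents, backoff = backoff)
    return backoff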
Code #3 : Let’s try backoff chain
Python3
# Loading libraries
from tag_util import backoff_tagger
from nltk.tag import UnigramTagger, BigramTagger, TrigramTagger
from nltk.tag import DefaultTagger

default_tagger = DefaultTagger('NN')
likely_tagger = UnigramTagger(model = model, backoff = default_tagger)

# Training a backoff chain on the training set from Code #2
tag = backoff_tagger(train_data, [UnigramTagger, BigramTagger, TrigramTagger],
                     backoff = likely_tagger)

print("Accuracy : ", tag.evaluate(test_data))
Output :
Accuracy : 0.8806820634578028
Note : The backoff chain has increased the accuracy. We can improve this result further by using the UnigramTagger class more effectively.

Code #4 : Manual Override of Trained Taggers
Python3
# Loading libraries
from tag_util import backoff_tagger
from nltk.tag import UnigramTagger, BigramTagger, TrigramTagger
from nltk.tag import DefaultTagger

default_tagger = DefaultTagger('NN')

# Training the backoff chain first, then placing the likely-tag model on top
tagger = backoff_tagger(train_data, [UnigramTagger, BigramTagger, TrigramTagger],
                        backoff = default_tagger)

likely_tag = UnigramTagger(model = model, backoff = tagger)

print("Accuracy : ", likely_tag.evaluate(test_data))
Output :
Accuracy : 0.8824088063889488
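As a final check, the combined tagger can be applied to an ordinary tokenized sentence. The sentence below is only an illustration and is not taken from the article.

# Tagging a sample sentence with the final tagger
sentence = ['The', 'company', 'reported', 'strong', 'earnings', '.']
print(likely_tag.tag(sentence))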



