Innovation Series

NER Natural Language Processing Model - Which is Best?

Bhagyashri Shitole, Software Engineer, Druva Labs

What is NER?

In a text document, some terms represent specific entities that are more informative and have a unique context. Named Entity Recognition (NER) is an information extraction method that automatically identifies named entities and classifies them into predefined categories, such as people, locations, organizations, times, quantities, percentages, and monetary values. NER is used in many applications of Natural Language Processing (NLP), and helps to address questions such as the following:

  • Which organization is mentioned in the article?
  • Which person is referred to in the email?
  • Which location is referred to in a review?

How does NER work?

We humans naturally recognize named entities including people, locations, organizations, and so on. For example, “Druva, a data protection company headquartered in California, was founded by Jaspreet Singh and Milind Borate.”

PERSON(s): Jaspreet Singh, Milind Borate

ORGANIZATION: Druva

LOCATION: California

For computers, however, recognizing entity types in human languages is not that simple. NLP is a subfield of Artificial Intelligence (AI) that helps machines process human language. NLP studies the structure and rules of the language and constructs intelligent systems capable of analyzing text or speech.

Here are my takeaways from diving deeper into open-source NLP libraries.

Natural Language Toolkit (NLTK)

NLTK provides all of the NLP components needed to build an NER pipeline:

  • The raw text of the document is split into sentences using the sentence segmenter.
  • Sentences are divided into words using a tokenizer.
  • Each token is assigned a part-of-speech (POS) tag, which is helpful in entity detection (see the sketch below).
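A minimal sketch of these three steps with NLTK (the sample text is illustrative):

import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

raw = "Druva is headquartered in California. It was founded in 2008."

# 1. Sentence segmentation
sentences = nltk.sent_tokenize(raw)

# 2. Word tokenization
tokens = [nltk.word_tokenize(sent) for sent in sentences]

# 3. POS tagging
pos_tags = [nltk.pos_tag(toks) for toks in tokens]
print(pos_tags[0][:3])  # e.g. [('Druva', 'NNP'), ('is', 'VBZ'), ('headquartered', 'VBN')]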

Noun phrase chunking (np-chunking) is used for entity identification. Using POS-tagged sentences, np-chunking divides sentences into individual noun phrases, as shown in the figure below.

  • NP: Noun Phrase (DT: Determiner, JJ: Adjective, NN: Noun)
  • VBD: Verb, past tense
  • IN: Preposition or subordinating conjunction

Each noun phrase (NP) chunk is a candidate for a named entity.

(Figure: Noun-phrase chunking of a POS-tagged sentence)
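As a rough illustration, NLTK's RegexpParser can perform this kind of np-chunking with a hand-written grammar (the textbook grammar below is illustrative, not the one used by NLTK's built-in NER):

import nltk

# Chunk grammar: an optional determiner, any adjectives, then a noun
grammar = "NP: {<DT>?<JJ>*<NN>}"
chunk_parser = nltk.RegexpParser(grammar)

# A POS-tagged sentence, as produced by nltk.pos_tag()
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"),
            ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]

print(chunk_parser.parse(sentence))
# (S (NP the/DT little/JJ yellow/JJ dog/NN) barked/VBD at/IN (NP the/DT cat/NN))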


NLTK provides a classifier trained to identify named entities. It is accessed through the function nltk.ne_chunk() and assigns labels such as PERSON, LOCATION, and ORGANIZATION.

import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')

data = "Druva a data protection company headquartered in California was founded by Jaspreet Singh and Milind Borate."

# Tokenize the text and assign POS tags
tokens = nltk.word_tokenize(data)
pos_tags = nltk.pos_tag(tokens)

# Run the named-entity chunker and print each labeled entity
chunks = nltk.ne_chunk(pos_tags)
for chunk in chunks:
    if hasattr(chunk, 'label'):
        print(' '.join(c[0] for c in chunk), chunk.label())

Druva            GPE
California       GPE
Jaspreet Singh   PERSON
Milind Borate    PERSON

Spacy

Spacy is a fast, easy-to-use Python framework. You can use its pre-trained models, whose predictions depend strongly on the examples they were trained on; as such, they might need some tuning for your use case (see the sketch at the end of this section).

(Figure: Spacy processing pipeline)


When processing text, Spacy first tokenizes the raw text, assigns POS tags, and identifies relations between tokens, such as subject or object. It then labels named ‘real-world’ objects such as persons, organizations, or locations, and finally returns the processed text with linguistic annotations, including the entities found in the text. Spacy's NER component does not use the output of the tagger and parser, so you can disable those pipeline components during processing, as shown below.

import spacy

data = "Druva a data protection company headquartered in California was founded by Jaspreet Singh and Milind Borate."

# Load the small English model with only the NER component active;
# the tagger and parser are disabled because NER does not need them.
nlp = spacy.load('en_core_web_sm', disable=['tagger', 'parser'])
for ent in nlp(data).ents:
    print(ent.text, ent.label_)

Druva           GPE
California      GPE
Jaspreet Singh  PERSON
Milind Borate   ORG
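As noted above, the pre-trained model may need tuning. Here is a minimal sketch of rule-based tuning with Spacy's EntityRuler, assuming Spacy v3 (the pattern is purely illustrative):

import spacy

nlp = spacy.load('en_core_web_sm')

# Add a rule-based pass before the statistical NER component so that
# "Druva" is always labeled ORG, regardless of the model's prediction
ruler = nlp.add_pipe('entity_ruler', before='ner')
ruler.add_patterns([{'label': 'ORG', 'pattern': 'Druva'}])

doc = nlp("Druva was founded by Jaspreet Singh and Milind Borate.")
print([(ent.text, ent.label_) for ent in doc.ents])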

Stanford CoreNLP (Stanza)

The Stanford NER classifier is also known as a CRF classifier, as it provides a general implementation of a linear-chain Conditional Random Field (CRF) sequence model.
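As a rough illustration of what a linear-chain CRF looks like in code, here is a sketch using the third-party sklearn-crfsuite package (an assumption for illustration; this is not the Stanford implementation):

import sklearn_crfsuite  # pip install sklearn-crfsuite

# Each token is described by hand-crafted features; the CRF learns to map
# feature sequences to label sequences, modeling transitions between labels.
def features(sent, i):
    word = sent[i]
    return {
        'word.lower': word.lower(),
        'word.istitle': word.istitle(),
        'prev_word': sent[i - 1].lower() if i > 0 else '<START>',
        'next_word': sent[i + 1].lower() if i < len(sent) - 1 else '<END>',
    }

train_sents = [["Druva", "is", "based", "in", "California", "."]]
train_labels = [["ORG", "O", "O", "O", "LOC", "O"]]

X = [[features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=50)
crf.fit(X, train_labels)
print(crf.predict(X))  # a toy fit; real training needs a labeled corpus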

NER processing pipeline:

  • Tokenizer splits the raw text into sentences and words.
  • The Multi-Word Token (MWT) expansion module expands a token into multiple syntactic words. This step applies only to languages with multi-word tokens, such as French or German; languages like English do not need it (see the sketch after this list).
  • The NER classifier receives the annotated data and assigns labels to entities, such as PERSON, ORGANIZATION, and LOCATION.
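A minimal sketch of MWT expansion in Stanza, using French (the sentence is illustrative):

import stanza
stanza.download('fr')

# French has multi-word tokens: the token "du" expands to the words "de" + "le"
nlp = stanza.Pipeline(lang='fr', processors='tokenize,mwt')
doc = nlp("Je parle du projet.")
for token in doc.sentences[0].tokens:
    print(token.text, '->', [word.text for word in token.words])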

import stanza
stanza.download("en")

data = "Druva a data protection company headquartered in California was founded by Jaspreet Singh and Milind Borate."

# Build a pipeline with only the tokenizer and NER processors
nlp = stanza.Pipeline(lang='en', processors='tokenize,ner')

# Print every entity found in every sentence
doc = nlp(data)
for sent in doc.sentences:
    for ent in sent.ents:
        print(ent.text, ent.type)

Druva            ORG
California       GPE
Jaspreet Singh   PERSON
Milind Borate    PERSON

Polyglot

Polyglot NER does not use human-annotated training datasets. Instead, it uses huge unlabeled datasets (such as Wikipedia), with entities automatically inferred from hyperlinks.
The following example shows how entities are identified by cross-linking with Wikipedia.

<ENTITY url="https://en.wikipedia.org/wiki/Michael_I._Jordan"> Michael Jordan </ENTITY> is a professor at <ENTITY url="https://en.wikipedia.org/wiki/University_of_California,_Berkeley"> Berkeley </ENTITY>

Polyglot's object-oriented implementation makes its NLP features simple to use.

Applied to raw text, the model returns processed text with data including sentences, words, entities, and POS tags.

from polyglot.text import Text

data = "Druva, a data protection company headquartered in California was founded by Jaspreet Singh and Milind Borate."

# Build a Text object, hinting the language, and print each entity chunk
text = Text(data, hint_language_code='en')
for entity in text.entities:
    print(' '.join(entity), entity.tag)

California        I-LOC
Jaspreet Singh    I-PER
Milind Borate     I-PER

Comparison

1. Performance

For the experiments, the input text file was 250KB, and the test machine had 2 CPU cores and 4GB of memory.

                NLTK          Spacy         Stanza         Polyglot
Time (sec)      14            3.15          1847           0.6
CPU             1 core 100%   1 core 100%   2 cores 100%   1 core 100%
Memory          340 MB        1.1 GB        1.6 GB         150 MB
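As a rough sketch, timings like these can be collected as follows (the benchmark helper and the psutil-based memory readout are assumptions, not necessarily the harness used for the table above):

import time
import psutil  # pip install psutil

def benchmark(ner_fn, text):
    """Run one NER pass; return elapsed seconds and resident memory in MB."""
    process = psutil.Process()
    start = time.perf_counter()
    ner_fn(text)  # e.g. lambda t: nlp(t) for Spacy or Stanza
    elapsed = time.perf_counter() - start
    rss_mb = process.memory_info().rss / (1024 * 1024)
    return elapsed, rss_mb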

2. Feature comparison

The table below provides guidelines on when to consider using specific models.

                            NLTK         Spacy        Stanza        Polyglot
Beginner                    yes          yes          yes           yes
Multi-language support      yes          yes          yes           yes
Entity categories           7            18           3/4/7         3
CPU-efficient application   yes          yes          no            yes
Model                       Supervised   Supervised   Supervised    Semi-Supervised
Programming Language        Python       Python       Python/Java   Python

3. Accuracy

As the sample outputs above show, NER results for the same input vary across these libraries, because each ships pre-trained models built from different training data. For example, Spacy labeled "Milind Borate" as ORG rather than PERSON, and Polyglot did not detect "Druva" at all.

Conclusion

These observations cover NLTK, Spacy, CoreNLP (Stanza), and Polyglot, using the pre-trained models provided by each open-source library. There are many other open-source libraries that can be used for NLP.

NLTK is one of the oldest and most widely adopted libraries for research and educational purposes. Spacy is object-oriented with customizable options, works fast, and is considered the current industry standard. Stanford CoreNLP is slow for NLP production usage, but it can be integrated with NLTK to boost CPU efficiency, as sketched below. Polyglot is a lesser-known library, but it is efficient, straightforward, and fast. Using Polyglot is similar to using Spacy, and it is a good choice for projects involving languages that Spacy does not support. Unlike the other libraries, Polyglot is better at processing unusual or informal text or speech where natural language rules are not followed.
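For example, here is a minimal sketch of calling a Stanford NER model through NLTK's wrapper (the model and jar paths are placeholders for a local Stanford NER download):

from nltk.tag import StanfordNERTagger

# Placeholder paths to a locally downloaded Stanford NER model and jar
st = StanfordNERTagger(
    'english.all.3class.distsim.crf.ser.gz',  # pre-trained CRF model
    'stanford-ner.jar')                       # Stanford NER jar

tokens = "Druva is headquartered in California .".split()
print(st.tag(tokens))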

Explore the many ways Druva’s innovative solutions are enabling a range of next-generation cloud-based applications, such as those for neural networks, in the Tech/Engineering section of the blog archive.