[LWM] Entity recognition 2

December 5, 2021/Alex Alderman

Welcome to another blog-isode of Learn with me — a weekly educational series by Gauss Algorithmic. We take cutting-edge technological concepts and break them down into bite-sized pieces for everyday business people. We will continue with NER, a subcategory of NLP.

Welcome back to Learn with me, where we break down complicated tech topics so simply that even a marketer like me can understand. 🤷‍♂️

I’ve now been working on this project for over a month. If you’re wondering what it’s like to get private lessons from some of the most educated people I’ve ever met, well …

So if you’re ever thinking that I’m way over-simplifying this stuff… you’re right!

(and you’re welcome 😅 )

So, let’s pick up where we left off last week, and continue discussing Named Entity Recognition, or NER. Now that we already know what NER is, let’s go a little deeper.

 

Overlapping entities

 

Sometimes the algorithm can determine the entities without much of a problem, because they’re typically only used in one way.

Rick is almost always a name, and scientist is almost always a profession.

But what if things are less clear? 🤔 The word “Washington” is a well-known person AND multiple well-known places. So how does the algorithm know how to classify it?

Well, how do YOU know whether Washington is referring to a place or the person? Typically through context and the words around it, right? The algorithm does something similar.

 

Training an NER algorithm

 

When the algorithm is being trained with data, it pays attention to the words before and after the key word. A data scientist could give it 10,000 texts where “Washington” is a person, and another 10,000 texts where “Washington” is a place.

The algorithm can then spot patterns of which words typically appear before or after Washington in each of the two situations. For simplicity, let’s just consider which words would normally go before “Washington” for both the places and the person.

In practice, the algorithm will look at more than just one word before or after. The data scientist can set this window to whatever size they like, typically 3–5 words.
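To make that concrete, here’s a minimal sketch (in plain Python, not any particular NER library) of how those “words before and after” could be collected. The function name and the example sentence are just made up for illustration:

```python
def context_window(tokens, target, size=3):
    """Collect the words within `size` positions before and after each
    occurrence of `target` (the key word itself is excluded)."""
    windows = []
    for i, tok in enumerate(tokens):
        if tok == target:
            before = tokens[max(0, i - size):i]
            after = tokens[i + 1:i + 1 + size]
            windows.append((before, after))
    return windows

sentence = "President Washington crossed the Delaware River".split()
print(context_window(sentence, "Washington", size=2))
# → [(['President'], ['crossed', 'the'])]
```

A real training pipeline would run this over all 20,000 texts and feed the collected windows to the next step.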

These 3–5 words can be turned into a Bag of words, and a calculation can be run to determine how likely the key word is to belong to one category or another (e.g. a person or a place). 🤖⁉️
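Here’s a toy version of that calculation, hedged heavily: the context words below are invented for illustration, and a real system would use proper probabilities (e.g. a Naive Bayes classifier) rather than raw counts:

```python
from collections import Counter

# Invented training data: context words seen near "Washington" in each class.
person_bag = Counter(["president", "general", "said", "mr", "born", "led"])
place_bag = Counter(["in", "state", "visited", "moved", "to", "near"])

def classify(context_words):
    """Score each class by how often its bag has seen the context words."""
    person_score = sum(person_bag[w] for w in context_words)
    place_score = sum(place_bag[w] for w in context_words)
    return "person" if person_score >= place_score else "place"

print(classify(["general", "led"]))  # → person
print(classify(["moved", "to"]))     # → place
```

The idea is the same at scale: whichever category’s bag of words best matches the context wins.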

After looking through all of the data, the NER algorithm can predict with high certainty that if the words “Hi, my name is” come first, then what follows (*chiga* *chiga* Slim Shady) will be a person, and not a place.

 

Practical application

 

NER tech is getting stronger every day. And there’s a lot of potential for it to save companies and hospitals a lot of time and improve overall accuracy.

If your organization processes a lot of documents or other text, then there’s a good chance that this process can be automated. Write to us, and we can tell you how!
