Welcome back to Learn with me, where we break down complicated tech topics so simply that even a marketer like me can understand. 🤷‍♂️
If you’ve been following Gauss for a while, you may remember the Man vs. Machine demo we did for our Text Summarizer. I was … the man.
Man vs. Machine?
One product that Gauss Algorithm has built is an article summarizer. It takes an article, and lets you pick the length of summary that you want. It’s pretty cool!
We did a month-long demo where both our Summarizer and I created summaries of articles, and we let people guess which summary was automatically generated and which one I wrote. You can read more about the process and the results here and here.
Spoiler alert: humanity has prevailed ... for now. But we humans are biased as hell, which is one major advantage AI summarization has over us.
Want to try the summarizer yourself? Email us and we’ll hook you up with non-commercial access!
Training the algorithm
So, how was the Summarizer algorithm trained to generate the text? The same way that all algorithms are trained: through lots of data!
In this case we fed the algorithm as many BBC articles as we could access. It gave the algorithm a slight British accent, but there are worse outcomes I suppose. 😜
Let’s go into detail juuuust a little bit, because it’s a good way to teach you some important NLP vocabulary. 😇
(First, we should say that the model was inspired by research done at Facebook, so a lot of credit goes to them.)
In general, though, it’s a sequence-to-sequence (seq2seq) model, which simply means that the input is a sequence of words (an article) and the output is also a sequence of words (a summary).
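If you like seeing things in code, here’s the whole idea of seq2seq in one function signature: words in, words out. The function name and body are made up for illustration. As a stand-in for a trained model, it uses the classic “lead” baseline from news summarization (just return the first sentence), which is obviously not what a real seq2seq model does internally, but it has the same shape:

```python
def summarize(article: str) -> str:
    """Sequence in, sequence out -- that's seq2seq in a nutshell.
    A trained model learns this mapping from data; this toy stand-in
    just returns the first sentence (the classic 'lead' baseline)."""
    first = article.split(". ")[0]
    return first if first.endswith(".") else first + "."

print(summarize("Cats sleep a lot. They also purr. Some even fetch."))
# -> Cats sleep a lot.
```

The point isn’t the (deliberately dumb) logic inside, it’s the interface: everything about training is hidden behind a function that maps one word sequence to another.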
The next terms you should know are extractive and abstractive summarization. Extractive summarization identifies and then … extracts … important sentences in the text. You put those important sentences together, and that creates your summary.
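Extractive summarization is simple enough that we can sketch a whole toy version right here. A very common textbook approach (not what our Summarizer does, and the function name is our own) is to score each sentence by how frequent its words are across the article, then keep the top-scoring sentences in their original order:

```python
from collections import Counter

def extractive_summary(text: str, k: int = 1) -> str:
    """Pick the k most 'important' sentences, where importance is
    approximated by word frequency across the whole article."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Count how often each word appears anywhere in the article.
    freq = Counter(w.lower() for s in sentences for w in s.split())
    # Rank sentence indices by the total frequency of their words.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w.lower()] for w in sentences[i].split()))
    # Keep the top k, but output them in original article order.
    keep = sorted(ranked[:k])
    return ". ".join(sentences[i] for i in keep) + "."

print(extractive_summary("Dogs bark. Dogs run fast. Cats nap."))
# -> Dogs run fast.
```

Notice that every word of the output already existed in the input; the model only chooses, it never writes. That’s the key limitation abstractive summarization removes.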
But our model uses abstractive summarization, which is much more complicated: the output is unique, generated text that was not found in the input (the article being summarized).
So the Summarizer had to use machine learning to figure out which parts of the article were important, and then generate brand-new sentences that contain all of the key information. It’s really quite extraordinary if you think about it. 🤯
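To make the extractive/abstractive distinction concrete, here’s a deliberately silly toy (our own invention, nothing like the real neural model): it pulls out the two most frequent content words and plugs them into a fixed template. The resulting sentence appears nowhere in the input, which is the defining trait of abstractive output; a real model generates its wording with a neural decoder instead of a template.

```python
from collections import Counter

def abstractive_toy(text: str) -> str:
    """Generate a sentence that does NOT appear in the input,
    built from the input's two most frequent content words."""
    stopwords = {"the", "a", "an", "and", "is", "are", "of", "to", "in"}
    words = [w.strip(".,").lower() for w in text.split()]
    top = [w for w, _ in Counter(w for w in words
                                 if w not in stopwords).most_common(2)]
    # A real abstractive model would decode this sentence word by word;
    # we fake it with a template.
    return f"This article is mainly about {top[0]} and {top[1]}."

print(abstractive_toy("Rockets launch. Rockets need fuel. Fuel burns in rockets."))
# -> This article is mainly about rockets and fuel.
```

Even this crude version shows the trade-off: abstractive output reads more naturally, but now the system can say things the source never said, which is exactly why it needs so much training data to get right.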
How is any of this useful to you? Well, besides all of the fancy words we learned today, the most important concept was this:
To summarize an article, we needed access to a lot of other articles to train the algorithm.
It’s a concept that you’ll be hearing over and over in this educational series. Models are great, but data is king.
When we help companies with text generation, one of the most important services we offer is acquiring the data they need to build a perfect algorithm. We know data. We love data. We know where to find data. 🔍
Is your company looking to automate some kind of text generation? We’d love to help! Write to us, and we can tell you how!