You can find Part 2 here.
A large language model (LLM) is a type of machine learning model that can handle a
wide range of natural language processing (NLP) use cases. But due to their
versatility, LLMs can be a bit overwhelming for newcomers who are trying to
understand when and where to use these models.
In this blog series, we’ll simplify LLMs by mapping out the seven broad categories of
use cases where you can apply them, with examples from Cohere's LLM platform.
Hopefully, this can serve as a starting point as you begin working with the Cohere API,
or even seed some ideas for the next thing you want to build.
1. Generate
2. Summarize
3. Rewrite
4. Extract
5. Search/Similarity
6. Cluster
7. Classify
Because of the general-purpose nature of LLMs, the range of use cases and relevant
industries within each category is extremely wide. This post will not attempt to delve
too deeply into each, but it will provide you with enough ideas and examples to help
you start experimenting.
1. Generate
Probably the first thing that comes to mind when talking about LLMs is their ability to
generate original and coherent text. And that’s what this use case category is all
about. LLMs are pre-trained using a huge collection of text gathered from a variety of
sources. This means that they are able to capture the patterns of how language is
used and how humans write.
Getting the best out of these generation models has become a field of study in its
own right, called prompt engineering. In fact, the first four use case categories on
our list all leverage prompt engineering in their own ways; more on the other three
later. Prompt engineering is a vast topic, but at a very high level, the idea is to
provide the model with a small amount of contextual information as a cue for
generating a specific sequence of text.
One way to set up the context is to write a few lines of a passage for the model to
continue. Imagine writing an essay or marketing copy where you would begin with the
first few sentences about a topic, and then have the model complete the paragraph or
even the whole piece.
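As a minimal sketch of this continuation style: the prompt is nothing more than the opening sentences themselves. The Cohere API call is shown only as a comment because it needs an API key, and the exact client parameters are assumptions that may differ across SDK versions.

```python
# Continuation-style prompting: the prompt is simply the opening of the
# passage, and the model is asked to carry it forward.

opening = (
    "Remote work has changed how teams collaborate. "
    "Instead of relying on hallway conversations, "
)

# With the Cohere Python SDK this could be sent roughly as (untested sketch):
# import cohere
# co = cohere.Client("YOUR_API_KEY")
# completion = co.generate(prompt=opening, max_tokens=200).generations[0].text

print(opening)
```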
Another way is to write a few example patterns that indicate the type of text we
want the model to generate. This approach is interesting because of the different
ways we can shape the model's output and the range of applications it enables.
Let’s take one example. The goal here is to have the model generate the first
paragraph of a blog post. First, we prepare a short line of context about what we’d like
the model to write. Then, we prepare two examples — each containing the blog’s title,
its audience, the tone of voice, and the matching paragraph.
Finally, we feed the model with this prompt, together with the information for the new
blog. And the model will duly generate the text that matches the context, as seen
below.
--
Audience: Millennials
--
Blog Title: Mastering Dynamic Programming
Audience: Developers
Tone: Informative
First Paragraph: In this piece, we'll help you understand the fundamentals
programming problems.
--
Audience: Athletes
Tone: Enthusiastic
First Paragraph:
Completion:
seasoned climber, you'll find a wealth of options for the perfect rock
climbing excursion. In this piece, I'll share some of the best rock
climbing spots in the United States to help you plan your next adventure.
In fact, the excerpt you read at the beginning of this blog was generated using this
preset!
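A few-shot prompt like the one above can be assembled programmatically. The sketch below only builds the prompt string; the API call is left as a comment since it assumes a Cohere API key, and the rock-climbing title is a placeholder standing in for the third example's title.

```python
# Build a few-shot prompt for generating a blog post's first paragraph.
# Each example supplies a title, audience, tone, and a finished paragraph;
# the final entry leaves "First Paragraph:" blank for the model to complete.

def build_blog_prompt(examples, new_post):
    """Join worked examples and the new post's fields into one prompt."""
    blocks = []
    for ex in examples:
        blocks.append(
            f"Blog Title: {ex['title']}\n"
            f"Audience: {ex['audience']}\n"
            f"Tone: {ex['tone']}\n"
            f"First Paragraph: {ex['paragraph']}"
        )
    blocks.append(
        f"Blog Title: {new_post['title']}\n"
        f"Audience: {new_post['audience']}\n"
        f"Tone: {new_post['tone']}\n"
        f"First Paragraph:"
    )
    return "\n--\n".join(blocks)

examples = [
    {
        "title": "Mastering Dynamic Programming",
        "audience": "Developers",
        "tone": "Informative",
        "paragraph": "In this piece, we'll help you understand the "
                     "fundamentals of dynamic programming problems.",
    },
]
new_post = {"title": "Top Rock Climbing Spots",  # placeholder title
            "audience": "Athletes", "tone": "Enthusiastic"}

prompt = build_blog_prompt(examples, new_post)
# The prompt would then be sent to a generation endpoint (untested sketch):
# import cohere
# co = cohere.Client("YOUR_API_KEY")
# response = co.generate(prompt=prompt, max_tokens=100)
print(prompt)
```

The trailing `First Paragraph:` is what cues the model to fill in only the missing field rather than continue arbitrarily.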
That was just one example, but how we prompt a model is limited only by our
creativity.
2. Summarize
The second use case category, which also leverages prompt engineering, is text
summarization. Think about the amount of text that we deal with on a typical day, such
as reports, articles, meeting notes, emails, transcripts, and so on. We can have an LLM
summarize a piece of text by prompting it with a few examples of a full document and
its summary.
Prompt:
that the game has gotten harder in recent weeks ever since The New York
Times bought it from developer Josh Wardle in late January. The Times has
come forward and shared that this likely isn't the case. That said, the
NYT did mess with the back end code a bit, removing some offensive and
claiming that a confirmation bias was at play. One Twitter user went so
far as to claim the game has gone to "the dusty section of the dictionary"
to find its latest words.
--
funding led by ARG Global, with participation from D9 Capital Group and
Boulder Capital. Earlier backers also joined the round, including Hilton
Group, Roxanne Capital, Paved Roads Ventures, Brook Partners, and Plato
Capital.
--
warning is in effect for the Bay Area, with freezing temperatures expected
in these areas overnight. Temperatures could fall into the mid-20s to low
30s in some areas. In anticipation of the hard freeze, the weather service
TLDR:
Completion:
Here are some other example documents where LLM summarization will be useful:
— Customer support chats
— Environmental, Social, and Governance (ESG) reports
— Earnings calls
— Paper abstracts
— Dialogues and transcripts
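The summarization prompt above follows the same few-shot recipe and can be sketched the same way. Everything here is illustrative: the `Passage:` label and the example texts are assumptions, and the actual API call is omitted since it requires a key.

```python
# Few-shot summarization: each example is a passage followed by "TLDR:" and
# its summary; the final passage ends at "TLDR:" so the model fills it in.

def build_tldr_prompt(examples, new_passage):
    """examples: list of (passage, summary) pairs; new_passage: text to summarize."""
    blocks = [f"Passage: {p}\nTLDR: {s}" for p, s in examples]
    blocks.append(f"Passage: {new_passage}\nTLDR:")
    return "\n--\n".join(blocks)

examples = [
    ("The quarterly report shows revenue grew 12% year over year, driven "
     "largely by the new subscription tier.",
     "Revenue grew 12%, driven by the new subscription tier."),
]
prompt = build_tldr_prompt(
    examples,
    "A hard freeze warning is in effect for the Bay Area, with temperatures "
    "expected to fall into the mid-20s overnight.")
print(prompt)
```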
3. Rewrite
Text rewriting is another flavor of prompt engineering. It's one of those tasks
that we do every day and spend a lot of time on; if we could automate it, we would
be freed up to work on more creative tasks.
Rewriting text can mean different things and take different forms, but one common
example is text correction. Consider the task of correcting the spelling and
grammar of voice-to-text transcriptions. We prepare the prompt with a short bit of
context about the task, followed by examples of incorrect and corrected
transcriptions.
Prompt:
with errors, the model responds with the correct version of the excerpt.
Incorrect transcription: card. I lost by card when I what's at the grocery store
and I need to see sent a new one.
Correct transcription: card. I lost my card when I was at the grocery store and I
need to be sent a new one.
--
Incorrect transcription: Can you repeat the dates for the three and five
dear fixed mortgages? I want to compare them a gain the dates I was quoted
by a other broker.
Correct transcription: Can you repeat the rates for the three and five
year fixed mortgages? I want to compare them against the rates I was
quoted by another broker.
--
Incorrect transcription: I paid my pull balance one day due date. I not missed a
pavement year yet.
Correct transcription:
Completion:
I got charged interest on my credit card last month but I paid my balance
in full on the due date. I haven't missed a payment yet. Man, you reverse
Here are some other example use cases for using an LLM to rewrite text:
— Paraphrase a piece of text in a different voice
— Build a spell checker that corrects text capitalizations
— Rephrase chatbot responses
— Redact personally identifiable information
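The correction task above can also be framed as few-shot prompt assembly. The sketch below is illustrative only; the labels mirror the example in the text, and the model call itself is left out since it assumes a Cohere API key.

```python
# Few-shot correction: pairs of incorrect and corrected transcriptions teach
# the model the task; the last incorrect line is left for the model to fix.

def build_correction_prompt(pairs, new_incorrect):
    """pairs: list of (incorrect, correct) transcription strings."""
    blocks = [
        f"Incorrect transcription: {bad}\nCorrect transcription: {good}"
        for bad, good in pairs
    ]
    blocks.append(f"Incorrect transcription: {new_incorrect}\n"
                  f"Correct transcription:")
    return "\n--\n".join(blocks)

pairs = [
    ("Can you repeat the dates for the three and five dear fixed mortgages?",
     "Can you repeat the rates for the three and five year fixed mortgages?"),
]
prompt = build_correction_prompt(pairs, "I not missed a pavement year yet.")
print(prompt)
```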
4. Extract
Text extraction is another use case category that can leverage a generation LLM. The
idea is to take a long piece of text and extract only the key information or words from
the text.
Prompt:
This program will extract relevant information from contracts. Here are
some examples:
23 day of August, 2022 (the "Effective Date") is made between Oren & Co
upon its signing until July 31, 2023, when the final LinkedIn post is
Extracted Text:
--
based musical group ("Artist") and Universal Music Group, a record label
with license number 545345 ("Recording Label"). Artist and Recording Label
Extracted Text:
Completion:
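An extraction prompt follows the same shape: a task description, worked documents with their extracted fields, and a final document ending at `Extracted Text:`. The contract snippets and extracted-field format below are made-up placeholders, and the generation call is omitted as before.

```python
# Few-shot extraction: each document is followed by "Extracted Text:" and the
# key information pulled from it; the final document is left for the model.

def build_extraction_prompt(task_description, examples, new_document):
    """examples: list of (document, extracted) string pairs."""
    blocks = [task_description]
    for doc, extracted in examples:
        blocks.append(f"{doc}\nExtracted Text: {extracted}")
    blocks.append(f"{new_document}\nExtracted Text:")
    return "\n--\n".join(blocks)

examples = [
    ("This agreement, effective 23 August 2022, is made between Oren & Co "
     "and the Client.",  # placeholder contract excerpt
     "Parties: Oren & Co, Client; Effective Date: 23 August 2022"),
]
prompt = build_extraction_prompt(
    "This program will extract relevant information from contracts. "
    "Here are some examples:",
    examples,
    'This agreement is made between a musical group ("Artist") and '
    'Universal Music Group ("Recording Label").')
print(prompt)
```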
Conclusion
In part two of this series, we’ll continue our exploration of the remaining three use
case categories (Search/Similarity, Cluster, and Classify). We’ll also explore how LLM
APIs can help address more complex use cases. The world is complex, and a lot of
problems can only be tackled by piecing multiple NLP models together. We’ll look at
some examples of how we can quickly snap together a combination of API endpoints in
order to build more complete solutions.