LangChain Presentation
A FRAMEWORK
DEFINITION
• A prompt is a short piece of text used to guide an LLM's output. Prompts can control the model's output in a variety of ways, such as:
• Indicating the task the LLM should perform. For example, a prompt might instruct the model to generate a poem, translate a text, or answer a question.
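The idea of a reusable prompt can be sketched in plain Python. The class below is an illustrative stand-in for LangChain's prompt-template concept, not the library's actual API:

```python
# Minimal sketch of a prompt template: named slots are filled in
# to produce the final text sent to the model (illustrative only).

class PromptTemplate:
    """Fills named placeholders in a template string to build a prompt."""

    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)


template = PromptTemplate("Translate the following text to {language}:\n{text}")
prompt = template.format(language="French", text="Hello, world!")
print(prompt)
```

The same template can be reused with different slot values to indicate different tasks.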
OUTPUT PARSER
• Output parsers are responsible for taking the output of an LLM and transforming it into a more suitable format. This is very useful when you are using LLMs to generate any form of structured data.
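As a concrete sketch, the parser below turns a comma-separated model response into a Python list. It illustrates the concept only; LangChain ships its own parsers (e.g. for lists and JSON) with a different interface:

```python
# Illustrative output parser: converts raw LLM text such as
# "red, green, blue" into structured data (a Python list).

class CommaSeparatedListParser:
    """Splits a comma-separated response and strips whitespace."""

    def parse(self, text: str) -> list[str]:
        return [item.strip() for item in text.split(",") if item.strip()]


parser = CommaSeparatedListParser()
items = parser.parse("red, green, blue")
print(items)
```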
CHAINS
• A chain is a sequence of steps that are used to process user input and generate an output. Chains
can be used to perform a variety of tasks, such as question answering, summarization, and creative
writing.
• Prompting: The first step is to generate a prompt that will guide the LLM's output. The prompt can be produced in a variety of ways, such as filling in a template or using a natural language processing (NLP) model.
• Querying: The second step is to query the LLM with the prompt. The LLM returns a response, which may be text, code, or some other type of output.
• Postprocessing: The third step is to postprocess the LLM's response. This may involve cleaning up the response, formatting it, or translating it into another language.
• The steps in a chain can be composed to carry out a more complex task. For example, a chain could answer a question by first prompting the LLM with the question, then querying the LLM, and finally postprocessing its response.
AGENTS
• Agents are programs that can perform a variety of tasks by reasoning step by step, following a chain of thought. An agent is typically given a set of tools or resources, such as access to Wikipedia, web search, mathematical libraries, and other LLMs. When an agent receives a query, it uses its tools to reason about the query and generate a response.
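A toy version of this loop can be written directly: the agent inspects the query, picks a tool, and returns the tool's result. The tool names and the keyword-based routing are purely illustrative; a real agent would ask the LLM which tool to use:

```python
# Toy agent: route a query to one of a set of tools and return the result.

def calculator(expression: str) -> str:
    # Demo-only arithmetic tool; eval is restricted to bare expressions.
    return str(eval(expression, {"__builtins__": {}}))

def wiki_lookup(topic: str) -> str:
    # Stand-in for a Wikipedia tool, backed by a tiny local dictionary.
    facts = {"Python": "Python is a programming language."}
    return facts.get(topic, "No article found.")

TOOLS = {"calculator": calculator, "wiki": wiki_lookup}

def agent(query: str) -> str:
    # A real agent would let the LLM choose; here we route by keyword.
    if any(ch.isdigit() for ch in query):
        return TOOLS["calculator"](query)
    return TOOLS["wiki"](query)


print(agent("2 + 3"))
print(agent("Python"))
```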
INDEXING
• Indexing: a pipeline for ingesting data from a source and indexing it.
• Load: First we need to load our data. This is done with Document Loaders.
• Split: Text splitters break large Documents into smaller chunks. This is useful both for indexing data and for passing it into a model, since large chunks are harder to search over and won’t fit in a model’s finite context window.
• Store: We need somewhere to store and index our splits, so that they can later be
searched over. This is often done using a VectorStore and Embeddings model.
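The Load → Split → Store pipeline can be sketched with a simple fixed-size splitter and an in-memory dictionary standing in for the store. Real setups use a VectorStore and an Embeddings model instead of this toy store:

```python
# Sketch of the indexing pipeline: split a loaded document into
# overlapping chunks, then store them for later retrieval.

def split_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Fixed-size character splitter with overlap between chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


# "Load": a document as a plain string for the demo.
document = "LangChain is a framework for building LLM applications. " * 5

# "Split": break it into chunks small enough to search and fit in context.
chunks = split_text(document, chunk_size=80, overlap=10)

# "Store": a dict stands in for a VectorStore + Embeddings model.
store = {i: chunk for i, chunk in enumerate(chunks)}
print(len(store))
```

The overlap keeps sentences that straddle a chunk boundary partially present in both neighbouring chunks.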
RETRIEVAL AND GENERATION
• Retrieve: Given a user input, relevant splits are retrieved from storage
using a Retriever.
• Generate: A ChatModel / LLM produces an answer using a prompt that includes the question and the retrieved data.
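The retrieve-then-generate step can be sketched with a toy retriever that scores stored chunks by word overlap with the question, then assembles the prompt the model would receive. Real retrievers use embedding similarity, not word overlap, and the chunks below are invented for the demo:

```python
# Toy retrieval step: rank chunks by shared words with the question,
# then build the final prompt for the ChatModel / LLM.

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )[:k]


chunks = [
    "LangChain chains combine prompting, querying, and postprocessing.",
    "A vector store indexes document splits for later search.",
]
question = "What does a vector store index?"

context = retrieve(question, chunks)
prompt = f"Context: {context[0]}\nQuestion: {question}"
print(prompt)
```

The assembled prompt pairs the retrieved context with the question, which is exactly what gets passed to the model in the Generate step.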
EXAMPLE