Improve the Outputs of LLMs

What Prompt Engineering, RAG & Fine-Tuning Are All About
When integrating Large Language Models (LLMs) into your product or application,
ensuring high-quality output is essential. But how can you actually achieve that?
To enhance the performance of LLMs, three key techniques come into play: Prompt
Engineering, RAG, and Fine-Tuning.
1. Prompt Engineering

Prompt engineering means carefully crafting the instructions, context, and examples you give the model in order to steer its output, without changing the model itself.

Example of a Prompt:
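As a sketch, a few-shot prompt for a mail-qualification task can be assembled programmatically. The role text, example mails, and labels below are illustrative assumptions, not taken from the original slides:

```python
# Minimal sketch of prompt engineering: the task is phrased as a
# structured few-shot prompt (instructions + labeled examples + query).

def build_prompt(mail: str) -> str:
    """Assemble a few-shot prompt for classifying sales mails."""
    instructions = (
        "You are a sales assistant. Classify each mail as "
        "'Qualified' or 'Unqualified'. Answer with one word."
    )
    # Hypothetical examples; in practice these come from your own data.
    examples = [
        ("We need a demo of your invoicing tool for 500 seats.", "Qualified"),
        ("I am a student writing a paper about your company.", "Unqualified"),
    ]
    shots = "\n".join(f"Mail: {m}\nLabel: {l}" for m, l in examples)
    return f"{instructions}\n\n{shots}\n\nMail: {mail}\nLabel:"

print(build_prompt("Can we arrange a demo for our accounting team?"))
```

The resulting string would then be sent to any LLM API; only the prompt text changes, never the model.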
2. Retrieval-Augmented Generation (RAG)

What is RAG? … explained with a prompt
Retrieval Augmented Generation (RAG) is a technique that equips LLMs with external
knowledge.
Think of it as enhancing a prompt with an extra layer of context. The added context isn't typed in by hand, though: relevant passages are automatically retrieved from an external knowledge base and inserted into the prompt. This enriches the LLM's input and enables it to generate more informed and contextually relevant responses.
3. Fine-Tuning

What is Fine-Tuning? … explained with a prompt
Example training pair: the mail "We are a general contractor and we are looking for a solution for our accountant to manage invoices. Can we arrange a demo?" is labeled -> Unqualified. At inference time, the prompt is simply "Now qualify the following mails:" followed by the dynamic content.
Fine-tuning trains an LLM for a precise task or a specific behavior, eliminating the need for external information retrieval.
This process involves training a base model on task-specific data (the training set), consisting of labeled examples aligned with the target task. Afterwards, users interact directly with the fine-tuned LLM rather than with the base model.
When to use fine-tuning:
● Improving output when words fall short and examples are a better explanation, i.e. when you have a lot of example data showing how the output should look
● Increasing speed, through use of a smaller model
● Saving costs, through reducing the prompt length
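Preparing such a training set is often just a matter of serializing labeled examples into a JSONL file of prompt/completion pairs; the exact field names vary by provider, so the `prompt`/`completion` keys and the second mail below are illustrative assumptions:

```python
# Minimal sketch of building a fine-tuning training set:
# one JSON object per line, each pairing an input mail with its label.
import json

labeled_mails = [
    ("We are a general contractor and we are looking for a solution for "
     "our accountant to manage invoices. Can we arrange a demo?",
     "Unqualified"),
    ("We want to roll out your invoicing tool to 2,000 employees next "
     "quarter. Who can send us an offer?",
     "Qualified"),
]

with open("train.jsonl", "w") as f:
    for mail, label in labeled_mails:
        f.write(json.dumps({"prompt": mail, "completion": label}) + "\n")
```

Once the base model is trained on a file like this, the prompt at inference time can shrink to a bare instruction, since the behavior now lives in the model's weights.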
Recommended
Resources
… to understand more
By Heiko Hotz