
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

BELAGAVI-590018

A
Seminar Report
on

“PROMPT ENGINEERING”
A seminar report submitted in partial fulfillment of the requirements for the award of the degree of

Bachelor of Engineering in Computer Science and Engineering

Submitted by

Nikhil Reddy
(3LA20CS008)

Under the Guidance of:

Prof. Ambika Muddale


Asst. Professor

LINGARAJ APPA ENGINEERING COLLEGE


Gornalli, Bidar-585403
2023-2024
LINGARAJ APPA ENGINEERING COLLEGE
Gornalli, Bidar-585403

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the seminar work entitled “PROMPT ENGINEERING” has been
successfully carried out by Nikhil Reddy, and is a bonafide work carried out by him at
Lingaraj Appa Engineering College in partial fulfilment of the requirements for the
award of the degree of Bachelor of Engineering in Computer Science and Engineering of
Visvesvaraya Technological University, Belagavi, during the academic year 2023-2024. The
seminar report has been approved as it satisfies the academic requirements in respect of
seminar work for the said degree.

Prof. Ambika Muddale          Prof. Veeresh Biradar          Dr. Vinita Patil

GUIDE HOD PRINCIPAL

EXTERNAL VIVA

Name of the Examiners Signature with Date


1.
2.
ACKNOWLEDGMENT

The satisfaction that I feel at the successful completion of my seminar report,
“Prompt Engineering”, would be incomplete if I did not mention the people whose able
guidance and encouragement crowned my efforts with success. It is my privilege to
express my gratitude and respect to all those who inspired and helped me in the
completion of this seminar. All the expertise in this seminar work belongs to those listed
below.
I express my sincere thanks to our President Poojya Dr. Sharanbaswappa Appaji and
Secretary Shri. Basavaraj Deshmukh for providing all the required facilities for the
completion of the seminar Report.
I express my sincere thanks to our beloved Principal Dr. Vinita Patil, LAEC, Bidar for
giving me an opportunity to carry out my seminar report.
I am greatly indebted to Prof. Veeresh Biradar HOD, Computer Science and
Engineering Department, LAEC, Bidar for facilities and support extended to me.
I express my deepest gratitude and thanks to my guide Prof. Ambika Muddale, Asst.
Prof, LAEC, Bidar for giving her valuable cooperation and excellent guidance in completing
the seminar Report.
I express my sincere thanks to all the teaching and non-teaching staff of the Computer
Science and Engineering Department and my friends for their valuable cooperation during
the preparation of the seminar report.
Finally, I convey my heartfelt thanks to my beloved parents, who supported me in pursuing
higher studies and provided a pleasant environment at home to prepare the seminar
report in time.

Date:

Place: Bidar

Nikhil Reddy
(3LA20CS008)
DECLARATION

I, NIKHIL REDDY, bearing the USN 3LA20CS008, a student of B.E. in the Department of
Computer Science and Engineering, Lingaraj Appa Engineering College, Bidar, declare
that the seminar work entitled “Prompt Engineering” has been duly executed by me
under the guidance of Mrs. Ambika Muddale, Asst. Professor, Department of Computer
Science and Engineering. The seminar report of the same is submitted in partial
fulfilment of the requirement for the award of the Bachelor of Engineering degree in
Computer Science and Engineering by Visvesvaraya Technological University, Belagavi.

Date :

Place : Bidar

Nikhil Reddy
(3LA20CS008)
ABSTRACT

Prompt engineering has emerged as a pivotal aspect in the development and utilization
of natural language processing (NLP) systems. This report presents a thorough investigation
into the recent advancements and methodologies within the realm of prompt engineering,
focusing on its significance, techniques, and applications. The report begins by elucidating the
foundational concepts of prompt engineering, highlighting its role in shaping the behavior and
output of NLP models. It delves into the various techniques employed for prompt engineering,
including but not limited to template-based approaches, gradient-based optimization, and
reinforcement learning strategies. Each technique is examined in detail, emphasizing its
strengths, limitations, and real-world applications. Furthermore, the report explores the impact
of prompt engineering across diverse domains such as text generation, question-answering,
language translation, and sentiment analysis. Case studies and examples are provided to
illustrate the efficacy of prompt engineering in improving model performance and adapting to
specific task requirements.
TABLE OF CONTENTS
S.NO   TITLE
       ABSTRACT
1      INTRODUCTION
2      EVOLUTION
3      TYPES AND ITS WORKING
4      TECHNIQUES
5      WHAT IS GENERATIVE AI?
6      APPLICATIONS
7      MODELS
8      CONCLUSION
       REFERENCES

CHAPTER 1
INTRODUCTION
We are entering an era in which anybody can generate digital images from text — a
democratization of art and creative production. In this novel creative era, humans work within
a human-computer co-creative framework. Emerging digital technologies will co-evolve with
humans in this digital revolution, which requires the renewal of human capabilities and
competences. One increasingly important human skill is prompting, because it provides an
intuitive language-based interface to artificial intelligence (AI). Prompting (or “prompt
engineering”) is the skill and practice of writing inputs (“prompts”) for generative models.
Prompt engineering is iterative and interactive — a dialogue between humans and AI in an
act of co-creation. As generative models become more widespread, prompt engineering has
become an important research area on how humans interact with AI.

One area where prompt engineering has been particularly useful is the field of digital
visual art. State-of-the-art image generation systems, such as OpenAI’s DALL-E, Midjourney,
and Google’s Imagen, have been trained on large collections of text and images collected from
the World Wide Web. These systems can synthesize high-quality images in a wide range of
artistic styles from textual input prompts. Practitioners of text-to-image generation often use
prompt engineering to improve the quality of their digital artworks. Within the community of
practitioners, certain keywords and phrases have been identified that act as “prompt
modifiers”. These keywords can, if included in a prompt, improve the quality of the
generative model’s output or make images appear in a specific artistic style. While a short
prompt may already produce impressive results with these generative systems, the use of
prompt modifiers can help practitioners unlock the systems’ full potential. The skillful
application of prompt modifiers can distinguish expert practitioners of text-to-image
generation from novices.

Whether prompt engineering is an intuitive skill or whether this skill must be acquired (e.g.,
through means of practice and learning) has, so far, not been investigated. There are a number
of reasons why such an investigation is important. A look at Stable Diffusion’s Discord
channel shows preliminary evidence that some prompt and keyword combinations
circulating in the community of practitioners are not intuitive. Such keywords include, for
instance, the modifier “by Greg Rutkowski” or other popular modifiers, such as “smooth,”
“elegant,” “luxury,” “octane render,” and “artstation”. These keywords are often used in
combination with each other to boost the quality of generated images, resulting in unintuitive
keyword combinations that a human user would likely never have chosen to describe the
image to another human. These modifiers are commonly applied by practitioners in the AI art
community, but may confront laypeople with challenges of understanding the effect of
modifiers on the resulting image. Further confounding the problem is that keywords in
prompts can affect both the subject and style of a generated image simultaneously.

Whether prompt engineering is a skill that humans apply intuitively or whether it is a new type
of skill that needs to be acquired is important not only for the field of AI art, but also for
research on human-AI interaction and the future of work in general. A source of
unintuitiveness in the current practice of prompt engineering is a potential misalignment
between the written human prompt and the way in which text-to-image models interpret
prompts. Compared to how we humans understand a prompt and its constituents, text-to-image
generative models may attach very different meanings to some keywords in the prompt.
Further, many AI-generated images are shared on social media, often with stunning results.
However, if what we see on social media is the result of the application of prompt engineering
by skilled experts, then the generative content that we encounter on social media could be
skewed by a small group of highly skilled practitioners. Or from another perspective, if prompt
engineering is an acquired skill that requires expertise and training, this could give rise to novel
creative professions with implications for the future of work. On the other hand, we run the
risk of assigning too much importance to prompting as a method for interacting with generative
models if prompt engineering is an innate ability and an intuitive artistic skill that is acquired
quickly.

1.1 DEFINITION

➢ Prompt engineering is a relatively new discipline for developing and optimizing


prompts to efficiently use language models (LMs) for a wide variety of applications
and research topics. Prompt engineering skills help to better understand the capabilities
and limitations of large language models (LLMs).
➢ Researchers use prompt engineering to improve the capacity of LLMs on a wide range
of common and complex tasks such as question answering and arithmetic reasoning.
Developers use prompt engineering to design robust and effective prompting
techniques that interface with LLMs and other tools.
➢ Prompt engineering is not just about designing and developing prompts. It encompasses
a wide range of skills and techniques that are useful for interacting and developing with
LLMs. It is an important skill for interfacing with, building with, and understanding the
capabilities of LLMs. You can use prompt engineering to improve the safety of LLMs and build new
capabilities like augmenting LLMs with domain knowledge and external tools.

1.2 BASICS OF PROMPTING


You can achieve a lot with simple prompts, but the quality of results depends on how
much information you provide it and how well-crafted the prompt is. A prompt can contain
information like the instruction or question you are passing to the model and include other
details such as context, inputs, or examples. You can use these elements to instruct the model
more effectively to improve the quality of results.

Let's get started by going over a basic example of a simple prompt:

Figure 1.1 below explains what a prompt actually is and how it works before being optimized
to generate the desired output.

Figure 1.1: Prompt Engineering
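
As an illustrative sketch (the content of the original figure is not reproduced here), a very
simple prompt and a possible model completion might look like this:

Prompt:
The sky is

Output:
blue. The sky is blue on a clear day. On a cloudy day, it may appear gray or white.

Even such a minimal prompt produces a continuation, but adding an explicit instruction, for
example "Complete the sentence: The sky is", steers the model toward the exact behaviour
the user wants.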

1.3 ELEMENTS OF PROMPT


A prompt contains any of the following elements:

Instruction - a specific task or instruction you want the model to perform

Context - external information or additional context that can steer the model to better responses

Input Data - the input or question that we are interested to find a response for

Output Indicator - the type or format of the output.

Figure 1.2: Elements of Prompt

In the prompt of figure 1.2 above, the instruction corresponds to the classification task, "Classify
the text into neutral, negative, or positive". The input data corresponds to the "I think the food
was okay." part, and the output indicator used is "Sentiment:".
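
Putting these elements together, the complete prompt described above can be written out as
follows (an illustrative sketch; the exact wording of the original figure may differ):

Classify the text into neutral, negative, or positive.
Text: I think the food was okay.
Sentiment:

A typical model response would be a single word such as "Neutral", because the output
indicator "Sentiment:" tells the model what format of answer is expected.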


CHAPTER 2
EVOLUTION
In the rapidly advancing field of artificial intelligence, prompt engineering has
emerged as a crucial technique for refining and optimizing AI models. By guiding the outputs
of these models, prompt engineering has paved the way for more accurate and contextually
appropriate responses. In this blog, we'll delve into the fascinating history of prompt
engineering, tracing its evolution and exploring its impact on the AI landscape.

1. Early Days: Rule-Based Systems: In the early days of AI, rule-based systems
dominated the scene. These systems relied on predefined sets of rules to generate
responses. While effective for simple tasks, they often fell short when faced with
complex language understanding. Prompt engineering, in its nascent form, involved
carefully crafting these rule sets to elicit desired responses from the AI system.

2. Machine Learning Revolution: With the advent of machine learning, AI models


evolved to learn patterns and make predictions from vast amounts of data. However,
early iterations of these models struggled with generating coherent and contextually
accurate responses. Prompt engineering started gaining traction as researchers and
developers recognized the need to guide these models through well-crafted prompts.

3. Transformer Models and Fine-Tuning: The introduction of transformer models, such


as GPT (Generative Pre-trained Transformer), brought a significant breakthrough in
natural language processing. These models showcased remarkable language generation
capabilities. Prompt engineering took center stage, as developers realized the potential
to fine-tune these models on specific tasks or datasets.

4. Iterative Refinement and Feedback Loops: Prompt engineering has evolved into an


iterative process involving constant refinement and feedback loops. Developers
experiment with different prompt strategies, adjusting instructions, examples, and
constraints to optimize model outputs. This iterative approach empowers prompt
engineers to shape AI models according to specific use cases, ensuring more accurate
and reliable responses.


5. Contextual Prompts and Active Learning: Recent advancements in prompt
engineering include the exploration of contextual prompts. By incorporating context-aware
instructions or system/user messages, prompt engineers can steer conversations
and generate more contextually appropriate responses. Active learning techniques are
also being leveraged to iteratively improve prompt engineering, as the models learn
from user feedback and adapt accordingly.


CHAPTER 3

TYPES AND ITS WORKING


3.1 TYPES OF PROMPTS
There is a wide variety of prompts. As an introduction, let's start with a small set to highlight
the different types of prompts that one can use:

1. Natural Language Prompts: These prompts emulate human-like instructions,


providing guidance in the form of natural language cues. They allow developers to
interact with the model more intuitively, using instructions that resemble how a person
would communicate.
2. System Prompts: System prompts are predefined instructions or templates that
developers provide to guide the model's output. They offer a structured way of
specifying the desired output format or behavior, providing explicit instructions to the
model.
3. Conditional Prompts: Conditional prompts involve conditioning the model on
specific context or constraints. By incorporating conditional prompts, developers can
guide the model's behavior based on conditional statements, such as "If X, then Y" or
"Given A, generate B."

3.2 HOW DOES PROMPT ENGINEERING WORK?


Prompt engineering is a complex and iterative process. There is no single formula for creating
effective prompts, and the best approach will vary depending on the specific LLM and the task
at hand. However, there are some general principles that prompt engineers can follow:

• Start with a clear understanding of the task. What do you want the LLM to do? What
kind of output are you looking for? Once you have a clear understanding of the task,
you can start to craft a prompt that will help the LLM achieve your goals.
• Use clear and concise language. The LLM should be able to understand your prompt
without any ambiguity. Use simple words and phrases, and avoid jargon or technical
terms.


• Be specific. The more specific you are in your prompt, the more likely the LLM is to
generate a relevant and informative response. For example, instead of asking the LLM
to "write a poem," you could ask it to "write a poem about a lost love."
• Use examples. If possible, provide the LLM with examples of the kind of output you
are looking for. This will help the LLM to understand your expectations and to generate
more accurate results.
• Experiment. There is no one-size-fits-all approach to prompt engineering. The best
way to learn what works is to experiment with different prompts and see what results
you get.
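
As a small illustration of the "be specific" principle above (hypothetical wording), compare:

Vague prompt: "Write a poem."
More specific prompt: "Write a four-line rhyming poem about a lost love, written from the
point of view of an old letter found in an attic."

The second prompt constrains length, topic, and perspective, so the model's output is far more
likely to match what the user had in mind.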


CHAPTER 4
TECHNIQUES
Effective prompt engineering requires careful consideration and attention to detail. Here are
some techniques to enhance prompt effectiveness:

1. Zero-Shot Prompting: Zero-shot prompting means that the prompt used to interact
with the model won't contain examples or demonstrations. The zero-shot prompt
directly instructs the model to perform a task without any additional examples to steer
it.
2. Few-Shot Prompting: Few-shot prompting can be used as a technique to enable in-
context learning where we provide demonstrations in the prompt to steer the model to
better performance. The demonstrations serve as conditioning for subsequent examples
where we would like the model to generate a response.
3. Chain-of-Thought Prompting: Chain-of-thought (CoT) prompting enables complex
reasoning capabilities through intermediate reasoning steps. You can combine it with
few-shot prompting to get better results on more complex tasks that require reasoning
before responding (an illustrative prompt sketch follows this list).
4. Self-Consistency: Self-consistency aims "to replace the naive greedy decoding used in
chain-of-thought prompting". The idea is to sample multiple, diverse reasoning paths
through few-shot CoT, and use the generations to select the most consistent answer.
This helps to boost the performance of CoT prompting on tasks involving arithmetic
and commonsense reasoning.
5. Generated Knowledge Prompting: The model is first prompted to generate relevant
facts or background knowledge about the question, and this generated knowledge is
then included in a second prompt so that the final answer is better informed.


6. Prompt Chaining: Prompt chaining is useful for accomplishing complex tasks which an
LLM might struggle to address if given one very detailed prompt. In prompt chaining,
the task is split into subtasks: the response generated by one prompt is passed as input
to the next, with each step performing a transformation or additional processing, until
the final desired state is reached.
7. Tree of Thoughts (ToT): ToT generalizes chain-of-thought prompting by maintaining a
tree of intermediate "thoughts". The model explores several reasoning branches,
evaluates them, and backtracks when necessary, which helps on tasks that require
planning or search.

8. Retrieval Augmented Generation (RAG): Meta AI researchers introduced a method


called Retrieval Augmented Generation (RAG) to address such knowledge-intensive
tasks. RAG combines an information retrieval component with a text generator model.
RAG can be fine-tuned and its internal knowledge can be modified in an efficient
manner and without needing retraining of the entire model.
RAG takes an input and retrieves a set of relevant/supporting documents given a source
(e.g., Wikipedia). This is very useful as LLMs' parametric knowledge is static. RAG
allows language models to bypass retraining, enabling access to the latest information
for generating reliable outputs via retrieval-based generation (a minimal code sketch
follows this list).

9. ReAct Prompting: Generating reasoning traces allows the model to induce, track, and
update action plans, and even handle exceptions. The action step allows the model to
interface with and gather information from external sources such as knowledge bases
or environments. The ReAct framework can allow LLMs to interact with external tools
to retrieve additional information, leading to more reliable and factual responses.
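
To illustrate few-shot chain-of-thought prompting, a minimal prompt (illustrative wording
based on the widely cited tennis-ball example, not taken from this report's figures) could be:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls.
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more. How many
apples do they have?
A:

Given the worked demonstration, the model is expected to reason step by step (23 - 20 = 3,
then 3 + 6 = 9) and answer 9. Self-consistency would sample several such reasoning paths
and keep the most frequent final answer.

The RAG idea can likewise be sketched in a few lines of Python. The retrieval step below is a
toy keyword-overlap scorer chosen only for illustration; real systems use vector search over
an index, and the assembled prompt would be sent to an LLM rather than printed.

documents = [
    "Prompt engineering is the practice of designing inputs for generative models.",
    "RAG combines an information retrieval component with a text generator model.",
    "Self-consistency samples multiple reasoning paths and keeps the most frequent answer.",
]

def retrieve(query, docs, k=1):
    # Score each document by how many words it shares with the query (toy retriever).
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

question = "What does RAG combine?"
context = "\n".join(retrieve(question, documents))

# Place the retrieved context in the prompt so the model answers from it, not from memory.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt)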


Figure 4.1: Efficient Prompt Arrangement


CHAPTER 5
WHAT IS GENERATIVE AI?
Generative AI refers to a class of artificial intelligence techniques that focus on creating
data, such as images, text, or audio, rather than processing existing data. We will explore how
generative AI models, particularly generative language models, play a crucial role in prompt
engineering and how they can be fine-tuned for various NLP tasks.

5.1 GENERATIVE LANGUAGE MODELS

Generative language models, such as GPT-3 and other variants, have gained immense
popularity due to their ability to generate coherent and contextually relevant text. Generative
language models can be used for a wide range of tasks, including text generation, translation,
summarization, and more. They serve as a foundation for prompt engineering by providing
contextually aware responses to custom prompts.

5.2 FINE-TUNING GENERATIVE LANGUAGE MODELS

Fine-tuning is the process of adapting a pre-trained language model to a specific task or domain
using task-specific data. Prompt engineers can fine-tune generative language models with
domain-specific datasets, creating prompt-based language models that excel in specific tasks.
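
As a minimal sketch of what fine-tuning can look like in practice, the snippet below uses the
Hugging Face transformers library to continue training a small causal language model on a
plain-text file of domain-specific prompt/response pairs. The model name, file name, and
hyperparameters are assumptions made only for illustration; they are not taken from this report.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # assumed small base model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical text file with one domain-specific prompt/response pair per line.
dataset = load_dataset("text", data_files={"train": "domain_prompts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fine-tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modelling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()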

1. Customizing Model Responses

• Custom Prompt Engineering: Prompt engineers have the flexibility to customize


model responses through the use of tailored prompts and instructions.
• Role of Generative AI: Generative AI models allow for more dynamic and interactive
exchanges, where model responses can be modified by incorporating user instructions
and constraints in the prompts.

2. Creative Writing and Storytelling

• Creative Writing Applications: Generative AI models are widely used in creative


writing tasks, such as generating poetry, short stories, and even interactive storytelling
experiences.


• Co-Creation with Users: By involving users in the writing process through interactive
prompts, generative AI can facilitate co-creation, allowing users to collaborate with the
model in storytelling endeavors.

3. Language Translation

• Multilingual Prompting: Generative language models can be fine-tuned for


multilingual translation tasks, enabling prompt engineers to build prompt-based
translation systems.
• Real-Time Translation: Interactive translation prompts allow users to obtain instant
translation responses from the model, making it a valuable tool for multilingual
communication.

4. Multimodal Prompting

• Integrating Different Modalities: Generative AI models can be extended to


multimodal prompts, where users can combine text, images, audio, and other forms of
input to elicit responses from the model.
• Enhanced Contextual Understanding: Multimodal prompts enable generative AI
models to provide more comprehensive and contextually aware responses, enhancing
the user experience.

5. Ethical Considerations

• Responsible Use of Generative AI: As with any AI technology, prompt engineers


must consider ethical implications, potential biases, and the responsible use of
generative AI models.
• Addressing Potential Risks: Prompt engineers should be vigilant in monitoring and
mitigating risks associated with content generation and ensure that the models are
deployed responsibly.

6. Future Directions

• Continual Advancements: Generative AI is an active area of research, and prompt


engineers can expect continuous advancements in model architectures and training
techniques.
• Integration with Other AI Technologies: The integration of generative AI with other
AI technologies, such as reinforcement learning and multimodal fusion, holds promise
for even more sophisticated prompt-based language models.


CHAPTER 6
APPLICATIONS
This chapter covers advanced and interesting ways in which prompt engineering can be used
to perform useful and more advanced tasks with large language models (LLMs).

1. Function Calling:

• Function calling is the ability to reliably connect LLMs to external tools to enable
effective tool usage and interaction with external APIs.
• LLMs like GPT-4 and GPT-3.5 have been fine-tuned to detect when a function needs
to be called and then output JSON containing arguments to call the function. The
functions that are being called by function calling will act as tools in your AI
application and you can define more than one in a single request.
• Function calling is an important ability for building LLM-powered chatbots or agents
that need to retrieve context for an LLM or interact with external tools by converting
natural language into API calls.
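
A minimal sketch of function calling with the OpenAI Python SDK is shown below. The
get_weather tool is hypothetical and defined only for illustration; the model choice and
message wording are likewise assumptions.

from openai import OpenAI  # assumes the official openai package (v1 or later) is installed

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe a hypothetical tool the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the weather in Bidar right now?"}],
    tools=tools,
)

# When the model decides a tool is needed, it returns the function name and JSON
# arguments instead of a plain text answer; the application then calls the real API.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)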

2. Generating Data:

LLMs have strong capabilities to generate coherent text. Using effective prompt strategies
can steer the model to produce better, consistent, and more factual responses. LLMs are also
especially useful for generating data, which helps in running all sorts of experiments and
evaluations.
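
For example, a data-generation prompt (hypothetical wording) could be:

"Produce 10 short restaurant reviews for sentiment-analysis testing: 5 positive and 5 negative.
Return them as a JSON list of objects with the fields text and label."

The generated examples can then be used to test or evaluate a sentiment classifier without
collecting real data.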

3. Generating Code:

• LLMs like ChatGPT are very effective at code generation; this section illustrates how
ChatGPT can be used for code generation.
• OpenAI's Playground (Chat Mode) and the gpt-3.5-turbo model are used for all
examples below.
• As with all chat models from OpenAI, you can use a System Message to define the
behavior and format of the responses.
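
As an illustrative sketch of using a System Message for code generation (the wording is an
assumption, not taken from this report's figures):

System: You are a helpful code assistant. Reply only with Python code, without explanations.
User: Write a function that checks whether a number is prime.

A typical completion would be a short function such as:

def is_prime(n):
    # A number is prime if it is at least 2 and has no divisor up to its square root.
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True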


4. Graduate Job Classification Case Study:

• Clavié et al. provide a case study on prompt engineering applied to a medium-scale text
classification use case in a production system. Using the task of classifying whether a
job is a true "entry-level job", suitable for a recent graduate, or not, they evaluated a
series of prompt engineering techniques and report their results using GPT-3.5 (gpt-
3.5-turbo).
• The work shows that the LLM outperforms all other models tested, including an
extremely strong DeBERTa-V3 baseline. gpt-3.5-turbo also noticeably outperforms older
GPT-3 variants on all key metrics, but requires additional output parsing, as its ability
to stick to a template appears to be worse than that of the other variants.

5. Code Generation

An LLM can generate code or any other desired output based on the context it is given; SQL is
a good example. In the illustrative sketch below, a table schema is provided together with a
question, and the LLM produces the desired query.
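
An illustrative sketch of such a prompt and response (the content of the original example is
not reproduced; the schema below is hypothetical):

Prompt:
Table departments, columns = [DepartmentId, DepartmentName]
Table students, columns = [DepartmentId, StudentId, StudentName]
Create a MySQL query that returns all students in the Computer Science Department.

A model would typically respond with a query such as:

SELECT s.StudentId, s.StudentName
FROM students s
JOIN departments d ON s.DepartmentId = d.DepartmentId
WHERE d.DepartmentName = 'Computer Science';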


6. Reasoning: LLMs can be prompted to solve arithmetic and logical problems. Results
generally improve when the prompt asks the model to explain its steps (for example,
"Let's think step by step") before giving the final answer.

7. Text Summarization: A prompt such as "Summarize the following paragraph in one
sentence:" followed by the source text lets an LLM condense articles, reports, or
documentation into concise summaries.

8. Question Answering: Providing relevant context in the prompt together with the
question, and an output indicator such as "Answer:", helps the model return short,
factual answers grounded in that context.

9. Role Playing: Instructions such as "You are a technical interviewer; ask me one question
at a time" make the model adopt a persona and keep a consistent conversational tone,
which is useful for tutoring, customer support, and simulated interviews.


CHAPTER 7
MODELS
There are several large language models (LLMs) to which prompts are given as inputs and
from which the desired output is expected. Some of them are:

1. ChatGPT:
ChatGPT is a new model that has the capability to interact in a conversational way. This model
is trained to follow instructions in a prompt to provide appropriate responses in the context of
a dialogue. ChatGPT can help with answering questions, suggesting recipes, writing lyrics in
a certain style, generating code, and much more.

ChatGPT is trained using Reinforcement Learning from Human Feedback (RLHF). While this
model is a lot more capable than previous GPT iterations (and also trained to reduce harmful
and untruthful outputs), it still comes with limitations.

2. Code Llama:

Code Llama is a family of large language models (LLMs), released by Meta, with the
capabilities to accept text prompts and generate and discuss code. The release also includes
two other variants (Code Llama Python and Code Llama Instruct) and different sizes (7B, 13B,
34B, and 70B).

3. Gemini:

• Gemini is the newest and most capable AI model from Google DeepMind. It is built with
multimodal capabilities from the ground up and showcases impressive cross-modal
reasoning across text, images, video, audio, and code.
• Gemini comes in three sizes:
1. Ultra - the most capable of the model series and good for highly complex tasks
2. Pro - considered the best model for scaling across a wide range of tasks
3. Nano - an efficient model for on-device, memory-constrained tasks and use cases; it
includes 1.8B (Nano-1) and 3.25B (Nano-2) parameter models, distilled from larger
Gemini models and quantized to 4-bit


4. Grok-1:

• Grok-1 is a mixture-of-experts (MoE) large language model (LLM) with 314B
parameters, whose open release includes the base model weights and network
architecture.
• Grok-1 was trained by xAI and is an MoE model that activates 25% of the weights
for a given token at inference time. The pretraining cutoff date for Grok-1 is October
2023.
• As stated in the official announcement, Grok-1 is the raw base model checkpoint from
the pre-training phase which means that it has not been fine-tuned for any specific
application like conversational agents.

5. Sora:

• OpenAI introduces Sora, its new text-to-video AI model. Sora can create videos of up
to a minute of realistic and imaginative scenes given text instructions.
• OpenAI reports that its vision is to build AI systems that understand and simulate the
physical world in motion, and to train models to solve problems requiring real-world
interaction.

6. Claude 3:

• Anthropic announces Claude 3, their new family of models that include Claude 3 Haiku,
Claude 3 Sonnet, and Claude 3 Opus.
• Claude 3 Opus (the strongest model) is reported to outperform GPT-4 and all other
models on common benchmarks like MMLU and HumanEval.


CHAPTER 8
CONCLUSION

In conclusion, this report has highlighted the critical role that prompt engineering plays in
the development and use of large language models, from guiding model behaviour and
improving output quality to adapting models to specific tasks and domains. Through careful
prompt design, iterative refinement, and a commitment to safety and responsible use, prompt
engineers continuously push the boundaries of what these models can do, driving progress
across applications such as text generation, question answering, translation, and code
generation. However, the field also faces challenges such as mitigating bias and hallucination,
addressing ethical concerns, and keeping pace with rapidly evolving models in an increasingly
interconnected world.


REFERENCES
1. Prompt Design and Engineering: Introduction and Advanced Methods (January 2024)
2. A Survey on Hallucination in Large Language Models: Principles, Taxonomy,
Challenges, and Open Questions (November 2023)
3. An RL Perspective on RLHF, Prompting, and Beyond (October 2023)
4. Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation (May
2023)
5. Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study (May 2023)
6. Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond (April
2023)

