
AMITY UNIVERSITY UTTAR PRADESH

GREATER NOIDA CAMPUS

AMITY SCHOOL OF ENGINEERING & TECHNOLOGY


2022 - 2026

MAJOR PROJECT REPORT

“TECHNIQUES OF ARTIFICIAL INTELLIGENCE”

Submitted as a partial fulfilment toward the award of


Bachelor of Technology
(COMPUTER SCIENCE AND ENGINEERING)

Submitted by:
HARSHIT JUNEJA (A41105222053)

Submitted to:
Prof. POOJA ANAND
(Faculty Guide)

CERTIFICATE

I hereby certify that the work presented in this Major Project Report, submitted in partial fulfilment of the requirement for the award of B.Tech (CSE) to the Department of ASET, is a genuine record of my own work, carried out during the period 5 June 2023 to 26 June 2023 under the supervision of Prof. POOJA ANAND, Computer Science Department, Amity University Greater Noida.

The matter presented in this project report has not been submitted by me for the award of any other degree elsewhere.

HARSHIT JUNEJA
(A41105222053)

This is to certify that the above statement made by the candidate is correct to the best of my knowledge.

Date:

Signature of Faculty Guide


Mrs. Pooja Anand

ACKNOWLEDGEMENT

I would like to express my sincerest gratitude and indebtedness to the people who gave me moral and technical support, and whose kind assistance has been instrumental in the completion of my project. It gives me immense satisfaction to offer my humble gratitude to my faculty guide, Mr. Pradeep Kushwaha, for his integral guidance and for providing the ideas necessary to carry out this project.

I would also like to place on record my best regards and personal sense of gratitude to Prof. (Dr.) AJAY RANA (Director General - SD & VT), Prof. J S Jassi (Dean Academic), Prof. Pooja Anand (Programme Coordinator), and all the professors of our Computer Science Department. I also thank all the faculty of Amity University for their careful and valuable guidance, which has been extremely useful for my studies, both theoretically and practically.

I regard this opportunity as a big milestone in my professional development. I will strive to apply the skills and knowledge gained during this period in the best possible manner, and I will continue working to achieve my desired professional targets.

Sincerely

HARSHIT JUNEJA
(A41105222053)

Table of Contents

1. CERTIFICATE
2. ACKNOWLEDGEMENT
3. ABSTRACT
4. INTRODUCTION
5. ARTIFICIAL INTELLIGENCE
6. HISTORY OF AI
7. Types of AI
8. Pros and Cons of AI
9. Subsets of AI
10. Scopes of working in AI fields
11. CONCLUSION

ABSTRACT

Artificial Intelligence (AI), currently a major trend, plays a very crucial role in modern life. This report therefore surveys the uses, advantages, disadvantages, and most of the other aspects this technology holds. The rapid increase in its popularity is driving a revolution in the technical field, affecting thousands of people, and the speed at which the technology is spreading is itself a matter of concern. The report also examines these concerns, along with how AI is being used in decision-making processes, fraud detection, and similar applications.
The report further covers the pros and cons of artificial intelligence and its subsets, i.e., Machine Learning, Deep Learning, Robotics, etc. It provides a brief sketch of how the technology is used, its back end, its developers, the languages it is built with, and so on.
Finally, the report concludes that this technology is going to be the future of all technologies, helping in various ways yet remaining somewhat problematic: it leads to a smooth user experience, but brings problems too.

INTRODUCTION

Artificial Intelligence (AI) is a very commonly heard name these days; we can call it the trend of the 21st century. It is among the most remarkable man-made pieces of work to ever exist.
Because this technology is man-made, it is called Artificial Intelligence. In this report we will study the technology of artificial intelligence, whose pace of progress is outstripping all other machines and technologies. AI mimics humans, which is resulting in the replacement of humans in various fields of work. The report also discusses the subsets of artificial intelligence, which include Machine Learning, Robotics, Deep Learning, etc.
No doubt this technology is worthwhile, but every coin has two sides. In this report we will learn about both sides: how the technology makes things time-efficient, and where its drawbacks lie.

ARTIFICIAL INTELLIGENCE

Artificial intelligence refers to technology that works as an alternative to human intelligence to produce effective and efficient outcomes. It aims to learn and imitate human actions such as learning, problem solving, and unbiased decision making. The technology has both a front end and a back end, and both play a vital role in its functioning.
Imagine a tool that can compress days of work into minutes; that is exactly what AI does. With this technology, the solution to almost any query, whether in science, technology, or everyday life, can be produced in a few seconds. It works with complex algorithms that help the user reach a result.

HISTORY OF AI

The history of Artificial Intelligence (AI) dates back to the mid-20th century, when the concept of creating machines that could simulate human intelligence emerged. Here is a brief overview of the key milestones in the history of AI:

1. The Dartmouth Conference (1956): Considered the birthplace of AI, this conference held at Dartmouth College in the United States marked the official launch of AI as a field of study. Participants, including pioneers like John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, aimed to explore the potential of creating machines that could exhibit human-like intelligence.

2. Early AI Research (1956-1974): In the years following the Dartmouth Conference, AI experienced rapid growth. Researchers developed foundational concepts such as the Logic Theorist (a program that could prove mathematical theorems) and the General Problem Solver (a program that could solve a wide range of problems).

3. Expert Systems (1965-1980): During this period, researchers focused on creating expert systems - AI programs that could mimic human expertise in specific domains. One notable example was MYCIN, a system developed to diagnose bacterial infections and recommend treatments.

4. AI Winter (mid-1970s to mid-1980s): Despite initial optimism, progress in AI research slowed down, leading to a period known as the "AI winter." Funding for AI projects diminished, as many projects failed to deliver the desired results. AI was deemed overhyped, and public expectations remained unfulfilled.

5. Neural Networks Resurgence (1986-present): In the mid-1980s, neural
networks, a branch of AI that simulates the workings of the human brain,
experienced a revival. Researchers discovered new techniques to train
neural networks effectively, leading to advancements in pattern
recognition, speech recognition, and other domains.

6. Big Data and Machine Learning (2000s-present): The exponential growth of data and computational power in the 21st century fueled advancements in machine learning. Algorithms that could learn from vast amounts of data allowed AI systems to make more accurate predictions and perform complex tasks such as image recognition and natural language processing.

7. Deep Learning and AI Breakthroughs (2010s-present): Deep learning, a subset of machine learning that uses artificial neural networks with many layers, has emerged as a dominant paradigm in AI. Significant breakthroughs in deep learning have been achieved, including the development of AlphaGo, an AI system that defeated human champions in the ancient game of Go.

8. AI in Everyday Applications (present): AI has become a part of our daily lives, powering virtual assistants like Siri and Alexa, recommendation systems in online shopping platforms, and facial recognition systems in smartphones. It is also being applied in diverse fields such as healthcare, finance, transportation, and robotics.

Throughout its history, AI has experienced periods of hype, setbacks, and breakthroughs. Today, AI continues to evolve and shows immense promise for transforming various industries and enhancing human lives.

Types of AI

There are generally three types of AI:

1. Narrow AI (also known as Weak AI): This type of AI is designed for specific tasks and operates within a limited context. Narrow AI systems are designed to excel at a specific task, such as facial recognition or natural language processing, but they lack the ability to generalize their knowledge to other domains. Most of the AI applications we see today fall under this category.

2. General AI (also known as Strong AI): General AI refers to AI systems that possess the ability to understand, learn, and perform any intellectual task that a human being can do. This type of AI would have human-like cognitive capabilities and could understand and apply knowledge across various domains. General AI is still largely theoretical and does not exist in practice.

3. Superintelligent AI: Superintelligent AI goes beyond human capabilities and demonstrates intelligence that surpasses even the most intelligent human beings. This type of AI is capable of outperforming humans in virtually every domain, including scientific research, creativity, and problem-solving. Superintelligent AI is a topic of speculation and debate, with experts discussing its potential benefits and risks.

It's worth noting that the term "Artificial General Intelligence (AGI)" is
often used to refer to both General AI and Superintelligent AI, as they both
involve AI systems with human-like cognitive abilities. However, it's
important to distinguish between the two as General AI represents a
broader scope of human-like intelligence, while Superintelligent AI
represents an intelligence that goes beyond human capacity.

Pros and Cons of AI

There are several advantages of AI that contribute to its increasing prominence and adoption in various industries. Some key advantages include:

1. Automation and Efficiency: AI can automate repetitive and mundane tasks, increasing efficiency and productivity. It can handle large volumes of data, analyse it quickly, and make decisions or draw insights in real time. This frees up human workers to focus on more complex and creative tasks, leading to improved efficiency and faster results.

2. Precision and Accuracy: AI systems can process and analyze vast amounts of data with high precision, minimizing errors and inconsistencies that may arise from human intervention. This is especially valuable in fields like healthcare, finance, and manufacturing, where precision and accuracy are paramount.

3. Improved Decision-Making: AI algorithms can process complex data sets and identify patterns, trends, or correlations that may not be apparent to human analysts. This helps in making informed and data-driven decisions, especially in areas such as finance, marketing, and risk assessment.

4. Enhanced Personalization: AI enables the delivery of personalized experiences and services to users. By analysing user data, preferences, and behaviours, AI systems can tailor recommendations, content, and advertisements to individual users, resulting in improved customer satisfaction and engagement.

5. Advanced Problem Solving: AI techniques like machine learning and
deep learning excel at tackling complex problems that may be challenging
or time-consuming for humans. Whether it's image recognition, natural
language processing, or predicting outcomes based on historical data, AI
can provide powerful solutions and insights.

6. Continuous Learning and Improvement: AI systems can learn from new data and experiences, continuously improving their performance over time. This enables them to adapt to changing circumstances, identify emerging patterns, and incorporate new knowledge to make better decisions.

7. Handling Big Data: With the exponentially growing volumes of data generated by organizations, AI is essential for extracting valuable insights from this data. AI algorithms can efficiently process, analyze, and interpret big data, uncovering hidden patterns and correlations that can drive strategic decision-making.

8. Improved Safety and Risk Mitigation: In industries like healthcare, transportation, and manufacturing, AI can contribute to improving safety and reducing risks. For example, AI-powered systems can analyze real-time data to predict equipment failures or detect anomalies, helping prevent accidents and downtime.

9. Augmenting Human Abilities: AI is not intended to replace humans but rather enhance their capabilities. By automating routine tasks and providing intelligent insights, AI allows humans to focus on higher-level tasks that require creativity, critical thinking, and emotional intelligence.

10. Innovation and Exploration: AI enables the development of new technological advancements, such as autonomous vehicles, robotics, virtual assistants, and smart home devices. These innovations have the potential to revolutionize industries, improve quality of life, and create new opportunities for growth and exploration.

It's important to note that while AI offers several advantages, ethical considerations, responsible usage, and potential challenges, such as job displacement, need to be addressed to ensure the ethical and responsible deployment of AI technologies.
Turning to the disadvantages, the following are some of them:
1. Job Displacement: One of the main concerns is the potential for AI to
automate and replace human jobs. AI has the capability to perform tasks
traditionally done by humans, which could lead to unemployment and
economic disruptions in certain industries.

2. Lack of Human-like Judgment and Understanding: Despite advancements in AI, machines still lack human-like judgment, intuition, and common sense. AI systems may struggle to understand context, sarcasm, or ambiguous situations, which can limit their decision-making capabilities.

3. Data Bias and Discrimination: AI systems learn from historical data, and
if that data contains biases or discriminatory patterns, the AI can
perpetuate and amplify these biases. This can result in unfair treatment or
decisions that disproportionately impact certain groups of people.

4. Privacy and Security Concerns: AI relies on collecting and analyzing vast amounts of data, which raises concerns about privacy and security. If personal or sensitive data is mishandled or falls into the wrong hands, it can lead to privacy breaches or identity theft.

5. Lack of Transparency and Explainability: Some AI algorithms, such as deep neural networks, are black boxes, meaning they are difficult to interpret and explain. This lack of transparency can raise trust issues, especially in sectors like healthcare and finance, where transparency and accountability are crucial.

6. Ethical Considerations: AI raises a range of ethical concerns, such as the potential for AI-enabled autonomous weapons, invasion of privacy, and the moral responsibility of AI systems. Decisions made by AI can have significant consequences, so ethical considerations must be carefully addressed.

7. Overreliance on AI: Blind reliance on AI can lead to overdependence and neglect of human skills and creativity. Humans should maintain control and have the ability to question or challenge AI-generated outcomes.

8. Cost and Accessibility: Developing and implementing AI technologies can be expensive, especially for smaller businesses or underdeveloped regions. This may create a digital divide and hinder access to the benefits of AI for certain groups or communities.

9. Potential for Malicious Use: Just like any technology, AI can be misused
or exploited for malicious purposes, such as hacking, spreading
misinformation, or creating sophisticated cyberattacks.

10. Unemployment and Socioeconomic Impact: If AI causes significant job displacement without adequate reskilling opportunities, it can lead to socioeconomic challenges, including income inequality and social unrest.

Addressing these disadvantages and challenges requires careful regulation, ethical guidelines, responsible AI development, and ongoing research to mitigate potential risks and ensure AI benefits society.

Subsets of AI

Among the fields most closely tied to AI is data science, a multidisciplinary field that combines scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data. It involves a range of techniques, including data analysis, machine learning, statistical modelling, data visualization, and data engineering.

Data scientists use their skills and expertise in mathematics, statistics, computer science, and domain knowledge to collect, clean, organize, analyze, and interpret large datasets. They apply statistical techniques and machine learning algorithms to uncover patterns, trends, and relationships in the data, which can be used to make informed decisions and solve complex problems.

Data science has a wide range of applications across various industries, including finance, healthcare, marketing, e-commerce, transportation, and telecommunications. It plays a crucial role in enabling organizations to gain insights from their data, optimize processes, improve decision-making, develop predictive models, and create intelligent systems.

Data scientists often work with programming languages such as Python or R, as well as data analysis tools and libraries like pandas, NumPy, and scikit-learn. They also utilize data visualization tools like Tableau or matplotlib to effectively communicate their findings to stakeholders.

Overall, data science is a powerful discipline that leverages the power of data to drive innovation, improve efficiency, and enhance decision-making in various industries.
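As a rough illustration of the workflow described above (collect, clean, analyze, interpret), the following Python sketch uses pandas and NumPy on a tiny made-up dataset; the column names and values are invented purely for illustration:

```python
# A minimal sketch of a typical data-science workflow:
# collect, clean, analyze, and interpret a small dataset.
import numpy as np
import pandas as pd

# Collect: a tiny table of hypothetical student scores.
df = pd.DataFrame({
    "student": ["A", "B", "C", "D", "E"],
    "hours_studied": [2, 4, 6, 8, np.nan],
    "score": [55, 65, 78, 90, 60],
})

# Clean: drop rows with missing values.
clean = df.dropna()

# Analyze: correlation between study hours and score.
corr = clean["hours_studied"].corr(clean["score"])

# Interpret: summarize the pattern found in the data.
print(f"rows kept: {len(clean)}, correlation: {corr:.2f}")
```

In a real project the same steps would run against much larger datasets, but the shape of the work, cleaning first and then looking for relationships, stays the same.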

Scopes of working in AI fields

The AI field offers good pay for well-educated engineers who specialise in AI, and working in this field can even mean working from home. Getting a job as an AI engineer, however, requires a lot of hard work; I have outlined some steps below.
Becoming an AI engineer typically involves a combination of education, skills development, and practical experience. Here is an overview of the general process:

1. Education: Start by obtaining a strong foundation in mathematics, computer science, and statistics. A bachelor's degree in computer science, engineering, or a related field is typically required, although some AI engineers may have advanced degrees like a Master's or Ph.D. in Artificial Intelligence or Machine Learning.

2. Programming Skills: Develop strong programming skills, especially in languages commonly used in AI, such as Python, R, or Java. Familiarise yourself with libraries and frameworks like TensorFlow, PyTorch, or scikit-learn that are frequently used in AI development.

3. Machine Learning and Data Science: Gain a deep understanding of machine learning algorithms, data preprocessing techniques, statistical modeling, and data visualization. Learn how to apply these concepts to real-world problems.

4. Specialize in AI Subfields: Choose a specific area of AI to specialize in, such as natural language processing (NLP), computer vision, robotics, or deep learning. Dive deeper into the theories, concepts, and tools associated with that particular subfield.

5. Projects and Practical Experience: Undertake AI projects to gain hands-on experience. This can involve working on personal projects, contributing to open-source projects, participating in Kaggle competitions, or completing internships or research positions to apply the knowledge in real-world scenarios.

6. Continuous Learning: AI is a rapidly evolving field, so staying updated with the latest research papers, industry trends, and advancements is essential. Attend conferences, join AI communities, and take online courses to continue learning and expanding the skill set.
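As a minimal sketch of the kind of hands-on work described in steps 2 and 3 above, the following example trains and evaluates a simple classifier with scikit-learn; the dataset (the built-in iris set) and the model choice are illustrative assumptions, not a prescribed curriculum:

```python
# A minimal machine-learning exercise: split data, train a model,
# and measure its accuracy on data it has not seen before.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small, built-in flower-classification dataset.
X, y = load_iris(return_X_y=True)

# Preprocessing: hold out a quarter of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a basic classifier and evaluate it on the held-out set.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Small exercises like this, repeated with different datasets and models, are how the concepts in steps 2 and 3 are usually practised before tackling real-world projects.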

CONCLUSION

To conclude, the power of artificial intelligence should be limited to the point where it only assists the human brain and does not supersede it or pose any threat to mankind.
