
Artificial Intelligence and Machine Learning

CHAPTER 1

INTRODUCTION
1.1 Introduction to AI-ML

AIML (Artificial Intelligence Markup Language) is an XML-based markup language designed for
creating chatbots and conversational agents. This report provides an overview of AIML, its history,
key features, applications, and its relevance in modern conversational AI development.

• Before turning to the meaning of artificial intelligence, let us first understand the meaning of
intelligence.
• Intelligence: the ability to learn and solve problems. This definition is taken from Webster's
Dictionary.
• The most common answer one expects is "to make computers intelligent so that they can act
intelligently!", but the question is: how intelligent? How can one judge intelligence?

If computers can somehow solve real-world problems by improving on their own from past
experiences, they would be called "intelligent". Thus, AI systems are more generic (rather than
specific), can "think", and are more flexible. Intelligence, as we know, is the ability to acquire and
apply knowledge. Knowledge is the information acquired through experience. Experience is the
knowledge gained through exposure (training). Summing the terms up, we get artificial intelligence as
a "copy of something natural (i.e., human beings) which is capable of acquiring and applying the
information it has gained through exposure."
• Intelligence is composed of:
A. Reasoning
B. Learning
C. Problem-Solving
D. Perception
E. Linguistic Intelligence

1.2 History of AI-ML


AIML was developed by Dr. Richard S. Wallace in the late 1990s and has its roots in the
ALICE (Artificial Linguistic Internet Computer Entity) project. It gained popularity as one of the
earliest tools for building rule-based chatbots capable of engaging in natural language conversations.
Over the years, AIML has contributed to the evolution of conversational AI technologies.

Dept. Of AIML, AIET, Mijar Page | 1



➢ The birth of Artificial Intelligence (1952-1956)


• Year 1955: Allen Newell and Herbert A. Simon created the first artificial intelligence program,
named "Logic Theorist". This program proved 38 of the first 52 theorems in Principia Mathematica
and found new, more elegant proofs for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John
McCarthy at the Dartmouth Conference, where AI was established as an academic field for the first
time. Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were
being invented, and enthusiasm for AI was very high.
➢ The golden years-Early enthusiasm (1956-1974)
• Year 1966: Researchers emphasized developing algorithms that could solve mathematical
problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
• Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
➢ The first AI winter (1974-1980)
• The period from 1974 to 1980 was the first AI winter. "AI winter" refers to a period in which
computer scientists faced a severe shortage of government funding for AI research.
• During AI winters, public interest in artificial intelligence declined.
1.3 Life cycle of AI-ML

What is the AI Life Cycle?


It is a step-by-step process that a person should follow to develop an AI project to solve a
problem. The AI Project Cycle provides us with an appropriate framework which can lead us to achieve
our goal.

The AI Project Cycle mainly has five stages:

1. Problem Scoping

2. Data Acquisition

3. Data Exploration

4. Modelling

5. Evaluation
1.3.1. What is Problem Scoping?
Identifying a problem and having a vision to solve it is called Problem Scoping. Scoping a
problem is not easy, as we need a deeper understanding so that the picture becomes clearer


while we are working to solve it. So, we use the 4Ws Problem Canvas to understand the problem in a
better way.

➢ What is the 4Ws Problem Canvas? The 4Ws Problem Canvas helps in identifying the key elements
related to the problem.

The 4Ws are:

1. Who
2. What
3. Where
4. Why
1. Who?: This block helps in analysing the people who are affected directly or indirectly by the
problem. Under this, we find out who the 'stakeholders' are: the people who face this problem and
would benefit from the solution. Below are the questions that we need to discuss under this block.

1. Who are the stakeholders?

2. What do you know about them?


2. What?: This block helps to determine the nature of the problem. What is the problem, and how do
we know that it is a problem? Under this block, we also gather evidence to prove that the problem we
have selected actually exists. Below are the questions that we need to discuss under this block.

1. What is the problem?


2. How do you know that it is a problem?

3. Where?: This block will help us to look into the situation in which the problem arises, the context
of it, and the locations where it is prominent. Here is the Where Canvas:

1. What is the context/situation in which the stakeholders experience the problem?


4. Why?: In the "Why" canvas, we think about the benefits the stakeholders would get from the
solution and how it would benefit both them and society. Below are the questions that we need to
discuss under this block.

1. What would be of key value to the stakeholders?

2. How would it improve their situation?

1.3.2. What is Data Acquisition?


This is the second stage of the AI Project Cycle. As the term suggests, this stage is about acquiring
data for the project. Whenever we want an AI project to be able to predict an output, we need to train


it first using data. For example, if you want to make an artificially intelligent system which can predict
the salary of any employee based on their previous salaries, you would feed the data of their previous
salaries into the machine. The previous salary data here is known as Training Data, while the next
salary prediction data set is known as the Testing Data. Data features refer to the type of data you want
to collect. In the above example, data features would be salary amount, increment percentage, increment
period, bonus, etc. There can be various ways to collect the data. Some of them are:

1. Surveys
2. Web Scraping
3. Sensors
4. Cameras
5. Observations
6. API (Application Program Interface)

One of the most reliable and authentic sources of information is the set of open data websites hosted
by the government. Some of the open government portals are data.gov.in and india.gov.in.

1.3.3. What is Data Exploration?


While acquiring data, you must have noticed that data is a complex entity: it is full of numbers,
and if anyone wants to make sense of it, they have to work some patterns out of it. Thus, to
analyse the data, you need to visualise it in some user-friendly format so that you can:

1. Quickly get a sense of the trends, relationships and patterns contained within the data.
2. Define strategy for which model to use at a later stage.
3. Communicate the same to others effectively.

To visualise data, we can use various types of visual representations like Bar graph, Histogram, Line
Chart, Pie Chart.

1.3.4. What is Data Modelling?


The graphical representation makes the data understandable for humans, as we can discover
trends and patterns in it, but a machine can analyse the data only when it is in the most basic
form of numbers (which is binary: 0s and 1s). The ability to mathematically describe the
relationship between parameters is the heart of every AI model.

Generally, AI models can be classified as follows:


➢ Rule Based Approach:


It refers to AI modelling where the rules are defined by the developer. The machine follows the
rules or instructions mentioned by the developer and performs its task accordingly. In this approach,
we feed data along with rules to the machine, and the machine, after getting trained on them, is able
to predict answers for the same. A drawback of this approach is that the learning is static.

➢ Learning Based Approach:


It refers to AI modelling where the machine learns by itself. In this approach the AI model gets
trained on the data fed to it and is then able to produce a model which adapts to changes in the data.
An advantage of this approach is that the learning is dynamic. The learning-based approach can further
be divided into the following parts:

a) Supervised Learning: In a supervised learning model, the dataset which is fed to the machine is
labelled. A label is some information which can be used as a tag for data. For example, students get
grades according to the marks they secure in examinations. These grades are labels which categorise
the students according to their marks. There are two types of Supervised Learning models:

1. Classification: Where the data is classified according to the labels. This model works on a discrete
dataset, which means the data need not be continuous.
2. Regression: Such models work on continuous data. For example, if we wish to predict our next
salary, then we would put in the data of our previous salary, any increments, etc., and would train
the model. Here, the data which has been fed to the machine is continuous.
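The salary example above can be sketched in a few lines of Python. The figures below are invented for illustration; the sketch fits a straight line to past salaries by ordinary least squares and extrapolates the next one, which is exactly the continuous-output prediction a regression model performs:

```python
# Toy regression sketch: fit a line y = m*x + b to made-up past salaries
# (the salary figures below are invented for illustration only).
years = [1, 2, 3, 4]                      # year number
salaries = [30000, 33000, 36000, 39000]   # salary in that year

n = len(years)
mean_x = sum(years) / n
mean_y = sum(salaries) / n

# Ordinary least-squares slope and intercept
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, salaries)) / \
    sum((x - mean_x) ** 2 for x in years)
b = mean_y - m * mean_x

# Predict the salary for year 5 (a continuous output, hence regression)
print(m * 5 + b)   # 42000.0
```

Because the made-up data is perfectly linear, the fitted line recovers the trend exactly; real salary data would be noisier, but the principle is the same.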

b) Unsupervised Learning: An unsupervised learning model works on an unlabelled dataset. This means
that the data fed to the machine is random. This model is used to identify relationships, patterns
and trends in the data fed into it. It helps the user understand what the data is about and what major
features the machine has identified in it.

Unsupervised learning models can be further divided into two categories:


1. Clustering: It refers to the unsupervised learning algorithm which can cluster the unknown data
according to the patterns or trends identified out of it.
2. Dimensionality Reduction: We humans can visualise only up to three dimensions, but various
entities exist beyond three dimensions. For example, in Natural Language Processing, words are
treated as N-dimensional entities. So, to make sense of them, a dimensionality reduction
algorithm is used to reduce their dimensions.


1.3.5. What is Evaluation?


Once a model has been made and trained, it needs to go through proper testing so that one can
calculate its efficiency and performance. Hence, the model is tested with the help of
Testing Data, and the efficiency of the model is calculated on the basis of the parameters mentioned
below:

1. Accuracy
2. Precision
3. Recall
4. F1 Score
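The four parameters above can all be computed from the counts of correct and incorrect predictions. A small sketch (the confusion-matrix counts below are made up for illustration):

```python
# Made-up confusion-matrix counts for a binary classifier (illustration only):
# tp = true positives, fp = false positives, fn = false negatives, tn = true negatives
tp, fp, fn, tn = 40, 10, 5, 45

total = tp + fp + fn + tn
accuracy = (tp + tn) / total          # fraction of all predictions that are correct
precision = tp / (tp + fp)            # of predicted positives, how many are right
recall = tp / (tp + fn)               # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two

print(accuracy)           # 0.85
print(precision)          # 0.8
print(round(recall, 3))   # 0.889
print(round(f1, 3))       # 0.842
```

Note that accuracy alone can be misleading on imbalanced data, which is why precision, recall, and their harmonic mean (F1) are reported alongside it.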

• Neural Network:
Neural networks are loosely modelled on how neurons in the human brain behave. The key
advantage of neural networks is that they can extract data features automatically,
without needing input from the programmer. This makes them a fast and efficient way to solve
problems for which the dataset is very large, such as images. Larger neural networks tend to perform
better with larger amounts of data, whereas traditional machine learning algorithms stop improving
after a certain saturation point.

• Agents in Artificial Intelligence


In artificial intelligence, an agent is a computer program or system that is designed to perceive its
environment, make decisions and take actions to achieve a specific goal or set of goals. The agent
operates autonomously, meaning it is not directly controlled by a human operator.
• Structure of an AI Agent
To understand the structure of Intelligent Agents, we should be familiar with Architecture and
Agent programs. Architecture is the machinery that the agent executes on. It is a device with sensors
and actuators, for example, a robotic car, a camera, and a PC. An agent program is an implementation
of an agent function. An agent function is a map from the percept sequence (history of all that an agent
has perceived to date) to an action.
There are many examples of agents in artificial intelligence. Here are a few:
➢ Intelligent personal assistants: These are agents that are designed to help users with various tasks,
such as scheduling appointments, sending messages, and setting reminders. Examples of intelligent
personal assistants include Siri, Alexa, and Google Assistant.
➢ Autonomous robots: These are agents that are designed to operate autonomously in the physical
world. They can perform tasks such as cleaning, sorting, and delivering goods. Examples of
autonomous robots include the Roomba vacuum cleaner and the Amazon delivery robot.


➢ Gaming agents: These are agents that are designed to play games, either against human opponents
or other agents. Examples of gaming agents include chess-playing agents and poker-playing
agents.
➢ Fraud detection agents: These are agents that are designed to detect fraudulent behavior in
financial transactions. They can analyse patterns of behaviour to identify suspicious activity and
alert authorities. Examples of fraud detection agents include those used by banks and credit card
companies.
➢ Traffic management agents: These are agents that are designed to manage traffic flow in cities.
They can monitor traffic patterns, adjust traffic lights, and reroute vehicles to minimize congestion.
Examples of traffic management agents include those used in smart cities around the world.
➢ A software agent: has keystrokes, file contents, and received network packets as sensors, and
screen displays, files, and sent network packets as actuators.
➢ A human agent: has eyes, ears, and other organs as sensors, and hands, legs, mouth, and
other body parts as actuators.
➢ A robotic agent: has cameras and infrared range finders as sensors, and various motors
as actuators.
1.4 Key features of AI-ML
AIML incorporates several key features that make it suitable for building chatbots:

1.4.1. Rule-Based Approach


AIML uses a rule-based approach where developers define patterns and templates. Patterns represent
user input, and templates specify the chatbot's responses. This approach is intuitive and allows for easy
customization.

1.4.2. Pattern Matching


AIML relies on pattern matching to identify the most appropriate response for a given user input. It
uses wildcards and placeholders to capture variations in user queries.

1.4.3. Variable Management


AIML supports variable management through the <set> and <get> elements, enabling chatbots to store
and retrieve data during conversations. This feature helps personalize responses based on user context.

1.4.4. Internal Processing


The <think> element allows chatbots to perform internal operations without generating direct user
responses. This capability can be useful for managing chatbot state and making decisions.
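The four features above can be seen together in a single AIML category. The following is an illustrative sketch (the pattern text and the variable name `username` are invented for this example, not taken from any particular bot):

```xml
<category>
  <!-- Pattern with a wildcard (*) capturing part of the user's input -->
  <pattern>MY NAME IS *</pattern>
  <template>
    <!-- <think> performs internal processing: <set> stores the captured
         wildcard value silently, without producing any visible output -->
    <think><set name="username"><star/></set></think>
    <!-- <get> retrieves the stored variable to personalize the response -->
    Nice to meet you, <get name="username"/>!
  </template>
</category>
```

When a user says "my name is Asha", the pattern matches, `<star/>` yields "Asha", the `<think>` block stores it without echoing anything, and the template responds "Nice to meet you, Asha!".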


1.5 Applications of AI-ML


Uses of Artificial Intelligence:

Artificial Intelligence has many practical applications across various industries and domains,
including:

➢ Healthcare: AI is used for medical diagnosis, drug discovery, and predictive analysis of diseases.
➢ Finance: AI helps in credit scoring, fraud detection, and financial forecasting.
➢ Retail: AI is used for product recommendations, price optimization, and supply chain
management.
➢ Manufacturing: AI helps in quality control, predictive maintenance, and production optimization.
➢ Transportation: AI is used for autonomous vehicles, traffic prediction, and route optimization.
➢ Customer service: AI-powered chatbots are used for customer support, answering frequently
asked questions, and handling simple requests.
➢ Security: AI is used for facial recognition, intrusion detection, and cybersecurity threat analysis.
➢ Marketing: AI is used for targeted advertising, customer segmentation, and sentiment analysis.
➢ Education: AI is used for personalized learning, adaptive testing, and intelligent tutoring systems.


CHAPTER 2

PYTHON PROGRAMMING
2.1 Introduction

Python is a versatile and widely-used programming language known for its simplicity,
readability, and extensive library support. This report provides an in-depth overview of Python,
including its history, key features, applications, and its significance in modern software development.

➢ What can Python do?


• Python can be used on a server to create web applications.
• Python can be used alongside software to create workflows.
• Python can connect to database systems. It can also read and modify files.
• Python can be used to handle big data and perform complex mathematics.
• Python can be used for rapid prototyping, or for production-ready software development.
➢ Why Python?
• Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc).
• Python has a simple syntax similar to the English language.
• Python has syntax that allows developers to write programs with fewer lines than some other
programming languages.
• Python runs on an interpreter system, meaning that code can be executed as soon as it is written.
This means that prototyping can be very quick.
• Python can be treated in a procedural way, an object-oriented way or a functional way.
2.2 History of Python Programming
Python was created by Guido van Rossum and first released in 1991. It was designed with an emphasis
on code readability and ease of use, making it an excellent choice for both beginners and experienced
developers. Python's development has been guided by a strong community of contributors, resulting
in a rich ecosystem of libraries and frameworks.

• Python laid its foundation in the late 1980s.


• The implementation of Python was started in December 1989 by Guido van Rossum at CWI in the
Netherlands.
• In February 1991, Guido van Rossum published the code (labeled version 0.9.0) to alt.sources.
• In 1994, Python 1.0 was released with new features like lambda, map, filter, and reduce.
• Python 2.0 added new features such as list comprehensions and a garbage collection system.
• On December 3, 2008, Python 3.0 (also called "Py3K") was released. It was designed to rectify
fundamental flaws of the language.


• The ABC programming language is said to be the predecessor of Python; it was capable of
exception handling and interfacing with the Amoeba operating system.
• The following programming languages influenced Python:
o ABC language
o Modula-3
2.3 Key features of Python Programming
Python offers a range of features that contribute to its popularity:

2.3.1. Simple and Readable Syntax

Python's syntax is clean and easy to read, which reduces the cost of program maintenance and enhances
collaboration among developers.

2.3.2. Extensive Standard Library

Python includes a comprehensive standard library that provides modules and packages for various
tasks, from file handling to web development, which simplifies and accelerates software development.

2.3.3. Cross-Platform Compatibility

Python is available on multiple platforms, including Windows, macOS, and Linux, allowing
developers to create cross-platform applications with ease.

2.3.4. Strong Community and Third-Party Libraries

The Python community is large and active, fostering the creation of numerous third-party libraries and
frameworks. Popular examples include NumPy for scientific computing, Django for web development,
and TensorFlow for machine learning.

2.3.5. Versatility

Python is a versatile language suitable for a wide range of applications, including web development,
data analysis, artificial intelligence, scientific computing, automation, and more.

2.4 Applications of Python Programming


Python is used extensively in various domains and industries:
2.4.1. Web Development
Frameworks like Django and Flask make Python a popular choice for building web applications and
APIs.


2.4.2. Data Science and Machine Learning

Python is the go-to language for data analysis and machine learning due to libraries like Pandas,
Matplotlib, Scikit-Learn, and TensorFlow.

2.4.3. Automation and Scripting

Python's ease of use makes it ideal for automating repetitive tasks and writing scripts for various
purposes.

2.4.4. Scientific Computing


In scientific research and engineering, Python is used for numerical simulations, data visualization,
and modelling.

2.4.5. Game Development


Python has libraries such as Pygame for game development, making it a choice for indie game
developers.
2.5 Libraries of Python Programming
Choosing a programming language that can efficiently solve everyday problems is essential in the
current era, where new technology is becoming increasingly important in all parts of our lives. Python
is one such language. In recent years, Python's popularity has increased dramatically, thanks to its
widespread application in, among other fields, data science, software engineering, and machine
learning. The large number of libraries that Python offers is one of the reasons for its popularity.
Through this chapter, we hope to introduce the reader to the most popular Python libraries and how
they are used today.
▪ What is a Library?
A library is a collection of code that can be reused across programs to save time. As the name
suggests, it is like a physical library in that it holds resources that can be used again and again. A
Python library is a collection of related modules: bundles of code that can be reused in different
programs. It makes Python programming simpler and easier for developers, since we do not need to
write the same code for different projects. Python libraries are heavily used in a variety of fields,
including data visualization, machine learning, and computer science.

▪ How do Python libraries work?


As previously stated, a Python library is nothing more than a collection of code scripts or modules
that can be used in a program for specific operations. We use libraries to avoid having to rewrite
code that already exists.


▪ Standard Libraries of Python


The Python Standard Library contains Python's built-in modules, which give the user access to I/O
and other essential functionality as well as fundamental functions. Many of these modules are
written in the C language. The standard library has more than 200 core modules, which is one of the
factors that make Python such a powerful programming language. In addition, Python has a large
ecosystem of third-party libraries that extend it further.

Some widely used libraries in Python are:


1. NumPy
2. Matplotlib
3. Pandas
4. Scikit-learn
5. sqlite3
6. OpenCV

1. NumPy:

What is NumPy?
▪ NumPy is a Python library used for working with arrays.
▪ It also has functions for working in domain of linear algebra, Fourier transform, and matrices.
▪ NumPy was created in 2005 by Travis Oliphant. It is an open-source project and you can use it
freely.
▪ NumPy stands for Numerical Python.
➢ Data Types in Python

By default, Python has these data types:


▪ strings - used to represent text data, the text is given under quote marks. e.g., "ABCD"
▪ integer - used to represent integer numbers. e.g., -1, -2, -3
▪ float - used to represent real numbers. e.g., 1.2, 42.42
▪ Boolean - used to represent True or False.
▪ complex - used to represent complex numbers. e.g., 1.0 + 2.0j, 1.5 + 2.5j
➢ Data Types in NumPy

NumPy has some extra data types, and it refers to data types with a single character, like i for
integers and u for unsigned integers.
Below is a list of all data types in NumPy and the characters used to represent them.
• i - integer


• b - Boolean
• u - unsigned integer
• f - float
• c - complex float
• m - time delta
• M – datetime
• O - object
• S - string
• U - Unicode string
• V - fixed chunk of memory for other type (void)
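The single-character codes above are exposed through an array's `dtype.kind` attribute. A small sketch checking a few of them (the array contents are arbitrary examples):

```python
import numpy as np

# Integer array: dtype kind 'i' (signed integer)
arr_i = np.array([1, 2, 3])
print(arr_i.dtype.kind)   # i

# Float array: dtype kind 'f'
arr_f = np.array([1.5, 2.5])
print(arr_f.dtype.kind)   # f

# Explicitly requesting a 4-byte unsigned integer type: kind 'u'
arr_u = np.array([1, 2, 3], dtype='u4')
print(arr_u.dtype.kind)   # u

# String array: dtype kind 'U' (Unicode string)
arr_s = np.array(['a', 'b'])
print(arr_s.dtype.kind)   # U
```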
Methods used in NumPy
➢ Copy: The copy owns the data; any changes made to the copy will not affect the original array,
and any changes made to the original array will not affect the copy.
➢ Shape: NumPy arrays have an attribute called shape that returns a tuple with each index having
the number of corresponding elements.
➢ Reshape: Reshaping means changing the shape of an array.
• The shape of an array is the number of elements in each dimension.
• By reshaping we can add or remove dimensions or change number of elements in each
dimension.
➢ Iterating Arrays:
• Iterating means going through elements one by one.
• As we deal with multi-dimensional arrays in NumPy, we can do this using basic for loop of
python.
• If we iterate on a 1-D array it will go through each element one by one.
➢ Join:
• Joining means putting the contents of two or more arrays into a single array.
• In SQL we join tables based on a key, whereas in NumPy we join arrays by axes.
• We pass a sequence of arrays that we want to join to the concatenate() function, along with the
axis. If the axis is not explicitly passed, it is taken as 0.
➢ Split:
• Splitting is the reverse operation of joining.
• Joining merges multiple arrays into one, and splitting breaks one array into multiple.
• We use array_split() for splitting arrays; we pass it the array we want to split and the number
of splits.


➢ Search:
• You can search an array for a certain value and return the indexes that get a match.
• To search an array, use the where() method.
➢ Sort:
• Sorting means putting elements in an ordered sequence.
• An ordered sequence is any sequence that has an order corresponding to its elements, like
numeric or alphabetical, ascending or descending.
• The NumPy ndarray object has a function called sort() that will sort a specified array.
➢ Filter:
• Getting some elements out of an existing array and creating a new array out of them is called
filtering. In NumPy, you filter an array using a Boolean index list.
• If the value at an index is True, that element is included in the filtered array; if the value at
that index is False, that element is excluded from the filtered array.

Example program to demonstrate basic arithmetic operations using the NumPy library:

import numpy as np

# Create a NumPy array
arr = np.array([1, 2, 3, 4, 5])

# Print the array
print("Original Array:")
print(arr)

# Perform some basic operations
mean_value = np.mean(arr)
sum_value = np.sum(arr)
max_value = np.max(arr)
min_value = np.min(arr)

# Print the results
print("\nMean:", mean_value)
print("Sum:", sum_value)
print("Max:", max_value)
print("Min:", min_value)

# Create a 2D array
matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Print the 2D array
print("\n2D Array:")
print(matrix)

# Transpose the 2D array
transposed_matrix = np.transpose(matrix)

# Print the transposed 2D array
print("\nTransposed 2D Array:")
print(transposed_matrix)


Output:
Original Array:
[1 2 3 4 5]

Mean: 3.0
Sum: 15
Max: 5
Min: 1

2D Array:
[[1 2 3]
 [4 5 6]
 [7 8 9]]

Transposed 2D Array:
[[1 4 7]
 [2 5 8]
 [3 6 9]]
2. Matplotlib
What is Matplotlib?
• Matplotlib is a low-level graph plotting library in Python that serves as a visualization utility.
• Matplotlib was created by John D. Hunter.
• Matplotlib is open source and we can use it freely.
• Matplotlib is mostly written in Python; a few segments are written in C, Objective-C and
JavaScript for platform compatibility.
Different Matplotlib features:
➢ Pyplot:
o Most of the Matplotlib utilities lie under the pyplot submodule, and are usually imported under
the plt alias:
o import matplotlib.pyplot as plt
o Now the Pyplot package can be referred to as plt.
➢ Plotting:
o The plot() function is used to draw points (markers) in a diagram.
o By default, the plot() function draws a line from point to point.
o The function takes parameters for specifying points in the diagram.
o Parameter 1 is an array containing the points on the x-axis.
o Parameter 2 is an array containing the points on the y-axis.
➢ Markers:
o You can use the keyword argument marker to emphasize each point with a specified marker.
➢ Line:
o You can use the keyword argument linestyle, or the shorter ls, to change the style of the plotted line.


➢ Labels:
o With Pyplot, you can use the xlabel() and ylabel() functions to set a label for the x- and y-axis.
➢ Grids:
o With Pyplot, you can use the grid() function to add grid lines to the plot.
➢ Subplot:
o With the subplot() function you can draw multiple plots in one figure.
Example program to demonstrate a simple plot using the Matplotlib library:

# importing the required module
import matplotlib.pyplot as plt

# x axis values
x = [1, 2, 3]
# corresponding y axis values
y = [2, 4, 1]

# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('x - axis')
# naming the y axis
plt.ylabel('y - axis')
# giving a title to my graph
plt.title('My first graph!')
# function to show the plot
plt.show()
3. Pandas
➢ What is Pandas?
o Pandas is a Python library used for working with data sets.
o It has functions for analysing, cleaning, exploring, and manipulating data.
o The name "Pandas" refers both to "Panel Data" and to "Python Data Analysis". Pandas was created
by Wes McKinney in 2008.
❖ Different methods in Pandas
➢ Series:
• A Pandas Series is like a column in a table. It is a one-dimensional array holding data of any
type.
➢ Key/Value Objects as Series
• You can also use a key/value object, like a dictionary, when creating a Series.
➢ Data Frame:
➢ What is a DataFrame?
• A Pandas DataFrame is a 2-dimensional data structure, like a 2-dimensional array, or a table
with rows and columns.
➢ Read CSV:
❖ Read CSV Files
• A simple way to store big data sets is to use CSV files (comma separated files).


• CSV files contain plain text and are a well-known format that can be read by everyone, including
Pandas.
• In our examples we will be using a CSV file called 'data.csv'.
➢ Read JSON
• Big data sets are often stored, or extracted as JSON.
• JSON is plain text, but has the format of an object, and is well known in the world of
programming, including Pandas.
• In our examples we will be using a JSON file called 'data.json'.
➢ Analysing Data:
▪ Viewing the Data
• One of the most used methods for getting a quick overview of the DataFrame is the head()
method.
• The head() method returns the headers and a specified number of rows, starting from the top.
• There is also a tail() method for viewing the last rows of the DataFrame.
• The tail() method returns the headers and a specified number of rows, starting from the
bottom.
• The DataFrame object has a method called info() that gives you more information about
the data set.
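The Pandas concepts above can be sketched in a short program. Rather than assuming a 'data.csv' file exists on disk, this sketch builds a small DataFrame in memory (the names and marks are invented for illustration):

```python
import pandas as pd

# A Series from a key/value object (a dictionary): keys become the index
calories = pd.Series({"day1": 420, "day2": 380, "day3": 390})
print(calories["day2"])        # 380

# A DataFrame: a 2-dimensional table with rows and columns
# (the data here is made up for illustration)
df = pd.DataFrame({
    "name": ["Asha", "Ravi", "Meena", "Kiran"],
    "marks": [88, 72, 95, 61],
})

# head(n) returns the first n rows; tail(n) returns the last n rows
print(df.head(2))
print(df.tail(1))

# info() prints a summary: column names, dtypes, and non-null counts
df.info()
```

To read the same table from a file instead, `pd.read_csv('data.csv')` (or `pd.read_json('data.json')`) would replace the in-memory construction.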
4. Scikit-learn

➢ What is Sklearn in Python? Scikit-learn is a Python module that offers a variety of supervised
and unsupervised learning techniques. It is built on several technologies you may already be
acquainted with, including NumPy, pandas, and Matplotlib.
➢ What is Sklearn? Scikit-learn began as scikits.learn, a Google Summer of Code project by French
research scientist David Cournapeau. Its name refers to the idea that it is a "SciKit" (SciPy
Toolkit): a separately developed and distributed extension to SciPy. Other programmers later
rewrote the core codebase.
❖ Different methods in Sci-Kit learn:
➢ Data Modelling Process: Dataset Loading
A collection of data is called a dataset. It consists of the following components:
▪ Features − The variables of the data are called its features. They are also known as predictors, inputs
or attributes.
▪ Feature matrix − It is the collection of features, in case there are more than one.
▪ Feature names − It is the list of all the names of the features.
▪ Response − It is the output variable that basically depends upon the feature variables. It is
also known as the target, label or output.
▪ Response vector − It is used to represent the response column. Generally, we have just one response
column.
▪ Target names − These represent the possible values taken by a response vector. Scikit-learn provides a few
example datasets, such as iris and digits for classification and the Boston house prices for regression.
➢ Data Representation
As we know, machine learning is about creating models from data. For this purpose, the computer must
understand the data first. Next, we discuss various ways to represent data so that it can be
understood by the computer.

➢ Data as table
• The best way to represent data in Scikit-learn is in the form of tables. A table represents a 2-D grid
of data where the rows represent the individual elements of the dataset and the columns represent the
quantities related to those individual elements.
➢ Estimator API
▪ What is Estimator API
• It is one of the main APIs implemented by Scikit-learn. It provides a consistent interface for a wide
range of ML applications; that is why all machine learning algorithms in Scikit-Learn are
implemented via the Estimator API. The object that learns from the data (fitting the data) is an
estimator. It can be used with any of the algorithms, like classification, regression or clustering, or
even with a transformer that extracts useful features from raw data.
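A minimal sketch of the Estimator API on the bundled iris dataset; the specific estimators chosen here (StandardScaler as a transformer, LogisticRegression as a predictor) are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# A transformer is also an estimator: fit() learns the scaling,
# transform() applies it to data
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)

# A predictor adds predict() on top of fit()
clf = LogisticRegression(max_iter=200).fit(X_scaled, y)
print(clf.predict(X_scaled[:3]))  # class labels for the first three samples
```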
➢ Linear Modelling
The following list describes various linear models provided by Scikit-Learn −
1. Linear Regression
It is one of the best statistical models that studies the relationship between a dependent variable (Y)
with a given set of independent variables (X).
2. Logistic Regression
Logistic regression, despite its name, is a classification algorithm rather than a regression algorithm.
Based on a given set of independent variables, it is used to estimate discrete values (0 or 1, yes/no,
true/false).
3. Ridge Regression
Ridge regression or Tikhonov regularization is the regularization technique that performs L2
regularization. It modifies the loss function by adding the penalty equivalent to the square of the
magnitude of coefficients.
4. Bayesian Ridge Regression
Bayesian regression provides a natural mechanism to survive insufficient data or poorly distributed data
by formulating linear regression using probability distributions rather than point estimates.
5. LASSO
LASSO is the regularisation technique that performs L1 regularisation. It modifies the loss function
by adding the penalty (shrinkage quantity) equivalent to the summation of the absolute value of
coefficients.
6. Multi-task LASSO
It allows fitting multiple regression problems jointly, enforcing the selected features to be the same for all
the regression problems, also called tasks. Sklearn provides a linear model named MultiTaskLasso,
trained with a mixed L1/L2-norm for regularisation, which estimates sparse coefficients for multiple
regression problems jointly.
7. Elastic-Net
The Elastic-Net is a regularized regression method that linearly combines both penalties i.e. L1 and
L2 of the Lasso and Ridge regression methods. It is useful when there are multiple correlated features.
8. Multi-task Elastic-Net
It is an Elastic-Net model that allows fitting multiple regression problems jointly, enforcing the selected
features to be the same for all the regression problems, also called tasks.
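As a rough sketch, the first few of these models can be compared on synthetic data generated from a known line (y = 4 + 3x plus noise); the alpha values are arbitrary:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

# Synthetic data generated from y = 4 + 3x plus Gaussian noise
rng = np.random.RandomState(0)
X = 2 * rng.rand(100, 1)
y = 4 + 3 * X[:, 0] + rng.randn(100)

# Fit ordinary least squares, L2-penalized (Ridge) and L1-penalized (Lasso)
results = {}
for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    results[type(model).__name__] = (model.intercept_, model.coef_[0])
    print(type(model).__name__, results[type(model).__name__])
```

All three should recover an intercept near 4 and a slope near 3; the penalized variants shrink the coefficient slightly toward zero.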
➢ Stochastic Gradient Descent
Here, we will learn about an optimization algorithm in Sklearn, termed as Stochastic Gradient
Descent (SGD). Stochastic Gradient Descent (SGD) is a simple yet efficient optimization algorithm
used to find the values of parameters/coefficients of functions that minimize a cost function. In other
words, it is used for discriminative learning of linear classifiers under convex loss functions such as
SVM and Logistic regression. It has been successfully applied to large-scale datasets because the
update to the coefficients is performed for each training instance, rather than once at the end over all instances.
➢ SGD Classifier
Stochastic Gradient Descent (SGD) classifier basically implements a plain SGD learning routine
supporting various loss functions and penalties for classification. Scikit-learn provides SGD Classifier
module to implement SGD classification.
➢ K-Nearest Neighbours (KNN)
This section will help you in understanding the nearest neighbour methods in Sklearn.
Neighbour-based learning methods are of both types, namely supervised and unsupervised. Supervised
neighbours-based learning can be used for both classification and regression predictive problems,
but it is mainly used for classification predictive problems in industry.
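A minimal KNN classification sketch on iris (the train/test split and k = 5 are arbitrary choices):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Each test point is classified by a majority vote of its 5 nearest
# training points; algorithm="auto" picks between brute force and trees
knn = KNeighborsClassifier(n_neighbors=5, algorithm="auto")
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```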

❖ Types of algorithms
Different types of algorithms which can be used in neighbour-based methods implementation are as
follows –
• Brute Force
The brute-force computation of distances between all pairs of points in the dataset provides the
most naive neighbour search implementation. Mathematically, for N samples in D dimensions, the
brute-force approach scales as O[D N²]. For small data samples this algorithm can be very useful, but
it becomes infeasible as the number of samples grows. Brute-force neighbour search is enabled with the
keyword algorithm='brute'.
• K-D Tree
One of the tree-based data structures that has been invented to address the computational
inefficiencies of the brute-force approach is the KD tree data structure. Basically, the KD tree is a binary
tree structure called a K-dimensional tree. It recursively partitions the parameter space along
the data axes, dividing it into nested orthotropic regions into which the data points are filed.
• Boosting Methods
Boosting methods build an ensemble model in an incremental way. The main principle is to build the
model incrementally by training each base model estimator sequentially. In order to build a powerful
ensemble, these methods basically combine several weak learners which are sequentially trained over
multiple iterations of the training data. The sklearn.ensemble module includes boosting methods such as
the following.
• AdaBoost
It is one of the most successful boosting ensemble methods. Its main key is in the way it gives
weights to the instances in the dataset: instances that earlier models misclassified receive higher weights,
so subsequent models pay more attention to them, while correctly classified instances need less attention
while constructing subsequent models.
Example program:
# load the iris dataset as an example
from sklearn.datasets import load_iris
iris = load_iris()

# store the feature matrix (X) and response vector (y)
X = iris.data
y = iris.target

# store the feature and target names
feature_names = iris.feature_names
target_names = iris.target_names
print("Feature names:", feature_names)
print("Target names:", target_names)

# X and y are numpy arrays
print("\nType of X is:", type(X))

# printing first 5 input rows
print("\nFirst 5 rows of X:\n", X[:5])
Output:
Feature names: ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
Target names: ['setosa' 'versicolor' 'virginica']
Type of X is: <class 'numpy.ndarray'>
First 5 rows of X:
[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]
 [4.6 3.1 1.5 0.2]
 [5.  3.6 1.4 0.2]]
5. OpenCV
➢ What is OpenCV?
OpenCV is a Python open-source library which is used for computer vision in Artificial
Intelligence, Machine Learning, face recognition, etc. In OpenCV, "CV" is an abbreviation of
computer vision, which is defined as a field of study that helps computers to understand the content
of digital images such as photographs and videos.
➢ OpenCV Read and Save Image
▪ OpenCV Reading Images
OpenCV allows us to perform multiple operations on the image, but to do that it is necessary to read
an image file as input, and then we can perform the various operations on it. OpenCV provides
following functions which are used to read and write the images.
▪ OpenCV imread function
The imread() function loads an image from the specified file and returns it. The syntax is:
cv2.imread(filename[, flag])
Parameters:
filename − Name of the file to be loaded.
flag − Mode in which the image is read, e.g. cv2.IMREAD_COLOR (default), cv2.IMREAD_GRAYSCALE or cv2.IMREAD_UNCHANGED.
▪ OpenCV Save Images

The OpenCV imwrite() function is used to save an image to a specified file. The file extension defines the
image format. The syntax is the following:
cv2.imwrite(filename, img[, params])
Parameters:
filename − Name of the file in which the image is to be saved.
img − Image to be saved.
params − The following parameters are currently supported:
o For JPEG, quality can be from 0 to 100. The default value is 95.
o For PNG, quality can be the compression level from 0 to 9. The default value is 1.
o For PPM, PGM, or PBM, it can be a binary format flag, 0 or 1. The default value is 1.
▪ OpenCV Basic Operation on Images

In this section, we will learn the essential operations related to images. We are going to
discuss the following topics.
o Access pixel values and modify them


o Access Image Properties
o Setting Region of Image
o Splitting and merging images
o Change the image color
▪ OpenCV Drawing Functions
We can draw the various shapes on an image such as circle, rectangle, ellipse, polylines, convex, etc.
It is used when we want to highlight any object in the input image. The OpenCV provides functions
for each shape. Here we will learn about the drawing functions.
• Drawing Circle
We can draw the circle on the image by using the cv2.circle() function. The syntax is the following:
cv2.circle(img, center, radius, color[,thickness [, lineType[,shift]]])
• Drawing Rectangle
The OpenCV provides a function to draw a simple, thick or filled up-right rectangle. The syntax is
following:
cv2.rectangle(img, pt1, pt2, color[, thickness[,lineType[,shift]]])
• Drawing Ellipse
We can draw an ellipse on an image by using the cv2.ellipse() function. It can draw a simple or thick
elliptic arc or can fill an ellipse sector.
cv2.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[,
shift]]]) cv2.ellipse(img, box, color[, thickness[, lineType]])
• Drawing lines
OpenCV provides the line() function to draw a line on the image. It draws a line segment between
the pt1 and pt2 points in the image. The image boundary clips the line.
cv2.line(img, pt1, pt2, color[, thickness[, lineType[, shift]]])
• Write Text on Image
We can write text on the image by using the putText() function. The syntax is given below.
cv2.putText(img, text, org, fontFace, fontScale, color[, thickness[, lineType]])
• Drawing Polylines
We can draw polylines on the image. OpenCV provides the polylines() function, which is used to
draw polygonal curves on the image. The syntax is given below:
cv2.polylines(img, pts, isClosed, color[, thickness[, lineType[, shift]]])
➢ OpenCV Blob Detection

Blob stands for Binary Large Object and refers to a group of connected pixels in a binary image. The
term "Large" focuses on objects of a specific size; other "small" binary objects are usually
noise. There are three processes in BLOB analysis.
• BLOB extraction
Blob extraction means separating the BLOBs (objects) in a binary image. A BLOB contains a
group of connected pixels. We can determine whether two pixels are connected or not by the
connectivity, i.e., which pixels are neighbours of another pixel. There are two types of connectivity:
8-connectivity and 4-connectivity. The 8-connectivity generally gives better results than 4-connectivity.
• BLOB representation
BLOB representation simply means converting the BLOB into a few representative numbers.
After the BLOB extraction, the next step is to classify the several BLOBs. There are two steps in the
BLOB representation process. In the first step, each BLOB is described by several characteristics, and
in the second step some matching methods are applied that compare the features of each BLOB.
• BLOB classification
Here we determine the type of BLOB, for example, whether a given BLOB is a circle or not. The
question is how to define which BLOBs are circles and which are not, based on the features
described earlier. For this purpose, we generally need to make a prototype model of the object we are
looking for.
➢ OpenCV Image Filters
Image filtering is the process of modifying an image by changing the shades or colors of its pixels. It is
also used to increase brightness and contrast. In this section, we will learn about several types of filters.

A. Bilateral Filter
OpenCV provides the bilateralFilter() function to apply the bilateral filter to an image. The
bilateral filter can reduce unwanted noise very well while keeping edges sharp. The syntax of the
function is given below:

cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, borderType])
B. Box Filter
We can perform this filter using the boxFilter() function. It is similar to the averaging blur operation.
The syntax of the function is given below:
cv2.boxFilter(src, ddepth, ksize[, anchor[, normalize[, borderType]]])
C. Filter2D
It convolves an image with the kernel. We can perform this operation on an image using the filter2D()
method. The syntax of the function is given below:

cv2.filter2D(src, ddepth, kernel[, anchor])
❖ Face recognition and Face detection using OpenCV

Face recognition is a technique to identify or verify a face from digital images or video
frames. A human can quickly identify faces without much effort; it is an effortless task for us, but
a difficult one for a computer. There are various complexities, such as low resolution, occlusion,
illumination variations, etc. These factors highly affect the ability of the computer to recognize
faces effectively. First, it is necessary to understand the difference between face detection and
face recognition.

▪ Face Detection: Face detection is generally considered as finding the faces (location and size)
in an image, and probably extracting them to be used by the face recognition algorithm.
▪ Face Recognition: The face recognition algorithm is used to find features that uniquely
describe the face in the image. The facial image is already extracted, cropped, resized, and usually
converted to grayscale.

There are various algorithms of face detection and face recognition. Here we will learn about face
detection using the HAAR cascade algorithm.

▪ Basic Concept of HAAR Cascade Algorithm


The HAAR cascade is a machine learning approach where a cascade function is trained from a lot
of positive and negative images. Positive images are those images that consist of faces, and negative
images are without faces. In face detection, image features are treated as numerical information
extracted from the pictures that can distinguish one image from another.
▪ HAAR-Cascade Detection in OpenCV
OpenCV provides the trainer as well as the detector. We can train a classifier for any object, like
cars, planes, and buildings, by using OpenCV. There are two primary stages of the cascade image
classifier: the first one is training and the other is detection. OpenCV provides two applications to train a
cascade classifier: opencv_haartraining and opencv_traincascade. These two applications store the
classifier in different file formats. A set of negative samples must be prepared manually, whereas
the collection of positive samples is created using the opencv_createsamples utility.
▪ Negative Sample
Negative samples are taken from arbitrary images. Negative samples are added in a text file. Each
line of the file contains an image filename (relative to the directory of the description file) of the
negative sample. This file must be created manually. Defined images may be of different sizes.
▪ Positive Sample
Positive samples are created by the opencv_createsamples utility. These samples can be created from
a single image with an object or from an earlier collection. It is important to remember that a large
dataset of positive samples is required before giving it to the mentioned utility, because it only applies
perspective transformation.
Example program using OpenCV:

# Python program to explain cv2.imread() method
# importing cv2
import cv2

# path
path = r'geeksforgeeks.png'

# Using cv2.imread() method
# Using IMREAD_GRAYSCALE to read the image in grayscale mode
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

# Displaying the image
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output:
img.shape is (225, 225)
6. TensorFlow
TensorFlow is an open-source machine learning library developed by the Google Brain team.
It is widely used for various tasks in artificial intelligence and deep learning, providing a flexible
platform for building and deploying machine learning models. TensorFlow is particularly popular for
its ability to work with neural networks and handle large-scale numerical computations efficiently.
One of the key features of TensorFlow is its computational graph paradigm. In TensorFlow, a
computation is represented as a directed acyclic graph (DAG) where nodes represent operations, and
edges represent the flow of data. This graph-based approach allows for efficient parallel execution of
computations and facilitates optimization. Let’s delve into a simple example to demonstrate how
TensorFlow works. Consider a basic linear regression problem, where the goal is to fit a line to a set
of data points. In this example, we'll use TensorFlow to create a linear regression model that predicts
the relationship between input and output.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# This example uses the TF1-style graph/session API via tf.compat.v1
tf.compat.v1.disable_eager_execution()

# Generate random data for demonstration
np.random.seed(42)
X_data = 2 * np.random.rand(100, 1)
y_data = 4 + 3 * X_data + np.random.randn(100, 1)

# Define the linear regression model using TensorFlow
X = tf.constant(X_data, dtype=tf.float32, name="X")
y = tf.constant(y_data, dtype=tf.float32, name="y")
theta = tf.Variable(tf.random.normal((2, 1)), name="theta")

# Add bias term to input
X_b = tf.concat([tf.ones((100, 1)), X], axis=1)

# Define the cost function (mean squared error)
error = X_b @ theta - y
mse = tf.reduce_mean(tf.square(error), name="mse")

# Define the optimization algorithm (gradient descent)
learning_rate = 0.01
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(mse)

# Initialize variables and run the optimization
init = tf.compat.v1.global_variables_initializer()
with tf.compat.v1.Session() as sess:
    sess.run(init)
    # Perform 1000 iterations of gradient descent
    for epoch in range(1000):
        if epoch % 100 == 0:
            print(f"Epoch {epoch}, MSE = {mse.eval()}")
        sess.run(training_op)
    # Get the final parameter values
    theta_values = theta.eval()

# Plot the original data and the fitted line
plt.scatter(X_data, y_data)
plt.plot(X_data, theta_values[0] + theta_values[1] * X_data,
         color='red', linewidth=3)
plt.xlabel("X")
plt.ylabel("y")
plt.title("Linear Regression with TensorFlow")
plt.show()
CHAPTER 3

SMART COMMUNICATION FOR DEAF AND DUMB PEOPLE

3.1 Introduction

One of the most precious gifts to a human being is the ability to see, listen, speak and respond
according to the situation. But there are some unfortunate ones who are deprived of this. Making a
single compact device for people with hearing and vocal impairment is a tough job. Communication
between deaf-dumb and normal people has always been a challenging task. This project proposes an
innovative communication system framework for deaf and dumb people in a single compact device.
We provide a technique for a person to read a text, which can be achieved by capturing an image through
a camera and converting the text to speech (TTS). It provides a way for deaf people to read what is
spoken through speech-to-text (STT) conversion technology. Also, it provides a technique for dumb
people using text-to-voice conversion.
The system is provided with four switches, and each switch has a different function. The dumb
people can communicate their message through text, which will be read out by e-Speak, and the deaf
people can read others' speech as text. All these functions are implemented using a laptop.
The number of deaf and dumb people is over five percent of the population. Sign language
is principally used by deaf and dumb people to communicate with each other. The main problem faced
today by deaf and dumb people is talking with those who do not understand sign language. In contrast,
while writing is an option, it is considered a slow and inefficient manner of communication.
Another viable option would be to hire an expert sign language translator. In this project,
we are introducing a two-way smart communication system for deaf & dumb and normal people; the
project builds a system that assists deaf & dumb people in conveying their messages to normal
people.
The system consists of two main parts: the first part is for the deaf & dumb person to convey their
messages to a normal person, and the second one is for a normal person, who can also respond to them
easily, without learning sign language, with the help of a GUI.
3.2 Objectives of Smart Communication for deaf and dumb people

Creating smart communication solutions for deaf and mute individuals using Python involves
several objectives, each contributing to the overall goal of enhancing accessibility, inclusivity, and
independence for this community. In this detailed explanation, we will explore the key objectives in
developing smart communication tools using Python:
➢ Real-Time Sign Language Interpretation:


▪ Objective: Develop Python-based algorithms and models for real-time sign language
interpretation. This involves leveraging computer vision techniques to recognize and interpret sign
language gestures. The objective is to enable seamless communication between deaf individuals
who use sign language and those who may not understand it.
▪ Implementation: Utilize Python libraries such as OpenCV for image processing and machine
learning frameworks like TensorFlow or PyTorch for training and deploying sign language
recognition models. Real-time interpretation can be achieved by integrating these models into
interactive applications.
➢ Speech-to-Text and Text-to-Speech Conversion:
▪ Objective: Enable automatic conversion between spoken language and text, as well as text to
speech. This facilitates communication between deaf individuals and those who communicate
verbally. Python can be used to develop robust speech recognition and synthesis systems.
▪ Implementation: Leverage Python libraries like SpeechRecognition for converting speech to text
and pyttsx3 for text-to-speech synthesis. These tools can be integrated into applications that allow
seamless communication between deaf and hearing individuals.
➢ Gesture-Based Interfaces:
▪ Objective: Develop Python-based interfaces that recognize and respond to gestures, providing a
way for deaf and mute individuals to interact with computers and devices. This objective aims to
enhance accessibility to technology for this community.
▪ Implementation: Use Python with frameworks like Pygame or PyQt to create applications with
gesture-based interfaces. This involves integrating gesture recognition algorithms to interpret user
movements and trigger actions in software applications.
❖ Educational Tools for Learning Sign Language:
▪ Objective: Develop Python-based educational tools to aid in learning and practicing sign language.
This includes interactive applications that teach sign language vocabulary, grammar, and
communication skills.
▪ Implementation: Create Python applications with user-friendly interfaces using frameworks like
Tkinter or PyQt. Implement features such as video tutorials, interactive quizzes, and feedback
mechanisms to support the learning process.
❖ Communication Devices and Apps:
▪ Objective: Build Python applications and devices that facilitate communication for deaf and mute
individuals. This includes developing mobile apps and wearable devices that support text-based,
sign language, or visual communication.
▪ Implementation: Use Python for mobile app development (e.g., with Kivy or Flask for web-based
apps) and integrate features like real-time chat, video calls with sign language interpretation, and
other communication tools.
❖ Integration of Natural Language Processing (NLP):
▪ Objective: Incorporate natural language processing capabilities into communication tools to
enhance the understanding of written language. This objective aims to make written
communication more accessible and efficient for deaf individuals.
▪ Implementation: Leverage Python libraries for NLP, such as NLTK or spaCy, to develop
applications that analyse and understand written text. This can be applied to chat applications,
educational tools, and other communication platforms.
❖ Accessibility in Web Development:
▪ Objective: Ensure web applications and online platforms are accessible to deaf and mute users.
This involves implementing features that accommodate various communication needs, such as
captioning, sign language interpretation, and accessible user interfaces.
▪ Implementation: Use Python frameworks like Django or Flask for web development and integrate
accessibility features, such as ARIA (Accessible Rich Internet Applications) attributes, to enhance
the usability of web applications for individuals with diverse communication abilities.
❖ User Authentication and Security:
▪ Objective: Develop secure and user-friendly authentication mechanisms for smart communication
applications. This ensures that the privacy and security of deaf and mute individuals using these
tools are safeguarded.
▪ Implementation: Implement secure user authentication using Python frameworks like Flask-
Security or Django's authentication system. Integrate encryption protocols to protect sensitive
communication data exchanged within the applications.
❖ Community Engagement and Social Inclusion:
▪ Objective: Foster community engagement and social inclusion by developing Python-based
platforms that connect deaf and mute individuals with each other and the broader community. This
can include social networking features, forums, and collaborative tools.
▪ Implementation: Use Python frameworks to build community platforms with features like user
profiles, discussion forums, and collaborative spaces. Integrate communication tools within these
platforms to facilitate interaction and collaboration among users.
❖ Continuous Improvement and Adaptability:
▪ Objective: Strive for continuous improvement in smart communication tools by staying updated
with technological advancements. Ensure that the tools remain adaptable to evolving needs and
emerging technologies.
▪ Implementation: Regularly update Python libraries and frameworks, adopt new machine learning
models, and incorporate user feedback to enhance the functionality and adaptability of smart
communication applications over time. The objectives of creating smart communication solutions
for deaf and mute individuals using Python encompass a wide range of functionalities, from real-
time sign language interpretation to building accessible web applications. These objectives
collectively aim to empower the deaf and mute community, promoting inclusivity, accessibility,
and independence through the use of technology.
3.3 Software/Hardware Requirement
❖ SYSTEM REQUIREMENTS
❖ Software Requirements
➢ Python IDE (3.11.5)
➢ PyCharm (2023.1.2)
➢ Python libraries:
• OpenCV-Python
• NumPy
• pyttsx3
• SpeechRecognition
• OS
❖ Hardware Requirements
➢ Laptop/PC
➢ 4/8/16 GB RAM
➢ Intel i3/i5/i7, 6th generation or newer
❖ Camera to capture image: An RGB image can be viewed as three images (a red scale image, a
green scale image and a blue scale image) stacked on top of each other. In MATLAB, an RGB
image is basically an M*N*3 array of color pixels, where each color pixel is a triplet which
corresponds to the red, blue and green color components of the RGB image at a specified spatial location.
Similarly, a grayscale image can be viewed as a single-layered image.
Key Words: MATLAB, RGB.
❖ Speech texter: Speech texter is an online multi-language speech recognizer that can help you type
long documents, books, reports and blog posts with your voice (its help page is at
https://www.speechtexter.com/help). The app supports over 60 different languages. For
better results, use a high-quality microphone, remove any background noise, and speak loudly and
clearly. It can create text notes/SMS/emails/tweets from the user's voice.
❖ Microphone: A microphone is used to give speech input that is later converted into text using
Speech texter, so that deaf people can read it easily, as they cannot hear. [3]
❖ OpenCV: OpenCV is a cross-platform, open-source library of various programming
functions that focus mainly on real-time computer vision. It can be used for various purposes such
as face recognition, object identification, mobile robotics, segmentation, gesture recognition, etc.
METHODOLOGY AND IMPLEMENTATION
The project is divided into 3 different modules:
1. Gesture-to-Text (GTT)
2. Text-to-Speech (TTS)
3. Speech-to-Text (STT)
1. Gesture-to-Text (GTT)
This process is developed for vocally impaired people who cannot convey their
thoughts to normal people. Dumb people use gestures to communicate with normal people, which
are mostly not understandable to normal people. The process starts with capturing an image and
cropping the useful portion. Convert the RGB image into a grayscale image for better functioning, blur
the cropped image through the Gaussian blur function, and pass it to the threshold function to get the
highlighted part of the image. Find the contours and the angle between two fingers. By using the convex
hull function, we can locate the fingertips. Count the number of angles less than 90
degrees, which gives the number of defects. According to the number of defects, the text is printed on
the display and read out by the speaker.
Key Words: OpenCV, Gaussian blur, contours
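The finger-counting step above hinges on the cosine rule: for each convexity defect, the angle at the defect point between the two neighbouring fingertips is computed, and only angles below 90 degrees are counted as finger gaps. A minimal, self-contained sketch of that calculation (the function names here are illustrative, not part of the project code):

```python
import math

def defect_angle(start, end, far):
    """Angle in degrees at the defect point `far`, between the hull
    points `start` and `end` (e.g. two neighbouring fingertips)."""
    a = math.dist(start, end)  # side opposite the defect point
    b = math.dist(far, start)
    c = math.dist(far, end)
    # Cosine rule: cos(A) = (b^2 + c^2 - a^2) / (2bc)
    return math.degrees(math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)))

def count_finger_gaps(defect_triples):
    """Count defects whose angle is below 90 degrees; the number of
    raised fingers is this count plus one."""
    return sum(1 for s, e, f in defect_triples if defect_angle(s, e, f) < 90)
```

For example, `defect_angle((0, 0), (2, 0), (1, 2))` gives about 53.1 degrees (a narrow finger gap), while a shallow defect such as `(1, 0.5)` for the same endpoints gives about 126.9 degrees and is discarded as noise.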
2. Text-to-Speech (TTS)
The text-to-speech module is developed for people who cannot speak. They type their thoughts as text, which is converted into a voice signal spoken out by the eSpeak synthesizer. After selecting the corresponding option, the OS and subprocess modules are imported. The text-to-speech function is called and the text is entered from the keyboard, after which the eSpeak synthesizer converts it to speech. The process can be interrupted from the keyboard with Ctrl+C.
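Since the text is ultimately spoken by eSpeak through a subprocess call, the core of this module can be sketched as below. This is an illustrative sketch, not the project's exact code: it assumes the `espeak` binary is on the PATH and uses its real `-s` flag (speaking rate in words per minute); the `dry_run` parameter is a hypothetical addition so the command can be inspected without audio hardware.

```python
import subprocess

def speak_with_espeak(text, rate=120, dry_run=False):
    """Speak `text` with the eSpeak synthesizer via a subprocess call.
    With dry_run=True the command line is returned instead of executed."""
    argv = ["espeak", "-s", str(rate), text]  # -s sets words per minute
    if dry_run:
        return argv
    subprocess.run(argv, check=True)

def tts_loop():
    """Read lines from the keyboard and speak them until Ctrl+C."""
    try:
        while True:
            speak_with_espeak(input("Enter Text To convert Speech: "))
    except KeyboardInterrupt:
        print("Stopped.")
```

Wrapping the loop in `try/except KeyboardInterrupt` is what gives the Ctrl+C exit mentioned above.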
3. Speech-to-Text (STT)
The speech-to-text module is developed for deaf people, who cannot hear what is spoken to them. To help them, a Logitech camera is interfaced to capture images using the OpenCV tool. The captured image is converted to text using Tesseract OCR and the text is saved to the file out.txt. The text file is then opened, and the paragraph is split into sentences and saved. In OCR, adaptive thresholding techniques are used to convert the image into a binary image, which is then transformed into character outlines. The converted text is read out by eSpeak.
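The adaptive thresholding step mentioned above compares each pixel with the mean of its local neighbourhood rather than with a single global cutoff, which keeps text strokes legible under uneven lighting. A NumPy-only sketch of the idea follows; in practice `cv2.adaptiveThreshold` would be used, and the function and parameter names here are illustrative:

```python
import numpy as np

def adaptive_threshold(gray, block=5, c=5):
    """Binarise a grayscale image by comparing every pixel against the
    mean of its local block x block neighbourhood, minus a constant c
    (a slow but transparent stand-in for cv2.adaptiveThreshold)."""
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros_like(gray, dtype=np.uint8)
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            # Dark strokes fall below the local mean and become 0 (black)
            out[y, x] = 255 if gray[y, x] > local_mean - c else 0
    return out
```

Because the comparison is local, a dark character stroke still binarises to black even on an unevenly lit page; the OCR engine then traces character outlines on this binary image.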
3.3 Source Code
# Python program to translate speech to text, text to speech
# and hand gestures to text
import math

import cv2
import numpy as np
import pyttsx3
import speech_recognition as sr

# Initialize the recognizer
r = sr.Recognizer()


# Function to convert text to speech
def text_to_speech(command):
    # Initialize the engine, speak the text and wait until it finishes
    engine = pyttsx3.init()
    engine.say(command)
    engine.runAndWait()


# Function to convert speech to text
def speech_to_text():
    recognizer = sr.Recognizer()
    # Use the default microphone as the audio source
    with sr.Microphone() as source:
        print("Say something:")
        # Adjust for ambient noise before listening
        recognizer.adjust_for_ambient_noise(source)
        # Listen for the user's speech
        audio = recognizer.listen(source)
    print("Transcribing...")
    try:
        # Use Google Web Speech API to convert speech to text
        text = recognizer.recognize_google(audio)
        print(f"You said: {text}")
    except sr.UnknownValueError:
        print("Google Web Speech API could not understand audio")
    except sr.RequestError as e:
        print(f"Could not request results from Google Web Speech API; {e}")


# Function to convert hand gestures to text (and speech)
def gesture_to_text():
    cap = cv2.VideoCapture(0)
    while True:
        # An error is raised when no contour of maximum area is found
        # in the window, hence the try/except block
        try:
            ret, frame = cap.read()
            frame = cv2.flip(frame, 1)
            kernel = np.ones((3, 3), np.uint8)

            # Define the region of interest
            roi = frame[100:400, 100:400]
            cv2.rectangle(frame, (100, 100), (400, 400), (0, 255, 0), 0)
            hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

            # Define the range of skin colour in HSV
            lower_skin = np.array([0, 20, 70], dtype=np.uint8)
            upper_skin = np.array([20, 200, 200], dtype=np.uint8)

            # Extract the skin-coloured part of the image
            mask = cv2.inRange(hsv, lower_skin, upper_skin)

            # Dilate the hand region to fill dark spots within it
            mask = cv2.dilate(mask, kernel, iterations=4)

            # Blur the image
            mask = cv2.GaussianBlur(mask, (5, 5), 100)

            # Find the contours
            contours, hierarchy = cv2.findContours(
                mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

            # Find the contour of maximum area (the hand)
            cnt = max(contours, key=lambda x: cv2.contourArea(x))

            # Approximate the contour a little
            epsilon = 0.0005 * cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, epsilon, True)

            # Make a convex hull around the hand
            hull = cv2.convexHull(cnt)

            # Area of the hull and area of the hand
            areahull = cv2.contourArea(hull)
            areacnt = cv2.contourArea(cnt)

            # Percentage of the convex hull area not covered by the hand
            arearatio = ((areahull - areacnt) / areacnt) * 100

            # Find the defects in the convex hull with respect to the hand
            hull = cv2.convexHull(approx, returnPoints=False)
            defects = cv2.convexityDefects(approx, hull)

            # l = number of defects
            l = 0

            # Count the defects caused by the gaps between fingers
            for i in range(defects.shape[0]):
                s, e, f, d = defects[i, 0]
                start = tuple(approx[s][0])
                end = tuple(approx[e][0])
                far = tuple(approx[f][0])

                # Lengths of all sides of the triangle
                a = math.sqrt((end[0] - start[0]) ** 2 + (end[1] - start[1]) ** 2)
                b = math.sqrt((far[0] - start[0]) ** 2 + (far[1] - start[1]) ** 2)
                c = math.sqrt((end[0] - far[0]) ** 2 + (end[1] - far[1]) ** 2)
                s = (a + b + c) / 2
                ar = math.sqrt(s * (s - a) * (s - b) * (s - c))

                # Distance between the defect point and the convex hull
                d = (2 * ar) / a

                # Apply the cosine rule to get the angle at the defect point
                angle = math.acos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c)) * 57

                # Ignore angles > 90 degrees and points very close to the
                # convex hull (they generally come due to noise)
                if angle <= 90 and d > 30:
                    l += 1
                    cv2.circle(roi, far, 3, [255, 0, 0], -1)

                # Draw lines around the hand
                cv2.line(roi, start, end, [0, 255, 0], 2)

            # Number of fingers = number of defects + 1
            l += 1

            # Print the gesture corresponding to the detected finger count
            font = cv2.FONT_HERSHEY_SIMPLEX
            if l == 1:
                if areacnt < 2000:
                    cv2.putText(frame, 'Put hand in the box', (0, 50), font,
                                2, (0, 0, 255), 3, cv2.LINE_AA)
                else:
                    if arearatio < 12:
                        cv2.putText(frame, '0', (0, 50), font, 2,
                                    (0, 0, 255), 3, cv2.LINE_AA)
                    elif arearatio < 17.5:
                        cv2.putText(frame, 'Best of luck', (0, 50), font, 2,
                                    (0, 0, 255), 3, cv2.LINE_AA)
                        engine = pyttsx3.init()
                        engine.setProperty("rate", 120)
                        engine.say("Best of Luck")
                        # engine.runAndWait()
                    else:
                        cv2.putText(frame, 'Ok', (0, 50), font, 2,
                                    (0, 0, 255), 3, cv2.LINE_AA)
                        engine = pyttsx3.init()
                        engine.setProperty("rate", 120)
                        engine.say("Ok")
                        # engine.runAndWait()
            elif l == 2:
                cv2.putText(frame, 'No', (0, 50), font, 2, (0, 0, 255), 3,
                            cv2.LINE_AA)
                engine = pyttsx3.init()
                engine.setProperty("rate", 120)
                engine.say("No")
                # engine.runAndWait()
            elif l == 3:
                cv2.putText(frame, 'How are You', (0, 50), font, 2,
                            (0, 0, 255), 3, cv2.LINE_AA)
                engine = pyttsx3.init()
                engine.setProperty("rate", 120)
                engine.say("How are you")
                # engine.runAndWait()
            elif l == 4:
                cv2.putText(frame, 'I am Fine', (0, 50), font, 2,
                            (0, 0, 255), 3, cv2.LINE_AA)
                engine = pyttsx3.init()
                engine.setProperty("rate", 120)
                engine.say("I am Fine")
                # engine.runAndWait()
            elif l == 5:
                cv2.putText(frame, 'I am hungry', (0, 50), font, 2,
                            (0, 0, 255), 3, cv2.LINE_AA)
                engine = pyttsx3.init()
                engine.setProperty("rate", 120)
                engine.say("I am hungry")
                # engine.runAndWait()
            else:
                cv2.putText(frame, 'reposition', (10, 50), font, 2,
                            (0, 0, 255), 3, cv2.LINE_AA)

            # Show the windows
            cv2.imshow('mask', mask)
            cv2.imshow('frame', frame)
        except Exception:
            pass

        # Press Esc to exit
        k = cv2.waitKey(5) & 0xFF
        if k == 27:
            break

    cv2.destroyAllWindows()
    cap.release()


if __name__ == "__main__":
    while True:
        com_mode = input("Select 1 for Text To Speech\n"
                         "Select 2 for Speech To Text\n"
                         "Select 3 for Gesture to Text: ")
        if com_mode == "1":
            command = input("Enter Text To convert Speech: ")
            text_to_speech(command)
        elif com_mode == "2":
            print("Say Something to convert text")
            speech_to_text()
        elif com_mode == "3":
            gesture_to_text()

3.5 Outputs

Select 1 for Text To Speech
Select 2 for Speech To Text
Select 3 for Gesture to Text: 1
Enter Text To convert Speech: Hello Everyone Welcome to ALVAS
(The entered text is spoken out as voice)

Select 1 for Text To Speech
Select 2 for Speech To Text
Select 3 for Gesture to Text: 2
Say Something to convert text
Say something:
Transcribing...
You said: hello everyone welcome to ALVAS
(The spoken words are converted to text)

Select 1 for Text To Speech
Select 2 for Speech To Text
Select 3 for Gesture to Text: 3

Figure 3.5.1 Interacting Camera Display    Figure 3.5.2 Gesture to Text 1

Figure 3.5.1 and Figure 3.5.2 show the interacting camera display and the "Ok" text displayed from a gesture.

Figure 3.5.3 Gesture to Text 2    Figure 3.5.4 Gesture to Text 3

Figure 3.5.3 and Figure 3.5.4 display the "Best of Luck" and "No" text from gestures.

Figure 3.5.5 Gesture to Text 4    Figure 3.5.6 Gesture to Text 5

Figure 3.5.7 Gesture to Text 6

Figure 3.5.5, Figure 3.5.6 and Figure 3.5.7 display the "How are you", "I am fine" and "I am hungry" text from gestures.


CHAPTER 4
SNAPSHOTS AND PHOTOGRAPHS
4.1 Coding

Figure 4.1.1 Coding in Laptop

Figure 4.1.2 Pie chart output

Figure 4.1.3 Bar graph output

Figure 4.1.1, Figure 4.1.2 and Figure 4.1.3 show images from the practice sessions during the Internship.


4.2 Team

Figure 4.2.1 Internship Team Photo 1

Figure 4.2.2 Internship Team Photo 2

Figure 4.2.3 Internship Team Photo 3

Figure 4.2.1, Figure 4.2.2 and Figure 4.2.3 show our team during the Internship.


CHAPTER 5

CONCLUSION

The internship has been a transformative experience, providing a holistic immersion into various facets of Artificial Intelligence. In the realm of Python, hands-on projects cultivated proficiency with Python libraries and fostered a deep understanding of scalable, user-friendly applications. The proposed project addresses a crucial need: bridging the communication gap between the deaf or mute community and the broader world, enabling a more inclusive and standard lifestyle. The compact device, serving as a smart assistant, converts text and images to voice for individuals with speech impairments, enables speech-to-text conversion for the deaf, and interprets hand gestures as text for those who are mute. Notably, the system is language-independent, enhancing its versatility and accessibility across diverse linguistic backgrounds. The prototype's potential for further improvement, such as gesture recognition for numbers and alphabets, opens avenues for enhanced functionality. Additionally, the prospect of extending the system to process video inputs for real-time text readings signals a promising direction for future development. Overall, this project stands as a valuable initiative, leveraging technology to empower differently-abled individuals and foster improved communication and interaction within the wider community. Grateful for this invaluable opportunity, I look forward to applying these skills in future endeavours.
