
2023

ARTIFICIAL INTELLIGENCE
PREPARED BY
Amran Qasim

SUPERVISION:
Dr. Raghad
Table of Contents

CHAPTER ONE
1. Introduction to Artificial Intelligence
   1.1 Definition of AI
   1.2 Brief History of AI
   1.3 Types of AI
   1.4 Applications of AI
   1.5 Current State and Future Prospects of AI

CHAPTER TWO
2. Key Concepts and Techniques in AI
   2.1 Machine Learning
   2.2 Natural Language Processing (NLP)
   2.3 Deep Learning
   2.4 Computer Vision
   2.5 Robotics

CHAPTER THREE
3. Data Preparation and Feature Engineering
   3.1 Data Collection
   3.2 Data Cleaning
   3.3 Feature Engineering
   3.4 Data Transformation
   3.5 Data Augmentation

4. Conclusion
5. References



CHAPTER ONE

1. Introduction to Artificial Intelligence


1.1 DEFINITION OF AI
We have claimed that AI is interesting, but we have not said what it is.
Historically, researchers have pursued several different versions of AI. Some
have defined intelligence in terms of fidelity to human performance, while
others prefer an abstract, formal definition of intelligence called rationality—
loosely speaking, doing the "right thing." The subject matter itself also varies:
some consider intelligence to be a property of internal thought processes and
reasoning, while others focus on intelligent behavior, an external
characterization.¹

¹ In the public eye, there is sometimes confusion between the terms "artificial intelligence"
and "machine learning." Machine learning is a subfield of AI that studies the ability to
improve performance based on experience. Some AI systems use machine learning methods to
achieve competence, but some do not.

From these two dimensions—human vs. rational² and thought vs. behavior—
there are four possible combinations, and there have been adherents and
research programs for all four. The methods used are necessarily different: the
pursuit of human-like intelligence must be in part an empirical science related
to psychology, involving observations and hypotheses about actual human
behavior and thought processes; a rationalist approach, on the other hand,
involves a combination of mathematics and engineering, and connects to
statistics, control theory, and economics. The various groups have both
disparaged and helped each other. Let us look at the four approaches in more
detail.

² We are not suggesting that humans are "irrational" in the dictionary sense of "deprived of
normal mental clarity." We are merely conceding that human decisions are not always
mathematically perfect.



1.2 BRIEF HISTORY OF AI
The idea of creating machines that can perform intelligent tasks dates back
to ancient times. However, the modern history of AI can be traced back to the
mid-20th century, when researchers began to develop electronic computers
and the idea of "thinking machines" emerged.

In 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude
Shannon organized the Dartmouth Conference, which is considered to be the
birthplace of AI as a field of study. During this conference, they proposed the
creation of machines that can simulate any aspect of intelligence, including
learning, problem-solving, and language understanding.
In the following years, AI research progressed rapidly, and various techniques
and applications were developed. However, the field faced several setbacks,
such as the "AI winter" in the 1970s and 1980s, which was characterized by a
decline in funding and interest in AI research.

In the 21st century, AI research has experienced a resurgence, thanks to
advances in computing power, data availability, and algorithm development.
Today, AI is transforming various industries and changing the way we live and
work.

1.3 TYPES OF AI
AI systems can be classified into various types based on the level of human-like
intelligence they exhibit and the tasks they can perform. The main types of AI
are:
1.3.1 Reactive Machines: Reactive machines are the simplest type of AI, which
can only react to specific inputs and produce outputs based on predefined
rules or patterns. They do not have the ability to learn from past experiences
or make predictions. Examples of reactive machines include automated teller
machines (ATMs), voice assistants, and chess-playing computers.
1.3.2 Limited Memory: Limited memory AI systems can learn from past
experiences and make predictions based on them. They can store a limited
amount of data and use it to improve their performance over time. Examples
of limited memory AI include self-driving cars, recommendation systems, and
fraud detection systems.



1.3.3 Theory of Mind: Theory of mind AI refers to systems that would understand
human emotions, intentions, and beliefs, and use this knowledge to interact
with humans in a more natural and empathetic way. Such systems would be able
to interpret and respond to social cues, making them suitable for applications
such as customer service and mental health support; AI at this level remains a
research goal rather than a deployed reality.

1.3.4 Self-aware: Self-aware AI is a hypothetical type of AI that has
consciousness and the ability to reflect on its own existence. It would
understand its own emotions, thoughts, and beliefs, and make decisions
based on them. However, self-aware AI does not currently exist, and its
development is a subject of debate and speculation.

1.4 APPLICATIONS OF AI
AI has a wide range of applications across various industries, including:
1.4.1 Healthcare: AI is used in healthcare for tasks such as diagnosis, drug
discovery, and patient monitoring. For example, AI algorithms can analyze
medical images to identify signs of diseases, predict the risk of complications
during surgery, and recommend personalized treatment plans based on a
patient's medical history.

1.4.2 Finance: AI is used in finance for tasks such as fraud detection, risk
assessment, and investment management. For example, AI algorithms can
analyze large amounts of financial data to identify fraudulent transactions,
predict market trends, and make investment decisions based on real-time
data.
1.4.3 Manufacturing: AI is used in manufacturing for tasks such as predictive
maintenance, quality control, and supply chain optimization. For example, AI
algorithms can analyze sensor data from machines to detect potential
failures before they occur, inspect products for defects, and optimize
production schedules based on demand and inventory levels.
1.4.4 Education: AI is used in education for tasks such as personalized
learning, student assessment, and administrative tasks. For example, AI
algorithms can adapt learning materials to a student's learning style and
pace, analyze student performance data to identify areas of improvement,
and automate administrative tasks such as grading and scheduling.



1.5 CURRENT STATE AND FUTURE PROSPECTS OF AI

1.5.1 Current State: AI has made significant advancements in recent years,
particularly in the areas of machine learning and natural language
processing. State-of-the-art AI systems can now match or exceed human
performance on specific, well-defined tasks such as image recognition and
language translation. AI is also becoming more ubiquitous, with smart
assistants, chatbots, and recommendation engines integrated into various
applications.

1.5.2 Future Prospects: The future of AI holds immense potential for further
innovation and impact on society. Some of the key areas of research and
development in AI include:

- Advancements in deep learning and neural networks
- Continued progress in natural language processing and understanding
- The development of more explainable and transparent AI systems
- The integration of AI with other emerging technologies such as blockchain and IoT
- The ethical and responsible deployment of AI to ensure that it benefits society as a whole



CHAPTER TWO

2. Key Concepts and Techniques in AI


2.1 MACHINE LEARNING
Machine learning is a subset of AI that involves training models to make
predictions or decisions based on input data. The three main types of
machine learning are supervised, unsupervised, and reinforcement learning.
Common machine learning algorithms include decision trees, neural
networks, and support vector machines.

2.1.1 Supervised Learning: Supervised learning involves training a model on
labeled data, where the correct output is already known. The goal of
supervised learning is to learn a mapping function that can accurately
predict outputs for new, unseen inputs. Common algorithms used in
supervised learning include decision trees, random forests, and support vector
machines.
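As a minimal sketch of this workflow, assuming scikit-learn (the dataset, the tree depth, and the train/test split are illustrative choices, not prescribed by the text):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Labeled data: X holds the inputs, y the known correct outputs.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a decision tree to learn the mapping from inputs to outputs.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# The real test of supervised learning: accuracy on unseen inputs.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```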

2.1.2 Unsupervised Learning: Unsupervised learning involves training a model
on unlabeled data, where the correct output is unknown. The goal of
unsupervised learning is to identify patterns and relationships in the data,
such as clustering or dimensionality reduction. Common algorithms used in
unsupervised learning include k-means clustering, principal component
analysis (PCA), and autoencoders.
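A small illustration of the unlabeled setting, again assuming scikit-learn and using synthetic data invented for the example:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Unlabeled data: the true group labels are generated but deliberately discarded.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# k-means must discover the three groups purely from the structure of X.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```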

2.1.3 Reinforcement Learning (Q-Learning): Q-learning is a type of
reinforcement learning algorithm used for making optimal decisions in
situations where there is incomplete information. The goal of Q-learning is
to learn a policy that maximizes the cumulative future reward in a given
environment. The algorithm learns through exploration and exploitation of
the environment, and updates its action-value function (Q-function) to
determine the best action to take in a given state. Q-learning is widely used
in applications such as robotics, game playing, and control systems.
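The heart of the algorithm is the update rule Q(s, a) ← Q(s, a) + α[r + γ · max over a′ of Q(s′, a′) − Q(s, a)]. The sketch below applies it to a toy five-state corridor environment invented here for illustration; the states, rewards, and hyperparameters are assumptions, not from the report:

```python
import random

n_states, n_actions = 5, 2             # corridor of 5 states; actions: left (0), right (1)
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Toy dynamics: reaching the rightmost state pays reward 1 and ends the episode."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == n_states - 1 else 0.0), nxt == n_states - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: explore occasionally, otherwise exploit the best known action.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Nudge Q(s, a) toward the reward plus the discounted best future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print("learned Q-values:", [[round(q, 2) for q in row] for row in Q])
```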



2.2 NATURAL LANGUAGE PROCESSING (NLP)
NLP is a field of AI that focuses on enabling computers to understand,
interpret, and generate human language. Techniques used in NLP include text
classification, sentiment analysis, and language translation. Applications of
NLP include virtual assistants, chatbots, and language translation software.

2.3 DEEP LEARNING


Deep learning is a subset of machine learning that focuses on developing
neural networks with multiple layers. This section covers some popular
architectures and applications of deep learning, including feedforward
neural networks, convolutional neural networks, and generative adversarial
networks.
2.3.1 Neural Networks: Neural networks are machine learning models designed
to mimic the way the human brain works. They consist of interconnected
nodes, or neurons, that process information and pass it on to the next layer
of neurons until a final output is generated. By adjusting the connections
between neurons, neural networks can learn to recognize patterns in data and
make predictions or classifications. Neural networks have been used
successfully in a wide range of applications, including image and speech
recognition, natural language processing, and game playing.
2.3.2 Convolutional Neural Networks: Convolutional Neural Networks (CNNs)
are a type of deep neural network commonly used for image recognition and
computer vision tasks.
CNNs work by applying a series of convolutional filters to an input image,
which helps to identify important features and patterns within the image. The
filters produce a set of feature maps, which are passed through a series of
pooling layers to reduce the dimensionality of the data. The resulting feature
maps are then fed into a series of fully connected layers, which are used to
make predictions about the input image.
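A compact sketch of this architecture, assuming TensorFlow/Keras and 28×28 grayscale inputs; the layer sizes and the 10-class output are illustrative choices:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Convolutions extract local features, pooling shrinks the feature maps,
# and the final dense layers turn those features into class predictions.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # e.g. 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```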



2.3.3 Generative Adversarial Networks: Generative Adversarial Networks
(GANs) are a type of deep learning model that can generate new data that
resembles a given dataset. GANs consist of two neural networks: a generator
network and a discriminator network. The generator network creates new
data by transforming random noise into samples that resemble the training
data. The discriminator network then tries to distinguish between the
generated samples and the real ones. Through a process of training and
feedback, the generator learns to create increasingly realistic samples, while
the discriminator learns to become better at distinguishing between real and
fake samples.
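To make the two-network setup concrete, here is a minimal Keras sketch of the generator and discriminator only; the latent size and layer widths are illustrative assumptions, and the alternating training loop is omitted for brevity:

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 64  # size of the random-noise input (an illustrative choice)

# Generator: transforms random noise into a flattened 28x28 "image".
generator = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
])

# Discriminator: scores a sample as real (1) or generated (0).
discriminator = keras.Sequential([
    layers.Input(shape=(28 * 28,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```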

2.4 COMPUTER VISION


Computer vision involves enabling computers to interpret and analyze visual
data from the world around them. Techniques used in computer vision include
image recognition, object detection, and semantic segmentation.
Applications of computer vision include self-driving cars, facial recognition,
and surveillance systems.

2.4.1 Image Preprocessing: Image preprocessing refers to a set of techniques
applied to images to improve their quality, enhance certain features, or
reduce noise, with the aim of obtaining better results in subsequent analysis
or applications. This involves various operations such as resizing, cropping,
color correction, and filtering.
2.4.2 Image Captioning: Image captioning is a computer vision and natural
language processing task that involves generating a textual description of an
image. It is an important application of AI with many practical uses, such
as assisting the visually impaired or automatically generating image
descriptions at scale.
The task involves training a model to analyze an image and generate a
description that accurately captures its content. Typically, this uses a
combination of convolutional neural networks (CNNs) to extract features from
the image and recurrent neural networks (RNNs) to generate the textual
description.



2.4.3 Object Detection: Object detection is a computer vision task that
involves locating and classifying objects within an image. Unlike image
classification, which assigns a single label to an entire image, object
detection outputs a bounding box and a class label for each object of
interest.
Modern object detectors are typically built on convolutional neural networks;
well-known model families include R-CNN and its successors, YOLO, and SSD.
Object detection underpins applications such as self-driving cars,
surveillance, and automated quality inspection.

2.5 ROBOTICS
Robotics involves the development of intelligent machines that can perform
tasks autonomously. Robotics incorporates elements of AI, computer vision,
and control systems. Applications of robotics include manufacturing,
healthcare, and space exploration.

2.5.1 Kinematics and Dynamics: Kinematics and dynamics are two important
fields in robotics that are concerned with the study of motion and forces in
robotic systems.
Kinematics is the study of the motion of objects without considering the
forces that cause the motion. In robotics, kinematics is concerned with
determining the position, velocity, and acceleration of a robot's end effector
(i.e., the part of the robot that interacts with the environment) given the joint
angles and velocities. Kinematic models are used to describe the relationship
between the robot's joints and the position and orientation of its end effector.
This information is critical for tasks such as path planning, motion control,
and robot calibration. Dynamics, in contrast, studies the relationship between
the forces and torques acting on the robot and the motion they produce.
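As a concrete illustration, forward kinematics for a two-link planar arm (an example invented here; the link lengths are arbitrary) computes the end-effector position directly from the joint angles:

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.7):
    """End-effector (x, y) of a 2-link planar arm; angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along the x-axis: expect (1.7, 0.0).
print(forward_kinematics(0.0, 0.0))
```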



2.5.2 Robot Perception: Robot perception refers to the ability of a robot to
sense and understand the environment it is operating in. This involves
collecting information about the surrounding environment through various
sensors, such as cameras, lidar, and sonar, and then processing this data to
extract useful information. Robot perception is essential for many robotics
applications, such as autonomous navigation, manipulation, and inspection.

2.5.3 Motion Planning: Motion planning is the process of determining a
sequence of actions that will enable a robot to achieve a desired task or goal.
It involves computing a collision-free path for the robot to follow while
avoiding obstacles and respecting constraints such as joint limits and velocity
limits. This can be a challenging problem, especially in complex environments,
and many techniques and algorithms have been developed to address it.

2.5.4 Reinforcement Learning for Robotics: Reinforcement learning is a type
of machine learning that involves training an agent to make decisions in an
environment in order to maximize a reward signal. In robotics, reinforcement
learning can be used to train a robot to perform a task or a sequence of tasks,
such as grasping an object or navigating through an environment.



CHAPTER THREE

3. Data Preparation and Feature Engineering


3.1 DATA COLLECTION
Data collection is the process of gathering information to be used in analysis,
research, or decision making. Data can come in various forms, such as
numerical, textual, or multimedia. In order to utilize data effectively, it is
necessary to collect and store it in a structured manner. The following
sections will discuss the types of data, sources of data, and data sampling
methods.

3.1.1 Types of Data: There are two main types of data: quantitative and
qualitative. Quantitative data is numerical in nature, and can be analyzed
using statistical methods. Examples of quantitative data include the number
of people in a room, the temperature outside, or the number of cars passing
by on a street. Qualitative data, on the other hand, is descriptive in nature
and is often collected through observations or interviews. Examples of
qualitative data include the color of a car, the opinion of a person, or the
behavior of an animal.

3.1.2 Data Sources: Data can come from a variety of sources, including:

1. Surveys and questionnaires: These are useful for collecting data from a
large number of people on a specific topic.
2. Interviews: These are useful for collecting in-depth information from a
smaller number of people.
3. Observations: These involve watching and recording behavior or activity.
4. Existing data: This can be obtained from sources such as government
agencies, academic institutions, or private companies.
5. Experiments: These involve manipulating variables to observe the effect
on a certain outcome.



3.2 DATA CLEANING
Data cleaning is a crucial step in preparing data for machine learning
algorithms. It involves identifying and correcting errors, inconsistencies, and
inaccuracies in the dataset. The following are some of the common
techniques used for data cleaning:

3.2.1 Missing Data: Missing data is a common problem in datasets. It can
occur for various reasons, such as human error, system failures, or
incomplete surveys. Dealing with missing data is important because most
machine learning algorithms cannot handle missing values.

There are several methods for dealing with missing data, such as:
- Dropping the missing values: rows or columns with missing values are
  deleted from the dataset. However, this can reduce the sample size and
  discard valuable information.
- Imputing the missing values: missing values are replaced with a value
  derived from the other observations. Common imputation methods include
  mean imputation, mode imputation, and regression imputation.
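Both options map onto short pandas calls; a quick sketch, with a toy DataFrame invented for the example:

```python
import pandas as pd

df = pd.DataFrame({"age": [25, None, 41, 33],
                   "income": [50000, 62000, None, 48000]})

# Option 1: drop any row containing a missing value (shrinks the dataset).
dropped = df.dropna()

# Option 2: impute, here with each column's mean; mode or regression
# imputation could be substituted depending on the variable.
imputed = df.fillna(df.mean(numeric_only=True))

print(dropped, imputed, sep="\n\n")
```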

3.2.2 Outliers: Outliers are data points that deviate significantly from the rest
of the data. Outliers can be due to measurement errors, data entry errors, or
genuine extreme values. They can affect the performance of machine
learning algorithms, especially those that rely on distance measures.
There are several methods for dealing with outliers, such as:
- Deleting the outliers: the outliers are removed from the dataset. However,
  this can reduce the sample size and discard valuable information.
- Transforming the data: the data is transformed to reduce the effect of
  outliers. Common transformations include the log, square root, and
  Box-Cox transformations.
- Using robust methods: statistical methods that are less affected by
  outliers are used instead. Common robust statistics include the median
  and the trimmed mean.
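One common detection rule, sketched below with NumPy on invented data, flags points lying more than 1.5 interquartile ranges outside the quartiles:

```python
import numpy as np

data = np.array([12, 14, 13, 15, 14, 13, 90, 12, 15])  # 90 is a planted outlier

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Keep points inside the IQR fence; they could instead be capped or transformed.
mask = (data >= lower) & (data <= upper)
print("outliers:", data[~mask], "| cleaned:", data[mask])
```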



3.2.3 Data Normalization: Data normalization is the process of scaling the
values of a variable to a specific range. Normalization is important because
machine learning algorithms can be sensitive to the scale of the variables.
There are several methods for normalizing data, such as:
- Min-max normalization: the values of a variable are scaled to a range
  between 0 and 1.
- Z-score normalization: the values of a variable are scaled to have a mean
  of 0 and a standard deviation of 1.
- Decimal scaling normalization: the values of a variable are divided by a
  power of 10 (e.g. 10, 100, or 1,000), chosen so that the largest absolute
  value falls below 1.
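The first two methods correspond directly to scikit-learn transformers; a small sketch on toy single-feature data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [5.0], [10.0], [20.0]])  # toy single-feature data

# Min-max normalization: rescales values into [0, 1].
print(MinMaxScaler().fit_transform(X).ravel())

# Z-score normalization: rescales values to mean 0, standard deviation 1.
print(StandardScaler().fit_transform(X).ravel())
```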

3.3 FEATURE ENGINEERING


Feature engineering is the process of selecting and transforming raw data
into features that better represent the underlying problem to improve the
accuracy of machine learning models. It involves a variety of techniques such
as feature selection, feature extraction, and feature scaling.

3.3.1 Feature Selection: Feature selection is the process of selecting a subset
of relevant features from the original dataset. This is done to remove
irrelevant or redundant features, which can improve the accuracy,
interpretability, and efficiency of machine learning models. Common
techniques for feature selection include correlation analysis and recursive
feature elimination (RFE); dimensionality-reduction methods such as principal
component analysis (PCA) are closely related, but create new features rather
than selecting existing ones.
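A short RFE sketch, assuming scikit-learn; the dataset, the logistic-regression estimator, and the choice of five features are all illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)  # 30 candidate features

# Recursively fit the model and drop the weakest feature until 5 remain.
selector = RFE(LogisticRegression(max_iter=5000), n_features_to_select=5)
selector.fit(X, y)

print("kept feature indices:",
      [i for i, keep in enumerate(selector.support_) if keep])
```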

3.3.2 Feature Extraction: Feature extraction is the process of transforming raw
data into a set of features that are more meaningful and informative for
machine learning models. This is often done by applying mathematical and
statistical techniques to the data, such as Fourier transforms, wavelet
transforms, and autoencoders. Feature extraction can be used to reduce the
dimensionality of the data, which can improve the accuracy and efficiency of
machine learning models.
3.3.3 Feature Scaling: Feature scaling is the process of transforming the
values of features in a dataset to a similar scale. This is done to ensure that
features with different scales do not bias the machine learning model
towards certain features. Common techniques for feature scaling include
min-max scaling, z-score normalization, and log transformation.



3.4 DATA TRANSFORMATION
Data transformation is the process of converting raw data into a new format
that is more useful for analysis or modeling. It involves applying mathematical
and statistical techniques to reduce the dimensionality of the data, extract
relevant features, or create new variables.

3.4.1 Principal Component Analysis: Principal Component Analysis (PCA) is a
technique for reducing the dimensionality of a dataset by finding the most
important features. It works by transforming the data into a new coordinate
system where the new axes correspond to the principal components of the
data. These principal components are linear combinations of the original
variables that capture the most variance in the data.
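An illustrative scikit-learn sketch, projecting the four-dimensional Iris data onto its first two principal components:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)  # 4 original features

# Project onto the 2 directions that capture the most variance.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print("reduced shape:", X_reduced.shape)
print("variance explained:", pca.explained_variance_ratio_)
```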

3.4.2 t-SNE: t-Distributed Stochastic Neighbor Embedding (t-SNE) is a
non-linear data transformation technique used for visualizing
high-dimensional data. It works by reducing the dimensionality of the data
while preserving the local structure of the data points. It is commonly used
for exploratory data analysis and data visualization.
t-SNE is especially useful for visualizing complex datasets such as images,
audio, and text. It can help identify clusters of similar data points and reveal
underlying patterns in the data.
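A minimal sketch with scikit-learn, embedding the 64-dimensional digits dataset into two dimensions for plotting; the perplexity value is an illustrative choice:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)  # 1797 images, 64 pixels each

# Embed into 2D while trying to keep similar digits close together.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print("embedded shape:", embedding.shape)  # (1797, 2), ready for a scatter plot
```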

3.4.3 Other Data Transformation Techniques: Other data transformation
techniques include:
- Normalization: scaling the data to a range of 0 to 1.
- Standardization: scaling the data to have a mean of 0 and a standard
  deviation of 1.
- Log transformation: taking the logarithm of the data to reduce skewness.
- Box-Cox transformation: transforming data to make it more normally
  distributed.
- Wavelet transformation: decomposing signals into wavelets to extract
  features.



3.5 DATA AUGMENTATION
Data augmentation is the process of generating new data samples by
applying random transformations to the existing data. The goal of data
augmentation is to increase the diversity of the dataset, reduce overfitting,
and improve the generalization performance of the model.

3.5.1 Image Data Augmentation: Image data augmentation techniques
include:
- Rotation: rotating the image by a random angle.
- Flip: flipping the image horizontally or vertically.
- Crop: cropping a random section of the image.
- Zoom: zooming in or out on the image.
- Color jitter: randomly adjusting the brightness, contrast, and saturation of
  the image.
These techniques can be applied individually or in combination to generate
new images, as the sketch after this list illustrates.
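A hedged sketch using the classic Keras ImageDataGenerator utility (newer TensorFlow versions favor preprocessing layers, but the idea is the same); the fake image and parameter values are invented for the example:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# One fake 64x64 RGB image standing in for a real dataset.
images = np.random.rand(1, 64, 64, 3)

# Each draw applies a fresh random combination of rotation, shift, zoom, and flip.
datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             zoom_range=0.2,
                             horizontal_flip=True)

batch = next(datagen.flow(images, batch_size=1))
print("augmented image shape:", batch.shape)
```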

3.5.2 Text Data Augmentation: Text data augmentation techniques include:
- Synonym replacement: replacing words in the text with synonyms.
- Random insertion: inserting random words into the text.
- Random deletion: deleting random words from the text.
- Random swap: swapping two words in the text.
These techniques can be used to generate new variations of the text data;
a small sketch in plain Python follows.
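Two of these operations are simple enough to sketch directly (the helper names below are ours, not a standard library API):

```python
import random

def random_swap(words, n_swaps=1):
    """Swap two randomly chosen word positions, n_swaps times."""
    words = words.copy()
    for _ in range(n_swaps):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.2):
    """Drop each word with probability p, keeping at least one word."""
    kept = [w for w in words if random.random() > p]
    return kept or [random.choice(words)]

sentence = "data augmentation increases the diversity of training data".split()
print(" ".join(random_swap(sentence)))
print(" ".join(random_deletion(sentence)))
```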

3.5.3 Other Data Augmentation Techniques: Other data augmentation
techniques include:
- Audio data augmentation: applying random distortions or noise to audio
  signals.
- Video data augmentation: applying random transformations to video
  frames.
- Time series data augmentation: generating new time series data by
  shifting, scaling, or adding noise to the existing data.



CONCLUSION
In conclusion, this report provided an overview of some of the key
concepts and techniques in the field of artificial intelligence and
machine learning. Chapter 1 introduced the fundamentals of AI,
including its definition, history, main types, applications across
industries such as healthcare, finance, manufacturing, and education,
and its current state and future prospects.
Chapter 2 focused on the key concepts and techniques of AI, including
supervised, unsupervised, and reinforcement learning; natural language
processing; deep learning architectures such as neural networks,
convolutional neural networks, and generative adversarial networks;
computer vision; and robotics.
Chapter 3 delved into the importance of data preparation and
feature engineering, including data collection, cleaning, and
transformation. It also discussed the different techniques for
feature selection, extraction, scaling, and transformation, as
well as the concept of data augmentation.
Overall, this report aimed to provide a comprehensive overview
of some of the key concepts and techniques in the field of
machine learning and AI. While there is still much to be explored
and developed in this rapidly evolving field, the potential
applications and benefits of these technologies are immense.
However, it is important to continue to address ethical
considerations and ensure that these technologies are used
responsibly and for the greater good.



REFERENCES

1. Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach. Pearson Education.
2. Jordan, M. I., & Mitchell, T. M. (2015). "Machine Learning: Trends, Perspectives, and Prospects." Science.
3. Brownlee, J. (2020). "Deep Learning State of the Art in 2020." Machine Learning Mastery.
4. Li, J., & Li, X. (2019). "Recent Advances in Natural Language Processing." IEEE/ACM Transactions on Audio, Speech, and Language Processing.
5. Lipton, Z. C., & Steinhardt, J. (2018). "Troubling Trends in Machine Learning Scholarship." arXiv preprint arXiv:1807.03341.
6. Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.
7. Bock, S. (2019). "A New Era of Supply Chain Visibility with AI." Forbes.
8. Koedinger, K. R., & Corbett, A. T. (2006). "Cognitive Tutors: Technology Bringing Learning Science to the Classroom." AI Magazine.
9. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
10. Jurafsky, D., & Martin, J. H. (2020). Speech and Language Processing (3rd ed.). Pearson.
11. Szeliski, R. (2010). Computer Vision: Algorithms and Applications. Springer.
12. Choset, H. (2019). Introduction to Robotics: Mechanics and Control (4th ed.). Pearson.
13. Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction (2nd ed.). MIT Press.

