
1. INTRODUCTION

The goal of this capstone project is to shed more light on a trending topic that affects not only healthcare but almost every facet of life. The world of information technology is developing rapidly, and the emergence of Artificial Intelligence (AI) is raising questions that even highly regarded scientists cannot answer at the moment. They do not fully understand how much can be accomplished with AI; however, there is no doubt that this powerful tool will have a great impact on the world in the coming years.

Throughout the history of humankind on this planet, there have been landmark discoveries that changed the world, from small ones like using stones to create fire to more significant ones like using nuclear plants as a source of power. Today it is strongly believed that the development of AI will lead to tremendous changes in the world as we know it. We believe that very soon all big tech companies will use AI in almost all areas of production and decision making.

Today the use of AI can be seen in giant tech companies like Tesla, and the world is already wowed by some of the things they are accomplishing with it, yet this only scratches the surface. Every day, numerous studies are carried out on AI to find more ways it can be applied to the activities of our lives. This paper focuses on how AI is being used in healthcare today and the possible impact it can have on healthcare in the future.
2. ARTIFICIAL INTELLIGENCE

The concept of Artificial Intelligence (AI) was first introduced in 1956 at a gathering at Dartmouth College, New Hampshire (Brighton, 2015). Since then, scholars across disciplines ranging from computer science to philosophy have debated the definition of AI. How can we determine whether a machine, a computer program, or something else found in our labs is an AI?

2.1 Definition of Artificial Intelligence

Before elucidating what AI is, we would like to take your mind back to the fundamental idea of what intelligence is. Using animals as a reference, intelligence is the ability of an animal to learn from its environment, solve problems, interact with other members of its species, and retain memory. These are learned behaviours, unlike instinctive ones like breathing and eating. We can simply call an animal intelligent when it is able to think and carry out activities it has learned.

Let’s now relate this established idea of intelligence to AI. When humans made machines, they did so to make it easier to accomplish certain tasks. Being curious and ambitious, they thought of ways to have machines that can think like them, if not better, and that is how humans came to invent computers and programs that give machines intelligent behaviours, namely the abilities to learn and develop. That is where the term ‘Machine Learning’, the foundation of AI, comes from.


Artificial Intelligence (AI) is defined as the science and engineering of creating intelligent

machines, especially computer systems that can think and act like humans (Russell & Norvig,

2016). AI involves a variety of technologies including natural language processing, robotics,

machine learning, image recognition and speech recognition. AI has been used to develop

autonomous vehicles such as self-driving cars and robots that can interact with their environment

in more sophisticated ways than previous generations of technology.

In his book Introducing Artificial Intelligence: A Graphic Guide (2015), Henry Brighton divides AI into two forms: Strong AI, also known as Artificial General Intelligence (AGI), and Weak AI. AGI would be an intelligent machine capable of performing all tasks just like a normal human; however, its existence is still uncertain, and no one can determine when, how, or even if it will appear. Weak AI, on the other hand, has a weaker form of intelligence than AGI and can only solve certain problems or perform specific tasks as humans would, but not all of them.

The term Artificial Intelligence (AI) was first coined in 1956 by John McCarthy, a computer scientist at Dartmouth College. He defined it as “the science and engineering of making intelligent machines” during the now-famous Dartmouth Conference, which he organized along with Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The conference focused on exploring how computers could be used to simulate human thought processes and problem-solving capabilities (McCarthy et al., 1955). During this event, McCarthy proposed the use of AI for games such as chess and checkers to demonstrate its effectiveness in simulating human behavior. This was the beginning of research into developing algorithms that could enable machines to think like humans do (Gardner & Rosenblatt, 1958).

Today we have several dictionary definitions of AI:

Merriam-Webster: “the capability of a machine to imitate intelligent human behaviour”.

Oxford English Dictionary: “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.

Cambridge Dictionary: “the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognize pictures, solve problems and learn from experience”.

2.2 What is Machine Learning?

Machine learning is a subset of artificial intelligence (AI) that enables computers to learn from data and improve their performance over time without explicit programming (Marr, 2016). In other words, machine learning gives machines the capability to acquire knowledge and skills autonomously through experience. For example, humans may be too busy to manually upload data into a database every day; instead, they can connect the machine to the Internet so it can “learn” on its own. At the core of this autonomous learning process are neural networks, computer systems designed to classify data in ways similar to how human brains process information and knowledge (Marr, 2016). A neural network can recognize images, colors, sizes, text, and all kinds of other elements within datasets, then categorize them into different groups based on human requirements. With machine learning technology, industries of all kinds can save a tremendous amount of time and effort.
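To make the idea of a network that classifies data concrete, the following is a minimal sketch of a single artificial neuron (a perceptron) written in Python with NumPy. The tiny dataset, the initial weights, and the learning rate are invented for illustration only; real neural networks stack many such units together.

```python
import numpy as np

# Toy labeled dataset: each row is (feature_1, feature_2); the label is 0 or 1.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]])
y = np.array([0, 0, 1, 1])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)  # one weight per input feature
bias = 0.0
learning_rate = 0.1

# Train with the classic perceptron update rule.
for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = int(np.dot(weights, xi) + bias > 0)
        error = target - prediction
        weights += learning_rate * error * xi
        bias += learning_rate * error

# The trained neuron now sorts new inputs into the two groups.
print(int(np.dot(weights, [0.15, 0.85]) + bias > 0))  # expected: 0
print(int(np.dot(weights, [0.85, 0.15]) + bias > 0))  # expected: 1
```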

The focus of ML, as a branch of AI, is on making computer systems that can expand upon human knowledge. Through analyzing, self-training, observing, and recognizing patterns in new data, machines can learn and make decisions in future scenarios, even when those scenarios are not identical to past ones. While there are similarities between ML and data mining, they are distinct disciplines. Replicating the unconscious way humans learn can be difficult for machines, which require a lengthy training period with complex algorithms designed to determine their behavior going forward. Popular methods of Machine Learning include Supervised Learning, Unsupervised Learning, Deep Learning (Neural Networks), Reinforcement Learning, Semi-Supervised Learning, and Transfer Learning.

Supervised and unsupervised learning have distinct differences. Supervised learning is a type of machine learning algorithm that utilizes data already labeled with the right answer to develop models and make predictions (Alteryx, 2020). In supervised learning, the model is trained on existing datasets containing input variables (x) and an output variable (y). The goal of supervised learning is to create a model that can accurately predict the output when given new inputs, as the sketch below illustrates.
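The following is a minimal sketch of this setup, assuming the scikit-learn library and a made-up labeled dataset; a model is fitted on (x, y) pairs and then asked to predict y for inputs it has never seen:

```python
from sklearn.linear_model import LogisticRegression

# Labeled training data: x = hours studied, y = passed the exam (1) or not (0).
X_train = [[1], [2], [3], [7], [8], [9]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # learn from examples labeled with the right answer

print(model.predict([[2.5], [8.5]]))  # predict labels for new, unseen inputs
```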

Unsupervised learning, on the other hand, does not require labeled data to train its models. It is used when there are no labels or categories associated with the data set, allowing the algorithm to detect patterns in the data without being given any guidance (Alteryx, 2020). Instead, it relies on algorithms that cluster similar patterns within unlabeled datasets. Unsupervised techniques are used to discover hidden patterns or features in raw data without any prior knowledge about them; they also help identify outliers among normal observations, as well as relationships between different variables in a dataset. The goal of unsupervised learning is to identify previously unknown patterns within a dataset.
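A correspondingly minimal unsupervised sketch, again assuming scikit-learn and invented unlabeled data, lets a clustering algorithm group the observations on its own:

```python
from sklearn.cluster import KMeans

# Unlabeled data: no "right answers" are provided, only raw observations.
X = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
     [8.0, 8.1], [7.9, 8.0], [8.1, 7.9]]

# Ask k-means to discover two groups by itself.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(kmeans.fit_predict(X))  # e.g. [0 0 0 1 1 1]: two clusters found without guidance
```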


FIGURE 1. Machine Learning Workflow (image from www.gatevidyalay.com)

Today machine learning is applied in our daily lives. Google Maps, using the location data from people’s phones, can inspect the dynamics of traffic at any given time, telling us the fastest route or notifying us of an accident or ongoing construction. By accessing relevant data and feeding it to appropriate algorithms, Google Maps can reduce commuting time by indicating the fastest route (Tyagi, 2020).
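Google’s actual routing system is proprietary, but the core idea of indicating the fastest route can be sketched with a classic shortest-path algorithm over a road graph whose edge weights are current travel times; the tiny graph below is entirely hypothetical:

```python
import heapq

# Hypothetical road graph: travel times in minutes between intersections.
roads = {
    "home":   [("a", 4), ("b", 2)],
    "a":      [("office", 5)],
    "b":      [("a", 1), ("office", 8)],
    "office": [],
}

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm: always expand the cheapest known route first."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph[node]:
            heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

print(fastest_route(roads, "home", "office"))  # (8, ['home', 'b', 'a', 'office'])
```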

2.3 What is Deep Learning?

Deep learning is more intricate than Machine learning. It can be seen as the next level of

Machine learning or a subset thereof. With data obtained from Machine learning, Deep learning

then decides what to do with it, especially predicting the future (Marr 2016). To reiterate, Deep

learning is an improved version of Machine Learning incorporating deep neural networks;

computers are not only able to classify data into different categories but also make decisions on

what action to take and forecast outcomes based on datasets. This type of network is capable of

handling large quantities of information such as comments and posts on Facebook and Google's

image library. Ultimately, deep learning and its associated neural networks exist due to particular

cognitive tasks that cannot be resolved by regular neural networks.
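As an illustration of what “deep” means here, the sketch below stacks several layers of neurons using the Keras API; the layer sizes and the random data are arbitrary assumptions chosen only to show the structure, not a model used by any source cited above:

```python
import numpy as np
import tensorflow as tf

# Invented data: 100 samples with 8 features each, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

# "Deep" means multiple hidden layers stacked between input and output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)  # learn patterns from the data

print(model.predict(X[:3], verbose=0))  # predicted probabilities for three samples
```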

Have you ever wondered why you suddenly keep getting ads about a product you just checked out on a website or something you just searched for on Google? Facebook, for example, uses deep learning to target ads to users more precisely. Facebook’s deep learning algorithms analyze user data such as age, gender, interests, and past interactions with the platform to determine which types of ads are most likely to be successful with a given user. By leveraging its vast amount of data from billions of users around the globe, Facebook is able to tailor ads more effectively than ever before. This helps advertisers get their message out more accurately and efficiently while providing users with relevant content tailored specifically to them.

2.4 AI as a Game Changer in Healthcare

In recent years, Artificial Intelligence (AI) has become a game changer for the healthcare

industry. AI is transforming the way medical professionals diagnose and treat patients,

improving outcomes and reducing costs. AI can quickly analyze large amounts of data to provide

accurate diagnoses and recommend treatments faster than ever before. This technology can also

help automate tedious tasks such as paperwork or administrative duties, freeing up clinicians’

time so they can focus on providing better care for their patients. In addition, AI-powered robots

are being deployed in hospitals to assist with surgery and other medical procedures that require

precision and accuracy (Varanasi, 2020).

The use of AI in healthcare has numerous benefits including improved patient outcomes through

more accurate diagnosis and treatment plans; reduced costs due to automated processes and staff

efficiency; improved patient safety through the use of robotic surgical assistants; and better data

collection for research purposes. AI-driven technologies such as machine learning, natural

language processing, computer vision, and deep learning are allowing medical professionals to

quickly access patient records and make informed decisions about diagnosis and treatment plans

(Varanasi, 2020).
AI is also being used in the development of personalized medicine. This technology can analyze a patient’s genetic makeup in order to create treatments tailored specifically to them. Additionally, AI can be utilized to detect signs of disease before they manifest physically, so that doctors can intervene earlier, when treatment may have a greater chance of success (Varanasi, 2020).
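To give a purely illustrative sense of how such early risk detection might look in code, the sketch below trains a classifier on fabricated patient measurements; the features, values, labels, and model choice are all assumptions for the example and are no substitute for real clinical data and validation:

```python
from sklearn.ensemble import RandomForestClassifier

# Fabricated records: [age, systolic blood pressure, cholesterol]; 1 = later disease.
X_train = [
    [45, 120, 190], [60, 150, 240], [52, 135, 210],
    [38, 115, 180], [70, 160, 260], [41, 118, 185],
]
y_train = [0, 1, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Estimate risk for a new patient before any symptoms appear.
new_patient = [[55, 142, 225]]
print(model.predict_proba(new_patient))  # [probability healthy, probability at risk]
```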

As AI continues to develop and become more sophisticated, the healthcare industry is expected to keep benefiting from its use. This technology has already proven itself a game changer in the healthcare sector, and with continued advancements in the field, it is likely to keep revolutionizing how medical professionals diagnose and treat patients for years to come (Varanasi, 2020).
