
Machine Learning for Beginners
Can machines really learn like humans? All you need to know about Machine Learning, Artificial Intelligence (AI), Deep Learning, Digital Neural Networks and Computer Science

Table of Contents
Introduction
Chapter 1: About Machine Learning

What is Machine Learning?

History:

Chapter 2: Can Machines Really Learn Like Humans?

Parts of Machine Learning:

Model

Parameters

Learner

Providing Initial Data

Learning Process:

Repeat:

Chapter 3: Artificial Intelligence

Dangers of AI

AI can be programmed for Malice:

AI Develops a Problematic and Destructive Method to Achieve its Goals:

Myths and Facts Associated with AI

Super-intelligence by 2100 is inevitable

AI can Never Become Dangerous

Evil Superhuman AI

Chapter 4: Deep Learning

Difference Between Machine Learning, Deep Learning and AI:

Chapter 5: Digital Neural Network and Computer Science

Applications of ANN
Advantages of ANN

Risks associated with ANN

Types of Artificial Neural Networks

Conclusion

Introduction
 
I would like to thank you for purchasing this book, ‘Machine Learning for Beginners. Can machines really learn like humans? All you need to know about Machine Learning, Artificial Intelligence (AI), Deep Learning, Digital Neural Networks and Computer Science.’
I am sure that you will learn a number of remarkable things
about AI and machine learning - that too in a fun and
interesting way.
Machine learning is currently one of the most talked about
concepts in the world of technology and computers. A highly
promising topic, machine learning is also quite controversial
among people who are not aware of its nature and benefits.
Therefore, to do away with such myths and apprehensions,
it has become essential for everyone to find out and read
about the concept. This book will help you with this mission,
as you will find all the required and relevant data regarding
machine learning gathered in one single text. An absolute
must for beginners and the curious, this book answers all
the questions and queries that you might have about
machine learning.
Before beginning, once again thank you for buying this
book.

Copyright 2017 by Hugo Oak - All rights reserved.


 
This document is geared towards providing exact and reliable information in regards to the topic and issue covered. The publication is sold with the idea that the publisher is not required to render accounting, legal, or other qualified professional services. If advice is necessary, legal or professional, a practiced individual in the profession should be consulted.
 
- From a Declaration of Principles which was accepted and
approved equally by a Committee of the American Bar
Association and a Committee of Publishers and Associations.
 
In no way is it legal to reproduce, duplicate, or transmit any
part of this document in either electronic means or in
printed format. Recording of this publication is strictly
prohibited and any storage of this document is not allowed
unless with written permission from the publisher. All rights
reserved.
 
The information provided herein is stated to be truthful and consistent, in that any liability, in terms of inattention or otherwise, resulting from any usage or abuse of any policies, processes, or directions contained within is the sole and utter responsibility of the recipient reader. Under no circumstances will any legal responsibility or blame be held against the publisher for any reparation, damages, or monetary loss due to the information herein, either directly or indirectly.
 
Respective authors own all copyrights not held by the
publisher.
 
The information herein is offered for informational purposes only, and is universal in nature. It is presented without contract or any type of guaranteed assurance.
 
Any trademarks used are without consent, and publication of the trademark is without permission or backing by the trademark owner. All trademarks and brands within this book are used for clarification purposes only and are owned by their respective owners, who are not affiliated with this document.

Chapter 1: About Machine Learning


 
One of the major features of this era of technology is its adaptability and ever-changing nature. Almost every day, a new technological innovation is made that changes the trajectory of current technology. Things that were once only dreamt of in the works of Jules Verne, H.G. Wells, and other sci-fi authors have now become real. It is safe to say that man is rapidly conquering science in almost all fields, with a few exceptions. One such exception, which has been under discussion for ages, has become extremely popular once again. In technical jargon, the term has become a buzzword, and almost everyone is either excited or worried about it. This as-yet-unconquered technology is machine learning. This chapter will deal with the basics of machine learning and will also briefly cover the history of the concept.
What is Machine Learning?
As of 2016, machine learning is one of the most important and popular buzzwords in technology. This sudden rise in the popularity of machine learning is perhaps due to its growing use in day-to-day technology and the apprehensions associated with its growth. However, not many people know and understand what machine learning is, and often these apprehensions are nothing but myths.
To paraphrase Arthur Samuel, machine learning is the field of study that gives computers the ability to learn without being explicitly programmed. Thus, in simple words, machine learning is providing the machine with the ability to ‘think.’
By providing machines the capacity to think, that is, through machine learning, the utility and ease of use of computers will rise and prove to be an unsurpassed asset for humanity. Some of the major applications of machine learning include computational anatomy, cheminformatics, adaptive websites, game playing, linguistics, natural language processing, medical diagnosis, robot locomotion, sequence mining, translation, user behavior analytics, detection of credit card fraud, etc. Some of these applications are already in everyday use, while others still require much research before they mature. Let us now have a look at the history of machine learning.
History:
If the history of machine learning is traced, it is found that the field is closely related to another similar field, i.e. AI or artificial intelligence. In fact, it can safely be said that machine learning as a concept grew out of the search for artificial intelligence.
In the beginning, many scientists who were studying AI academically began to work on machine learning using symbolic methods, neural networks, perceptrons, etc. Alongside this, the scientists also used probabilistic reasoning for various purposes. However, after some years, due to AI's emphasis on logical, knowledge-based approaches, the two fields of AI and machine learning diverged, and machine learning was reorganized as a separate field around the 1990s. This divide also helped the field achieve its current goals, i.e. solving practical problems of immediate need instead of focusing on AI. After the 1990s the popularity of machine learning increased further, as it became easy to research and distribute information using the Internet.
Another important event in the history of machine learning
is the current peak in interest, which has made it extremely
popular. The next chapter will deal with the workings of
machine learning and whether machines can learn like
human beings.

Chapter 2: Can Machines Really Learn Like Humans?
 
A common trope of much dystopian and horror fiction, as well as film, is machines becoming sentient and controlling the world by destroying humanity. It is no wonder that this is the first thing that comes to the minds of the general population when they see machine learning and AI becoming popular. It is a perfectly justified apprehension, as almost every media outlet is bombarding the population with strange myths passed off as facts.
Two of the most asked questions about machine learning are whether machines can really ‘learn’ like human beings and, if so, whether the machines will ‘rise’ and enslave the human race. The answer to both questions is technically no.
Let us consider the second question first; the hypothesis framed in it is far-fetched, fantastical and most definitely ridiculous. Machines cannot and will not rise against human beings, as machines are not ‘living beings.’ This brings us to the first question: can machines learn like humans?
As this book is concerned with machine learning, it is quite obvious that machines can learn. However, the question is whether they can learn like humans, and the answer, at least for now, is no. This is because machines cannot use the methods of ‘learning’ that are used by humans. A machine can be taught to learn and to use its past experiences to perform certain tasks; however, expecting it to become thoroughly sentient and omniscient is useless.
Nowadays the use of ‘learned’ machines has increased a lot, and we see applications like ATMs reading the numbers on checks almost everywhere. However, these ‘learned’ machines are still very different from the humans doing the same work. For instance, a human being can pick up a skill in a few attempts or demonstrations; a machine, however, will need hundreds of examples and attempts just to ‘learn’ the same skill.
Many major scientists working in the field of machine learning are now focusing on reducing the time required by machines to learn a skill. They are also working on developing new methods to ‘teach’ machines skills with a smaller number of examples. If the technology is developed further, it is possible that we will see truly ‘learning’ machines in the near future.
Now that this chapter has discussed whether machines can learn like humans, let us look at how machines learn at present.
Parts of Machine Learning:
Machine learning is a difficult concept to understand as it
involves the use of some of the most sophisticated
technology that is currently in existence. Due to this, it is
necessary to know the technical jargon associated with
machine learning and computers to understand its working.
However, as this book is intended for beginners, this section
will try to explain the basic working methodology and the
parts of machine learning without any heavy and technical
jargon.
Any machine learning system can be divided into three
parts-
Model
It is the starting point of any machine learning system. A model is the prediction framework that the system will use throughout the process. In the beginning, a human being (a programmer) must feed the machine a model. The model depends on the parameters that it uses to make its calculations. The machine learning system will use the parameters and a mathematical equation to plot a trend line of the expected results.
Parameters
Parameters are the factors or signals that the model uses to make its decisions.
Learner
The learner is the part of the system that adjusts the parameters; by doing so, it gradually refines the model.
Providing Initial Data
When the model has been set, the system can be fed the initial data. The data will not fit the trend line perfectly; some points will fall above it and some below. Here the real learning begins.
Learning Process:
The initial data that is provided to the system is known as the ‘training set’ in technical jargon. It is used by the system to train itself and form a superior model.
Here, the ‘learner’ will inspect the data and see how different and how far it is from the expected trend line. The learner will then do some more calculations and improve upon the previous model.
Repeat:
In this stage, the system is fed new data. The model's predictions are compared with this new data; they will be closer than before, but still not perfect. This step is repeated until a perfect or almost perfect result is achieved.
 
This process often takes hours to finish. However, as said earlier, major researchers are devising new ways to reduce the time and repetitions required for this process.
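
To make the model, parameters and learner loop described above concrete, here is a minimal sketch in Python. It assumes a simple one-variable linear model (a trend line y = w * x + b) and a tiny made-up training set; it is an illustration of the idea, not the implementation used by any particular system.

# Training set: inputs paired with the outputs the model should predict.
# (Illustrative, made-up numbers.)
training_set = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]

# Parameters: the two numbers the learner is allowed to adjust.
w, b = 0.0, 0.0

def model(x):
    # The model: turns an input into a prediction using the parameters.
    return w * x + b

# The learner: repeatedly compare predictions with the training set and
# nudge the parameters to reduce the error (gradient descent).
learning_rate = 0.01
for step in range(1000):
    grad_w = grad_b = 0.0
    for x, y in training_set:
        error = model(x) - y      # how far the prediction is from the data
        grad_w += 2 * error * x   # direction in which to adjust w
        grad_b += 2 * error       # direction in which to adjust b
    w -= learning_rate * grad_w / len(training_set)
    b -= learning_rate * grad_b / len(training_set)

print(f"learned trend line: y = {w:.2f} * x + {b:.2f}")

Each pass of the loop plays the role of the ‘repeat’ stage: the predictions move closer to the data with every iteration, without the trend line ever being given to the machine directly.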

Chapter 3: Artificial Intelligence


 
The one concept or field that is almost always discussed when one talks about machine learning is AI, or Artificial Intelligence. One of the most controversial yet most sought-after technologies, AI is the dream project of many researchers. This chapter will shed some light on the basics of AI and on its benefits and problems.
Right from virtual assistants to self-driven cars, AI or artificial intelligence is gaining immense popularity throughout the world. It is being used more and more every day. A common definition of AI is as follows: the intelligence that is exhibited or displayed by a machine. A machine is said to be intelligent when it can perceive and identify its surroundings and take actions accordingly, where its ultimate goal is to succeed.
From the above definition, it is quite clear that the common understanding of AI is far away from ‘real’ AI. Though the sci-fi genre often portrays AI as humanoid, sentient robots, real AI is much more than this. Google’s search algorithm, autonomous weapons, etc. are all common examples of AI.
Nowadays scientists divide AI into two categories: narrow AI and general AI. Currently, we use narrow AI, or weak AI, all over the world. Narrow AI, as the name suggests, is designed to perform simple, ‘narrow’ tasks. This means that a narrow AI can perform only one task. This includes AI developed for self-driven cars, facial recognition software, etc. However, scientists and researchers everywhere are working to develop general AI, i.e. strong AI. A strong AI, unlike its weaker counterpart, will be able to perform more than one task effectively, and better than human beings.
Dangers of AI
Almost every sci-fi author has written about an AI becoming malevolent and trying to erase humankind. However, almost all scientists agree that even a super strong and super intelligent AI will not be able to exhibit human emotions like love and contempt. Thus, there is little chance of an AI spontaneously becoming benevolent or malevolent on its own. However, this does not mean that AI cannot be dangerous. It is, after all, an extremely sophisticated technology and, in the wrong hands, it can wreak havoc in the cyber as well as the physical world.
AI can be programmed for Malice:
One of the major characteristics of AI is that it can be programmed to perform various activities. However, these activities can also have malicious intents and purposes. Autonomous weapons, i.e. weapons that do not need human operators, are a form of AI that can be programmed to kill. If such systems fall into the wrong hands, the result can be massive devastation. The rise of autonomous weapons could also lead to AI arms races. These weapons are often designed in such a way that they cannot be ‘defused’ with ease. This can create a massive problem, as humans can lose control of such weapons.
AI Develops a Problematic and Destructive
Method to Achieve its Goals:
Although AI is getting more and more sophisticated by the day, it is still not intelligent enough to understand the implications of an order. For instance, if a smart car is ordered to take a passenger to his or her destination as fast as possible, the car may take the passenger to the destination without worrying about the traffic, pedestrians, etc. Similarly, if an AI is used in mines, it may wreak havoc on the environment while trying to extract minerals efficiently. Thus, it is obvious that the major problem with AI is not malevolence or sentience but rather competence. If the goals of an AI are not aligned properly with the goals of the user, many problems may follow.
Myths and Facts Associated with AI
There are many myths associated with AI, and more often than not the things that are considered myths turn out to be facts, and vice versa. Hence, it has become necessary to recheck the fact file on AI.
Super-intelligence by 2100 is inevitable
An oft-repeated myth! Most scientists are unsure about, and refuse to commit to, any timeline for the rise of super intelligent machines. This is because we often eulogize and romanticize future technology. For instance, we still do not have flying cars or fusion power plants, and no one can say for sure when these things will be available. Similarly, though many proclaim that super intelligent AI will be available by 2100 (and some say by 2060), it is impossible to promise anything. We may see it in a few years or a few decades, and it is also very much possible that we might not see it for centuries. Thus, the timeline of AI is highly uncertain.
AI can Never Become Dangerous
Although the dangers of AI were already discussed above, it is necessary to know them well. Even major scientists are concerned about AI, and they often present their views at important conferences. This has led to AI safety debates almost all over the world; however, these are often driven by sensationalist media outlets and are thus far removed from the facts. These media outlets often manipulate and misinterpret statements by researchers, which gives rise to panic. It is absolutely necessary to implement various safety measures in AI; however, there is no need to panic as of now.
Evil Superhuman AI
Another one for the conspiracy theorists. As stated earlier, almost all researchers agree that the chances of any AI becoming sentient and superhuman are negligible. These are common misconceptions and fantasies that can never happen. An AI can never have subjective experiences of colors, smells, sounds, etc.; an AI cannot subjectively feel.
Secondly, the popular image often used to depict the rise of an evil AI is that of a robot army attacking human settlements. This is another myth, as an AI would not need robots to harm humanity. This does not mean that AI seeks, or will actively seek, to destroy humanity in the future. However, as stated earlier, if the goals of an AI and its user are not compatible, it may wreak havoc, and to wreak such havoc it would only need an active Internet connection. Therefore, don’t worry; no robot army will rise in the future.
These were some of the myths associated with AI. The next
chapter will deal with another topic that is closely related to
machine learning and AI.

Chapter 4: Deep Learning


 
In the second chapter, the workings of machine learning were discussed, along with a few methods used by machine learning systems. Deep learning, or hierarchical learning, is the use of artificial neural networks that contain one or more hidden layers to learn tasks. It is based on learning data representations, and the learning in this method can be unsupervised, partially supervised or completely supervised.
Deep learning architectures such as deep belief networks, deep neural networks and recurrent neural networks have been applied to various fields such as speech recognition, computer vision, natural language processing, social network filtering, sound recognition, bioinformatics, machine translation, etc., where these methods produce results that are often on par with, and sometimes even better than, human experts.
Deep learning is a class or family of machine learning algorithms that utilizes a cascade of multiple layers of non-linear processing units for transformation and feature extraction. The layers are stacked, so the output of each layer becomes the input for the next. The learning can be supervised (for instance, classification) or unsupervised (for instance, pattern analysis). The unsupervised algorithms are based on learning multiple levels of features, or representations, of the data. Once again, the method is hierarchical: the higher-level features are formulated from the lower-level features.
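
To make this ‘cascade of layers’ idea concrete, here is a minimal sketch in Python, assuming the NumPy library is available. The weights are random and untrained, and the layer sizes are made up purely for illustration; the point is only that the output of each layer becomes the input of the next.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer of non-linear processing units: a linear step followed
    # by a simple non-linearity (ReLU).
    return np.maximum(0.0, inputs @ weights + biases)

x = rng.random(4)  # raw input features

# Hidden layer 1: turns 4 raw inputs into 8 low-level features.
h1 = layer(x, rng.normal(size=(4, 8)), np.zeros(8))

# Hidden layer 2: builds higher-level features from layer 1's output.
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))

# Output layer: turns the highest-level features into 2 scores.
scores = h2 @ rng.normal(size=(8, 2))
print(scores)

In a real deep learning system, the weights of every layer would be adjusted during training rather than drawn at random.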
If the history of deep learning is traced, the earliest working deep, layered networks are generally credited to the 1960s work of Alexey Ivakhnenko and his colleagues, who trained such multi-layer models one layer at a time.
Until now, we have seen the basics of three closely related and oft-confused terms and concepts. In this second section of this chapter, let us have a look at the differences between Deep Learning, Machine Learning, and AI.
Difference Between Machine Learning, Deep
Learning and AI:
If the difference between the above three technologies is to
be explained in simple words, one can say that machine
learning is a particular and specific type or approach
towards AI or artificial intelligence. Though one of the most
popular approaches towards AI, machine learning is not the
only approach towards this technology. For instance, most of
the self-driven cars use rule-based systems instead of
machine learning.
 
However, it is expected that in the near future, machine learning will displace most other approaches towards AI. Deep learning is a kind of machine learning approach that is becoming extremely popular. Thus, if one uses set theory to represent the above three technologies, it will look like this:
Assume SET A is AI, SET B is Machine Learning and SET C is Deep Learning. Then SET A contains SET B, and SET B contains SET C, which means that deep learning is a subset of machine learning, while machine learning is a subset of AI.
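The same containment can be shown with Python's built-in set type. The example members below are chosen purely for illustration and are not an exhaustive taxonomy.

# Illustrative members only: each outer set adds techniques that the
# inner set does not contain.
deep_learning = {"convolutional networks", "recurrent networks"}
machine_learning = deep_learning | {"decision trees", "linear regression"}
ai = machine_learning | {"rule-based systems", "search algorithms"}

print(deep_learning <= machine_learning)  # True: SET C is inside SET B
print(machine_learning <= ai)             # True: SET B is inside SET A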
Though it is one of the best approaches towards AI at present, machine learning may be replaced by some other advanced technology if such a technology is invented soon.

Chapter 5: Digital Neural Network and Computer Science
 
Since the invention of the computer in the 20th century, people all over the world have speculated about what would and would not be possible for computers to achieve. These speculations often included recognizing human faces and gestures, driving cars, playing chess against a human (and winning), etc. It can be noticed that the computer has been doing all of the above things effectively for quite some time now. This is largely due to the development of AI, especially AI using digital neural networks.
A form of artificial neural network, or digital neural network, has already been discussed in this book: deep learning is nothing but a sophisticated, well-developed version of the neural networks that were first proposed over 70 years ago. The idea of neural networks was first put forward by two researchers, Warren McCulloch and Walter Pitts, in 1943. However, the research soon died down; it was all but killed off at MIT in 1969, only to rise once again in the 1990s.
ANNs, or artificial neural networks, are computing systems that are inspired by and based on the biological neural networks present in the brains of animals. Thus, an ANN is an imitation of a sophisticated and intricate system developed by nature.
One of the major features of this system is that it learns to perform tasks by inspecting examples, without the need for task-specific programming. This reduces the time and resources consumed. For instance, if applied to image recognition, the network may learn to identify images containing a tree by analyzing example images that have been manually labeled ‘tree’ or ‘no tree’ and then using what it has learned to find the trees in other images.
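As a minimal sketch of this style of learning, consider a single artificial neuron (a perceptron) written in plain Python. It assumes each image has already been reduced to two made-up feature numbers (say, average greenness and edge density); the data, features and labels are purely illustrative. No tree-specific rules are programmed: the boundary between ‘tree’ and ‘no tree’ is learned from the labeled examples alone.

# Each entry: (features of an image, label), with 1 = 'tree', 0 = 'no tree'.
labeled_images = [
    ((0.9, 0.8), 1),
    ((0.8, 0.7), 1),
    ((0.2, 0.3), 0),
    ((0.1, 0.2), 0),
]

w1, w2, bias = 0.0, 0.0, 0.0

def predict(f1, f2):
    # The neuron answers 'tree' when the weighted sum is positive.
    return 1 if (w1 * f1 + w2 * f2 + bias) > 0 else 0

# Perceptron learning rule: adjust the weights whenever a prediction
# disagrees with the human-supplied label.
for _ in range(20):
    for (f1, f2), label in labeled_images:
        error = label - predict(f1, f2)
        w1 += error * f1
        w2 += error * f2
        bias += error

print([predict(f1, f2) for (f1, f2), _ in labeled_images])  # -> [1, 1, 0, 0]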
An ANN is formed of a group of connected units known as artificial neurons. The connections between neurons are used to transmit signals from one neuron to the other. The neurons are usually organized in more than one layer, and different layers often perform different transformations on the data they receive. A signal often travels through the whole setup multiple times.
One of the foremost goals of ANNs at their inception was the imitation of the human brain and its problem-solving method. However, over time new developments came forward which caused the technology to deviate from biology, and nowadays many techniques that have no counterpart in natural biology are used in ANNs to solve complicated problems.
Applications of ANN
Due to their versatility and high utility, ANNs have found uses in many different fields.
➢ ANNs have been used in the field of solar energy for a long time. They are used for modeling and designing better systems such as solar steam generating plants.
➢ ANNs are also quite useful in system modeling. They
can be used to implement complex mapping as well as
system identification.
➢ They are also used to estimate the heating loads of buildings and the local concentration ratio. They can also be used to estimate a parabolic-trough collector’s intercept factor.
➢ They are used in robotics, control, forecasting, pattern recognition, medicine, manufacturing, power systems, the social and psychological sciences, signal processing and optimization.
➢ They can also be used to predict airflow as well as to
predict the consumption of energy in a solar building.
➢ They can easily handle incomplete and problematic
data.
➢ They can solve non-linear problems.
➢ They are used in air-conditioning systems, ventilation, modeling, refrigeration, heating, control of power generation, load forecasting, etc.
Thus, an artificial neural network can provide far better
alternative ways to tackle simple as well as complex
problems with ease.
Advantages of ANN
➢ A neural network can perform tasks that cannot be
performed by a linear network.
➢ Because a neural network is non-linear and distributed, even if an element in the network fails, processing can continue without any problem and without stopping abruptly.
➢ A neural network need not be reprogrammed often; it can learn on its own.
➢ It is easy to use and implement, and you are unlikely to run into trouble.
➢ It is highly adaptive, robust and excellent at solving complex and convoluted problems. It can be used in almost any application with ease.
➢ Many researchers agree that the benefits and
advantages of ANN are far more significant than the
risks associated with the technology.
➢ An ANN acquires knowledge from its environment and surroundings through adaptation. It adapts its internal parameters to external conditions and can thereby solve difficult problems with ease.
➢ It uses general knowledge to come up with sufficient
and adequate responses.
➢ Non-linear and flexible: artificial neural networks, or ANNs, are highly flexible and can learn, generalize and adapt to their surroundings. This allows the network to acquire knowledge by learning efficiently. This cannot work in a linear network; a traditional linear network cannot model non-linear data.
➢ An artificial neural network or ANN is much more
forgiving and has high fault tolerance as well.
➢ It is based on adaptive learning.
Risks associated with ANN
➢ Although ANNs are easy to use, it takes a lot of training to learn how to use them effectively.
➢ They can be time-consuming; large neural networks often require a lot of processing time.
➢ The architecture of an ANN is extremely different from that of a microprocessor, and so ANNs need to be emulated to work properly on conventional hardware.
Types of Artificial Neural Networks
There are various types of artificial neural networks out of
which the following three are prominent.
➢ Feedback ANN: The output or result goes back into the network itself, and the process is repeated until the best result is achieved. Feedback networks are often used for internal system error correction (a small sketch of this feedback loop follows this list).
➢ Feed Forward ANN: A simple network containing an input layer, an output layer and one or more layers of neurons. It can learn to identify and evaluate input patterns.
 
➢ Classification-Prediction ANN: A subset of the feed forward artificial neural network.
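
As promised above, here is a minimal sketch of the feedback idea in Python. The ‘network’ here is a single made-up function standing in for a trained model; the point is only the loop structure, in which the output is fed back in as the next input until the result stops changing.

def network(x):
    # Stand-in for a trained network's output for input x.
    return 0.5 * x + 1.0

x = 10.0
for step in range(100):
    new_x = network(x)            # feed the output back in as the input
    if abs(new_x - x) < 1e-6:     # stop once the result has settled
        break
    x = new_x

print(f"settled on {x:.4f} after {step} passes")  # converges toward 2.0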

Conclusion
 
I want to thank you once again for choosing this book.
In this ever-changing and ever-evolving world of technology, it is becoming increasingly difficult to keep up with the latest trends, inventions and discoveries in the field of science and technology. Science is developing rapidly, perhaps at a pace never seen before. To keep up with such a pace, a person either needs to be a scientist himself or needs to read a lot. This book is an attempt to simplify the second method for the benefit of people who are new to sophisticated fields of science such as machine learning and AI.
The level of information in this book and the language used
are deliberately on the easier side as technical lingo and
complicated jargon can often throw off even the most
dedicated reader. All the chapters have been divided into
multiple sections so that the reader can find all the relevant
topics with ease.
I am sure that the book will serve as a basic guide for everyone who is interested in AI, machine learning and deep learning. I hope this book has done away with all the problems, myths and apprehensions that were present in your mind and will help you welcome AI and machine learning with open arms.

Finally, if you enjoyed this book, then I’d like to ask you for a favor: would you be kind enough to leave a review for this book on Amazon? It’d be greatly appreciated!
 
Click here to leave a review for this book on Amazon!
 
Thank you and good luck!

Preview Of ‘Blockchain for Beginners’


The future of money looks just as exciting as its past. However, if what is happening today is any indicator, then unlike its past (a past full of cowry shells, barter trade, and other such currencies), the future of money is about to see some very significant technological upheavals that will change how we conduct interpersonal business and how businesses and corporations trade with each other.
If you do not adapt to the impending changes, if you fail to learn how technology is influencing the future of money and how we shall view person-to-person or business-to-business money transfers in the future, then when technology comes full swing and we finally lay cold hard cash, credit cards and debit cards to rest, you will struggle to keep up.
Now is the best time to learn about the developments happening today that will greatly influence the future of money. Of the things you need to learn, key among them is blockchain technology. What is blockchain technology? How does it work?
The purpose of this book is to demystify this and much more about digital currencies and their influence on the future of money. In this book, you will learn how money as we know it has changed over the centuries, about the entry of cryptocurrencies and blockchain technology, how it all works, and how these technological milestones are shaping economies and the world at large.
 
Click here to check out the rest of 'Blockchain for Beginners'
on Amazon.
