
A MICRO PROJECT

ON

" Machine Learning"


1.0 Aims/Benefits of the micro project

• To learn what machine learning is.
• To get information about machine learning applications.
• To gain knowledge about the advantages of machine learning.

2.0 Course outcome addressed.

a. Develop programs using Object-Oriented methodology in Java.
b. Apply the concept of inheritance for code reusability.

3.0 Proposed methodology

1. Focused on the selection of an appropriate topic for the micro-project.

2. Selected the topic, i.e., to prepare a report on machine learning.

3. Made a brief study of the topic.

4. Gathered all information based on the topic of the micro-project.

5. Analyzed and studied the topic in detail.



6. Following all the above steps, we successfully completed our micro-project.

4.0 Action Plan

Sr. No. | Detail of activity                                                                                                   | Plan start date | Plan finish date | Name of responsible team members
1       | Searching the topic for the micro-project                                                                            |                 |                  |
2       | Collecting information from the internet and textbook                                                                |                 |                  |
3       | Collecting information from the ETI (Emerging Trends in Computer & Information Technology, 22618) reference book & manual |             |                  |
4       | Arranging all information in MS Word                                                                                 |                 |                  |
5       | Preparing a report on it using MS Word                                                                               |                 |                  |
6       | Printing the micro-project                                                                                           |                 |                  |

5.0 Resources used



Sr. no. | Name of resource material | Specifications                                                     | Quantity
1       | Computer system           | 16 GB RAM, Windows 11 OS                                           | 1
2       | Internet                  | YouTube / GeeksforGeeks                                            | -
3       | Textbook/manual           | ETI (Emerging Trends in Computer & Information Technology), 22618  | 1

Annexure-II
Micro-Project Report

A MICRO PROJECT ON "Machine Learning"

1.0 Brief Introduction/Rationale

Machine learning is a branch of artificial intelligence (AI) and
computer science that focuses on the use of data and algorithms to
imitate the way that humans learn, gradually improving its accuracy.

IBM has a rich history with machine learning. One of its own, Arthur
Samuel, is credited with coining the term "machine learning" through
his research on the game of checkers. Robert Nealey, a self-proclaimed
checkers master, played the game against an IBM 7094 computer in 1962,
and he lost to the computer. Compared to what can be done today, this
feat seems trivial, but it is considered a major milestone in the
field of artificial intelligence.

Over the last couple of decades, technological advances in storage
and processing power have enabled innovative products based on
machine learning, such as Netflix's recommendation engine and
self-driving cars.

Machine learning is a significant component of the growing field of
data science. Through the use of statistical methods, algorithms are
trained to make classifications or predictions and to reveal key
insights in data mining projects. These insights subsequently drive
decision-making within applications and businesses, ideally
influencing key growth metrics. As big data continues to expand, the
market demand for data scientists will increase. They will be needed
to help determine the most relevant business questions and the data
to answer them.

Machine learning can be categorized into two broad learning tasks:

1. Supervised ML
2. Unsupervised ML

There are numerous other algorithms.

1. Supervised learning:

An algorithm uses training data and feedback from humans to learn the
relationship between given inputs and a given output. For instance, a
practitioner can use marketing costs and weather forecasts as input
data to forecast the sales of cans. You can use supervised learning
when the output data is known; the algorithm will then make
predictions on new data.

There are two types of supervised learning:
1. Classification task
2. Regression task
Classification

Suppose you want to predict the gender of a customer for a
commercial. You would start by collecting data on height, weight,
job, salary, purchasing basket, etc. from your customer database. You
know the gender of each of your customers; it can only be male or
female. The objective of the classifier is to assign a probability of
being male or female (i.e., the label) based on the information
(i.e., the features you have gathered). Once the model has learned
how to identify males and females, you can use new data to make a
prediction. For instance, suppose you just received new information
from an anonymous customer and want to know whether it is a male or a
female. If the classifier predicts male = 70%, it means the algorithm
is 70% sure that this customer is a male and 30% sure it is a female.

The label can have two or more classes. The above machine learning
example has only two classes, but a classifier that needs to
recognize objects may have dozens of classes (e.g., glass, table,
shoes, etc., where each object represents a class).
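A minimal classification sketch (assuming Python with scikit-learn is
available; the feature names, values, and customer records below are
invented purely for illustration and are not taken from the report):

# Hypothetical customer records: [height_cm, weight_kg, salary_in_thousands]
from sklearn.linear_model import LogisticRegression

X_train = [
    [180, 85, 52],
    [175, 80, 48],
    [160, 55, 41],
    [165, 60, 45],
]
y_train = ["male", "male", "female", "female"]  # known labels

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predict the label and class probabilities for a new, unseen customer
new_customer = [[170, 68, 47]]
print(model.predict(new_customer))        # predicted label
print(model.predict_proba(new_customer))  # probability split between the classes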
Regression

When the output is a continuous value, the task is a regression. For
instance, a financial analyst may need to forecast the value of a
stock based on a range of features such as equity, past stock
performance, and macroeconomic indices. The system is trained to
estimate the price of the stock with the lowest possible error.
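A corresponding regression sketch (again a rough illustration with
invented numbers, assuming scikit-learn):

from sklearn.linear_model import LinearRegression

# Hypothetical rows: [equity, last_performance, macro_index]; target: stock price
X_train = [
    [1.2, 0.03, 101.5],
    [0.9, -0.01, 99.8],
    [1.5, 0.05, 102.3],
    [1.1, 0.00, 100.4],
]
y_train = [54.2, 48.9, 60.1, 51.7]  # continuous output values

model = LinearRegression()
model.fit(X_train, y_train)

print(model.predict([[1.3, 0.02, 101.0]]))  # estimated price for new inputs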

2. Unsupervised learning

In unsupervised learning, an algorithm explores input data without
being given an explicit output variable (e.g., it explores customer
demographic data to identify patterns).

You can use it when you do not know how to organize the
data, and you want the algorithm to find patterns and
categorize the data for you.
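A minimal unsupervised-learning sketch (assuming scikit-learn; the
choice of k-means and the customer numbers below are illustrative
assumptions, not part of the report):

from sklearn.cluster import KMeans

# Hypothetical rows: [age, annual_spend]; note there are no output labels
X = [
    [22, 300], [25, 350], [47, 1200],
    [52, 1100], [46, 1300], [23, 280],
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels)  # cluster assignments: two discovered customer groups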
Example: the way students prepare for exams. While preparing for
exams, students do not simply cram the subject but try to learn it
with full understanding. Before the examination, they feed their
machine (the brain) with a good quantity of high-quality data
(questions and answers from different books, teachers' notes, or
online video lectures). In effect, they are training their brain with
input as well as output, i.e., what strategy or logic to use to solve
various types of questions. Each time they solve practice test
papers, they measure their performance (accuracy/score) by comparing
their answers with the given answer key. Slowly, the performance
keeps improving, and they gain more confidence in the adopted method.
That is how actual models are built: train the machine with data
(both inputs and outputs are given to the model), and when the time
comes, test it on data (with input only) and score the model by
comparing its answers with the actual outputs, which were not
provided during training. Researchers are working hard to improve
algorithms and methods so that these models perform even better.

Basic Distinction between ML and Traditional Programming

• Traditional Programming: We feed in DATA (input) + PROGRAM (logic),
run it on the machine, and obtain the output.

• Machine Learning: We feed in DATA (input) + OUTPUT, run it on the
machine during training, and the machine creates its own program
(logic), which can be evaluated during testing. A small sketch of
this distinction follows below.
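A rough sketch of the contrast (Python is assumed; the spam rule, the
threshold, and the tiny dataset are hypothetical, chosen only to make
the distinction concrete):

# Traditional programming: the logic is written by hand.
def is_spam_traditional(num_links):
    return num_links > 5  # rule chosen by the programmer

# Machine learning: the logic is inferred from data (inputs + outputs).
from sklearn.tree import DecisionTreeClassifier

X_train = [[0], [1], [2], [7], [9], [12]]   # input: number of links
y_train = [0, 0, 0, 1, 1, 1]                # output: spam (1) or not (0)

learned_model = DecisionTreeClassifier().fit(X_train, y_train)

print(is_spam_traditional(8))        # decision from the hand-written rule
print(learned_model.predict([[8]]))  # decision from the learned program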
What exactly does "learning" mean for a computer? A computer is said
to learn from experience with respect to some class of tasks if its
performance at those tasks improves with experience.

A computer program is said to learn from experience E with respect to
some class of tasks T and performance measure P, if its performance
at tasks in T, as measured by P, improves with experience E.
Example: playing checkers.
E = the experience of playing numerous games of checkers.
T = the task of playing checkers.
P = the probability that the program will win the next game.
In general, any machine learning problem can be assigned to one of
two broad categories: supervised learning and unsupervised learning.
How does ML work?

Machine learning is the "brain" where all the learning takes place.
The way the machine learns is similar to the way a human being
learns. Humans learn from experience: the more we know, the more
easily we can predict. By analogy, when we face an unknown situation,
the probability of success is lower than in a known situation.
Machines are trained the same way. To make an accurate prediction,
the machine sees an example. When we give the machine a similar
example, it can figure out the outcome. However, like a human, if it
is fed a previously unseen example, the machine has difficulty
predicting.

The core objectives of machine learning are learning and inference.
First of all, the machine learns through the discovery of patterns.
This discovery is made thanks to the data. One essential task of the
data scientist is to select carefully which data to provide to the
machine. The list of attributes used to solve a problem is called a
feature vector. You can think of a feature vector as a subset of the
data that is used to tackle a problem.

The machine uses some fancy algorithms to simplify reality and
transform this discovery into a model. Therefore, the learning stage
is used to describe the data and summarize it into a model.
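As a small illustration, a feature vector can be represented as an
ordered array of attribute values (the attribute names and numbers
below are hypothetical; NumPy is assumed for convenience):

import numpy as np

feature_names = ["height_cm", "weight_kg", "salary", "items_per_basket"]
customer_features = np.array([172.0, 70.0, 46000.0, 4.0])  # one example

print(dict(zip(feature_names, customer_features)))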
• Collecting past data in any form suitable for processing. The
better the quality of the data, the more suitable it will be for
modeling.

• Data processing – Often, the data gathered is in raw form and needs
to be pre-processed. For example, some tuples may have missing values
for certain attributes, and in this case the missing values have to
be filled with suitable values before machine learning or any form of
data mining can be performed. Missing values for numerical attributes
such as the price of a house may be replaced with the mean value of
the attribute, whereas missing values for categorical attributes may
be replaced with the most frequent value (the mode). This invariably
depends on the type of filters we use. If the data is in the form of
text or images, it needs to be converted to numerical form, be it a
list, an array, or a matrix. Simply put, the data must be made
relevant and consistent and transformed into a format the machine can
understand.

• Splitting the input data into training, cross-validation, and test
sets. A typical ratio between the respective sets is 6:2:2.

• Building models with suitable algorithms and techniques on the
training set.

• Testing our model with data that was not provided to the model at
the time of training and assessing its performance using metrics such
as F1 score, accuracy, and recall. (A combined sketch of these steps
appears after this list.)
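A compact sketch of the workflow above (assuming Python with
scikit-learn and NumPy; the dataset is synthetic and exists only to
make the steps concrete):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, recall_score

# Synthetic dataset with a few missing values injected into one column
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X[::20, 0] = np.nan

# Fill missing numerical values with the column mean
X = SimpleImputer(strategy="mean").fit_transform(X)

# 60% training, 20% cross-validation, 20% test
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

# Build a model on the training set
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on data never seen during training
y_pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("F1 score:", f1_score(y_test, y_pred))
print("recall:  ", recall_score(y_test, y_pred))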

Basic knowledge required for machine learning:

• Linear Algebra
• Statistics and Probability
• Calculus
• Graph theory
• Programming skills – languages such as Python, R, MATLAB, C++, or
Octave.

Restrictions of Machine Learning:

1. The primary challenge of machine learning is the shortage of data
or the lack of diversity in the dataset.
2. A machine cannot learn if there is no data available. Similarly, a
dataset with a shortage of diversity gives the machine a hard time.
3. A machine needs heterogeneity in the data to learn meaningful
insights.
4. It is rare that an algorithm can extract information when there
are no or few variations.
5. It is advisable to have at least 20 observations per group to help
the machine learn; too little data leads to poor evaluation and
prediction.

Application of Machine Learning

Let us now look at some applications of machine learning:

Augmentation:
Machine learning helps humans with their day-to-day tasks, personally
or commercially, without taking complete control of the output. Such
machine learning is used in various ways, such as virtual assistants,
data analysis, and software solutions. The primary aim is to decrease
mistakes due to human bias.

Automation:
Machine learning works completely autonomously in some fields without
the need for any human intervention. For example, robots execute the
necessary process steps in manufacturing plants.

Finance industry:
Machine learning is growing in popularity in the finance industry.
Banks mostly use ML to find patterns in data, but also to prevent
fraud.

Government organizations:
Governments make use of ML to manage public safety and utilities.
Take the example of China with its massive face recognition program;
the government uses artificial intelligence to prevent jaywalking.

Healthcare industry:
Healthcare was one of the first industries to use machine learning,
with image detection.

Marketing:
AI is used widely in marketing thanks to abundant access to data.
Before the age of mass data, researchers developed advanced
mathematical tools such as Bayesian analysis to estimate the value of
a customer. With the boom of data, marketing departments rely on AI
to optimize customer relationships and marketing campaigns.

History of Machine Learning

Some decades ago (about 40-50 years), machine learning was science
fiction, but today it is a part of our everyday life. Machine
learning is making our day-to-day life effortless, from self-driving
cars to the Amazon virtual assistant "Alexa". However, the idea of
machine learning is quite old and has a long history. Below are some
milestones that have appeared in the history of machine learning:

The early history of machine learning (pre-1940):

o 1834: Charles Babbage, the father of the computer, conceived a
device that could be programmed with punch cards. The machine was
never built, but all modern computers rely on its logical structure.
o 1936: Alan Turing gave a theory of how a machine can determine and
execute a set of instructions.

The generation of stored-program computers:

o 1940s: "ENIAC", the first electronic general-purpose computer, was
built; it had to be programmed manually. After that, stored-program
computers such as EDSAC in 1949 and EDVAC in 1951 were developed.
o 1943: A human neural network was first modeled with an electrical
circuit. In 1950, scientists began using this idea to analyze how
human neurons might work.

Computing machinery and intelligence:

o 1950: Alan Turing published a seminal paper, "Computing Machinery
and Intelligence," on the topic of artificial intelligence. In the
paper, he asked, "Can machines think?"

Machine intelligence in games:

o 1952: Arthur Samuel, a pioneer of machine learning, developed a
program that helped an IBM computer play checkers. It performed
better the more it played.
o 1959: The term "machine learning" was first coined by Arthur
Samuel.

The first "AI" winter:

o The duration of 1974 to 1980 was a hard time for AI and ML


researchers, and this duration was called AI winter.
o During this duration, the failure of machine translation
appeared, and people reduced their interest in AI, which led
to less funding by the government for the research.

Machine learning from theory to reality:

o 1959: The first neural network was applied to a real-world problem,
removing echoes over phone lines using an adaptive filter.
o 1985: Terry Sejnowski and Charles Rosenberg developed a neural
network, NETtalk, which was able to teach itself to correctly
pronounce 20,000 words in one week.
o 1997: IBM's Deep Blue computer won a chess game against the chess
expert Garry Kasparov and became the first computer to beat a human
chess champion.

Machine learning at present:

Machine learning has now seen outstanding advances in research, and
it is present everywhere around us, such as in self-driving cars,
Amazon Alexa, chatbots, recommender systems, and much more. It
includes supervised, unsupervised, and reinforcement learning, with
algorithms such as clustering, classification, decision trees, and
SVMs.

Modern machine learning models can be used to make various
predictions, including weather prediction, disease prediction, stock
market analysis, etc.

2.0 Actual Resources Used

Sr. no. | Name of resource material | Specifications                                                     | Quantity
1       | Computer system           | 8 GB RAM, Windows 11 OS                                            | 1
2       | Internet                  | YouTube / Wikipedia                                                | -
3       | Textbook/manual           | ETI (Emerging Trends in Computer & Information Technology), 22618  | 1

3.0 Skills Developed

1. Teamwork
2. Communication skills
3. Ability to gather all relevant information about machine learning.

4.0 Outputs of the Micro-Project

We successfully gathered all the basic information about machine
learning.

Machine learning is a system of computer algorithms that can learn
from examples through self-improvement without being explicitly
programmed by a programmer. Machine learning is an element of
artificial intelligence that combines data with statistical tools to
predict an output that can be used to create actionable insights.

The breakthrough comes with the idea that a machine can learn on its
own from data (i.e., examples) to produce accurate results. Machine
learning is closely related to data mining and Bayesian predictive
modeling. The machine receives data as input and uses an algorithm to
formulate answers.
