BELAGAVI-590018
An Internship Report on
“Artificial Intelligence & Machine Learning”
Submitted in Partial Fulfillment of the Requirements for the Award of the
Degree of Bachelor of Engineering
in
ELECTRONICS AND COMMUNICATION ENGINEERING
Submitted by
SATISH PATIL
(2JI19EC112)
CERTIFICATE
Certified that the Internship entitled “Artificial Intelligence & Machine Learning”
is carried out by Mr. Satish Patil, USN: 2JI19EC112, a bonafide student of the Department of
Electronics and Communication Engineering, Jain College of Engineering, Belagavi, in
partial fulfilment of the requirements for the award of the degree of Bachelor of Engineering
in Electronics and Communication Engineering of Visvesvaraya Technological University,
Belagavi, during the year 2022-2023. It is certified that all corrections/suggestions indicated
during Continuous Internal Evaluation have been incorporated in the report. The internship
report has been approved as it satisfies the academic requirements in respect of the Internship
prescribed for the said degree.
1. _________________________ __________________
2. __________________________ __________________
DECLARATION
I, Satish Patil, hereby declare that my internship was carried out at “Aqmenz Automation
Pvt. Ltd.” on the topic “Artificial Intelligence & Machine Learning”, and that this report,
submitted by me to the Department of Electronics and Communication Engineering, Jain College
of Engineering, Belagavi, was prepared under the supervision of Prof. Vinayak Dalavi. The
report is for academic purposes only.
Department VISION
“To impart quality technical education for developing globally competent,
ethically sound Electronics & Communication Engineers”
Department MISSION
1. To provide a conducive environment through a structured, student-centric
teaching-learning process.
2. To nurture the needs of society by infusing scientific temper in students
and to grow as a centre of excellence with efficient industry-institute
interaction.
3. To inculcate self-learning skills, entrepreneurial ability and
professional ethics.
Jain College of Engineering, Belagavi
Department of Electronics & Communication Engineering
Electronics Engineering Graduates will be able to achieve the following:
Subject: Internship        Subject Code: 18ECI85
Course Objectives:
1. Exposure to the current technological developments relevant to the subject area of training.
2. Learn to apply technical knowledge in real industrial situations.
3. Gain experience in writing technical reports/projects.
4. Expose students to an engineer’s responsibilities and ethics.
5. Expose the students to future employers.
CO-PO/PSO Mapping:
L1: Remembering L2: Understanding L3: Applying L4: Analyzing L5: Evaluating L6: Creating
Course Outcomes | Description | Bloom’s Cognitive Level
18ECI85.1 | Articulate and apply principles learned in the classroom to the specific internship site experience. | L3
18ECI85.2 | Develop work competencies for a specific profession or occupation. | L3
18ECI85.3 | Will be able to use modern tools and processes to solve problems. | L3
18ECI85.4 | Present thoughts and ideas clearly and effectively (oral and written communication, report writing, presentation skills). | L3
18ECI85.5 | Explore career options and gain general work experience. | L5
COs: PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
18ECI85.1: 2, 1
18ECI85.2: 1, 2, 1
18ECI85.3: 1, 1, 2
18ECI85.4: 1, 2
18ECI85.5: 1, 2
Avg: 0.667, 0.667, 0.2, 0.4, 0.2, 0.2, 0.4, 0.4
CO-PO Justification
CO-1: PO1(2) Students will gain engineering knowledge.
      PO2(1) Students will analyse and solve problems with different tools.
CO-2: PO2(2) Students will analyse and solve problems with different tools.
CO-3: PO2(1) Students will analyse and solve problems with different tools.
CO-4: PO9(1) Students will develop the ability to work effectively as members of teams,
      preferably multi-disciplinary ones.
      PO10(2) Students will gain soft skills and develop report-writing capability.
CO-5: PO8(1) Students can work ethically and professionally in the workplace.
      PO12(2) Students will learn to implement knowledge into practice and innovate.
Name: Prof. S. B. Shindhe        Name: Dr. Krupa Rasane        Name: Dr. Krupa Rasane
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
ABSTRACT
Artificial intelligence (AI) and machine learning (ML) are two of the most rapidly
growing and important fields in computer science. AI is the study of how to create
computers that can “think” for themselves, while ML is the study of how to create
computers that can learn from data. Both AI and ML have been used for years in a wide
variety of applications, including search engines, spam filtering, medical diagnosis, and
voice recognition. In recent years, there has been a huge increase in the amount of data
available, and as a result, there has been a corresponding increase in the use of AI and
ML techniques to extract information from data. In this report, we give a brief
overview of AI and ML and discuss some applications of these techniques in data
science.
TABLE OF CONTENTS
CHAPTER 1  INTRODUCTION
CHAPTER 2  COMPANY PROFILE
  2.1.1 COMPANY PROFILE
CHAPTER 3  WORK CARRIED OUT
  3.1 INTRODUCTION TO MACHINE LEARNING USING PYTHON AND JUPYTER NOTEBOOK
  3.2 PYTHON
  3.3 JUPYTER NOTEBOOK
  3.4 DATA SCIENCE
  3.5 MACHINE LEARNING
  3.6 VARIOUS PYTHON LIBRARIES USED IN THE PROJECT
  3.7 PROJECT
CHAPTER 4  SKILLS ACQUIRED
  4.1 SCOPE OF THE SKILLS ACQUIRED
  4.2 APPLICATIONS
CHAPTER 5  CONCLUSION
CHAPTER 1
INTRODUCTION
Internships play a crucial role in shaping one's career. They not only help
undergraduates and graduates gain real exposure to working environments but
also help them develop the skills required to stand out in a saturated job
market, and they highlight the potential of a candidate during hiring.

Career Development: Generally, an internship is a task-specific exchange of
service for experience between a student and a business. Within internships,
classroom concepts suddenly become real tools of the trade as you interact and
learn in a professional setting. Internship experiences are formal, formative,
and foundational to your career. Developing your knowledge of workplace
collaboration, business etiquette, and strong communication tactics are among
the vital “soft skills” that can only be gained through hands-on experience.

Character Growth: Not only do internships help develop your professionalism,
but they also encourage character growth. Many employers even value personal
qualities over professional knowledge when it comes to employment.

A Door to Opportunity: Internships are foundational in preparing students for
the workforce and providing opportunities after graduation. Most employers seek
career-ready college graduates who have been equipped with prior experience and
skills in a given field.
CHAPTER 2
COMPANY PROFILE
CHAPTER 3
WORK CARRIED OUT
3.2 Python:
The combination of features that gives the Python language an advantage over others
has led to its wide range of applications. Among the advantages of Python
programming are its simple, readable syntax, a large standard library, a vast
ecosystem of third-party packages for data science and machine learning, and
portability across platforms.
3.3 Jupyter Notebook:
Jupyter Notebook is an open-source web application that allows users to create and
share documents that contain live code, equations, visualizations, and narrative text. It
is widely used by data scientists, researchers, and educators to perform data analysis,
numerical simulations, and machine learning tasks.
Jupyter Notebook is built on top of the IPython kernel, which supports various
programming languages, including Python, Julia, R, and others. The notebook interface
provides a convenient way to write and execute code, visualize data, and document the
analysis process. The notebooks can be saved in various formats, including HTML,
PDF, and Markdown, making them easy to share and collaborate on with others.
Here are the main steps involved in using Jupyter Notebook for machine learning
with Python:
1. Install the required libraries: You must install the necessary Python libraries,
including NumPy, Pandas, Scikit-learn, TensorFlow, and Keras, before using Jupyter
Notebook for machine learning.
2. Load the dataset: Using a library like Pandas, load the dataset into Jupyter
Notebook so that it may be used for machine learning.
3. Explore the dataset: Use data visualization and summary statistics to learn more
about the dataset.
4. Prepare the data: Clean and transform the dataset. This entails actions like
eliminating null values, scaling the features, and encoding categorical variables.
5. Split the data: To evaluate the machine learning model's performance, divide
the dataset into training and testing sets.
6. Train the machine learning model: Train the model using a library like
Scikit-learn, TensorFlow, or Keras.
7. Evaluate the model: Evaluate the performance of the model using metrics like
accuracy, precision, recall, and F1 score.
8. Fine-tune the model: Adjust the hyperparameters and explore different
algorithms and architectures.
9. Predict on new data: Use the trained model to make predictions on new,
unseen data.
Jupyter Notebook allows you to easily perform these tasks in a single environment,
making it a powerful tool for machine learning using Python.
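The steps above can be sketched end-to-end in a few lines of scikit-learn code. This is a minimal illustration using the library's built-in Iris dataset rather than the project's data:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Step 2: load a (built-in) dataset
X, y = load_iris(return_X_y=True)

# Step 5: split into training and testing sets
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=2)

# Step 6: train a model
model = LogisticRegression(max_iter=1000)
model.fit(xtrain, ytrain)

# Step 7: evaluate with an accuracy metric
acc = accuracy_score(ytest, model.predict(xtest))
```

Steps 3, 4, 8, and 9 follow the same pattern: each is one or two library calls, typically executed in its own notebook cell.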
Overall, Jupyter Notebook is a powerful and versatile tool that provides a convenient
and flexible environment for data analysis and scientific computing. Its popularity and
user community make it a valuable resource for anyone working in data science or
related fields.
3.4 Data Science:
Data Science is a multidisciplinary field that involves the use of statistical and
computational methods to extract insights and knowledge from data. It
combines techniques from statistics, computer science, and domain-specific
knowledge to analyze, interpret, and visualize complex data sets.
Data Science typically involves several steps, including data collection, data
cleaning, data preprocessing, exploratory data analysis, modeling, and
communication of results. Data scientists use programming languages like
Python, R, or SQL, as well as machine learning algorithms and statistical
models to uncover patterns and relationships in data.
“Data science” is just about as broad a term as they come. It may be easiest
to describe what it is by listing its more concrete components:
5) Data storage and big data frameworks: Big data is best defined as data that
is either literally too large to reside on a single machine, or that can’t be
processed in the absence of a distributed environment. The Python bindings to
Apache technologies play heavily here. Included here: Apache Spark; Apache
Hadoop; HDFS; Dask; h5py/pytables.
6) Odds and ends: Includes subtopics such as natural language processing and
image manipulation with libraries such as OpenCV. Included here: nltk; spaCy;
OpenCV/cv2; scikit-image; Cython.
Problem Statement :
3.5 MACHINE LEARNING:

3.6 VARIOUS PYTHON LIBRARIES USED IN THE PROJECT:
• NumPy:
NumPy is the fundamental package for scientific computing with Python. Among
other things, it contains:
1) a powerful N-dimensional array object;
2) sophisticated (broadcasting) functions;
3) tools for integrating C/C++ and Fortran code;
4) useful linear algebra, Fourier transform, and random number capabilities.
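The first two capabilities can be shown in a short sketch: an N-dimensional array, and broadcasting, which lets arrays of different shapes combine without explicit loops:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # a 2-D (N-dimensional) array: [[0 1 2], [3 4 5]]
col_means = a.mean(axis=0)       # per-column means: [1.5 2.5 3.5]

# Broadcasting: the shape-(3,) vector is stretched across both rows
# of the shape-(2, 3) array, centering each column at zero
centered = a - col_means
```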
• Pandas
Pandas is an open-source, BSD-licensed Python library providing high-performance,
easy-to-use data structures and data analysis tools for the Python programming
language. Python with Pandas is used in a wide range of academic and commercial
domains, including finance, economics, statistics, and analytics. This project
makes use of several Pandas features in practice.
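A small sketch of the Pandas operations that matter later in this project (null-value inspection and mean imputation), using a toy frame whose column names merely echo the real dataset:

```python
import pandas as pd

# Toy frame standing in for the project's dataset
df = pd.DataFrame({"Age": [22.0, None, 35.0, None],
                   "Fare": [7.25, 71.28, 8.05, 53.10]})

# Percentage of nulls per column, as computed later in the project
pct_missing = df.isnull().sum() / df.shape[0] * 100

# Mean imputation: replace missing ages with the column mean
df["Age"] = df["Age"].fillna(df["Age"].mean())
```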
• Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in
a variety of hardcopy formats and interactive environments across platforms. Matplotlib
can be used in Python scripts, the Python and IPython shells, the Jupyter notebook,
web application servers, and several graphical user interface toolkits.
For simple plotting, the pyplot module provides a MATLAB-like interface, particularly
when combined with IPython. For the power user, you have full control of line styles,
font properties, axes properties, etc., via an object-oriented interface or via a set of
functions familiar to MATLAB users.
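A minimal sketch of the object-oriented interface producing a hardcopy figure; the Agg backend is selected so the script runs without a display:

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend: render straight to file
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
fig, ax = plt.subplots()         # object-oriented interface: Figure and Axes objects
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("sine.png")          # hardcopy output in PNG format
```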
• Scikit-Learn
Scikit-learn is a popular machine learning library for the Python programming language.
It provides a variety of tools for data preprocessing, model selection, model training,
and model evaluation, making it a powerful tool for both beginners and experienced
machine learning practitioners.
1) Scikit-learn provides a clean and consistent interface to tons of different models.
2) It provides you with many options for each model, but also chooses sensible
defaults.
3) Its documentation is exceptional, and it helps you to understand the models as well
as how to use them properly.
4) It is also under active development.
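Point 1, the consistent interface, can be seen by swapping estimators behind identical fit/predict calls; synthetic data from make_classification stands in for a real dataset here:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

scores = {}
for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(random_state=0)):
    model.fit(X, y)                                    # every estimator shares this API
    scores[type(model).__name__] = model.score(X, y)   # accuracy on the training set
```

Point 2, sensible defaults, also shows here: both models are constructed with almost no arguments and still train successfully.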
3.7 Project:
Project Objective
The objective of this project is to conduct exploratory data analysis (EDA) and
statistical modeling on the Titanic dataset in order to gather insights and
eventually predict survival (0 = Not Survived, 1 = Survived). Of the 891
passengers on board the Titanic, approximately 38% survived, whereas the
majority, 62%, did not survive the disaster.
Dataset
Fig. 3.1.1: Dataset
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load the Titanic dataset (file name assumed; the original path was not shown)
data = pd.read_csv("titanic.csv")
data.head()
Fig. 3.1.3: Null data
print(data.isnull().sum()/data.shape[0]*100)
As we can see from the output, the columns ‘Age’ and ‘Cabin’ have null values.
While ‘Cabin’ has a huge amount of null values, ‘Age’ has a moderate amount.
Here we drop the ‘Cabin’ column as it consists mostly of NULL values.
Fig. 3.1.5: Filling data
Step 5: Heatmap
Fig. 3.1.6: Heatmap
Fig. 3.1.7: Data
Data info
data.info()
Fig. 3.1.8: Data info
Fig. 3.1.9: Important columns
Data preprocessing
# Data Preprocessing
# Convert categorical features into numerical ones (Sex, Embarked).
# Since there are only a few categories in the "Sex" and "Embarked"
# columns, we can use label encoding.
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
data["Sex"] = le.fit_transform(data["Sex"])
data["Embarked"] = le.fit_transform(data["Embarked"])
data
Fig. 3.1.10: Data preprocessing
Let us try to find out whether the dependent variable ‘Survived’ has any relation
with the variable ‘Sex’. To do so we can use a categorical count plot. The
following code snippet returns the required plot:
# factorplot was renamed catplot in recent versions of seaborn
sns.catplot(x="Survived", col="Sex", kind="count", data=data)
Fig. 3.1.11: Graph
X = data.drop("Survived", axis=1)
Y = data["Survived"]

from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.2, random_state=2)
print(xtrain.shape, xtest.shape, ytrain.shape, ytest.shape)
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

# Load the dataset (file name assumed; the original path was not shown)
data = pd.read_csv("titanic.csv")
data.head()

# Checking the missing-values percentage
data.isnull().sum()
print(data.isnull().sum() / data.shape[0] * 100)

# Drop 'Cabin' (mostly null) and fill 'Age' and 'Fare' with their means
data = data.drop(["Cabin"], axis=1)
data["Age"] = data["Age"].fillna(data["Age"].mean())
data["Fare"] = data["Fare"].fillna(data["Fare"].mean())
data.isnull().sum()

# Heatmap of feature correlations
sns.heatmap(data.corr(), annot=True, fmt="0.1f")

data.info()

# Name, Ticket, PassengerId features are not important; we can drop them
data = data.drop(["PassengerId", "Name", "Ticket"], axis=1)
data

# Data preprocessing: convert categorical features (Sex, Embarked) into
# numerical ones. Since there are only a few categories, label encoding works.
le = LabelEncoder()
data["Sex"] = le.fit_transform(data["Sex"])
data["Embarked"] = le.fit_transform(data["Embarked"])
data

# Count plot of survival by sex (factorplot was renamed catplot in seaborn)
sns.catplot(x="Survived", col="Sex", kind="count", data=data)

# Split the data into train & test sets
X = data.drop("Survived", axis=1)
Y = data["Survived"]
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.2, random_state=2)
print(xtrain.shape, xtest.shape, ytrain.shape, ytest.shape)
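The listing stops at the train/test split; steps 6 and 7 of Chapter 3 (train and evaluate) would continue from there. A hedged sketch of that continuation, using synthetic data of the same shape in place of the Titanic frame:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the preprocessed Titanic data (891 rows, 7 features)
X, Y = make_classification(n_samples=891, n_features=7, random_state=2)

xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.2, random_state=2)

# Step 6: train a classifier on the training split
model = LogisticRegression(max_iter=1000)
model.fit(xtrain, ytrain)

# Step 7: evaluate on the held-out test split
acc = accuracy_score(ytest, model.predict(xtest))
```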
CHAPTER 4
SKILLS ACQUIRED
4.2 APPLICATIONS:
1. Image Recognition:
➢ Object detection and recognition
➢ Facial recognition
➢ Image classification
2. Speech Recognition:
➢ Voice assistants
➢ Speaker identification
➢ Language translation
3. Recommendation Systems:
➢ E-commerce
➢ Entertainment
➢ Social media, music, travel
CHAPTER 5
CONCLUSION
Overall, the internship has equipped me with skills that will be invaluable in my
future career. I am grateful for the opportunity to work with my mentors and
colleagues, and I look forward to applying my newfound knowledge and skills in my
future endeavours.