
Plagiarism Checker X Originality Report

Similarity Found: 18%

Date: Wednesday, December 09, 2020


Statistics: 630 words Plagiarized / 3341 Total words
Remarks: Low Plagiarism Detected - Your Document needs Selective Improvement.
-------------------------------------------------------------------------------------------

B.E. Project Phase-I Report on

SPEAKING SYSTEM FOR MUTE PEOPLE USING IOT

by
Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)

Under the guidance of
Guide Name

Department of Information Technology
Smt. Kashibai Navale College of Engineering, Pune-41
Accredited by NBA
SAVITRIBAI PHULE PUNE UNIVERSITY
2020-2021

Sinhgad Technical Education Society,
Department of Information Technology
Smt. Kashibai Navale College of Engineering, Pune-411041

Date:

CERTIFICATE

This is to certify that,
Candidates Name (Exam Seat No.)

Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)
of class B.E. IT have successfully completed their Project Phase-I work on
"SPEAKING SYSTEM FOR MUTE PEOPLE USING IOT" at Smt. Kashibai Navale College of
Engineering, Pune, in partial fulfillment of the Graduate Degree course in B.E.
at the Department of Information Technology, in the academic year 2020-2021,
as prescribed by the Savitribai Phule Pune University, Pune.

Name of the Guide                         Prof. R. H. Borhade
Project Guide                             Head of the Department
                                          (Department of Information Technology)

(External Examiner)                       Principal
Savitribai Phule Pune University, Pune    SKNCOE, Pune

Acknowledgement

Acknowledgements should follow the order of hierarchy: your guide, head of
department, principal, management, lab attendants, friends and family. Use a
separate paragraph for each category of acknowledgement. This may take one or
two pages; if it exceeds one page, print it back to back so that the
acknowledgement occupies a single sheet. Always apply 'justify' to every
paragraph you write in your report.

Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)
Candidates Name (Exam Seat No.)
Table of Contents

Certificate
Abstract
Acknowledgement
List of Tables
List of Figures
List of Symbols, Abbreviations and Nomenclature

1. INTRODUCTION (Topic Background/History/Criticality)
   (Minimum of 200 words, giving some briefing of the details to follow.)
   1.1 Detailed problem definition
   1.2 Justification of problem
   1.3 Need for the new system
   1.4 Advances/additions/updating the previous system
   1.5 Presently available systems for the same
   1.6 Organization of the report
2. LITERATURE SURVEY
   2.1 Related Work Done (Lit survey with citations)
   2.2 Existing System
3. PROJECT STATEMENT
   3.1 What is to be developed (Problem Definition)
   3.2 Proposed Algorithm/Methodology
4. SYSTEM REQUIREMENTS & SPECIFICATION
   4.1 H/W Requirements
   4.2 S/W Requirements
5. SYSTEM DESIGN
   5.1 Overall Architecture Diagram
   5.2 Use-Case Diagram
   5.3 Sequence Diagram
   5.4 Activity Diagram
   5.5 Class Diagram
   5.6 Deployment Diagram
   5.7 Collaboration Diagram
   5.8 Component Diagram
   5.9 DFD Level-0 Diagram (If applicable)
   5.10 DFD Level-1 Diagram (If applicable)
6. CONCLUSION
7. REFERENCES
   Annexure (attach one-page plagiarism report indicating % of copied contents)
List of Tables

Table No.  Title                                                        Page No.
1.1   General description of CEFLE corpus and the subcorpus.            12
3.1   Filter coefficients of db4 wavelet transform.                     36
3.2   Classification accuracy for individual texture features computed
      for synthetic images, S1, S2 and real SAR image.                  39

List of Figures

Figure No.  Title                                                       Page No.
1.1   Location and topography of Roorkee, India.                        10
1.2   Location and aerial view of New Orleans (Google Earth, 2008).     11
1.3   Preprocessing of ERS-2 SAR image.                                 13
1.4   Preprocessing of RADARSAT-1 image.                                16
3.1   Scaling and wavelet function of fourth-order Daubechies wavelet
      transform (db4).                                                  36

Note: Figure numbers should be given as a.b, where 'a' corresponds to the
chapter number and 'b' corresponds to the figure number inside the chapter.

Acronyms

ART     Adaptive Resonance Theory
AVHRR   Advanced Very High Resolution Radiometer
CEOS    Committee on Earth Observation Satellites
CP      Changed Pixels
CSA     Canadian Space Agency
DCT     Discrete Cosine Transform
DEM     Digital Elevation Model
DInSAR  Differential Interferometric SAR
DN      Digital Number

Note: Acronyms should be alphabetically sorted.
Abstract (Minimum 300 words & maximum 500 words)
Chapter 1 Introduction

Motivation
It is very difficult for mute people to convey their message to regular
people. Since regular people are not trained in hand sign language,
communication becomes very difficult. In emergencies, or when a mute person is
travelling or among new people, communicating with those nearby or conveying a
message becomes very difficult. Here we propose a smart speaking system that
helps mute people convey their message to regular people using hand motions
and gestures.

Justification of problem
The proposed smart speaking system translates hand motions and gestures into
readable and audible output, so that a mute person can communicate with
regular people who have no training in sign language.

Need for the new system
Mute people cannot speak, and normal people do not know the sign language that
mute people use to communicate among themselves. This system will be useful in
solving this problem.

Advances/additions/updating the previous system
A smart wearable hand device serving as a sign interpretation system with a
built-in SVM classifier is implemented in [1], but that system is intended for
blind people; here we create a system for mute people as well.
Presently available systems for the same
- Smart Wearable Hand Device for Sign Language Interpretation System with Sensors Fusion
- Hand Gesture Movement Tracking System for Human Computer Interaction
- Vision-Based Sign Language Translation Device

Organization of the report
The proposed report is divided into six chapters. Chapter 1 introduces the
subject, clarifies the reason for taking up this project, the need for this
system in the medical domain, how the system justifies the formulated problem,
advances over or additions to the previous system, and the systems presently
available for the same problem. Chapter 2 discusses different papers related
to the proposed system; a table outlines the referred papers and their
advantages over previous systems. Chapter 3 discusses the problem and the
proposed methodology; since this project uses SVM, SVM is discussed
thoroughly. Chapter 4 analyses the hardware and software required for the
proposed system. Chapter 5 covers the detailed design of the proposed system,
including the system architecture and UML diagrams such as the use-case,
component and sequence diagrams. Chapter 6 draws conclusions about the
proposed system.

Chapter 2 Literature Survey

In [1], the authors successfully designed and implemented a novel smart
wearable hand device as a sign interpretation system using a built-in SVM
classifier. An Android-based mobile application was developed to demonstrate
the usability of the proposed smart wearable device with an available
text-to-speech service. Jian Wu and Lu Sun [2] proposed a wearable real-time
American Sign Language recognition system. Feature selection is performed to
select the best subset of features from a large number of well-established
features, and four popular classification algorithms are investigated for the
system design.
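The feature-selection-plus-classifier-comparison step described above can be
illustrated with a generic sketch; this is not the authors' actual IMU/EMG
pipeline, and the synthetic features, scikit-learn library and classifier
choices are all assumptions made for illustration:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in sensor features; a real system would compute statistics from
# IMU and surface-EMG signals.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

# Select the best subset of features before classification.
X_best = SelectKBest(f_classif, k=10).fit_transform(X, y)

# Compare four popular classification algorithms by cross-validation.
for name, clf in [("SVM", SVC()),
                  ("kNN", KNeighborsClassifier()),
                  ("Naive Bayes", GaussianNB()),
                  ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    score = cross_val_score(clf, X_best, y, cv=5).mean()
    print(name, "accuracy:", round(score, 3))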

The prototype architecture of the application comprises a central
computational module that applies the CamShift technique for tracking hands
and their gestures. A Haar-like classifier is responsible for locating the
hand position and classifying the gesture. The virtual objects are produced
using the OpenGL library [3]. In [4], a hand-tracking-based virtual mouse
application has been developed and implemented using a webcam.

The system has been implemented in the MATLAB environment using the MATLAB
Image Processing Toolbox. It can recognize and track hand movement and can
replace the mouse, both moving the cursor and performing the click function.
In general, the system can detect and track hand movement so that it can be
used as a real-time user interface.
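The camera-based hand tracking described in [3] and [4] can be sketched with
OpenCV's CamShift; this is a minimal illustration, not the authors'
implementations, and it assumes an initial hand window that a real system
would obtain from a detector:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()                    # assumes a working webcam

x, y, w, h = 200, 150, 100, 100           # assumed initial hand region
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# A hue histogram of the hand region drives the back-projection.
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window position and size as the hand moves.
    rot_rect, (x, y, w, h) = cv2.CamShift(back_proj, (x, y, w, h), term)
    pts = np.int32(cv2.boxPoints(rot_rect))
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()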

Yellapu Madhuri et al. [5] present a mobile vision-based sign language
translation device for automatic translation of Indian sign language into
English speech, to assist hearing- and/or speech-impaired people in
communicating with hearing people. The system is an interactive application
program developed using LabVIEW software and incorporated into a mobile phone.
It is able to recognize one-handed sign representations of alphabets (A-Z) and
numbers (0-9). Siddharth S. Rautaray and Anupam Agrawal [6] design a system
for gestural interaction between a user and a computer in a dynamic
environment. The gesture recognition system uses image processing techniques
for detection, segmentation, tracking and recognition of hand gestures, and
converts them into meaningful commands. The proposed interface can be applied
to many different applications, such as image browsers and games. G.

Simion et al. [7] studied advances in the field of vision-based hand gesture
recognition from both hardware and software points of view, and reviewed major
trends and the field's recent evolution. While providing a non-exhaustive
inventory of the huge amount of past research in the field, the paper reviews
in more detail part-based approaches, particularly those embedded in the
compositional framework, an emerging dominant trend in computer vision.

Getting daily information from the internet has become part of most people's
living habits today. In order to reduce the steps needed to receive that
information, such as complex mouse or keyboard actions, paper [8] proposes a
system designed for easily getting daily information without mouse and
keyboard actions. N. Subhash Chandra, T. Venu and P. Srikanth [9] developed a
simple and fast motion-image-based algorithm. Gesture recognition deals with
interpreting human gestures via mathematical algorithms; in general, the
approach is suitable for controlling home appliances using hand gestures. The
system proposed in [10] by Lee et al. is divided into three modules: a
processing module, a sensor module, and a communication module. The sensor
module, consisting of three BNO055 absolute orientation sensors, is placed on
the thumb, index finger, and the back of the hand.

The magnetometer is used to remove orientation readings caused by gravity.


Chapter 3 Project Statement

Problem Definition
Our society includes people with disabilities. Technology is developing day by
day, but few significant developments have been undertaken for the betterment
of these people. A very large number of people in the world are deaf or mute,
and communication between a deaf-mute person and a normal person has always
been a challenging task. Sign language helps deaf and mute people communicate
with other people, but not all people understand sign language.

Proposed Algorithm/Methodology
Support vector machines (SVMs) are powerful yet flexible supervised machine
learning algorithms used for both classification and regression, though they
are generally used in classification problems. SVMs were first introduced in
the 1960s and later refined in the 1990s.

SVMs have a unique way of implementation compared to other machine learning
algorithms. Lately they have become extremely popular because of their ability
to handle multiple continuous and categorical variables. An SVM model is
basically a representation of different classes separated by a hyperplane in a
multidimensional space. The hyperplane is generated in an iterative manner by
the SVM so that the classification error is minimized. The goal of SVM is to
divide the dataset into classes by finding a maximum marginal hyperplane
(MMH).

The following are important concepts in SVM:
- Support vectors: the data points closest to the hyperplane. The separating
  line is defined with the help of these data points.
- Hyperplane: the decision plane or space that divides a set of objects
  belonging to different classes.
- Margin: the gap between the two lines through the closest data points of
  different classes, calculated as the perpendicular distance from the line to
  the support vectors. A large margin is considered a good margin, and a small
  margin is considered a bad margin.

The main goal of SVM is to divide the dataset into classes by finding a
maximum marginal hyperplane (MMH), which is done in the following two steps:
first, the SVM iteratively generates hyperplanes that segregate the classes in
the best way; then it chooses the hyperplane that separates the classes
correctly.
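A minimal sketch of these concepts in Python, assuming scikit-learn (the
report specifies Python but not a library): a linear SVM is fitted to toy 2-D
data, and its support vectors, hyperplane and margin are inspected.

import numpy as np
from sklearn import svm

# Toy 2-D data with two linearly separable classes.
X = np.array([[1, 2], [2, 3], [3, 3], [6, 5], [7, 8], [8, 8]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM finds the maximum marginal hyperplane between the classes.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print("support vectors:", clf.support_vectors_)  # points closest to the hyperplane
print("w:", clf.coef_, "b:", clf.intercept_)     # hyperplane w.x + b = 0
print("margin width:", 2 / np.linalg.norm(clf.coef_))  # 2 / ||w|| for a linear SVM
print("prediction for [4, 4]:", clf.predict([[4.0, 4.0]]))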
In practice, the SVM algorithm is implemented with a kernel that transforms
the input data space into the required form. In simple words, a kernel
converts a non-separable problem into a separable one by adding more
dimensions to it. This makes SVM more powerful, flexible and accurate. The
following are some of the types of kernels used by SVM.
Linear Kernel
It can be used as a dot product between any two observations. The formula of
the linear kernel is:

    K(x, x_i) = sum(x * x_i)

From the above formula, we can see that the product between two vectors x and
x_i is the sum of the multiplication of each pair of input values.

Polynomial Kernel
It is a more generalized form of the linear kernel and can distinguish curved
or nonlinear input spaces. The formula for the polynomial kernel is:

    K(x, x_i) = 1 + sum(x * x_i)^d

Here d is the degree of the polynomial, which we need to specify manually in
the learning algorithm.

Radial Basis Function (RBF) Kernel
The RBF kernel, mostly used in SVM classification, maps the input space into
an infinite-dimensional space. The following formula explains it
mathematically:

    K(x, x_i) = exp(-gamma * sum((x - x_i)^2))

Here, gamma ranges from 0 to 1.

We need to specify gamma manually in the learning algorithm; a good default
value of gamma is 0.1. Just as SVM can be implemented for linearly separable
data, it can be implemented in Python for data that is not linearly separable
by using kernels.
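A short sketch of the kernels above in Python, with scikit-learn's SVC applied
to data that is not linearly separable. The library choice is an assumption,
and the polynomial kernel is written in its common (1 + sum(x * x_i))^d form:

import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# The kernel formulas written out in NumPy for two feature vectors x and xi.
def linear_kernel(x, xi):
    return np.sum(x * xi)

def polynomial_kernel(x, xi, d=3):
    return (1 + np.sum(x * xi)) ** d

def rbf_kernel(x, xi, gamma=0.1):
    return np.exp(-gamma * np.sum((x - xi) ** 2))

# Concentric circles are not linearly separable; the RBF kernel separates
# them by implicitly adding dimensions.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, gamma=0.1, degree=3).fit(X, y)
    print(kernel, "training accuracy:", round(clf.score(X, y), 3))

On this dataset the linear kernel performs near chance level, while the RBF
kernel fits the circular boundary, which is the motivation for using kernels
on non-separable data.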
Chapter 4 System Requirements and Specifications

4.1 H/W Requirements
Processor  : Intel Core i3
Speed      : 1.1 GHz
RAM        : 4 GB
Hard Disk  : 50 GB
Keyboard   : Standard Windows keyboard
Mouse      : Two- or three-button mouse
Monitor    : SVGA

4.2 S/W Requirements
Operating system : Windows 7
Coding language  : Python
Database         : MySQL

Chapter 5 System Design

5.1 Overall Architecture Diagram
[Figure: system architecture]

The camera captures an image of the hand. Pre-processing: the aim of
pre-processing is to de-noise the image and subtract the background from it.
RoI (Region of Interest) extraction: as the name suggests, extracting only
that part of the image which is of interest, i.e., which carries meaningful
information. Database: it consists of different hand gestures, each mapped to
a user-defined readable word or sentence. Classification: classifying the hand
gesture into readable and understandable words or sentences; the classifier
compares the input hand gesture with the hand gestures present in the database
and classifies it into the corresponding sentence. The classified sentence is
then sent to a server (Firebase server). The Android app fetches this sentence
from the server and displays it in the app. The app also plays audio of the
sentences for better results.
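A minimal end-to-end sketch of this pipeline in Python, under several
assumptions: a trained scikit-learn SVM saved as gesture_svm.pkl, a small
label-to-sentence mapping standing in for the gesture database, and a
placeholder Firebase Realtime Database URL (all names are illustrative, not
the project's actual implementation):

import cv2
import joblib
import requests

FIREBASE_URL = "https://example-project.firebaseio.com/sentence.json"  # placeholder
SENTENCES = {0: "Hello", 1: "I need water", 2: "Call a doctor"}        # assumed labels

clf = joblib.load("gesture_svm.pkl")          # hypothetical trained SVM model
bg_subtractor = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pre-processing: de-noise the frame and subtract the background.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    mask = bg_subtractor.apply(blurred)
    # RoI extraction: keep the largest foreground contour (assumed hand).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    roi = cv2.resize(mask[y:y + h, x:x + w], (32, 32))
    # Classification: the SVM maps the gesture to a sentence label.
    label = int(clf.predict([roi.flatten()])[0])
    sentence = SENTENCES.get(label, "")
    # Send the classified sentence to the Firebase server; the Android app
    # fetches it from there, displays it and plays the audio.
    requests.put(FIREBASE_URL, json=sentence, timeout=5)

cap.release()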

5.2 Use-Case Diagram
5.3 Sequence Diagram
5.3.1 User Sequence Diagram
5.3.2 Admin Sequence Diagram
5.4 Activity Diagram
5.5 Class Diagram
5.6 Deployment Diagram
5.7 Component Diagram
5.8 DFD Level-0 Diagram (If applicable)
Chapter 6 Conclusion

This system eliminates the communication barrier between the mute community
and normal people. It also enables communication between mute and blind
people, and is useful for speech-impaired and paralysed patients who cannot
speak properly. The project proposes a translational device for deaf-mute
people using glove technology. Further, the device will be an apt tool for the
deaf-mute community to learn gestures and words easily, and it is portable.

Chapter 7 References

B. G. Lee and S. M. Lee, "Smart Wearable Hand Device for Sign Language
Interpretation System with Sensors Fusion," ISSN 1558-1748, © 2017 IEEE.

J. Wu and L. Sun, "A Wearable System for Recognizing American Sign Language in
Real-time Using IMU and Surface EMG Sensors."

S. S. Rautaray and A. Agrawal, "Real Time Hand Gesture Recognition System for
Dynamic Applications," International Journal of UbiComp (IJU), vol. 3, no. 1,
January 2012, DOI: 10.5121/iju.2012.3103.

S. M. Chavan and S. Raveendran, "Hand Gesture Movement Tracking System for
Human Computer Interaction," International Research Journal of Engineering and
Technology (IRJET), vol. 2, issue 8, Nov. 2015, e-ISSN: 2395-0056, p-ISSN:
2395-0072.

Y. Madhuri, G. Anitha and M. Anburajan, "Vision-Based Sign Language
Translation Device."

P. R. V. Chowdary, M. N. Babu, T. V. Subbareddy, B. M. Reddy and V. Elamaran,
"Image processing algorithms for gesture recognition using MATLAB," presented
at Int. Conf. Adv. Comm. Control Comput. Technol., Ramanathapuram, India, Jan.
2015.

T. Khan and A. H. Pathan, "Hand gesture recognition based on digital image
processing using MATLAB," Int. J. Sci. Eng. R., vol. 6, no. 9, pp. 338-346,
Sep. 2015.

J. Siby, H. Kader and J. Jose, "Hand gesture recognition," Int. J. Innov.
Technol. R., vol. 3, no. 2, pp. 1946-1949, March 2015.

L. Lamberti and F. Camastra, "Real-time hand gesture recognition using a color
glove," presented at Int. Conf. Image Analy. Process., Ravenna, Italy, Sep.
2011.

Y. Iwai, K. Watanabe, Y. Yagi and M. Yachida, "Gesture recognition using
colored gloves," in Proc. 13th Int. Conf. Pattern Recog., Vienna, Austria,
Aug. 25-29, 1996.
