INTRODUCTION
Integration with the Internet implies that devices will utilize an IP address as a
unique identifier. However, due to the limited address space of IPv4 (which allows for 4.3
billion unique addresses), objects in the IoT will have to use IPv6 to accommodate the
extremely large address space required. Objects in the IoT will not only be devices with
sensory capabilities, but will also provide actuation capabilities (e.g., bulbs or locks
controlled over the Internet).
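The scale gap between the two address spaces is easy to check with a few lines of arithmetic; this snippet is purely illustrative:

```python
# Back-of-the-envelope comparison of the two address spaces discussed above.
ipv4_addresses = 2 ** 32    # about 4.3 billion
ipv6_addresses = 2 ** 128   # about 3.4 * 10^38

print(f"IPv4: {ipv4_addresses:,} addresses")
# IPv6 provides 2^96 addresses for every single IPv4 address.
print(f"IPv6 offers {ipv6_addresses // ipv4_addresses:,} addresses per IPv4 address")
```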
To a large extent, the future of the Internet of Things will not be possible without
the support of IPv6 and consequently the global adoption of IPv6 in the coming years will
be critical to the successful development of the IoT. The embedded
computing nature of many IoT devices means that low-cost computing platforms are likely
to be used. In fact, to minimize the impact of such devices on the environment and energy
consumption, low-power radios are likely to be used for connection to the Internet. Such
low-power radios do not use WiFi, or well established Cellular Network technologies, and
remain an actively developing research area.
However, the IoT will not be composed only of embedded devices, since higher
order computing devices will be needed to perform heavier duty tasks (routing, switching,
data processing, etc.). Companies such as FreeWave Technologies have developed and
manufactured low power wireless data radios (both embedded and standalone) for over 20
years to enable Machine-to-Machine applications for the industrial internet of things.
Besides the plethora of new application areas for Internet-connected automation to expand
into, the IoT is also expected to generate large amounts of aggregated, high-velocity data
from diverse locations, thereby increasing the need to better index, store and process such
data.
Diverse applications call for different deployment scenarios and requirements, which
have usually been handled by proprietary implementations. However, since the IoT is
connected to the Internet, most of the devices comprising IoT services will need to operate
utilizing standardized technologies. Prominent standardization bodies, such as
the IETF, IPSO Alliance and ETSI, are working on developing protocols, systems,
architectures and frameworks to enable the IoT.
Ubiquitous computing, once thought a difficult task, has now become a reality thanks to
advances in automatic identification, wireless communications, distributed computation
and high-speed Internet. From a data perspective alone, the amount of data generated,
stored and processed will be enormous. We have focused on making this a sensor-based
architecture in which each sensor node is as important as the sensor network itself.
Visualizing each sensor as having intelligence is the ultimate aim of any architecture in
the IoT domain.
The Internet of Things is a vision under development, and there can be many stakeholders
in this development, depending upon their interests and usage. It is still in its nascent
stages, where everybody is trying to interpret the IoT with respect to their own needs.
Sensor-based data collection, data management, data mining and the World Wide Web
are all involved in the present vision, as of course is sensor-based hardware. A simple
and broad definition of the Internet of Things, and the basic idea of this concept, is the
pervasive presence around us of a variety of things or objects – such as Radio-Frequency
Identification (RFID) tags, sensors, actuators, mobile phones, etc. – which, through unique
addressing schemes, are able to interact with each other and cooperate with their neighbours
to reach common goals.
Figure 1.2 illustrates three particular visions:
• Things Oriented Vision
This vision is supported by the fact that we can track anything using sensors and pervasive
technologies using RFID. The basic philosophy is uniquely identifying any object using the
specifications of the Electronic Product Code (EPC). This technique is extended using sensors.
It is important to appreciate that this future vision will depend upon sensors
and their capabilities to fulfill the "things"-oriented vision. We will be able to generate
data collectively with the help of sensors and sensor-type embedded systems. The
summarized vision will depend upon sensor-based networks as well as RFID-based
sensor networks, which will take care of integrating RFID technology with
sophisticated sensing and computing devices and global connectivity.
• Internet Oriented Vision
The internet-oriented vision has pressed upon the need to make smart objects which are
connected. The objects need to support the IP protocol, as it is the major protocol of the
Internet. The sensor-based object can be converted into an understandable format, can be
identified uniquely, and its attributes can be continuously monitored. This makes the base
for smart embedded objects, which can be assumed to be microcomputers having
computing resources.
• Semantic Oriented Vision
This vision is powered by the fact that the number of sensors at our disposal will be huge
and the data they collect will be massive. Thus we will have a vast amount of information,
possibly redundant, which needs to be processed meaningfully. The raw data needs to be
managed, processed and churned out in an understandable manner for better
representation and understanding.
If we are able to organize the sets of data into homogeneous and heterogeneous formats,
then the interoperability issues of understanding the data will depend upon semantic
technologies to process it. It is here that a generic vision is needed: processing the raw
data into meaningful data, with a marked separation of data and its interpretation.
Figure 1.2: Three main visions of Internet of Things.
Typically a WSN node contains interfaces to sensors, computing and processing units,
transceiver units and power supply. More sophisticated sensor nodes can communicate over
multiple frequencies.
• Middleware:
This is associated with the Internet infrastructure and the concept of service-oriented
architecture (SOA) for access to heterogeneous sensor resources. Technological advances
in WSN hardware, catering to circuits and wireless communications, have produced
robust and cost-effective devices for sensing applications.
This has led to the use of sensors in wireless communication devices in diversified
environments. Sensor data is collected and sent to a centralized, distributed or hybrid
processing module for data processing. Hence, there are several challenges WSNs have
to face in developing successful IoT communication networks.
All the objects present in the environment are fit to be the "things" of the Internet. All
these objects need an address which must be unique. This uniqueness property will pave
the way to gather information from, and even control, sensor-based devices. The Internet
Protocol is the standards-based protocol used for the internetworking methods of the
Internet. The first version, IPv4, was thought to have a huge address space, but IPv4
addresses have been exhausted, so IPv6 will be needed to address each "thing", whether a
smart embedded device or simply a sensor. Their communication mechanisms will be
Wi-Fi, DSL, Satellite, Cable, Ethernet and so forth.
The typical packet size of the communicating protocol will be around 1500 to 9000 data
bytes, and even more. Today a large amount of spatial data is also being generated, and
thus we can use metadata for connecting databases and the Internet. As happens in the
World Wide Web, operations with sensor nodes may not be possible simply by giving
unique names to the sensors. Instead, a unique addressing scheme must be formulated,
known as the Unique Resource Name (URN). A lookup table of these URNs must be
present at the centralized node, commonly known as the gateway to the sensor subsystem.
Thus the entire network now forms a web of connectivity from users (high-level) to
sensors (low-level) that is addressable (through URN), accessible (through URL) and
controllable (through Uniform Resource Characteristics - URC) [11].
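As a toy illustration of the gateway lookup just described, the URN table can be modelled as a simple mapping from URNs to URLs; every name below is invented for the sketch:

```python
# Sketch of the gateway's URN lookup table: URNs naming sensor nodes map
# to the URLs through which those nodes are reached. All identifiers here
# are illustrative, not part of any standard registry.
URN_TABLE = {
    "urn:sensors:temp-01": "http://gateway.local/nodes/temp-01",
    "urn:sensors:lock-02": "http://gateway.local/nodes/lock-02",
}

def resolve(urn):
    """Return the URL for a URN, as the gateway's lookup table would."""
    try:
        return URN_TABLE[urn]
    except KeyError:
        raise LookupError(f"unknown URN: {urn}")
```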
Data Storage
As the IoT develops, the amount of data being created is huge. The data centers which
will store this data will have space requirements as well as energy and power
requirements. It is this data which needs to be organized and processed.
Semantic data fusion models will also be required to create meaning out of this data.
Artificial Intelligence algorithms must be applied to extract meaning from this redundant
data. Data storage and analysis will be a challenge when the whole world will be connected
through IoT.
Visualization
Any interaction of the user with the environment will need proper visualization software,
which must highlight the sensing mechanism as well as the interpretation of the data.
Touch screens and smart embedded tablets have created a conducive environment for
such systems. The information processed into meaningful data using sensor fusion
algorithms will yield many inferences about the current situation.
The basis of this tracking is indeed RFID tags, which are placed on objects, human beings,
animals, logistics items, etc. An RFID tag reader may be used at all the intermediate stages
for tracking anything which carries an RFID tag. This object position identification can be
smartly used to trigger an alarm, an event or a specific inference regarding a specific subject.
Smart Environment and Enterprise Collection
In any work environment, an enterprise-based application can be built on the premise of
a smarter environment. Here the individual or the enterprise may give data to the
outside world at its own discretion. Smart embedded sensor technology can be used to
monitor and transmit critical parameters of the environment.
Smart Unit
Another IoT application which is making waves is the Smart grid and smart metering
technology. The energy consumption can be efficiently monitored in a smart home or in a
small office or even a locality. This model can be extended over a city for better load
balancing. The world is fast changing and now camera based surveillance is high in
demand. This surveillance will not only require image processing but also computer vision.
An IoT based on video processing is a new technological challenge: integrating large
computation into small embedded devices. Smart homes can be developed where
things of daily use will be tracked using sensor enabled technologies.
Imagine a scenario where each member of a family has an RFID-enabled gadget, so that
object tracking can actually result in human tracking. This can readily happen in the IoT,
where common mobile phones can be used for tracking human beings. Various types of
sensor-based devices can be used for such tracking. This whole process is known as
local, global and social sensing: the object can be tracked locally, globally, in any place,
at any time and over any network.
Imagine a scenario in a village where elderly persons, infants, pregnant women, etc. have
RFID-enabled chips on their bodies to track their vital health parameters. Any unusual
reading will raise an alarm or an alert in the nearby local medical assistance home. For
example, RFID chips can be implanted in patients in order to track their medical history.
Traffic Monitoring
In any city in the world, traffic monitoring is an important part of the smart-city
infrastructure. Everything from normal traffic to highway traffic requires adequate
information about the support and logistics available on the highway, so that the system
can be made self-reliant and intelligent. Any congestion on roads ultimately leads to loss
of fuel and economic loss. Any foresight on traffic will always help to improve the whole system.
With numerous WSNs and sensor-enabled communications, an IoT of traffic will be
generated, known as the Traffic IoT (TIoT). The information collected from the TIoT can
be presented to travelers. The traffic information will depend upon the queuing model on
the roads and the road infrastructure itself. The identification of critical road points and
the present state of traffic on all roads can be provided to the user. However, this traffic
monitoring application needs to be secure, to prevent the kind of terrorist attacks frequent
in urban cities.
One of the most precious gifts to a human being is the ability to see, listen, speak and
respond according to the situation, but there are some unfortunate ones who are deprived
of it. Communication between deaf-dumb and normal persons has always been a
challenging task. The proposed device is an innovative communication system framework
in a single compact device. We provide a technique for a blind person to read text,
achieved by capturing an image through a camera and converting the text to speech
(TTS). It provides a way for deaf people to read text through speech-to-text (STT)
conversion technology. Also, it provides a technique for dumb people using text-to-voice
conversion, and the gestures made by them can be converted to text.
Tesseract OCR (Optical Character Recognition) is used to read words for the blind; the
dumb can communicate their message through text and gestures, which are read out by
eSpeak; and the deaf can understand others' speech through the displayed text. All these
functions are implemented using a Raspberry Pi.
The main goal of our project is to provide a standard lifestyle for deaf, dumb and blind
people, as for normal people. Through this device the visually impaired can understand
words easily, the vocally impaired can communicate their message through text and
gestures, and the deaf can understand others' speech from the text that is displayed. This
helps them to experience an independent life.
Chapter 2
PREAMBLE
In the earlier days, blind people could read only Braille script. Braille is a tactile writing
system used by people who are blind, traditionally written with embossed paper.
Nowadays Braille users can also read computer screens and other electronic supports
using refreshable braille displays.
In the Braille system, the language goes from left to right across the page, just like printed
words. The symbols which represent each letter are made up of between one and six
dots, based on the pattern of six dots found on a die or a domino. Later in the evolution
came screen reader systems: computer programs that enable blind people to interpret
what is shown on the screen through speech.
The next technology which is beneficial to the blind and visually impaired (BVI) is the
FingerReader. It is a device worn on the finger which helps the BVI to access plain
printed text. People who wear this device scan a text line with their finger and in return
get audio feedback of the words and also a haptic sense of the layout, such as the start or
end of a line or a new line. It also alerts the reader if he moves away from the baseline,
thus helping him maintain straight scanning.
Blind people need the assistance of others to use Braille script, which is time consuming,
and the FingerReader is not a language-independent system, being limited to English.
It is not possible for everyone to learn sign language in order to understand what is said
through gestures; therefore, the communication gap still exists. Dumb people can simply
tell their message by sign language, which may not be understood by normal people.
A system providing a solution for all three disabilities does not exist.
2.2 Proposed System
To resolve the difficulties of visually and vocally impaired people, we have used the tiny
credit-card-sized computer named Raspberry Pi. With this device we provide a solution
for blind, deaf and dumb people.
The proposed system consists of inputs (a microphone to record voice, a camera to
capture images, and a keyboard to type messages) and outputs (a speaker and a device
screen to display texts and images). The user can give a reply as a text message, and the
device does text-to-speech (TTS) conversion; the output is obtained from a small but
powerful speaker. An image is captured through the camera, and reading of the text is
achieved by text-to-speech (TTS) conversion. The device also recognises the gestures of
the users and displays the words related to them. Based on what the normal person
conveys through the microphone, the device does speech-to-text (STT) conversion and
displays the result on the device screen.
It is an all-in-one device, where the deaf, dumb and blind can overcome their disabilities,
providing sign-to-text and text-to-voice conversion for dumb people communicating with
normal persons.
LITERATURE SURVEY
“Design of Smart e-Tongue for the Physically Challenged People[16]”
Here the authors designed a system which converts sign symbols to text as well as voice
output, and converts a normal person's voice to the corresponding sign symbol, for
two-way communication. The system has a flex sensor and an IMU (Inertial Measurement
Unit) to recognize sign symbols, a speech synthesis chip for voice output, and a speech
recognition module for converting voice to sign symbols. These are interfaced with a
microcontroller, which is programmed to produce the corresponding output.
Figure 3.1: Digital display showing Welcome and How are you
Figure 3.8: Block Diagram of a System and Testing Prototype to the User
“Assistive System for Physically Disabled People using Gesture
Recognition[24]”
In this paper the authors propose a method based on hand gesture recognition to
recognize the different gestures used by deaf persons to communicate, using the
scale-invariant feature transform (SIFT) algorithm. This forms a bridge between deaf and
dumb people and the normal public. Earlier systems used colour markers and gloves for
gesture recognition, but this resulted in delays in processing time and was sometimes
inconvenient for the user. The system focuses on hand gesture recognition and the
development of a human-computer interface (HCI) system which achieves accuracy,
real-time gesture processing and reduced processing time.
Figure 3.10: Gesture Pattern, Braille Output, Arduino Circuit and Experimental setup
“Fitting like a GlovePi: a wearable device for deaf-blind people[26]”
The paper presents the design and implementation of a low-cost, open-source assistive
system exploiting a wearable device to support deaf-blind people in communication
using the Malossi alphabet. More specifically, the system, which the authors call
GlovePi, is composed of three main low-cost components: (i) a gardener's glove;
(ii) a Raspberry Pi; (iii) an MPR121 capacitive touch sensor module with expansion board.
The MPR121 module works as a bridge between the Raspberry Pi and the sensors in the
glove, allowing data transfer, while the Raspberry Pi works as a WiFi hotspot and as a
server transferring data to the client, an Android application. In this way, the deaf-blind
user can use the glove to deliver messages to other users in the Malossi alphabet; the
characters (and phrases) so created are sent to the Android application and displayed or
read aloud.
4.1 Introduction
Analysis is the process of breaking a complex topic or substance into smaller parts to gain
a better understanding of it. Analysts in the field of engineering look at requirements,
structures, mechanisms, and systems dimensions. Analysis is an exploratory activity.
The Analysis Phase is where the project lifecycle begins. It is where you break down the
deliverables in the high-level Project Charter into more detailed business requirements,
and where you identify the overall direction the project will take through the creation of
the project strategy documents.
Gathering requirements is the main attraction of the Analysis Phase. The process of
gathering requirements is usually more than simply asking the users what they need and
writing their answers down. Depending on the complexity of the application, the process
for gathering requirements has a clearly defined process of its own. This process consists
of a group of repeatable processes that utilize certain techniques to capture, document,
communicate, and manage requirements.
4.3 Requirements
SOFTWARE:
Raspbian OS / Noobs OS
Tesseract OCR
Open CV
Espeak
Xming
Putty
Description of Modules
Tesseract OCR
Python-tesseract is an optical character recognition (OCR) tool for various operating
systems, built on the Tesseract engine. OCR is the process of electronically extracting
text from images and reusing it in a variety of ways, such as document editing and
free-text searches. OCR technology is capable of converting documents such as scanned
papers, PDF files and captured images into editable data. Tesseract can be used on
Linux, Windows and Mac OS. It can be used by programmers to extract typed, printed
text from images using an API, and a GUI is available from third-party pages. The
installation of Tesseract OCR has two parts: the engine, and the training data for a
language. On Linux, Tesseract can be obtained directly from many Linux distributions.
The latest stable version of Tesseract OCR at the time of writing is 3.05.00. In our
project, Tesseract is used to convert the captured image text into text format. Tesseract
features: 1) page layout analysis; 2) support for more languages; 3) improved recognition
accuracy; 4) an added UI.
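The extract-then-reuse flow just described can be sketched in Python. pytesseract is a common Python wrapper for the Tesseract engine (assumed installed, along with Pillow and the tesseract binary); the sentence-splitting helper, used when the recognised paragraph is broken up before being spoken, is pure Python:

```python
import re

def ocr_image(path):
    """Extract text from an image file with Tesseract.

    Assumes the pytesseract wrapper, Pillow and the tesseract binary are
    installed; the imports are deferred so split_sentences() below still
    works without them.
    """
    import pytesseract
    from PIL import Image
    return pytesseract.image_to_string(Image.open(path))

def split_sentences(paragraph):
    """Split extracted text into sentences for further editing or reading."""
    parts = re.split(r"(?<=[.!?])\s+", paragraph)
    return [p.strip() for p in parts if p.strip()]
```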
Espeak
eSpeak is a compact, open-source software speech synthesizer for English and other
languages, for Linux and Windows platforms. It is used to convert text to voice, and
supports many languages in a small size. The programming for the eSpeak software is
done using rule files with feedback. It supports SSML, and its output can be modified by
voice variants: text files which can change characteristics such as pitch range, add effects
such as echo, whisper and a croaky voice, or make systematic adjustments to formant
frequencies to change the sound of the voice. The default speaking speed of 180 words
per minute is too fast to be intelligible. In our project, eSpeak is used to convert text to a
voice signal.
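A small sketch of how eSpeak might be invoked from Python; the -s flag follows eSpeak's command-line interface (it sets words per minute, lowering the fast default mentioned above), while the wrapper functions themselves are illustrative:

```python
import shlex

def espeak_command(text, speed=140):
    """Return the eSpeak invocation used to speak `text`.

    -s sets the speaking speed in words per minute. On a system with
    eSpeak installed, run the result with subprocess.run(...).
    """
    return ["espeak", "-s", str(speed), text]

def espeak_command_line(text, speed=140):
    """Shell-quoted form of the same command, useful for logging."""
    return shlex.join(espeak_command(text, speed))
```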
Xming
Xming provides the X Window System display server, a set of traditional sample X
applications and tools, and a set of fonts. It features support of several languages and
has Mesa 3D, OpenGL, and GLX 3D graphics extensions capabilities. The Xming X server
is based on Cygwin/X, the X.Org Server. It is cross-compiled on Linux with
the MinGW compiler suite and the Pthreads-Win32 multi-threading library. Xming runs
natively on Windows and does not need any third-party emulation software. Xming may
be used with implementations of Secure Shell (SSH) to securely forward X11 sessions
from other computers. It supports PuTTY and ssh.exe, and comes with a version of
PuTTY's plink.exe. The Xming project also offers a portable version of PuTTY. When SSH
forwarding is not used, the local file Xn.hosts must be updated with host name or IP address
of the remote machine where GUI application is started. The software has been
recommended by authors of books on free software when a free X server is needed and
described as simple and easier to install though less configurable than other popular free
choices like Cygwin/X.
Putty
PuTTY is a free and open-source terminal emulator, serial console and network file
transfer application. PuTTY was developed for Microsoft Windows, but it has been
ported to various other operating systems. It can connect to a serial port, and supports a
variety of network protocols, including SCP, SSH, Telnet, and raw socket connections.
SYSTEM DESIGN
System design is the process of defining the architecture, components, modules, interfaces
and data for a system to satisfy specified requirements. One could see system design as the
application of systems theory to product development. There is some overlap with the
disciplines of systems analysis, systems architecture and systems engineering.
If the broader topic of product development "blends the perspective of marketing, design,
and manufacturing into a single approach to product development," then design is the act
of taking the marketing information and creating the design of the product to be
manufactured. Systems design is therefore the process of defining and developing systems
to satisfy the specified requirements of the user.
Until the 1990s, systems design had a crucial and respected role in the data processing
industry. In the 1990s, the standardization of hardware and software resulted in the ability to
build modular systems. The increasing importance of software running on generic
platforms has enhanced the discipline of software engineering.
Object-oriented analysis and design methods are becoming the most widely used
methods for computer systems design. The UML has become the standard language in
object-oriented analysis and design. It is widely used for modelling software systems and
is increasingly used for the high-level design of non-software systems and organizations.
The design of the system is perhaps the most critical factor affecting the quality of
the software. The objective of the design phase is to produce an overall design of the
software. It aims to figure out the modules that should be in the system to fulfil all the
system requirements in an efficient manner.
The design will contain the specification of all the modules, their interaction with
other modules and the desired output from each module. The output of the design process
is a description of the software architecture.
5.1 High level design
Implementation is one of the most important phases of the Software Development Life
Cycle (SDLC). It encompasses all the processes involved in getting new software or
hardware operating properly in its environment, including installation, configuration,
running, testing, and making necessary changes. Specifically, it involves coding the system
using a particular programming language and transferring the design into an actual working
system.
This phase of the system is conducted with the idea that whatever is designed should be
implemented, keeping in mind that it fulfils the user requirements, objectives and scope of
the system. The implementation phase produces the solution to the user's problem.
There could be many ways of implementing this project; we have chosen Python
because Python is a widely used high-level, general-purpose, interpreted, dynamic
programming language. Its design philosophy emphasizes code readability, and its syntax
allows programmers to express concepts in fewer lines of code than would be possible in
languages such as C++ or Java. The language provides constructs intended to enable clear
programs on both a small and large scale. Python supports multiple programming
paradigms, including object-oriented, imperative and functional programming or
procedural styles. It features a dynamic type system and automatic memory management
and has a large and comprehensive standard library. Python interpreters are available for
installation on many operating systems, allowing Python code execution on a wide variety
of systems. Using third-party tools, such as Py2exe or Pyinstaller, Python code can be
packaged into stand-alone executable programs for some of the most popular operating
systems, allowing the distribution of Python-based software for use on those environments
without requiring the installation of a Python interpreter. CPython, the reference
implementation of Python, is free and open-source software and has a community-based
development model, as do nearly all of its alternative implementations. CPython is
managed by the non-profit Python Software Foundation.
The purpose of using pseudo code is that it is easier for people to understand than
conventional programming language code, and that it is an efficient and environment-
independent description of the key principles of an algorithm. It is commonly used in
textbooks and scientific publications that are documenting various algorithms, and also in
planning of computer program development, for sketching out the structure of the program
before the actual coding takes place.
No standard for pseudo code syntax exists, as a program in pseudo code is not an
executable program. Pseudo code resembles, but should not be confused with skeleton
programs, including dummy code, which can be compiled without errors. Flowcharts and
Unified Modeling Language (UML) charts can be thought of as a graphical alternative to
pseudo code, but are more spacious on paper.
1. Text-to-Speech (TTS)
2. Image-to-Speech using camera (ITSC)
3. Gesture-to-Speech (GTS)
4. Speech-to-Text (STT)
6.4 Module Description
6.4.1 Text-to-speech (TTS) algorithm:-
Step 1: Start
Step 2: Import the os and subprocess modules.
Step 3: Call the text-to-speech function.
Step 4: Enter the text from the keyboard.
Step 5: Pass the entered text to the eSpeak synthesizer.
Step 6: The speech output is played through the speaker.
Step 7: Stop
Unit Testing
Integration Testing
Functional Testing
Acceptance Testing
Functional tests are typically written using the WebTest package, which provides
APIs for invoking HTTP(S) requests to your application. We also use py.test and pytest-
cov to provide simple testing and coverage reports. The functional tests used in the project
are mentioned below
Figure 7.3: Functional Testing of Xming
Figure 7.4: Functional Testing of Putty
Ensure that all positive scenarios and negative scenarios are covered.
Language:
Write in simple and easy to understand language.
Use active voice: Do this, do that.
Use exact and consistent names (of forms, fields, etc).
Characteristics of a good test case:
Accurate: Exacts the purpose.
Economical: No unnecessary steps or words.
Traceable: Capable of being traced to requirements.
Repeatable: Can be used to perform the test over and over.
Reusable: Can be reused if necessary.
We aim to develop a prototype model for blind, dumb and deaf people in a single
compact device. The device provides a unique solution for these people to manage
things by themselves. The project is implemented in Python, the easiest programming
language to interface with the Raspberry Pi. The system is provided with 4 options, each
with a different function; we have chosen the options for the necessary conversions.
1) Text to speech (TTS) using (option1)
2) Image to speech using camera (ITSC) using (option2)
3) Gesture control using (option3)
4) Speech to text(STT) using (option4)
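The four-option selection above can be sketched as a simple dispatch table; the handler bodies below are placeholders, not the project's actual functions:

```python
# Placeholder handlers: the real ones would call eSpeak, the camera/OCR
# pipeline, the gesture recogniser and the speech recogniser respectively.
def text_to_speech():     return "TTS selected"
def image_to_speech():    return "ITSC selected"
def gesture_control():    return "GTS selected"
def speech_to_text():     return "STT selected"

OPTIONS = {
    "1": text_to_speech,
    "2": image_to_speech,
    "3": gesture_control,
    "4": speech_to_text,
}

def run_option(choice):
    """Dispatch a menu choice ('1'..'4') to its module."""
    try:
        return OPTIONS[choice]()
    except KeyError:
        raise ValueError(f"unknown option: {choice!r}")
```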
Dumb people convert their thoughts to text, which can be transformed into a voice
signal; the converted voice signal is spoken out by the eSpeak synthesizer. After selecting
option 1, the os and subprocess modules are imported. The text-to-speech function is
called and the text is entered as input; after the text is entered from the keyboard, the
eSpeak synthesizer converts it to speech.
In order to help blind people, we have interfaced a Logitech camera to capture the image
using the OpenCV tool. The captured image is converted to text using Tesseract OCR,
and the text is saved to the file out.txt. The text file is opened, and the paragraph is split
into sentences and saved. In OCR, adaptive thresholding techniques are used to change
the image into a binary image, which is then transformed into character outlines. The
converted text is read out by eSpeak.
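The adaptive-thresholding step can be illustrated with a toy NumPy version that compares each pixel against its neighbourhood mean; the real pipeline would use OpenCV's cv2.adaptiveThreshold, which does the same thing efficiently:

```python
import numpy as np

def adaptive_threshold(gray, block=3, c=0):
    """Binarise a grayscale image by comparing each pixel with the mean of
    its block x block neighbourhood, the idea behind the adaptive
    thresholding used before OCR (a slow, illustrative version)."""
    h, w = gray.shape
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            out[y, x] = 255 if gray[y, x] > local_mean - c else 0
    return out
```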
Gesture control helps dumb people convey their thoughts to normal people, since the
gestures dumb people use to communicate are mostly not understandable by normal
people. The process starts with capturing an image and cropping the useful portion. The
RGB image is converted into a grayscale image for better functioning; the cropped image
is blurred with a Gaussian blur function and passed to the threshold function to get the
highlighted part of the image. The contours and the angles between fingers are then
found. Using the convex hull function, we can locate the finger points. Counting the
angles of less than 90 degrees gives the number of defects, and according to the number
of defects, the corresponding text is printed on the display and read out.
the words of normal people. In order to help them, our project is provided with a switch
which is used to convert the voice of the normal people text. We have used a chromium
performed by assigning a minimum threshold voltage to recognize the voice signal. The
input is given through a microphone which is converted into a text format. The URL
supports a variety of languages. If the voice signal recognizable it will print the text else it
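The defect-counting rule in the gesture pipeline above (count convexity defects whose angle between fingers is below 90 degrees) reduces to the cosine rule applied at each defect point. The sketch below shows that geometric core; the point triples would come from cv2.convexityDefects on the hand contour, and the function names here are illustrative, not the project's own.

```python
import math

def angle_deg(a, b, c):
    """Angle at vertex b (degrees) for points a, b, c, via the cosine rule."""
    ab = math.dist(a, b)
    bc = math.dist(b, c)
    ac = math.dist(a, c)
    cos_b = (ab**2 + bc**2 - ac**2) / (2 * ab * bc)
    # Clamp to [-1, 1] to guard against floating-point error in acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_b))))

def count_defects(triples):
    """Count convexity defects whose finger-valley angle is below 90 degrees.

    `triples` is a list of (start, far, end) point triples, as would be
    derived from cv2.convexityDefects on the hand contour; the count maps
    to the gesture's text as described above.
    """
    return sum(1 for start, far, end in triples if angle_deg(start, far, end) < 90)
```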
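The speech-to-text step above can be sketched with the commonly used SpeechRecognition package, which sends audio to Google's recognition service. This is a sketch under assumptions: the package and a working microphone are presumed available, the energy threshold stands in for the "minimum threshold value" mentioned above, and the language code is an illustrative choice.

```python
def is_loud_enough(samples, threshold=300):
    """Crude energy gate: accept audio whose mean absolute amplitude meets
    a minimum threshold, mirroring the minimum-threshold check described
    above. `samples` is a sequence of signed PCM values."""
    if not samples:
        return False
    return sum(abs(s) for s in samples) / len(samples) >= threshold

def speech_to_text(language="en-IN"):
    """Convert microphone speech to text via Google's recognizer.

    Assumes the speech_recognition package and a microphone are available.
    """
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    recognizer.energy_threshold = 300   # minimum threshold for the voice signal
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio, language=language)
        print(text)                     # recognizable: print the text
        return text
    except sr.UnknownValueError:
        print("Speech not recognized, please repeat")
        return None
```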
8.1 Conclusion
This project aims to bridge the communication gap between the deaf or mute community and the normal world and to help them lead a standard lifestyle. The device converts text/images to voice for the blind, speech to text for the deaf, and hand gestures to text (Kannada words) for the dumb. We have combined the prototype models for blind, deaf and dumb people into a single compact device. The advantage of this device is that it is easily portable owing to its low weight and small size. The device can be used as a smart assistant for differently abled people to communicate with others, and it is a language-independent system.
There are a number of future advancements that can be associated with this project work, some of which are described as follows:
1) The system can be further expanded to cover alphabets and numbers in gesture control.
2) The input can also be taken in the form of videos, which are divided into frames and then converted into text.
3) A grammatical structure for sign language can be added.
4) The system can be made handier by incorporating it into a mobile phone.
5) A product can be developed for blind people that converts the information in any hand-written notes, newspapers or books into an audio signal that they can hear.
6) The system can be made more efficient for all languages.
APPENDIX
Prof. R Chandramma, Lekhana M, Deepa B N, Archana Kumari Jha, "A Literature Survey on Raspberry Pi Based Assistive Communication System for Deaf, Dumb and Blind", paper presented at the 10th National Conference on Advances in Information Technology at SJBIT, Bangalore, on 9th May 2018.