
CHAPTER 1

INTRODUCTION

1.1 OVERVIEW
According to WHO, globally, at least 1 billion people have a near or distance
vision impairment that could have been prevented or has yet to be addressed.
Population growth and aging are expected to increase the risk that more
people acquire vision impairment. In a world where we are building the tallest
skyscrapers, we believe that as engineering students it is our responsibility to
develop affordable technology and facilities using our knowledge to even out
the social disparity. The Blind Reader is a low-cost, portable device that can
fill this gap, since the Braille system is expensive and not accessible to many. It
can be a complete aid for the visually impaired, making their lives much easier
while travelling, reading bulletins, making notes, remembering events, etc.
Blind Reader is an intelligent assistant based on Raspberry Pi. Using this
device, it is easier for the visually impaired to read text, to recognize people,
and to detect objects appearing in front of their goggles. A speech assistant
written in Python is also incorporated. Text detection uses OCR technology:
OpenCV is used to detect text in images captured by the camera mounted on
the front of the goggles, and a document-scanner routine ensures that images
are properly scanned. Tesseract is an open-source OCR engine and PyTesseract
is an OCR tool for Python. We use YOLO for object recognition within the
OpenCV framework, and Haar cascades are used for face recognition.
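
As a minimal sketch of this text-detection step (assuming OpenCV, the Tesseract
engine and its pytesseract Python wrapper are installed; the file name page.jpg is
a hypothetical example, not a file from this project):

    # Minimal sketch: extract text from a captured frame with OpenCV + pytesseract.
    import cv2
    import pytesseract

    image = cv2.imread("page.jpg")                  # load the captured frame
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # OCR works best on grayscale
    print(pytesseract.image_to_string(gray))        # run the Tesseract OCR engine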
Visually impaired people report numerous difficulties with accessing printed
text using existing technology, including problems with alignment, focus,
accuracy, mobility and efficiency. We present a smart device that assists the
visually impaired by effectively and efficiently reading paper-printed text.
The proposed project uses the methodology of a camera-based assistive device
that can be used by people to read text documents. The framework implements
an image-capturing technique in an embedded system based on the Raspberry
Pi board. The design is motivated by preliminary studies with visually impaired
people, and it is small-scale and mobile, which enables a more manageable
operation with little setup. In this project we have proposed a text read-out
system for the visually challenged. The proposed fully integrated system has a
camera as an input device to feed the printed text document for digitization, and
the scanned document is processed by a software module, the OCR (optical
character recognition) engine.
A methodology is implemented to recognize the sequence of characters and the
line being read. As part of the software development, the OpenCV (Open Source
Computer Vision) libraries are utilized to capture images of text and to perform
the character recognition. Most of the access technology tools built for people
with blindness and limited vision are built on the two basic building blocks of
OCR software and Text-to-Speech (TTS) engines. Optical character recognition
(OCR) is the translation of captured images of printed text into machine-encoded
text. OCR is a process which associates a symbolic meaning (letters, symbols
and numbers) with the image of a character. It is defined as the process of
converting scanned images of machine-printed text into a computer-processable
format. Optical character recognition is also useful for visually impaired people
who cannot read text documents but need to access their content. It is used to
digitize and reproduce texts that were produced with non-computerized systems.
Digitizing texts also helps reduce storage space, since editing and reprinting of
text documents that were printed on paper are time-consuming and labor-
intensive. OCR is widely used to convert books and documents into electronic
files for use in storage and document analysis. OCR makes it possible to apply
techniques such as machine translation, text-to-speech and text mining to the
captured/scanned page. The final recognized text document is fed to the output
device depending on the choice of the user. The output device can be a headset
connected to the Raspberry Pi board or a speaker which can read the text
document aloud.
 Total blindness: visual acuity less than 3/60 in the better eye with spectacle
correction.
 Economic blindness: visual acuity less than 6/60 in the better eye with
spectacle correction.
 One-eye blindness: visual acuity less than 3/60 in one eye and better than
6/60 in the other eye with spectacle correction.
The proposed algorithm uses a camera module with which it can capture the
desired text and then convert the image into a gray-scale representation. From
this gray-scale image the individual characters are extracted and recognized, all
of which is carried out by the optical character recognition algorithm. Upon
undergoing the processes of scanning, pre-processing, segmentation and feature
extraction, the scanned text is finally ready to be output by means of the speaker
connected to the Pi module. Even though such systems exist, most of them are
in crude forms, and developing a commercially viable setup will be a huge aid
for the visually impaired, giving them access to unprecedented amounts of text
and written media. Such a system, which involves only a one-time investment,
is thus a vital assistive tool. The main objective of this project is converting
print and written media into playable audio with high efficiency. A unique
addition in this device is the ability to record speech in memory and replay
these audio files at a convenient time.

CHAPTER 2

LITERATURE SURVEY

1. RASPBERRY PI BASED READER FOR BLIND PEOPLE:


Ms. Pavithra C A, Student, EPCET, Bangalore, India, pavithra.pavithra20@yahoo.com;
Prof. N Kempraju, Professor, EPCET, Bangalore, India, nkraju_z5132@rediffmail.com;
Ms. Pavithra Bhat N, Student, EPCET, Bangalore, India, pavi.bn07@gmail.com;
Ms. Shilpa P, Student, EPCET, Bangalore, India, shilpashilu178@gmail.com;
Ms. Ventaka Lakshmi V, Student, EPCET, Bangalore, India, ventakalakshmi.2997@gmail.com
ABSTRACT - This paper presents an automatic document reader for visually
impaired people, developed on the Raspberry Pi. It uses optical character
recognition technology to identify printed characters using image-sensing
devices and computer software. It converts images of typed, handwritten, or
printed text into machine-encoded text. In this work these images are converted
into audio output (speech) using OCR and text-to-speech synthesis. The
conversion of the printed document into text files is carried out on the
Raspberry Pi using the Tesseract library and Python programming. The text
files are then processed with the OpenCV library and the Python programming
language, and the audio output is produced.
2. SMART READER FOR VISUALLY IMPAIRED USING RASPBERRY PI
S Sarkar, G Pansare, B Patel, A Gupta, A Chauhan, R Yadav and N Battula*
School of Mechanical Engineering, Vellore Institute of Technology, Vellore,
Tamil Nadu - 632014, India *e-mail: nithin.battula@vit.ac.in
ABSTRACT. With the humongous amount of text and written media available
today, it has become increasingly necessary to make these accessible to people
from all walks of life, especially catering to the visually impaired; thus devising
a system which can assist in this task is of prime importance. The proposed
device aims to solve this predicament by using the Raspberry Pi module B+ to
convert printed and handwritten text into easily accessible playable speech. It
also stores speech which can be replayed at a time deemed suitable. The main
focus of this paper is to develop a smart reader system which converts
handwritten and printed text to speech. It has been observed that the scanned
texts were converted into easily audible speech heard via the speaker with high
efficiency.

3. RASPBERRY PI BASED SMART ASSISTANT FOR THE BLIND


Ria Maria Mathew (Palakadan H, Kothamangalam), Dona Joy (Karipra H,
Puthencruz, Ernakulam), Nivya Dileep (Valiyavalappil H, Cheruvathani),
Students, Dept. of Electrical and Electronics Engineering, Mar Athanasius
College of Engineering, Kerala, India
ABSTRACT - This is a Raspberry Pi based smart assistant for the blind. The
project involves hardware including a reader, face recognition, object-detecting
spectacles/glasses, and an audio assistant connected to earphones. The project
took us deep into the world of OpenCV, Tesseract, and machine learning for its
realization through the Python environment. The final result would enable blind
people in need of assistance to detect people and objects, have a smart assistant,
as well as read text. A person with the assistance of our project would be able
to walk on a road or in a public place without assistance from another human,
along with reading boards and signs much faster.
4. SMART STICK FOR BLIND PEOPLE: An ultrasonic sensor is employed in
this system; the instrument detects obstacles at a distance of up to four meters,
and an infrared sensor recognizes the complexities faced by blind people. A
receiver and transmitter also help the user find the smart stick by means of a
buzzer, and the vibration motor set on the stick activates and creates vibrations.
An Arduino UNO is used for system control. The system is capable of detecting
the obstacles faced by the user. The smart stick is very handy, easy to use, very
responsive, power efficient, lightweight and foldable.
5. INFRARED SENSOR-BASED SMART STICK FOR BLIND PEOPLE: In
this paper, the authors present a handy, user-friendly, lightweight, very
responsive, and very power-efficient smart stick that puts infrared technology
to work. An infrared sensor detects obstacles in the way of the user; the device
can detect obstacles up to two meters away. The device offers good accuracy,
and the stick is able to detect all kinds of complications.
6. MULTIPLE DISTANCE SENSOR BASED SMART STICK FOR
VISUALLY IMPAIRED PEOPLE: This smart stick is able to detect obstacles
of any height in front of the person or slightly to the side. The stick gives
accurate information about the distance and location of obstacles through
vibrations and audio in the user's ear. A wireless Bluetooth connection is used
between the earphones and the stick.
7. ULTRASONIC SENSOR BASED SMART BLIND STICK: Obstacle
detection is done by an ultrasonic sensor module, and all warnings are given
through a buzzer.
8. ASSISTIVE STICK FOR VISUALLY IMPAIRED PERSONS: Obstacle
detection is done by setting the ultrasonic sensor at a 30-degree angle on a
suitable blind stick; it senses whether there is a hole or staircase in front of the
blind person at about 30 cm distance, to prevent the person from falling. The
device is full of features and very useful.

CHAPTER 3

PROPOSED SYSTEM

3.1 INTRODUCTION

System analysis refers to the study of an existing system in terms of system
goals. The system analysis of a project includes the basic analysis for the project
development, the data required to develop the project, the cost factor considered
for the project development, and other related factors.

3.2 EXISTING SYSTEM


The existing system solves the problem of object identification for a blind
person. The object detection algorithm can identify both the category of an
object and the object's name. Accuracy of object detection is a minor issue in
this methodology and can be overcome by training the models with different
data sets. The problem of interaction with the system for a blind person is
solved with the help of a voice kit. The user simply gives voice commands to
search for a required object and can be navigated to the object with the use of
the voice kit and vibration on the fist. The efficiency of the voice kit depends
on the pronunciation of words as well as on the API used for the voice kit. Only
a limited number of dictionary words are included. The system has low
performance, high cost, and a more complex circuit design.

3.2.1 Disadvantages
 It requires a computer and MATLAB; its large size means it cannot be
carried wherever you go.
 Unsatisfactory camera quality
 Processing time lag
 Does not support all languages
 High power consumption

3.3 PROPOSED SYSTEM
The framework of the proposed project is the Raspberry Pi board. The Raspberry
Pi B+ is a single-board computer which has 4 USB ports, an Ethernet port for
internet connection, 40 GPIO pins for input/output, a CSI camera interface, an
HDMI port, a DSI display interface, a SoC (system on a chip), a LAN controller,
an SD card slot, an audio jack, an RCA video socket and a 5V micro-USB
connector. Power is supplied to the 5V micro-USB connector of the Raspberry
Pi through a Switched Mode Power Supply (SMPS). The SMPS converts the
230V AC supply to 5V DC. The web camera is connected to the USB port of
the Raspberry Pi. The Raspberry Pi runs an OS named Raspbian which processes
the conversions. The audio output is taken from the audio jack of the Raspberry
Pi. The converted speech output is amplified using an audio amplifier. The
Internet is connected through the Ethernet port of the Raspberry Pi. The page to
be read is placed on a base and the camera is focused to capture the image.

The captured image is processed by the OCR software installed on the Raspberry
Pi. The captured image is converted to text by the software. The text is converted
into speech by the TTS engine. The final output is given to the audio amplifier,
from which it is connected to the speaker. The speaker can also be replaced by
a headphone for convenience.

3.4 BLOCK DIAGRAM

Fig No. 3.1 Block diagram of Proposed system

3.5 FLOW OF PROCESS

3.5.1 Image Capturing

The initial step is the one in which the document is placed under the camera and
the camera captures an image of the placed document. The quality of the
captured image will be high, so as to have fast and clear recognition, thanks to
the high-resolution camera.

3.5.2 Pre-Processing

The pre-processing stage consists of three steps: skew correction, linearization,
and noise removal. The captured image is checked for skewing. There are
possibilities of the image getting skewed with either left or right orientation.
Here the image is first brightened and binarized. The function for skew detection
checks for an angle of orientation between ±15 degrees, and if one is detected
then a simple image rotation is carried out till the lines match the true horizontal
axis, which produces a skew-corrected image. The noise introduced during
capturing or due to the low quality of the page must be cleared before further
processing.

3.5.3 Image to Text Converter


The ASCII values of the recognized characters are processed by the Raspberry
Pi board. Here each character is matched with its corresponding template and
saved as a normalized text transcription. This transcription is then conveyed to
the audio output stage.

3.5.4 Text To Speech


The scope of this module begins with the completion of the preceding character
recognition module. The module performs the task of converting the recognized
text into audible form. The Raspberry Pi has an on-board audio jack; the
on-board audio is generated by a PWM output and is minimally filtered. A USB
sound card can greatly improve the sound quality and volume. Once the
recognition process is finished, the character codes in the text file are processed
on the Raspberry Pi device, which recognizes the characters using the Tesseract
algorithm and Python programming, and the audio output is played.
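
A minimal sketch of this text-to-speech step is shown below, assuming the
pyttsx3 wrapper (which typically drives the eSpeak engine on Raspberry Pi OS);
the report itself does not fix a particular TTS engine:

    # Hedged sketch of the TTS stage using the pyttsx3 wrapper (an assumption:
    # no specific engine is named in this report).
    import pyttsx3

    def speak(text):
        engine = pyttsx3.init()           # pick up the default speech driver
        engine.setProperty("rate", 150)   # speaking speed, words per minute
        engine.say(text)                  # queue the recognized text
        engine.runAndWait()               # block until playback finishes

    speak("This is the recognized text.")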

Fig.No:3.2 Flow Process

CHAPTER 4

WORKING PRINCIPLE

4.1 Introduction

When the capture button is clicked, this system captures the document image
placed in front of the camera, which is connected to the ARM microcontroller
through USB. After selecting the process option, the captured document image
goes through Optical Character Recognition (OCR) technology. OCR
technology permits the conversion of scanned images of printed text or symbols
into text or data that can be understood or edited using a computer program. In
our system we use the Tesseract library for OCR technology. Using a
text-to-speech library, the data is converted to audio. The camera acts as the
primary vision in detecting the image of the placed document; the image is then
processed internally, the text is separated from the image using the OpenCV
library, and finally the recognized text is pronounced through voice. The
converted audio output is heard either by connecting headsets via the 3.5mm
audio jack or by connecting speakers.

4.1.1 Image Capturing

In the first step the device is moved over the printed page and the camera
captures images of the text. The quality of the captured image will be high so
as to have fast and clear recognition, due to the high-resolution camera.
4.1.2 Pre-Processing

The pre-processing stage consists of three steps: Skew Correction, Linearization,
and Noise Removal. The captured image is checked for skewing. There are
possibilities of the image getting skewed with either left or right orientation. Here
the image is first brightened and binarized. The function for skew detection
checks for an angle of orientation between ±15 degrees and if detected then a
simple image rotation is carried out till the lines match with the true horizontal
axis, which produces a skew corrected image. The noise introduced during
capturing or due to the poor quality of the page has to be cleared before further
processing.
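
One common way to implement this ±15-degree skew check with OpenCV is
sketched below; the report does not name an exact algorithm, so the minAreaRect
approach here is an assumption:

    # Hedged deskew sketch: estimate the text-block angle with cv2.minAreaRect
    # and rotate the page back; the +/-15 degree clamp mirrors the text above.
    import cv2
    import numpy as np

    def deskew(gray):
        # Binarize with Otsu, inverted so text pixels are non-zero.
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        coords = np.column_stack(np.where(binary > 0)).astype(np.float32)
        angle = cv2.minAreaRect(coords)[-1]    # orientation of the text block
        if angle < -45:                        # normalize OpenCV's angle range
            angle += 90
        elif angle > 45:
            angle -= 90
        angle = max(-15.0, min(15.0, angle))   # only correct within +/-15 degrees
        h, w = gray.shape
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(gray, M, (w, h), flags=cv2.INTER_CUBIC,
                              borderMode=cv2.BORDER_REPLICATE)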

4.1.3 Segmentation

After pre-processing, the noise-free image is passed to the segmentation phase.
Segmentation is an operation that seeks to decompose an image of a sequence of
characters into sub-images of individual symbols (characters). The binarized
image is checked for inter-line spaces. If inter-line spaces are detected, the image
is segmented into sets of paragraphs across the inter-line gaps. The lines in the
paragraphs are then scanned for horizontal space intersections.
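
The inter-line-space idea can be illustrated with a simple horizontal-projection
sketch (an illustration only; the report does not specify the implementation):

    # Illustrative line segmentation by horizontal projection: rows with no text
    # pixels are treated as the inter-line gaps described above.
    import numpy as np

    def segment_lines(binary):
        """Return (top, bottom) row indices of each text line in a binarized
        image where text pixels are non-zero."""
        rows_with_ink = (binary > 0).sum(axis=1) > 0
        lines, start = [], None
        for y, has_ink in enumerate(rows_with_ink):
            if has_ink and start is None:
                start = y                      # a text line begins
            elif not has_ink and start is not None:
                lines.append((start, y))       # gap reached: the line ends
                start = None
        if start is not None:
            lines.append((start, len(rows_with_ink)))
        return lines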

4.2 OPTICAL CHARACTER RECOGNITION (OCR)

Optical character recognition (OCR) technology is an efficient business process
that saves time, cost and other resources by utilizing automated data extraction
and storage capabilities.

Optical character recognition (OCR) is sometimes referred to as text recognition.
An OCR program extracts and repurposes data from scanned documents, camera
images and image-only PDFs. OCR software singles out letters in the image,
puts them into words and then puts the words into sentences, thus enabling access
to and editing of the original content. It also eliminates the need for manual data
entry.

OCR systems use a combination of hardware and software to convert physical,
printed documents into machine-readable text. Hardware — such as an optical
scanner or specialized circuit board — copies or reads text; then, software
typically handles the advanced processing.

OCR software can take advantage of artificial intelligence (AI) to implement
more advanced methods of intelligent character recognition (ICR), like
identifying languages or styles of handwriting. The process of OCR is most
commonly used to turn hard-copy legal or historical documents into PDF
documents so that users can edit, format and search the documents as if created
with a word processor.

4.2.1 The history of optical character recognition

In 1974, Ray Kurzweil started Kurzweil Computer Products, Inc., whose omni-
font optical character recognition (OCR) product could recognize text printed in
virtually any font. He decided that the best application of this technology would
be a machine-learning device for the blind, so he created a reading machine that
could read text aloud in a text-to-speech format. In 1980, Kurzweil sold his
company to Xerox, which was interested in further commercializing paper-to-
computer text conversion.

OCR technology became popular in the early 1990s while digitizing historical
newspapers. Since then, the technology has undergone several improvements.
Today’s solutions have the ability to deliver near-to-perfect OCR accuracy.
Advanced methods are used to automate complex document-processing
workflows. Before OCR technology was available, the only option to digitally
format documents was to manually retype the text. Not only was this time-
consuming, but it also came with inevitable inaccuracies and typing errors.
Today, OCR services are widely available to the public. For example, Google
Cloud Vision OCR is used to scan and store documents on your smartphone.

4.2.2 Working of OCR

Optical character recognition (OCR) uses a scanner to process the physical form
of a document. Once all pages are copied, OCR software converts the document
into a two-color or black-and-white version. The scanned-in image or bitmap is
analyzed for light and dark areas, and the dark areas are identified as characters
that need to be recognized, while light areas are identified as background. The
dark areas are then processed to find alphabetic letters or numeric digits. This
stage typically involves targeting one character, word or block of text at a time.
Characters are then identified using one of two algorithms — pattern recognition
or feature recognition.

Pattern recognition is used when the OCR program is fed examples of text in
various fonts and formats to compare and recognize characters in the scanned
document or image file.

Feature detection occurs when the OCR applies rules regarding the features of a
specific letter or number to recognize characters in the scanned document.
Features include the number of angled lines, crossed lines or curves in a character.
For example, the capital letter “A” is stored as two diagonal lines that meet with
a horizontal line across the middle. When a character is identified, it is converted
into an ASCII code (American Standard Code for Information Interchange) that
computer systems use to handle further manipulations.
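
A toy illustration of pattern (template) recognition followed by ASCII encoding
is given below; the reference-template dictionary is hypothetical, and real OCR
engines are far more robust:

    # Toy sketch: match a glyph against stored templates, then emit its ASCII code.
    import cv2

    def classify(glyph, templates):
        # templates: hypothetical dict of reference images, e.g. {"A": img_A, ...}
        best_char, best_score = None, -1.0
        for char, template in templates.items():
            resized = cv2.resize(glyph, (template.shape[1], template.shape[0]))
            score = cv2.matchTemplate(resized, template,
                                      cv2.TM_CCOEFF_NORMED).max()
            if score > best_score:
                best_char, best_score = char, score
        return best_char, ord(best_char)   # e.g. ("A", 65) -- 65 is ASCII for "A"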

Fig.No:4.1 Optical Character Recognition

An OCR program also analyzes the structure of a document image. It divides the
page into elements such as blocks of texts, tables or images. The lines are divided
into words and then into characters. Once the characters have been singled out,
the program compares them with a set of pattern images. After processing all
likely matches, the program presents you with the recognized text.

Fig.No 4.2 General OCR Model

4.2.3 The benefits of OCR

The main benefit of optical character recognition (OCR) technology is that it
simplifies the data-entry process by creating effortless text searches, editing and
storage. OCR allows businesses and individuals to store files on their computers,
laptops and other devices, ensuring constant access to all documentation.

The benefits of employing OCR technology include the following:

 Reduce costs
 Accelerate workflows
 Automate document routing and content processing
 Centralize and secure data (no fires, break-ins or documents lost in the back
vaults)
 Improve service by ensuring employees have the most up-to-date and
accurate information

Optical character recognition use cases

The most well-known use case for optical character recognition (OCR) is
converting printed paper documents into machine-readable text documents.
Once a scanned paper document goes through OCR processing, the text of the
document can be edited with a word processor like Microsoft Word or Google
Docs.

OCR is often used as a hidden technology, powering many well-known systems
and services in our daily life. Important — but less-known — use cases for OCR
technology include data-entry automation, assisting blind and visually impaired
persons, and indexing documents for search engines; examples include
passports, license plates, invoices, bank statements, business cards and
automatic number plate recognition.

OCR enables the optimization of big-data modeling by converting paper and
scanned image documents into machine-readable, searchable PDF files.
Processing and retrieving valuable information cannot be automated without first
applying OCR to documents where text layers are not already present.

With OCR text recognition, scanned documents can be integrated into a big-data
system that is then able to read client data from bank statements, contracts and
other important printed documents. Instead of having employees examine
countless image documents and manually feed inputs into an automated big-data
processing workflow, organizations can use OCR to automate the input stage of
data mining. OCR software can identify the text in an image, extract the text,
save it to a text file, and support jpg, jpeg, png, bmp, tiff, pdf and other formats.

4.2.4 OCR and IBM

As a leader in global technology, IBM is constantly producing new and improved
software applications for both business and personal use. Over the decades, IBM
has improved upon its optical character recognition capability by combining it
with artificial intelligence (AI).

Simply creating templates of documents is no longer sufficient because
enterprises want insights as well. Combining AI and OCR is proving to be a
winning strategy for data capture, with recognition software simultaneously
collecting information and comprehending the content. In practice, this means
that AI tools can check for mistakes independently of a human user, providing
streamlined fault management and saving time.

IBM Cloud Pak® for Business Automation, IBM’s leading offering for document
processing, also helps take your automation a step further by infusing artificial
intelligence (AI). Its features are designed to improve both your internal
processes and your customers' experiences.


Fig.No:4.3 Flow Chart

CHAPTER 5

HAPTIC FEEDBACK

5.1 INTRODUCTION

Today, we interact with virtual environments using a plurality of portable
devices that mainly affect our visual and aural perception. However, the
complete human experience isn't limited to these two channels, but is formed
from five basic senses (sight, hearing, touch, smell, taste). As you might expect,
for most people the main senses in the perception of reality are vision and
hearing. According to recent psychological research, these two account for
about 90% of the day-to-day experience of a person without perceptual
disorders. The honorable third place, with approximately 10%, goes to the sense
of touch. The figures may differ between individuals, but it is quite difficult to
deny the importance of physical contact for human beings. Every day we
interact with a ton of electronic devices, some of which we carry around (like
laptops or mobile phones), with others being wearable (like smart watches,
wristbands, clothing). With proper implementation, modern gadgets enable us
not only to see or hear, but literally to feel virtual reality. That's exactly where
haptic technology comes into play.

5.2 HAPTIC FEEDBACK TECHNOLOGY

Simply put, haptics is a technology which allows one to receive tactile
information through the sense of touch, by applying forces, vibrations, or
touches. Haptics simulates an object or interaction from the virtual system,
producing the feeling that it is real.

Mobile phone vibration is very often described as an example of haptic feedback
technology. But it is just one very simple illustration of how haptics can
function. Haptics allows a user to interact with computer-based devices by
receiving tactile and force feedback. The former may let us know what the
texture of the object is (e.g. rough or smooth). The latter simulates some physical
properties of the object, such as its weight or pressure.

5.3 HAPTIC FEEDBACK TYPOLOGY

Five main types of haptic feedback technologies (haptics) are

 force,
 vibrotactile,
 electrotactile,
 ultrasonic, and
 thermal.

Each of them is considered in detail below.

5.3.1 Force Feedback

It is the kind of haptic technology that appeared first (starting in the late 1960s).
Therefore, it is the most studied and the best implemented in different
applications so far.

Force feedback stimulates the ligaments and muscles through our skin into the
musculoskeletal system, whereas other types of haptics affect the top layers of
skin receptors (a technology called transcutaneous electrical nerve stimulation —
TENS). The cutaneous devices (involving the outer layer of the skin) are quite
compact and apply acupressure to small areas of the body.

In contrast, force devices are mostly large (think of a powered exoskeleton as an
example). They move together with a human and have an impact on large areas
of the body, such as an arm or a leg. These devices are far more complex, as they
are designed both to apply force to the body part and to provide the person with
sufficient freedom of movement.

By how they emulate human body parts, there are two types of force feedback
devices: biomimetic and non-biomimetic. Biomimetic devices move like human
limbs and resemble them in form. Such devices are difficult to develop because,
ideally, they should have the functionality of the human body and be suitable
for different people. Non-biomimetic devices may be very different from the
human body.

Another classification of force feedback appliances (by the direction of the
applied power) includes resistive and active devices. Resistive devices limit the
movement of the user with the help of brakes. Active devices restrict the
movements of the user or move the body in space by means of motors. Active
devices can simulate a wider range of interactions, but they generally need to be
more powerful than passive devices, and they are more difficult to control.

Fig No 5.1 Force Feedback

5.3.2 Vibrotactile feedback

Vibrotactile feedback is by far the most common type of haptics. Vibrotactile
stimulators apply pressure to specific receptors in human skin. These receptors
resemble an "onion"-layer structure and can perceive vibrations of up to
1000 hertz. Ordinary human speech frequency varies from 80 to 250 hertz, so
our skin can actually feel sounds.

Fig No 5.2 Vibrotactile feedback
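
In the context of this project, a vibrotactile pulse could be produced from the
Raspberry Pi with software PWM; the pin number and the external transistor
driver stage assumed below are illustrative, not part of the report's design:

    # Hedged sketch: pulse a small vibration motor via RPi.GPIO software PWM.
    # GPIO18 and a transistor driver stage are assumptions.
    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)
    pwm = GPIO.PWM(18, 200)   # 200 Hz sits inside the 80-250 Hz band noted above
    pwm.start(50)             # 50% duty cycle sets the vibration strength
    time.sleep(0.5)           # a half-second haptic pulse
    pwm.stop()
    GPIO.cleanup()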

5.3.3 Electrotactile Feedback

Electrotactile stimulators affect both receptors and nerve endings by applying
electrical impulses. By means of electrical impulses, a user can receive a wide
range of sensations which cannot be reproduced with any other current feedback
systems. This type of feedback has many forms depending on the intensity and
frequency of the stimuli delivered to the skin. Sensations can also vary depending
on the current, voltage, material, wave form, electrode size, contact force,
hydration, and skin type.

Fig No.5.3 Electrotactile Feedback

The principal advantage of the electro-haptic feedback system, compared to
vibrotactile or force feedback, is the absence of mechanical or moving parts.
Another benefit of electro-neural stimulation is that the electrodes can be
assembled into compact arrays and used to implement electrotactile displays.

Electrical muscle stimulation (EMS) technology has been used in medicine for
more than 30 years and has proven its safety. Moreover, electrical signals are the
basis of the nervous system, so we can say confidently that this type of haptic
feedback is the best suited for generating and simulating real-world sensations.

5.3.4 Ultrasound Feedback

Ultrasound is a sound wave of high frequency. One or more ultrasound emitters
are used to create the subtle feedback. In such appliances the emitter located on
one part of the body sends a signal to another part. This principle of transmission
is called "acoustic time reversal".

Fig No: 5.4 Ultrasound Feedback

To ensure an impact on larger areas, it is necessary to form a haptic feedback
field. One emitter by itself is not powerful enough, so several emitters are used.
Together they create invisible, tangible interfaces in the air. Ultrasound waves
generate turbulence, which humans can feel through the skin.

The main advantage of the ultrasonic technology is that the user does not need to
wear any accessories. At the same time this kind of haptic feedback is quite
expensive and usually less perceptible than the previously considered
vibrotactile or electrotactile feedback.

5.3.5 Thermal Feedback

For thermal feedback, a grid of actuators in direct contact with the skin is used.
Most commonly, thermoelectric diodes (based on the Peltier effect) are used to
implement this effect.

It is important to note that people do not localize a thermal stimulus well, unlike
tactile stimuli. Therefore, there is no need for many actuators to create a heat or
cold feedback, and they can be positioned less densely. Thus, in some ways,
thermal feedback devices are even easier to design.

Fig No: 5.5 Thermal Feedback

However, due to the law of energy conservation, heat cannot be taken from
nowhere; it can only be moved from one place to another. Furthermore, this must
be done quickly to provide a realistic feel. So, haptic suits using thermal feedback
require quite a lot of energy.

5.4 PRACTICAL APPLICATIONS OF HAPTIC FEEDBACK

Based on the previous section, it is quite easy to conclude that the haptic feedback
feature has a great number of applications in different practical areas, from
medicine and industrial training to gaming and entertainment. Let's have a closer
look at just some of the most common cases.

5.4.1 Automotive and Aviation

Means of transportation offer many different points where the benefits of haptic
feedback can be applied.

For example, a car may benefit from haptic feedback technology when it is
employed for conveying a diversity of information: spatial signals, warnings,
communication, coded information, and other general data. To achieve the
expected goals, haptic feedback can be introduced directly into different car
components like the steering wheel, seat belt, pedals, seat, dashboard, or the
driver's clothes.

Speaking of aircraft, the purpose of haptic effects is mostly the same as in the
previous case — to provide a pilot with the necessary information about flight
control, assisting in the management of a safe and economical flight regime
(so-called flight envelope protection). Again, haptic feedback actuators can be
mounted on different components inside the cockpit and on the controls,
physically interacting with the pilot's body parts.

5.4.2 Medicine and Dentistry

Medicine is by far one of the most important areas that can benefit from the use
of haptic technology. As an example, minimally invasive surgery (MIS) can
leverage specially crafted automatic laparoscopic tools with tactile and/or force
feedback to palpate tissues and diagnose whether they are normal or abnormal.
In comparison with conventional research and surgery methods, the surgeon has
much more control, and hence the overall procedure becomes far safer for the
patient.

Dentists can also be educated on specific VR models, using haptic feedback to
provide a more realistic feel of medical operations.

5.5 COMPUTERS AND MOBILE DEVICES

We interact with desktop computers, laptops, tablets, mobile phones, etc. on a
daily basis. Incorporation of haptic feedback in these devices, and proper
implementation of the related reactions to different user actions, leads to better
UX and hence more satisfaction from device use.

Famous examples of haptic feedback in consumer technology are recent Apple
products. The latest MacBook trackpads and iPhone screens (starting from 2015)
incorporate the patented Taptic Engine to produce the Force Touch and 3D
Touch features, creating a distinctive user experience in terms of the system's
reaction to different executed actions.

5.5.1 Gaming and Entertainment

Almost any new and captivating technology is inevitably tested in gaming and
entertainment, as this sphere generates much profit and in fact drives the progress
forward. And the various types of haptic feedback are no exception.

The explosive growth in the number of immersive interactive attractions
spreading throughout malls and theme parks is also powered by the incorporation
of quality haptics features, which can produce the famous "wow effect" on users
of any age.

And of course, games have been involved practically from the very dawn of
haptic technology. Using a diversity of physical sensations transmitted via
controllers like joysticks, gamepads, steering wheels, and jet seats with force or
electrotactile feedback, a game persuades you of the reality of the on-screen or
VR picture. Using electric signals of different parameters, haptic devices may
reproduce the effects of weapon recoil, steering-wheel resistance, bullet hits, rain
or a sandstorm and many more. Modern console controllers like the Sony
PlayStation 5 DualSense and the Nintendo Switch Joy-Cons utilize haptic
feedback to enrich the gamer's experience.

CHAPTER 6

HARDWARE DESCRIPTION

6.1 RASPBERRY PI

The Raspberry Pi Compute Module (CM1), Compute Module 3 (CM3) and
Compute Module 3 Lite (CM3L) are DDR2-SODIMM-mechanically-compatible
System on Modules (SoMs) containing processor, memory, eMMC Flash (for
CM1 and CM3) and supporting power circuitry. These modules allow a designer
to leverage the Raspberry Pi hardware and software stack in their own custom
systems and form factors. In addition, these modules have extra IO interfaces
over and above what is available on the Raspberry Pi model A/B boards, opening
up more options for the designer.

The CM1 contains a BCM2835 processor (as used on the original Raspberry Pi
and Raspberry Pi B+ models), 512MByte LPDDR2 RAM and 4Gbytes eMMC
Flash. The CM3 contains a BCM2837 processor (as used on the Raspberry Pi 3),
1Gbyte LPDDR2 RAM and 4Gbytes eMMC Flash. Finally the CM3L product is
the same as CM3 except the eMMC Flash is not fitted, and the SD/eMMC
interface pins are available for the user to connect their own SD/eMMC device.

Note that the BCM2837 processor is an evolution of the BCM2835 processor.
The only real differences are that the BCM2837 can address more RAM (up to
1Gbyte) and the ARM CPU complex has been upgraded from a single-core
ARM11 in BCM2835 to a quad-core Cortex-A53 with a dedicated 512Kbyte L2
cache in BCM2837. All IO interfaces and peripherals stay the same and hence
the two chips are largely software and hardware compatible.

The pinout of CM1 and CM3 are identical. Apart from the CPU upgrade and
increase in RAM, the other significant hardware differences to be aware of are
that CM3 has grown from 30mm to 31mm in height, the VBAT supply can now
draw significantly more power under heavy CPU load, and the HDMI HPD N
1V8 (GPIO46 1V8 on CM1) and EMMC EN N 1V8 (GPIO47 1V8 on CM1) are
now driven from an IO expander rather than the processor. If a designer of a CM1
product has a suitably specified VBAT, can accommodate the extra 1mm module
height increase and has followed the design rules with respect to GPIO46 1V8
and GPIO47 1V8, then a CM3 should work fine in a board designed for a CM1.

6.1.1 Features

Hardware

_ Low cost
_ Low power
_ High availability
_ High reliability
– Tested over the millions of Raspberry Pis produced to date
– Module IO pins have 35u hard gold plating

Peripherals

_ 48x GPIO
_ 2x I2C
_ 2x SPI
_ 2x UART
_ 2x SD/SDIO
_ 1x HDMI 1.3a
_ 1x USB2 HOST/OTG

_ 1x DPI (Parallel RGB Display)


_ 1x NAND interface (SMI)

_ 1x 4-lane CSI Camera Interface (up to 1Gbps per lane)
_ 1x 2-lane CSI Camera Interface (up to 1Gbps per lane)
_ 1x 4-lane DSI Display Interface (up to 1Gbps per lane)
_ 1x 2-lane DSI Display Interface (up to 1Gbps per lane)
Software
_ ARMv6 (CM1) or ARMv7 (CM3, CM3L) Instruction Set
_ Mature and stable Linux software stack
– Latest Linux Kernel support
– Many drivers upstreamed
– Stable and well supported userland
– Full availability of GPU functions using standard APIs

6.1.2 Block Diagram

Fig.No:6.1 CM1 Block Diagram

Fig.No:6.2 CM3/CM3L Block Diagram

6.1.3 Booting

The 4GB eMMC Flash device on CM3 is directly connected to the primary
BCM2837 SD/eMMC interface. These connections are not accessible on the
module pins. On CM3L this SD interface is available on the SDX pins.

When initially powered on, or after the RUN pin has been held low and then
released, the BCM2837 will try to access the primary SD/eMMC interface. It will
then look for a file called bootcode.bin on the primary partition (which must be
FAT) to start booting the system. If it cannot access the SD/eMMC device or the
boot code cannot be found, it will fall back to waiting for boot code to be written
to it over USB; in other words, its USB port is in slave mode waiting to accept
boot code from a suitable host.

A USB boot tool is available on GitHub which allows a host PC running Linux to
write the BCM2837 boot code over USB to the module. That boot code then runs
and provides access to the SD/eMMC as a USB mass storage device, which can
then be read and written using the host PC. Note that a Raspberry Pi can be used
as the host machine. For those using Windows, a precompiled and packaged tool
is available.

The Compute Module has a pin called EMMC DISABLE N which when shorted
to GND will disable the SD/eMMC interface (by physically disconnecting the SD
CMD pin), forcing BCM2837 to boot from USB. Note that when the eMMC is
disabled in this way, it takes a couple of seconds from powering up for the
processor to stop attempting to talk to the SD/eMMC device and fall back to
booting from USB.

Note that once booted over USB, BCM2837 needs to re-enable the SD/eMMC
device (by releasing EMMC DISABLE N) to allow access to it as mass storage.
It expects to be able to do this by driving the EMMC EN N 1V8 pin LOW, which
at boot is initially an input with a pull up to 1V8. If an end user wishes to add the
ability to access the SD/eMMC over USB in their product, similar circuitry to
that used on the Compute Module IO Board to enable/disable the USB boot and
SD/eMMC must be used; that is, EMMC DISABLE N pulled low via
MOSFET(s) and released again by MOSFET, with the gate controlled by EMMC
EN N 1V8. Ensure you use MOSFETs suitable for switching at 1.8V (i.e. use a
device with gate threshold voltage, Vt, suitable for 1.8V switching).

6.1.4 Peripherals

GPIO

BCM283x has in total 54 GPIO lines in 3 separate voltage banks. All GPIO pins
have at least two alternative functions within the SoC. When not used for the
alternate peripheral function, each GPIO pin may be set as an input (optionally as
an interrupt) or an output. The alternate functions are usually peripheral I/Os, and
most peripherals appear twice to allow flexibility on the choice of I/O voltage.

On CM1, CM3 and CM3L, bank 2 is used on the module to connect to the eMMC
device and, on CM3 and CM3L, for an on-board I2C bus (to talk to the core
SMPS and control the special function pins). On CM3L most of bank 2 is
exposed to allow a user to connect their choice of SD card or eMMC device (if
required).

Bank0 and 1 GPIOs are available for general use. GPIO0 to GPIO27 are bank 0
and GPIO28-45 make up bank1. GPIO0-27 VDD is the power supply for bank0
and GPIO28-45 VDD is the power supply for bank1. SDX VDD is the supply for
bank2 on CM3L. These supplies can be in the range 1.8V-3.3V (see Table 7) and
are not optional; each bank must be powered, even when none of the GPIOs for
that bank are used.

Note that the HDMI HPD N 1V8 and EMMC EN N 1V8 pins (on CM1 these
were called GPIO46 1V8 and GPIO47 1V8 respectively) are 1.8V IO and are
used for special functions (HDMI hot plug detect and boot control respectively).
Please do not use these pins for any other purpose, as the software for the
Compute Module will always expect these pins to have these special functions.
If they are unused please leave them unconnected.

All GPIOs except GPIO28, 29, 44 and 45 have weak in-pad pull-ups or
pull-downs enabled when the device is powered on. It is recommended to add
off-chip pulls to GPIO28, 29, 44 and 45 to make sure they never float during
power-on and initial boot.
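
Once Linux is running, internal pulls can also be requested from software, as in
the sketch below; note that this does not replace the off-chip resistors
recommended above, which must hold the pins during power-on and boot:

    # Sketch: request internal pulls with RPi.GPIO after boot (pin use is only an
    # example). Off-chip resistors are still needed to cover power-on and boot.
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(28, GPIO.IN, pull_up_down=GPIO.PUD_UP)    # internal pull-up
    GPIO.setup(29, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)  # internal pull-down
    print(GPIO.input(28), GPIO.input(29))
    GPIO.cleanup()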

Secondary Memory Interface (SMI)

The SMI peripheral is an asynchronous NAND-type bus supporting Intel mode
80 type transfers at 8 or 16 bit widths, available in the ALT1 positions on GPIO
banks 0 and 1 (see Table 9 and Table 10). It is not publicly documented in the
Broadcom Peripherals Specification, but a Linux driver is available in the
Raspberry Pi GitHub Linux repository (bcm2835 smi.c in linux/drivers/misc).

Display Parallel Interface (DPI)

A standard parallel RGB (DPI) interface is available on bank 0 GPIOs. This
up-to-24-bit parallel interface can support a secondary display. Again, this
interface is not documented in the Broadcom Peripherals Specification, but
documentation is available on the Raspberry Pi website.

SD/SDIO Interface

The BCM283x supports two SD card interfaces, SD0 and SD1. The first (SD0)
is a proprietary Broadcom controller that does not support SDIO and is the
primary interface used to boot and talk to the eMMC or the SDX signals. The
second interface (SD1) is standards-compliant and can interface to SD, SDIO
and eMMC devices; for example, on a Raspberry Pi 3 it is used to talk to the
on-board BCM43438 WiFi device in SDIO mode. Both interfaces can support
speeds up to 50MHz single-ended (SD High Speed Mode).

6.1.5 CSI (MIPI Serial Camera)

Currently the CSI interface is not openly documented and only CSI camera
sensors supported by the official Raspberry Pi firmware will work with this
interface. Supported sensors are the OmniVision OV5647 and Sony IMX219. It
is recommended to attach other cameras via USB.

6.1.6 DSI (MIPI Serial Display)

Currently the DSI interface is not openly documented and only DSI displays
supported by the official Raspberry Pi firmware will work with this interface.
Displays can also be added via the parallel DPI interface, which is available as a
GPIO alternate function.

6.1.7 USB

The BCM283x USB port is On-The-Go (OTG) capable. If using it as either a
fixed slave or a fixed master, please tie the USB OTGID pin to ground. The USB
port (pins USB DP and USB DM) must be routed as 90 ohm differential PCB
traces. Note that the port is capable of being used as a true OTG port; however,
there is no official documentation, though some users have had success making
this work.

6.1.8 HDMI

BCM283x supports HDMI V1.3a. It is recommended that users follow a similar
arrangement to the Compute Module IO Board circuitry for HDMI output. The
HDMI CK P/N (clock) and D0-D2 P/N (data) pins must each be routed as
matched-length 100 ohm differential PCB traces. It is also important to make
sure that each differential pair is closely phase-matched. Finally, keep HDMI
traces well away from other noise sources and as short as possible.

Composite (TV Out)

The TVDAC pin can be used to output composite video (PAL or NTSC). Please
route this signal away from noise sources and use a 75 ohm PCB trace. Note that
the TV DAC is powered from the VDAC supply, which must be a clean supply
of 2.5-2.8V. It is recommended users generate this supply from 3V3 using a low
noise LDO.

If the TVDAC output is not used VDAC can be connected to 3V3, but it must be
powered even if the TV-out functionality is unused.

6.1.9 Thermals

The BCM283x SoC employs DVFS (Dynamic Voltage and Frequency Scaling)
on the core voltage.

When the processor is idle (low CPU utilization), it will reduce the core frequency
and voltage to reduce current draw and heat output. When the core utilization
exceeds a certain threshold the core voltage is increased and the core frequency
is boosted to the maximum working frequency. The voltage and frequency are
throttled back when the CPU load reduces back to an ’idle’ level OR when the
silicon temperature as measured by the on-chip temperature sensor exceeds 85C
(thermal throttling).
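
The on-chip temperature can be watched against the 85C throttle point with the
standard vcgencmd utility shipped with Raspberry Pi OS, as in this small sketch:

    # Sketch: read the SoC temperature via `vcgencmd measure_temp` and warn as
    # it approaches the 85C thermal-throttling threshold described above.
    import subprocess

    out = subprocess.check_output(["vcgencmd", "measure_temp"], text=True)
    temp_c = float(out.split("=")[1].split("'")[0])   # "temp=48.3'C" -> 48.3
    if temp_c > 80.0:
        print(f"Warning: {temp_c} C, close to the 85 C throttling threshold")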

A designer must pay careful attention to the thermal design of products using the
CM3/CM3L so that performance is not artificially curtailed due to the processor
thermal throttling, as the Quad ARM complex in the BCM2837 can generate
significant heat output.

Temperature Range

The operating temperature range of the module is set by the lowest maximum
and highest minimum of any of the components used. The eMMC and LPDDR2
have the narrowest range; these are rated for -25 to +80 degrees Celsius.
Therefore the nominal range for the CM3 and CM3L is -25C to +80C.

However, this range is the maximum for the silicon die; therefore, users would
have to take into account the heat generated when in use and make sure this does
not cause the temperature to exceed 80 degrees Celsius.

6.1.10 Availability

Raspberry Pi guarantee availability of CM1, CM3 and CM3 Lite until at least
January 2023.

6.1.11 Support

For support please see the hardware documentation section of the Raspberry Pi
website and post questions to the Raspberry Pi forum.

6.2 SPEAKER
Speakers are one of the most common output devices used with computer
systems. Some speakers are designed to work specifically with computers, while
others can be hooked up to any type of sound system. Regardless of their design,
the purpose of speakers is to produce audio output that can be heard by the
listener.

Fig No: 6.3 Amplifier to Speaker


Speakers are transducers that convert electrical signals into sound waves. The
speakers receive audio input from a device such as a computer or an audio
receiver. This input may be in either analog or digital form. Analog speakers
simply amplify the analog electrical waveform into sound waves. Since sound
waves are produced in analog form, digital speakers must first convert the digital
input to an analog signal, then generate the sound waves.

The sound produced by speakers is defined by frequency and amplitude. The
frequency determines how high or low the pitch of the sound is. For example, a
soprano singer's voice produces high frequency sound waves, while a bass guitar
or kick drum generates sounds in the low frequency range. A speaker system's
ability to accurately reproduce sound frequencies is a good indicator of how clear
the audio will be. Many speakers include multiple speaker cones for different
frequency ranges, which helps produce more accurate sounds for each range.
Two-way speakers typically have a tweeter and a mid-range speaker, while three-
way speakers have a tweeter, mid-range speaker, and subwoofer.

Fig No 6.4 Speaker


Amplitude, or loudness, is determined by the change in air pressure created by
the speakers' sound waves. Therefore, when you crank up your speakers, you are
actually increasing the air pressure of the sound waves they produce. Since the
signal produced by some audio sources is not very high (like a computer's sound
card), it may need to be amplified by the speakers. Therefore, most external
computer speakers are amplified, meaning they use electricity to amplify the
signal. Speakers that can amplify the sound input are often called active speakers.
You can usually tell if a speaker is active if it has a volume control or can be
plugged into an electrical outlet. Speakers that don't have any internal
amplification are called passive speakers. Since these speakers don't amplify the
audio signal, they require a high level of audio input, which may be produced by
an audio amplifier.

Speakers typically come in pairs, which allows them to produce stereo sound.
This means the left and right speakers transmit audio on two completely separate
channels. By using two speakers, music sounds much more natural since our ears
are used to hearing sounds from the left and right at the same time. Surround
systems may include four to seven speakers (plus a subwoofer), which creates an
even more realistic experience.

Speaker impedance changes amplifier power output. In fact, your amplifier power
could be nearly half or double its capacity – depending on the impedance of your
speakers.

In reality, amplifiers cannot maintain these theoretical output levels, because the
power supply on most amplifiers cannot maintain the maximum power when
driving lower-impedance speakers.

In a real amplifier, the above principles still hold, but the theoretical values will
not be achieved. The power output will be increased with lower-impedance
speakers, but the maximum power output will not be doubled when the
impedance is halved.
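
A worked example of the theoretical relationship (P = V^2 / R, with an assumed
output voltage) makes the point: halving the impedance doubles the power only
on paper:

    # Worked illustration of P = V^2 / R for the ideal case discussed above.
    # The 20 V RMS output level is an assumed example value.
    voltage_rms = 20.0
    for impedance in (8.0, 4.0):          # typical speaker impedances, in ohms
        power = voltage_rms ** 2 / impedance
        print(f"{impedance:.0f} ohm speaker -> {power:.0f} W theoretical")
    # 8 ohms -> 50 W, 4 ohms -> 100 W: doubled in theory, less in a real amplifier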

6.3 LCD DISPLAY

The term LCD stands for liquid crystal display. It is one kind of electronic display
module used in an extensive range of applications like various circuits & devices
such as mobile phones, calculators, computers, TV sets, etc. These displays are
mainly preferred over multi-segment light-emitting diodes and seven-segment
displays. The main benefits of using this module are that it is inexpensive and
simply programmable, and that there are no limitations on displaying custom
characters, special characters and even animations.

6.3.1 LCD 16×2 Pin Diagram

Fig.No: 6.5 LCD 16×2 Pin Diagram

 Pin1 (Ground/Source Pin): This is a GND pin of display, used to connect the
GND terminal of the microcontroller unit or power source.
 Pin2 (VCC/Source Pin): This is the voltage supply pin of the display, used to
connect the supply pin of the power source.
 Pin3 (V0/VEE/Control Pin): This pin regulates the contrast of the display; it
is connected to a variable POT that can supply 0 to 5V.
 Pin4 (Register Select/Control Pin): This pin toggles between the command
and data registers; it is connected to a microcontroller unit pin and receives
either 0 or 1 (0 = command mode, and 1 = data mode).
 Pin5 (Read/Write/Control Pin): This pin toggles the display among the read or
writes operation, and it is connected to a microcontroller unit pin to get either
0 or 1 (0 = Write Operation, and 1 = Read Operation).

 Pin 6 (Enable/Control Pin): This pin should be held high to execute the
read/write process; it is connected to the microcontroller unit and constantly
held high.
 Pins 7-14 (Data Pins): These pins are used to send data to the display. They
can be used in two modes: 4-bit mode and 8-bit mode. In 4-bit mode, only four
data pins are connected to the microcontroller unit, whereas in 8-bit mode all
eight data pins are connected (a wiring sketch follows this list).
 Pin15 (+ve pin of the LED): This pin is connected to +5V.
 Pin 16 (-ve pin of the LED): This pin is connected to GND.
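
A hedged wiring sketch for driving the display in 4-bit mode is shown below,
using the third-party RPLCD Python library; every pin number here is a
hypothetical example, not taken from this report:

    # Hedged sketch: 16x2 LCD in 4-bit mode via the third-party RPLCD library.
    # All BCM pin numbers are hypothetical examples.
    import RPi.GPIO as GPIO
    from RPLCD.gpio import CharLCD

    lcd = CharLCD(numbering_mode=GPIO.BCM, cols=16, rows=2,
                  pin_rs=25, pin_e=24,           # register-select and enable pins
                  pins_data=[23, 17, 18, 22])    # four data lines (4-bit mode)
    lcd.write_string("Blind Reader")             # first row, up to 16 characters
    lcd.cursor_pos = (1, 0)                      # move to the second row
    lcd.write_string("Ready")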

6.3.2 Features of LCD 16x2


 The operating voltage of this LCD is 4.7V-5.3V
 It includes two rows, each of which can display 16 characters
 Current consumption is 1mA with no backlight
 Every character is built on a 5×8 pixel box
 It displays alphabets & numbers (alphanumeric LCD)
 The display can work in two modes: 4-bit & 8-bit
 It is obtainable with a blue or green backlight
 It can display custom-generated characters

6.4 POWER SUPPLY UNIT

The AC to 12V DC power supply is one of the most used and common circuits,
and an AC-to-DC converter has many applications. The 220V-to-12V DC power
supply is built to convert the AC mains input to a 12-volt DC output. This
AC-to-DC converter project is useful for fixed DC applications like DC motors,
pumps, chargers and many other applications. Here we discuss what a DC power
supply is and the circuit of a power supply with a 12-volt output.

Fig No: 6.6 POWER SUPPLY UNIT

The high-current DC power supply is quite simple to build and test; it is a
beginner-level circuit for basic electronics projects. The circuit can be used in
many useful applications, as it supplies up to 2 A of current. In effect, this is a
12V DC adapter circuit.

The objective is to convert a 220V AC input into a 12V DC output. The 12-volt
fixed DC output is useful for many DC-controlled applications like DC motors,
DC circuits, pumps, battery chargers and many other useful applications.

Fig.No:6.7 Circuit Diagram


The 220V AC to 12V DC power supply is quite simple. The input voltage is 220
volts AC. Connect an AC plug at the input, followed by a switch and a fuse. The
circuit is based upon a transformer, which steps the AC voltage down from 220
to 12 volts. As we know, whenever we convert from AC to DC we need a
rectifier circuit; diodes are used to rectify the output. The output is 12V DC.

 The basic purpose of the 220V AC to 12V DC power supply project is to produce a
12V DC output voltage to run DC applications.

 The fuse is used for the protection of the circuit.

 The circuit input is connected to the 220V AC, 50/60 Hz mains.

 The transformer steps the 220V AC down to 12V AC; its current rating is 2 amperes.

 The diode rectifier then converts this AC into 12V DC; 1N5402 diodes are used to
make the rectifier circuit.

 The capacitor is used to filter (smooth) the rectified output.

 The LED indicates the rectified, filtered output of 12 volts DC.

 Any DC-operated circuitry can now be connected to the 12V DC output.
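As a rough worked example of the design above (assuming ideal components, 50 Hz mains, about 0.7V drop per conducting diode of a bridge rectifier, and a 1V ripple target, all of which are illustrative assumptions):

# Back-of-the-envelope figures for the 12V / 2A supply described above.
import math

v_rms = 12.0                         # transformer secondary voltage (RMS)
v_peak = v_rms * math.sqrt(2)        # about 17.0 V peak after the transformer
v_rect = v_peak - 2 * 0.7            # about 15.6 V after two diode drops
i_load = 2.0                         # rated load current in amperes
# Rule of thumb for a full-wave rectifier: C = I / (2 * f * V_ripple)
c_filter = i_load / (2 * 50 * 1.0)   # 0.02 F = 20,000 uF for 1 V of ripple
print(f"peak {v_peak:.1f} V, rectified {v_rect:.1f} V, "
      f"filter capacitor {c_filter * 1e6:.0f} uF")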

7805

Have you ever wondered how a circuit that takes a 12V input can internally drive
LEDs, microcontrollers and other low-voltage peripherals that are not designed for
such a high voltage? A newcomer's first thought might be to build a voltage divider
circuit and supply the desired voltage that way, but this is not how it is done.

Fig.No:6.8 Pin Diagram

The losses in a voltage divider circuit, and the uncertainty of a load that does not
have a fixed resistance, make it far from the best way to tackle the problem. That
is where a regulator IC such as the ubiquitous 7805 comes in.

6.4.1 Working Explanation

The 7805 works entirely by dissipating the excess energy as heat in order to reduce
the voltage, so it may not be ideal, but it gets the job done in most cases. You feed
12V DC into the input side, and the IC regulates that voltage down to a nominal 5V
output (4.8V to 5.2V as per its datasheet).

Fig.No: 6.9 Volt Converter

The capacitors are there to smooth out voltage spikes caused by sudden changes in
current demand. The voltage difference is lost as heat, so it is always advisable
to attach a heatsink.
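The power the regulator must dissipate is simply (Vin - Vout) * Iload. A quick sketch, assuming a hypothetical 0.5A load:

# Heat dissipated by a linear regulator such as the 7805.
v_in, v_out = 12.0, 5.0
i_load = 0.5                        # assumed load current in amperes
p_diss = (v_in - v_out) * i_load    # 3.5 W, so a heatsink is strongly advised
print(f"{p_diss:.1f} W dissipated in the regulator")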

6.5 USB CAMERA

6.5.1 Characteristics

 Suitable for video conferencing, seminars, online chat and gaming,
video calls and video recording, online training, etc.
 Built-in microphone with noise reduction
 Adjustable clip base for mounting on monitors and notebooks
 Compatible with Windows and Android TV boxes and with all
video conferencing / video calling platforms
 Driver: no driver required, plug-and-play installation
 Photo format: bmp, jpg
 Sensor type: CMOS
 Automatic contrast and brightness adjustment
 I-WEBCAM-60T resolution: 1920x1080p
 I-WEBCAM-70T resolution: 1280x720p
 Interface: USB 2.0
 Focus range: 20 mm - extreme
6.5.2 Details

1280x720p webcam suitable for video conferencing, seminars, online chat and
games, video calling, and video recording, online training etc.

Built-in microphone with noise reduction

Plug and Play

Adjustable clip base for mounting on monitors and notebooks

Compatible with Windows and Android TV box and with all platforms for
videoconferencing / video calling

6.5.3 Features

Driver: no driver required, plug and play installation

Flicker control: 50Hz, 60Hz

Photo format: bmp, jpg

Sensor type: CMOS

Automatic contrast and brightness adjustment

USB cable length: 110 cm

Resolution: 1280x720p up to 25 FPS

Interface type: USB 2.0

Incorporates microphone with noise reduction

Focus range: 20mm - extreme

Supports any video conferencing software

System requirements: computer with Windows Vista, Win7, Win8 or Win10 operating
system

Fig.No: 6.10 USB Camera

CHAPTER 7

SOFTWARE DESCRIPTION

7.1 RASPBERRY PI OS

Raspberry Pi OS is a free operating system based on Debian, optimized for the
Raspberry Pi hardware, and is the recommended operating system for normal use on a
Raspberry Pi. The OS comes with over 35,000 packages: pre-compiled software bundled
in a nice format for easy installation on your Raspberry Pi.

Raspberry Pi OS is under active development, with an emphasis on improving the
stability and performance of as many Debian packages as possible on Raspberry Pi.

Updating and Upgrading Raspberry Pi OS


It’s important to keep your Raspberry Pi up to date. The first and probably the
most important reason is security. A device running Raspberry Pi OS contains
millions of lines of code that you rely on. Over time, these millions of lines of
code will expose well-known vulnerabilities, which are documented in publicly
available databases meaning that they are easy to exploit. The only way to
mitigate these exploits as a user of Raspberry Pi OS is to keep your software up
to date, as the upstream repositories track CVEs closely and try to mitigate them
quickly.

The second reason, related to the first, is that the software you are running on
your device most certainly contains bugs. Some bugs are CVEs, but bugs could
also be affecting the desired functionality without being related to security. By
keeping your software up to date, you are lowering the chances of hitting these
bugs.

Using APT

The easiest way to manage installing, upgrading, and removing software is using
APT (Advanced Packaging Tool) from Debian. To update software in Raspberry Pi OS,
you can use the apt tool from a Terminal window.

APT keeps a list of software sources on your Raspberry Pi in a file at
/etc/apt/sources.list. Before installing software, you should update your package
list with apt update. Open a Terminal window and type:

sudo apt update

Next, upgrade all your installed packages to their latest versions with the
following command:

sudo apt full-upgrade

Note that full-upgrade is used in preference to a simple upgrade, as it also picks
up any dependency changes that may have been made.

Generally speaking, doing this regularly will keep your installation up to date
for the particular major Raspberry Pi OS release you are using (e.g. Buster). It
will not update from one major release to another, for example, Stretch to
Buster or Buster to Bullseye.

However, there are occasional changes made in the Raspberry Pi OS image that
require manual intervention, for example a newly introduced package. These are
not installed with an upgrade, as this command only updates the packages you
already have installed.

If moving an existing SD card to a new Raspberry Pi model (for example the
Raspberry Pi Zero 2 W), you may also need to update the kernel and the firmware
first using the instructions above.

Running Out of Space

When running sudo apt full-upgrade, it will show how much data will be
downloaded and how much space it will take up on the SD card. It’s worth
checking with df -h that you have enough free disk space, as unfortunately apt
will not do this for you. Also be aware that downloaded package files (.deb
files) are kept in /var/cache/apt/archives. You can remove these in order to free
up space with sudo apt clean (sudo apt-get clean in older releases of apt).

The latest version of Raspberry Pi OS is based on Debian Bullseye. The previous
version was based on Buster. If you want to perform an in-place upgrade from Buster
to Bullseye (and you're aware of the risks) see the instructions in the forums.

Searching for Software

You can search the archives for a package with a given keyword with apt-cache
search:

apt-cache search locomotive

sl - Correct you if you type `sl' by mistake

You can view more information about a package before installing it with apt-
cache show:

apt-cache show sl

Package: sl

Version: 3.03-17

Architecture: arch

Maintainer: Hiroyuki Yamamoto <yama1066@gmail.com>

Installed-Size: 114

Depends: libc6 (>= 2.4), libncurses5 (>= 5.5-5~), libtinfo5

Homepage: http://www.tkl.iis.u-tokyo.ac.jp/~toyoda/index_e.html

Priority: optional

Section: games

Filename: pool/main/s/sl/sl_3.03-17_armhf.deb

Size: 26246

SHA256:
42dea9d7c618af8fe9f3c810b3d551102832bf217a5bcdba310f119f62117dfb

SHA1: b08039acccecd721fc3e6faf264fe59e56118e74

MD5sum: 450b21cc998dc9026313f72b4bd9807b

Description: Correct you if you type `sl' by mistake

Sl is a program that can display animations aimed to correct you

if you type 'sl' by mistake.

SL stands for Steam Locomotive.

Installing a Package with APT

sudo apt install tree

Typing this command informs the user how much disk space the package will take up
and asks for confirmation of the package installation. Entering Y (or just pressing
Enter, as yes is the default action) will allow the installation to occur. This can
be bypassed by adding the -y flag to the command:

sudo apt install tree -y

Installing this package makes tree available for the user.

Uninstalling a Package with APT

You can uninstall a package with apt remove:

sudo apt remove tree

The user is prompted to confirm the removal. Again, the -y flag will auto-
confirm.

You can also choose to completely remove the package and its associated
configuration files with apt purge:

sudo apt purge tree

Using rpi-update

rpi-update is a command line application that will update your Raspberry Pi OS
kernel and VideoCore firmware to the latest pre-release versions.

The rpi-update script was originally written by Hexed, but is now supported by
Raspberry Pi engineers. The script source is in the rpi-update repository.

What it does

rpi-update will download the latest pre-release version of the Linux kernel, its
matching modules and device tree files, along with the latest versions of the
VideoCore firmware. It will then install these files to the relevant locations on
the SD card, overwriting any previous versions.

All the source data used by rpi-update comes from the rpi-firmware repository.
This repository simply contains a subset of the data from the official firmware
repository, as not all the data from that repo is required.

Running rpi-update

If you are sure that you need to use rpi-update, it is advisable to take a backup
of your current system first as running rpi-update could result in a non-booting
system.

rpi-update needs to be run as root. Once the update is complete you will need to
reboot.

sudo rpi-update

sudo reboot

It has a number of options documented in the rpi-update repository.

How to get back to safety

If you have done an rpi-update and things are not working as you wish, if your
Raspberry Pi is still bootable you can return to the stable release using:

sudo apt-get update

sudo apt install --reinstall libraspberrypi0 libraspberrypi-{bin,dev,doc}


raspberrypi-bootloader raspberrypi-kernel

You will need to reboot your Raspberry Pi for these changes to take effect.

Playing Audio and Video

To wrap a raw H.264 stream (for example one recorded by the camera) in an MP4
container at 30 fps, ffmpeg can be used:

ffmpeg -r 30 -i video.h264 -c:v copy video.mp4

Options During Playback

There are a number of options available during playback, actioned by pressing
the appropriate key. Not all options will be available on all files. The list of key
bindings can be displayed using omxplayer --keys:

1 decrease speed
2 increase speed
< rewind
> fast forward
z show info
j previous audio stream
k next audio stream
i previous chapter
o next chapter
n previous subtitle stream
m next subtitle stream
s toggle subtitles
w show subtitles
x hide subtitles
d decrease subtitle delay (- 250 ms)
f increase subtitle delay (+ 250 ms)
q exit omxplayer
p / space pause/resume
- decrease volume
+/= increase volume
left arrow seek -30 seconds
right arrow seek +30 seconds
down arrow seek -600 seconds
up arrow seek +600 seconds
Playing in the Background
omxplayer will close immediately if run in the background without tty (user
input), so to run successfully, you need to tell omxplayer not to require any user
input using the --no-keys option.

omxplayer --no-keys example.mp3 &

Adding the & at the end of the command runs the job in the background. You can then
check the status of this background job using the jobs command. By default, the job
will complete when omxplayer finishes playing, but if necessary, you can stop it at
any point using the kill command.

$ jobs

[1]- Running omxplayer --no-keys example.mp3 &

$ kill %1

[1]- Terminated omxplayer --no-keys example.mp3 &

Using a USB webcam


Rather than using the Raspberry Pi camera module, you can use a standard USB
webcam to take pictures and video on your Raspberry Pi.

NOTE

The quality and configurability of the camera module are far superior to those of a
standard USB webcam.

First, install the fswebcam package:

sudo apt install fswebcam

If you are not using the default pi user account, you need to add your username
to the video group, otherwise you will see 'permission denied' errors.

sudo usermod -a -G video <username>

To check that the user has been added to the group correctly, use the groups
command.

Basic Usage

Enter the command fswebcam followed by a filename, and a picture will be taken
using the webcam and saved to the filename specified:

fswebcam image.jpg

This command will show the following information:

--- Opening /dev/video0...

Trying source module v4l2...

/dev/video0 opened.

No input was specified, using the first.

Adjusting resolution from 384x288 to 352x288.

--- Capturing frame...

Corrupt JPEG data: 2 extraneous bytes before marker 0xd4

Captured frame in 0.00 seconds.

--- Processing captured image...

Writing JPEG image to 'image.jpg'.

The webcam used in this example has a resolution of 1280 x 720, so to specify the
resolution you want the image to be taken at, use the -r flag:

fswebcam -r 1280x720 image2.jpg

This command will show the following information:

--- Opening /dev/video0...

Trying source module v4l2...

/dev/video0 opened.

No input was specified, using the first.

--- Capturing frame...

Corrupt JPEG data: 1 extraneous bytes before marker 0xd5

Captured frame in 0.00 seconds.

--- Processing captured image...

Writing JPEG image to 'image2.jpg'.

The picture is now taken at the full resolution of the webcam, with the banner present.

Removing the Banner

Now add the --no-banner flag:

fswebcam -r 1280x720 --no-banner image3.jpg

which shows the following information:

--- Opening /dev/video0...

Trying source module v4l2...

/dev/video0 opened.

No input was specified, using the first.

--- Capturing frame...

Corrupt JPEG data: 2 extraneous bytes before marker 0xd6

Captured frame in 0.00 seconds.

--- Processing captured image...

Disabling banner.

Writing JPEG image to 'image3.jpg'.
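Since this project processes frames with OpenCV rather than fswebcam, a roughly equivalent capture can be sketched in Python with the cv2 module; the device index 0 and the output filename are assumptions:

# Capture a single 1280x720 frame from the USB webcam with OpenCV,
# analogous to `fswebcam -r 1280x720 --no-banner image3.jpg`.
import cv2

cap = cv2.VideoCapture(0)                  # /dev/video0
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ok, frame = cap.read()                     # grab one frame
if ok:
    cv2.imwrite("image3.jpg", frame)       # save it as a JPEG
cap.release()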

display_power [0 | 1 | -1] [display]

Show the current display power state, or set the display power state. vcgencmd
display_power 0 will turn off power to the current display. vcgencmd display_power 1
will turn on power to the display. If no parameter is set, this will display the
current power state. The final parameter is an optional display ID, as returned by
tvservice -l or from the table below, which allows a specific display to be turned
on or off.

Note that for the 7" Raspberry Pi Touch Display this simply turns the backlight
on and off. The touch functionality continues to operate as normal.

vcgencmd display_power 0 7 will turn off power to display ID 7, which is HDMI 1 on
a Raspberry Pi 4.

Display ID values:
Main LCD: 0
Secondary LCD: 1
HDMI 0: 2
Composite: 3
HDMI 1: 7
To determine if a specific display ID is on or off, use -1 as the first parameter.

vcgencmd display_power -1 7 will return 0 if display ID 7 is off, 1 if display ID 7
is on, or -1 if display ID 7 is in an unknown state, for example undetected.

vcdbg

vcdbg is an application to help with debugging the VideoCore GPU from Linux running
on the ARM. It needs to be run as root. This application is mostly of use to
Raspberry Pi engineers, although there are some commands that general users may
find useful.

Python vs ipython

You can look back on the history of the commands you've entered in the REPL by
using the Up/Down keys, just as in python. The history also persists to the next
session, so you can exit ipython and return (or switch between v2/3) and the
history remains. Use Ctrl + D to exit.

Installing Python Libraries

apt

Some Python packages can be found in the Raspberry Pi OS archives, and can
be installed using apt, for example:

sudo apt update

sudo apt install python-picamera

This is the preferable method of installing software, as it means that the modules
you install can be kept up to date easily with the usual sudo apt update and sudo
apt full-upgrade commands.

pip

Not all Python packages are available in the Raspberry Pi OS archives, and
those that are can sometimes be out of date. If you can’t find a suitable version

in the Raspberry Pi OS archives, you can install packages from the Python
Package Index (known as PyPI).

To do so, install pip:

sudo apt install python3-pip

Then install Python packages (e.g. simplejson) with pip3:

sudo pip3 install simplejson

piwheels

The official Python Package Index (PyPI) hosts files uploaded by package
maintainers. Some packages require compilation (compiling C/C++ or similar
code) in order to install them, which can be a time-consuming task, particularly
on the single-core Raspberry Pi 1 or Raspberry Pi Zero.

piwheels is a service providing pre-compiled packages (called Python wheels) ready
for use on the Raspberry Pi. Raspberry Pi OS is pre-configured to use piwheels for
pip. Read more about the piwheels project at www.piwheels.org.

GPIO and the 40-pin Header


A powerful feature of the Raspberry Pi is the row of GPIO (general-purpose
input/output) pins along the top edge of the board. A 40-pin GPIO header is found
on all current Raspberry Pi boards (unpopulated on the Raspberry Pi Zero, Raspberry
Pi Zero W and Raspberry Pi Zero 2 W). Prior to the Raspberry Pi 1 Model B+ (2014),
boards comprised a shorter 26-pin header. The GPIO header on all boards (including
the Raspberry Pi 400) has a 0.1" (2.54mm) pin pitch.

GPIO pins

Any of the GPIO pins can be designated (in software) as an input or output pin
and used for a wide range of purposes.

GPIO layout

NOTE

The numbering of the GPIO pins is not in numerical order; GPIO pins 0 and 1
are present on the board (physical pins 27 and 28) but are reserved for advanced
use (see below).

Voltages

Two 5V pins and two 3.3V pins are present on the board, as well as a number of
ground pins (0V), which are unconfigurable. The remaining pins are all general
purpose 3.3V pins, meaning outputs are set to 3.3V and inputs are 3.3V-tolerant.

Outputs

A GPIO pin designated as an output pin can be set to high (3.3V) or low (0V).

Inputs

A GPIO pin designated as an input pin can be read as high (3.3V) or low (0V).
This is made easier with the use of internal pull-up or pull-down resistors. Pins
GPIO2 and GPIO3 have fixed pull-up resistors, but for other pins this can be
configured in software.

More

As well as simple input and output devices, the GPIO pins can be used with a
variety of alternative functions, some are available on all pins, others on specific
pins.

PWM (pulse-width modulation)

Software PWM available on all pins

Hardware PWM available on GPIO12, GPIO13, GPIO18, GPIO19

SPI

SPI0: MOSI (GPIO10); MISO (GPIO9); SCLK (GPIO11); CE0 (GPIO8), CE1
(GPIO7)

SPI1: MOSI (GPIO20); MISO (GPIO19); SCLK (GPIO21); CE0 (GPIO18); CE1 (GPIO17);
CE2 (GPIO16)

I2C
Data: (GPIO2); Clock (GPIO3)

EEPROM Data: (GPIO0); EEPROM Clock (GPIO1)

Serial

TX (GPIO14); RX (GPIO15)
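As an illustration of the serial pins above, a minimal sketch using the third-party pyserial package (assuming the UART has been enabled in raspi-config and a device is wired to GPIO14/GPIO15):

# Send and receive bytes over the Pi's UART (TX = GPIO14, RX = GPIO15).
import serial

ser = serial.Serial("/dev/serial0", baudrate=9600, timeout=1)
ser.write(b"hello\n")     # transmit on GPIO14 (TX)
reply = ser.read(64)      # read up to 64 bytes from GPIO15 (RX)
print(reply)
ser.close()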

GPIO pinout

A handy reference can be accessed on the Raspberry Pi by opening a terminal window
and running the command pinout. This tool is provided by the GPIO Zero Python
library, which is installed by default in Raspberry Pi OS.

gpiozero pinout

For more details on the advanced capabilities of the GPIO pins see gadgetoid’s
interactive pinout diagram.

Permissions

In order to use the GPIO ports your user must be a member of the gpio group.
The pi user is a member by default, other users need to be added manually.

sudo usermod -a -G gpio <username>

GPIO in Python

Using the GPIO Zero library makes it easy to get started with controlling GPIO
devices with Python. The library is comprehensively documented at
gpiozero.readthedocs.io.

LED

To control an LED connected to GPIO17, you can use this code:

from gpiozero import LED
from time import sleep

led = LED(17)

while True:
    led.on()
    sleep(1)
    led.off()
    sleep(1)

Run this in an IDE like Thonny, and the LED will blink on and off repeatedly.

LED methods include on(), off(), toggle(), and blink().

Button

To read the state of a button connected to GPIO2, you can use this code:

from gpiozero import Button
from time import sleep

button = Button(2)

while True:
    if button.is_pressed:
        print("Pressed")
    else:
        print("Released")
    sleep(1)

Button functionality includes the properties is_pressed and is_held; the callbacks
when_pressed, when_released, and when_held; and the methods wait_for_press() and
wait_for_release().

Button + LED

To connect the LED and button together, you can use this code:

from gpiozero import LED, Button

led = LED(17)
button = Button(2)

while True:
    if button.is_pressed:
        led.on()
    else:
        led.off()

Alternatively:

from gpiozero import LED, Button

led = LED(17)
button = Button(2)

while True:
    button.wait_for_press()
    led.on()
    button.wait_for_release()
    led.off()

or, using callbacks:

from gpiozero import LED, Button
from signal import pause

led = LED(17)
button = Button(2)

button.when_pressed = led.on
button.when_released = led.off

pause()  # keep the script alive so the callbacks can fire

7.2 EMBEDDED C
Embedded C is a set of language extensions for the C programming language by
the C Standards Committee to address commonality issues that exist between C
extensions for different embedded systems.

Fig No: 7.1 EMBEDDED C
Embedded C programming typically requires nonstandard extensions to the C
language in order to support enhanced microprocessor features such as fixed-
point arithmetic, multiple distinct memory banks, and basic I/O operations. The
C Standards Committee produced a Technical Report, most recently revised in
2008 and reviewed in 2013 providing a common standard for all implementations
to adhere to. It includes a number of features not available in normal C, such as
fixed-point arithmetic, named address spaces and basic I/O hardware addressing.
Embedded C uses most of the syntax and semantics of standard C, e.g., main()
function, variable definition, datatype declaration, conditional statements (if,
switch case), loops (while, for), functions, arrays and strings, structures and
union, bit operations, macros, etc.
Before starting with the Arduino UNO (or any other Arduino board) and experimenting
with hardware projects on various sensors, actuators, and modules, it is important
to get through the basics of Arduino sketches and of embedded C for
Arduino-compatible coding.
The term “Arduino-compatible coding” refers to all Arduino and Arduino-
compatible microcontroller boards that can be programmed and uploaded using
Arduino IDE.
Arduino boards are programmed in "C". C is a popular system programming language
that has minimal execution time on hardware in comparison to other high-level
programming languages. That is the reason most operating systems and several
programming languages are built on C.
Much like other microcontrollers, the AVR microcontrollers housed in Arduino
boards are programmed in a subset of C. A general term for such subsets is
“Embedded C” because they apply to programming embedded controllers. The
language in which Arduino is programmed is a subset of C and it includes only
those features of standard C that are supported by the Arduino IDE.

Being a subset of C does not mean that Arduino C lags behind. Most of the missing
features of standard C can easily be worked around. Moreover, Arduino C is
effectively a hybrid of C and C++, meaning it is both functional and object-oriented.
The structure of sketches: essentially, a blank Arduino sketch has two functions:
1. setup()
2. loop()
As the Arduino sketch starts executing, the setup() function is called first. It’s
executed only once and must be used to initialize variables, set pinModes, make
settings for hardware components, use libraries, etc.
The loop() function is next to the setup() function and it is iterated infinitely. Any
other user-defined functions must be called inside the loop function. This is how
microcontrollers execute their firmware code by repeating their code for an
infinite number of times while they remain powered on.
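As a minimal skeleton, a blank sketch therefore looks like this:

// A blank Arduino sketch: setup() runs once, loop() repeats forever.
void setup() {
  // one-time initialization: variables, pinModes, hardware settings, libraries
}

void loop() {
  // main code, iterated for as long as the board remains powered
}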
Users who have programmed other microcontrollers (such as the 8051, AVR, PIC, or
RX) can compare the code inside the setup() function with the code written before
the main loop of an embedded C program, which initializes variables and makes
hardware settings. The setup() and loop() functions have void return types.
A program for a microcontroller must be structured in the same manner as it
functions. A microcontroller must be “aware” of its hardware environment and
know how to interact with it.

A microcontroller can interact with other hardware components or devices only
through these five ways:
1. Digital Input. It may receive a digital LOW or HIGH from other devices. These
will be TTL logic levels, or voltages converted to TTL logic levels before being
applied to the GPIO.
2. Digital Output. It may output a digital LOW or HIGH to other devices. Again, the
output will be at TTL logic levels.
3. Analog Input. It may “sense” analog voltage from other devices. The sensed
voltage is converted to a digital value using a built-in, analog-to-digital converter.
4. Analog Output. It may output analog voltage to other devices. This analog
output is not analog voltage but a PWM signal that approximates analog voltage
levels.
5. Serial Communication. It may transmit, receive, or transceive data with other
devices serially, according to a standard serial data protocol such as UART,
USART, I2C, SPI, Microwire, 1-Wire, or CAN. The serial communication with other
devices can be peer-to-peer (UART/USART), half-duplex (I2C), or full-duplex (SPI).
Users who know how to perform these five types of microcontroller interaction can
interface any hardware with a microcontroller; a small combined example follows below.
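An illustrative sketch touching all five interaction types (the pin numbers are arbitrary examples, not this project's wiring):

// Digital in/out, analog in, PWM out, and serial in one sketch.
const int BUTTON = 2, LED = 13, PWM_PIN = 9, SENSOR = A0;

void setup() {
  pinMode(BUTTON, INPUT);            // digital input
  pinMode(LED, OUTPUT);              // digital output
  pinMode(PWM_PIN, OUTPUT);          // "analog" (PWM) output
  Serial.begin(9600);                // serial communication (UART)
}

void loop() {
  int pressed = digitalRead(BUTTON); // 1. digital input
  digitalWrite(LED, pressed);        // 2. digital output
  int level = analogRead(SENSOR);    // 3. analog input (0-1023)
  analogWrite(PWM_PIN, level / 4);   // 4. PWM output (0-255)
  Serial.println(level);             // 5. serial transmit
}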

An Arduino program or any microcontroller program must first have code for
initialization. This may include:
 Defining variables and constants
 Setting up pinModes
 Setting up ADC/PWM channels
 Initializing settings for serial communications
A microcontroller simply intercepts incoming data, processes it according to
programmed instructions, and outputs data through its I/O peripherals. This

means the program must be organized in specific sections that can handle input
data, process data, and control output.
Unlike desktop applications, microcontroller programs are not designed to
terminate. These programs keep iterating an infinite number of times until the
system is shut down or a failure occurs. After a power shutdown, the Arduino (or
any microcontroller) resets on power resume and begins executing its program from
the beginning.

The program includes code to handle failures when possible. So, any Arduino
program can be visualized as a four-step program as follows:
1. Initialization
2. Input – this should include code for data validation and to handle incorrect
or unexpected incoming data.
3. Processing – this should include code for unexpected failures or exceptions
raised while data processing
4. Output – this may include code for verification of expected results if the
interfaced device can also communicate back to the microcontroller

7.2.1 Comments
The comments in Arduino C are similar to the comments in standard C.
Single-line comments start with a pair of slashes (//) and finish at the end of the
line (EOL). Multi-line comments start with a slash-asterisk pair (/*) and end with
an asterisk-slash pair (*/).
These are examples of single- and multi-line comments:
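// This is a single-line comment; it ends at the end of the line.
int count = 0;   // a comment may also follow a statement

/* This is a multi-line comment:
   everything between the slash-asterisk and
   asterisk-slash pairs is ignored by the compiler. */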
Arduino C supports the standard C data types. It's worth noting that "string" and
"String objects" are different: the string data type defines a simple character
array, while the String data type defines a string object. Arduino C also provides
built-in functions for the manipulation of String objects.

7.2.2 Identifiers
Identifiers are names given to variables, functions, constants, classes, methods,
and other objects in a program. In Arduino C, identifiers may contain only
alphanumeric characters and underscores (_), and an identifier can only start with
an underscore or a letter.

7.2.3 Keywords
Keywords are reserved words of the language, such as the names of built-in
constants, types, and statements, that cannot be used as identifiers.

7.2.4 Variables
Variables are references in a program whose values can change during the execution
of the program. Variable names must be valid identifiers. In Arduino C, each
variable must be explicitly defined with a specified data type before it's used in
the code.
 If, in a code statement, a variable has been instantiated by a data type but
there’s no value assigned to it, the variable is said to be defined but not
declared.
 If it’s also assigned a value in the same statement or another statement, it’s
said to be declared.
The memory location where the value of a variable is stored at runtime is called
its “lvalue” or location value. The value stored in the memory location of the
variable is called its “rvalue” or register value.
A defined variable has an lvalue but no rvalue. A declared variable has both an
lvalue and an rvalue.

This is a valid definition of a variable:

int num1;
This is a valid declaration of a variable:
int num1 = 0;
Or…
int num1;
num1 = 0;

7.2.5 Constants
Constants are references in a program with a value that does not change during the
execution of the program. Integer and floating-point constants can be declared in
Arduino C using the const keyword or the #define directive.
This is an example of a valid declaration of an integer constant:
const int RXPIN = 0;
Some built-in constants are HIGH, LOW, INPUT, OUTPUT, INPUT_PULLUP,
LED_BUILTIN, true, and false. The #define directive allows for declaring
constants before the compilation of the program.
This is a valid declaration of a constant using #define directive:
#define LED_PIN 3

7.2.6 Operators
These operators are available in Arduino C:
1. Arithmetic – addition (+), multiplication (*), subtraction (-), division (/), and
modular division (%)
2. Assignment (=)
3. Comparison – equal to (==), not equal to (!=), less than (<), greater than (>),
less than or equal to (<=), and greater than or equal to (>=)
4. Bitwise – bitwise and (&), bitwise or (|), bitwise xor (^), bitwise not (~), left
bitshift (<<), and right bitshift (>>)

5. Boolean – and (&&), or (||) and not (!)
6. Compound – increment (++), decrement (--), compound addition (+=), compound
subtraction (-=), compound multiplication (*=), compound division (/=), compound
bitwise and (&=), and compound bitwise or (|=)
7. Cast – These operators translate current type of a variable to another type. Type
casting can be applied to a variable by indicating the new data type in parenthesis,
before the appearance of the variable.
For example:
i = (int) f
8. sizeof – The sizeof operator returns the size of a variable or an array in bytes.
9. Ternary (?:)
10. Pointer – dereference operator (*) and reference operator (&)
Statements and statement blocks
A statement is a complete C instruction for the processor. All C statements end
with a semicolon (;). A block of statements is a group of statements enclosed
within braces ({, }); such a block is also viewed as a single statement by the
compiler.
Operator precedence
Operator precedence in Arduino C follows standard C: for example, multiplication
and division bind more tightly than addition and subtraction, and parentheses can
always be used to make the intended order of evaluation explicit.

7.2.7 Control Structure


Arduino C supports these control structures:
 if
 if …else…
 for
 switch case
 while
 do… while…

 break
 continue
 goto
 return
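For instance, a for loop combined with an if...else branch (to be placed inside loop() or another function):

// Print whether each of the numbers 0..9 is even or odd.
for (int i = 0; i < 10; i++) {
  if (i % 2 == 0) {
    Serial.println("even");
  } else {
    Serial.println("odd");
  }
}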
User-defined functions
Functions are callable blocks of statements. Programmers can also write their
own functions. Functions are an ideal way to organize code according to the
functionality of statement blocks in the program.
A function definition has this syntax:

type function_name(arguments) {
    function_body
}
The type of a function can be any data type including void. The function is
expected to return a value of the same type via a return statement. This statement
should be the last one in a function body (any statements made after a return
statement will fail to execute).
The function exits after the return statement. If the type of a function is void, it
should not return any value. The function name can be any identifier, and may or
may not need arguments. The arguments are variables that are bound to the
function.
The body of a function is a block of statements. Whenever the function is called,
this block of statements is executed.
This is a valid example of user-defined C function:
int add_inputs(int a, int b, int c) {
    return a + b + c;
}
A function is called by its name followed by parentheses. Any positional arguments
must be passed within the parentheses.

This is a valid example of calling a function:
add_inputs(5, 2, 14)
Built-in functions
Arduino supports several built-in functions that make programming Arduino boards
much easier; commonly used ones include pinMode(), digitalRead(), digitalWrite(),
analogRead(), analogWrite(), delay(), and millis().

7.2.8 Variable Scope
The scope of a variable refers to the visibility and lifetime of a variable in the
program. For example, variables that are:
 Only visible inside a function have a local scope.
 Visible to all functions of the program have a global scope.
A variable having global scope must be defined or declared outside of any function,
including the setup() and loop() functions. If a local variable is defined as
static (using the static keyword), it remains visible to only one function;
however, it is not destroyed and persists beyond the function call, preserving its
data between function calls.
If a variable (local or global) is defined as volatile, it is stored in RAM instead
of storage registers. A variable must be defined as volatile if it is likely to be
changed beyond the control of the code, such as in the case of an interrupt service
routine.
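A short sketch of both qualifiers, assuming a hypothetical pulse counter updated from an interrupt service routine:

// 'volatile': the value may change outside normal program flow (here, an ISR).
volatile int pulses = 0;

void countPulse() {        // ISR, attached elsewhere with attachInterrupt()
  pulses++;
}

int readAndAverage() {
  static long total = 0;   // 'static': persists across calls, visible only here
  static int calls = 0;
  total += pulses;
  calls++;
  return total / calls;
}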

CHAPTER 8

RESULT

Fig.No: 8.1 Photograph of Proposed System

CHAPTER 9

CONCLUSION

Hence, the purpose of helping the visually impaired is successfully achieved by
this system. The device helps blind and visually disabled people not only while
reading but also while shopping, by helping them read the labels on products, and
during indoor navigation at home and at college, where directions are labelled on
boards in different locations. The overall cost of the system is low, and it is
reliable and accurate, so this approach saves its users a great deal of money. The
device is also very convenient for blind people to carry and use. Several other
applications, such as image captioning based on deep learning, could be implemented
to improve the system further, but that process consumes a lot of power and demands
high processing speed, which restricts the use of a Raspberry Pi for this purpose.

CHAPTER 10

FUTURE ENHANCEMENT

Electronic components are being upgraded day by day, so in the future an 8GB-RAM
Raspberry Pi board and a 4K HD camera can be used. The capturing distance can also
be improved so that text at longer distances is easily converted to voice. As these
technologies improve, the performance of the system will increase and its accuracy
will improve as well.
