
AI VIRTUAL PAINTER USING PYTHON

A MINI PROJECT REPORT

Submitted to

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY, HYDERABAD


In partial fulfillment of the requirement for the award of the degree
BACHELOR OF TECHNOLOGY in COMPUTER SCIENCE AND ENGINEERING

SUBMITTED BY

MYANA PAVANI 19641A0570


MOHAMMED SOHAIL ZUBAIR 19641A0561
MUTHINENI DHEERAJ 19641A05B6

Under the Guidance of

Dr. M. SUKESH

(Assistant Professor)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

VAAGDEVI COLLEGE OF ENGINEERING


(UGC Autonomous, Accredited by NBA, Accredited by NAAC with “A”)
Bollikunta, Khila Warangal (Mandal), Warangal Urban-506 005 (T.S)

VAAGDEVI COLLEGE OF ENGINEERING
(UGC Autonomous, Accredited by NBA, Accredited by NAAC with “A”)
Bollikunta, Khila Warangal (Mandal), Warangal Urban-506 005 (T.S)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that the mini project entitled “AI VIRTUAL PAINTER USING PYTHON”
is submitted by Myana Pavani (19641A0570), Mohammed Sohail Zubair (19641A0561), and
Muthineni Dheeraj (19641A05B6) in partial fulfillment of the requirements for the award
of the degree of Bachelor of Technology in Computer Science and Engineering during
the academic year 2022-2023.

Guide Head of the Department

External Examiner
ACKNOWLEDGEMENT

We wish to take this opportunity to express our sincere gratitude and deep sense of respect to
our beloved Dr. K. Prakash, Principal, Vaagdevi College of Engineering, for making available
all the required assistance and for his support and inspiration in carrying out this mini project in the institute.

We extend our heartfelt thanks to Dr. N. Satyavathi, Head of the Department of CSE, Vaagdevi
College of Engineering, for providing us the necessary infrastructure and thereby giving us the freedom
to carry out the mini project.

We express heartfelt thanks to our guide, Dr. M. Sukesh, Assistant Professor, for his constant
support and for giving the necessary guidance for the completion of this project.

Finally, we express our sincere thanks and gratitude to our family members and friends for their
encouragement and for sharing their knowledge and experience throughout this work.
ABSTRACT

Wouldn’t it be cool if you could just wave a pen in the air to draw something virtually, and it actually
draws it on the screen? It is even more interesting if we don’t use any special hardware to achieve
this: plain, simple computer vision will do; in fact, we don’t even need machine learning or deep
learning. In this project, we create a virtual painter using AI. We first track our hand and get its
landmarks, and then use those points to draw on the screen. We use two fingers for selection and one
finger for drawing. The best part is that all of this is done in real time. This can be used for quick
explanations during office meetings or during online education classes.

Existing system: we use touch-screen pens (styluses) for drawing. They are more expensive, pairing is
required for full functionality, battery power is required, and they work only with particular apps.

Proposed system: with the AI virtual painter, we don’t need any special hardware; your fingers are
enough, no charging is required, and it is compatible with any system.

Keywords: Python, MediaPipe, computer vision, NumPy, hand gesture recognition
TABLE OF CONTENTS

S.NO TITLE

1 INTRODUCTION
  1.1 PURPOSE, AIM AND OBJECTIVES
  1.2 BACKGROUND OF PROJECT
  1.3 SCOPE OF PROJECT
  1.4 MODULES DESCRIPTION

2 SYSTEM ANALYSIS
  2.1 HARDWARE AND SOFTWARE REQUIREMENTS
  2.2 SOFTWARE REQUIREMENTS SPECIFICATION
  2.3 EXISTING SYSTEM
  2.4 PROPOSED SYSTEM

3 TECHNOLOGIES USED
  3.1 PYTHON
  3.2 NUMPY
  3.3 MEDIAPIPE
  3.4 COMPUTER VISION

4 SYSTEM DESIGN & UML DIAGRAMS
  4.1 SOFTWARE DESIGN
  4.2 ARCHITECTURE
  4.3 UNIFIED MODELING LANGUAGE

5 INPUT/OUTPUT DESIGN
  5.1 INPUT DESIGN
  5.2 OUTPUT DESIGN

6 IMPLEMENTATION

7 TESTING
  7.1 TESTING OBJECTIVES
  7.2 TESTING METHODOLOGIES
  7.3 USER TRAINING
  7.4 MAINTENANCE
  7.5 TESTING STRATEGY
  7.6 TEST CASES

8 OUTPUT SCREENS

9 CONCLUSION & FUTURE SCOPE

10 BIBLIOGRAPHY
LIST OF FIGURES

FIGURE NO.  NAME

3.3.1  MEDIAPIPE TOOLKIT
4.2.1  UML DIAGRAM
5.1.1  SELECTION BOX
5.2.1  HAND DETECTION
8.1    USING ERASER
8.2    USING DRAW OPTION
8.4.1  DRAWING A CIRCLE
8.4.2  DRAWING A STRAIGHT LINE IN CIRCLE
8.5    USING TWO FINGERTIPS TO ERASE
8.6    DRAWING A LINE



[1] INTRODUCTION

1.1 PURPOSE, AIM AND OBJECTIVES

The purpose is to create an interesting platform for writing, drawing, and so on.
We can also minimize the usage of hardware components; it saves our time, and
it is a fun activity for kids. Social development is driven not by people's will but
by the needs of human life. The same is true in art. In the present circumstances,
digital art and traditional art exist in a symbiotic state, so we need to systematically
understand the basic relationship between digital art and traditional art. The traditional
way includes the pen-and-paper and chalk-and-board methods of writing. The essential aim
of digital art is to build a hand gesture recognition system to write digitally.
Here we aim to create a virtual painter that is free of external hardware and easy to use.
It can also help people with speech impairments express things more easily.
We just need to use our hand to draw or write: two fingers to select a color and one
finger to draw. Multiple colors are available to select as required, and there is an
eraser in case you go wrong.

1.2 BACKGROUND OF PROJECT

The background of the project is fingertip detection. The system works only with
your fingers; there are no highlighters, paints, or markers. Identifying and
characterizing an object such as a finger from an RGB image without a depth
sensor is a great challenge.

Lack of pen-up and pen-down motion: the system uses a single RGB camera to write
from above. Since depth sensing is not possible, pen-up and pen-down movements
cannot be followed. Therefore, the fingertip's entire trajectory would be traced,
and the resulting image would look absurd and not be recognized by the model.

Controlling the real-time system: using real-time hand gestures to change the
system from one state to another requires a lot of care in the code. Also, the user
must know many gestures to control the system adequately.

1.3 SCOPE OF THE PROJECT

Digital art includes many ways of writing, such as using a keyboard, a touch-screen
surface, a digital pen, a stylus, or electronic hand gloves. In this system, however,
we use hand gesture recognition with a machine learning algorithm, implemented in
Python, which creates natural interaction between man and machine. With the
advancement in technology, the need to develop natural 'human-computer interaction
(HCI)' [10] systems to replace traditional systems is increasing rapidly.

1.4 MODULES DESCRIPTION

1. Import necessary packages

import mediapipe as mp
import cv2
import numpy as np
import time
import pandas
2. Read the frame

We create a video capture object for the webcam. cap.read() reads each frame
from the camera. cv2.flip() flips the frame; the arguments are the input
image/frame and the axis: 0 flips vertically, and 1 flips horizontally.
cv2.imshow() shows the frame in a new window, and cv2.waitKey(1) == ord('q')
keeps the window open until 'q' is pressed.

3. Create the canvas window

Then we create the canvas window. np.zeros() creates a matrix containing all 0's.
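For example, a canvas matching a 640x480 webcam frame can be created like this (a
minimal sketch; the exact size depends on your camera):

```python
import numpy as np

# An all-zero (black) 3-channel BGR canvas the size of a 640x480 frame.
canvas = np.zeros((480, 640, 3), dtype=np.uint8)
```

OpenCV drawing calls such as cv2.line() then write strokes into this array in place.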

4. Detect the fingertip

Here we use the fingertip to draw on our canvas. OpenCV reads frames in the BGR
color space, but to detect colors we have to convert the frame to the HSV color
space.

HUE – The hue encodes color information in an angular dimension.

SATURATION – Saturation encodes the intensity of the color.

VALUE – Value represents the brightness of the color.

5. Drawing on the screen

We have 5 modes in the code: rectangle, circle, draw, straight line, and eraser.
[2] SYSTEM ANALYSIS

2.1 HARDWARE AND SOFTWARE REQUIREMENTS


Software Requirements:
Python – 3.x (we used 3.8.8 for this project)
OpenCV – 4.4
Run "pip install opencv-python opencv-contrib-python" to install the packages.
NumPy – 1.20.1

Hardware Requirements:
The hardware requirements specify each interface between the software elements and
the hardware elements of the system, including configuration characteristics.
System: Pentium IV, 2.4 GHz
Hard disk: 100 GB
Monitor: 15" VGA color
Mouse: Logitech
RAM: 1 GB
2.2 SOFTWARE REQUIREMENT SPECIFICATIONS:
Hand detection is Python software designed to detect hand movements and patterns;
in this project, the detected fingertip positions are used to select tools and to
draw on the screen. OpenCV is used to capture and process the webcam frames, and
NumPy is used to build the canvas and mask arrays.
2.3 EXISTING SYSTEM

We use touch-screen pens (styluses) for drawing. They are more expensive, pairing
is required for full functionality, battery power is required, and they work only
with particular apps. There are also applications that identify a particular color,
such as a blue cap or any other object, and draw accordingly.

2.4 PROPOSED SYSTEM

With the AI virtual painter, we don't need any special hardware: your fingers are
enough, no charging is required, and it is compatible with any system.

To the user, the application presents an interface that is essentially a workspace,
a whiteboard that comes with different tools such as pen tools and an eraser, with
no need for additional devices. This is a simple and easy way to write and draw with
finger gestures. The application is developed in Python with the help of OpenCV, so
all of OpenCV's features, such as platform independence, apply here as well.
[3] TECHNOLOGIES USED

3.1 PYTHON

What is Python? Chances are you are asking yourself this. You may have
found this report because you want to learn to program but don't know anything
about programming languages. Or you may have heard of programming
languages like C, C++, C#, or Java and want to know what Python is and how
it compares to those "big name" languages.

Python concepts
If you are not interested in the hows and whys of Python, feel free to skip to
the next chapter. In this chapter we try to explain why Python is one of the best
languages available and why it is a great one to start programming with.

• Open source general-purpose language.

• Object Oriented, Procedural, Functional

• Easy to interface with C/ObjC/Java/Fortran

• Easy-ish to interface with C++ (via SWIG)

• Great interactive environment

Python is a high-level, interpreted, interactive, and object-oriented scripting
language. Python is designed to be highly readable. It uses English keywords
frequently, whereas other languages use punctuation, and it has fewer syntactical
constructions than other languages.

• Python is Interpreted − Python is processed at runtime by the interpreter. You
do not need to compile your program before executing it. This is similar to Perl
and PHP.

• Python is Interactive − You can sit at a Python prompt and interact with the
interpreter directly to write your programs.

• Python is Object-Oriented − Python supports an object-oriented style or
technique of programming that encapsulates code within objects.

• Python is a Beginner's Language − Python is a great language for beginner-level
programmers and supports the development of a wide range of applications, from
simple text processing to WWW browsers to games.

Python Features

Python's features include −

• Easy-to-learn − Python has few keywords, a simple structure, and a clearly
defined syntax. This allows the student to pick up the language quickly.

• Easy-to-read − Python code is clearly defined and visible to the eyes.

• Easy-to-maintain − Python's source code is fairly easy to maintain.

• A broad standard library − The bulk of Python's library is very portable and
cross-platform compatible on UNIX, Windows, and Macintosh.

3.2 NUMPY

NumPy's main object is the homogeneous multidimensional array. It is a table
of elements (usually numbers), all of the same type, indexed by a tuple of
non-negative integers. In NumPy, dimensions are called axes, and the number of
axes is the rank.

• Offers Matlab-like capabilities within Python

• Fast array operations

• 2D arrays, multi-D arrays, linear algebra, etc.
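For instance, the following sketch shows axes and rank on a small 2-D array:

```python
import numpy as np

# A rank-2 array: axis 0 runs over the rows, axis 1 over the columns.
a = np.array([[1, 2, 3],
              [4, 5, 6]])
print(a.ndim)          # 2, the number of axes (the rank)
print(a.shape)         # (2, 3)
print(a.sum(axis=0))   # column sums: [5 7 9]
print(a.sum(axis=1))   # row sums: [ 6 15]
```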

POWERFUL N-DIMENSIONAL ARRAYS:

• Fast and versatile, the NumPy vectorization, indexing, and broadcasting
concepts are the de facto standards of array computing today.

NUMERICAL COMPUTING TOOLS:

• NumPy offers comprehensive mathematical functions, random number generators,
linear algebra routines, Fourier transforms, and more.

INTEROPERABLE:

• NumPy supports a wide range of hardware and computing platforms, and plays
well with distributed, GPU, and sparse array libraries.

PERFORMANT:

• The core of NumPy is well-optimized C code. Enjoy the flexibility of Python
with the speed of compiled code.

EASY TO USE:

• NumPy's high-level syntax makes it accessible and productive for programmers
from any background or experience level.

OPEN SOURCE:

• Distributed under a liberal BSD license, NumPy is developed and maintained
publicly on GitHub by a vibrant, responsive, and diverse community.

3.3 MEDIAPIPE

MediaPipe powers revolutionary products and services we use daily. Unlike
power-hungry machine learning frameworks, MediaPipe requires minimal resources.
It is so small and efficient that even embedded IoT devices can run it. In 2019,
MediaPipe opened up a whole new world of opportunity for researchers and
developers following its public release.

This section covers the basics of MediaPipe and the difference between the
Solutions and the Framework. The official documentation states that inference is
real-time and that it takes just a few lines of code to create a perception
pipeline.

The MediaPipe Toolkit comprises the Framework and the Solutions. The following
diagram shows the components of the MediaPipe Toolkit.

FIGURE: 3.3.1. MEDIAPIPE TOOLKIT

MediaPipe offers ready-to-use yet customizable Python solutions as a prebuilt
Python package. The MediaPipe Python package is available on PyPI for Linux,
macOS, and Windows.

3.4 COMPUTER VISION

Computer vision is an interdisciplinary scientific field that deals with how
computers can gain high-level understanding from digital images or videos. From
the perspective of engineering, it seeks to understand and automate tasks that the
human visual system can do.

Computer vision tasks include methods for acquiring, processing, analyzing, and
understanding digital images, and the extraction of high-dimensional data from the
real world in order to produce numerical or symbolic information, e.g. in the
form of decisions. Understanding in this context means the transformation of
visual images (the input of the retina) into descriptions of the world that make
sense to thought processes and can elicit appropriate action. This image
understanding can be seen as the disentangling of symbolic information from
image data using models constructed with the aid of geometry, physics, statistics,
and learning theory.

The scientific discipline of computer vision is concerned with the theory behind
artificial systems that extract information from images. The image data can take
many forms, such as video sequences, views from multiple cameras, multi-
dimensional data from a 3D scanner, or a medical scanning device. The
technological discipline of computer vision seeks to apply its theories and models
to the construction of computer vision systems.

Sub-domains of computer vision include scene reconstruction, object detection,
event detection, video tracking, object recognition, 3D pose estimation, learning,
indexing, motion estimation, visual servoing, 3D scene modeling, and image
restoration [6].

[4] SYSTEM DESIGN & UML DIAGRAMS

System design is the process of designing the elements of a system, such as the
architecture, the modules and components, the different interfaces of those
components, and the data that goes through the system.

System analysis is the process that decomposes a system into its component
pieces for the purpose of defining how well those components interact to
accomplish the set requirements.

The purpose of the system design process is to provide sufficient detailed data
and information about the system and its elements to enable an implementation
consistent with the architectural entities defined in models and views of the
system architecture.

Elements of a System

1. Architecture - The conceptual model that defines the structure, behavior, and
other views of a system. We can use flowcharts to represent and illustrate the
architecture.

2. Modules - Components that each handle one specific task in a system. A
combination of the modules makes up the system.

3. Components - These provide a particular function or group of related
functions. They are made up of modules.

4. Interfaces - The shared boundaries across which the components of the system
exchange information and relate.

5. Data - The management of the information and data flow.

4.1 SOFTWARE DESIGN:

The design concepts provide the software designer with a foundation from
which more sophisticated methods can be applied. A set of fundamental design
concepts has evolved. They are as follows:

1. Abstraction - Abstraction is the process or result of generalization by
reducing the information content of a concept or an observable phenomenon,
typically in order to retain only information which is relevant for a particular
purpose. It is an act of representing essential features without including the
background details or explanations.

2. Refinement - It is the process of elaboration. A hierarchy is developed by
decomposing a macroscopic statement of function in a step-wise fashion until
programming language statements are reached. In each step, one or several
instructions of a given program are decomposed into more detailed instructions.
Abstraction and refinement are complementary concepts.

3. Modularity - Software architecture is divided into components called modules.

4. Software Architecture - It refers to the overall structure of the software
and the ways in which that structure provides conceptual integrity for a system.
Good software architecture will yield a good return on investment with respect
to the desired outcome of the project, e.g. in terms of performance, quality,
schedule, and cost.

5. Control Hierarchy - A program structure that represents the organization of
program components and implies a hierarchy of control.

6. Structural Partitioning - The program structure can be divided both
horizontally and vertically. Horizontal partitions define separate branches of
the modular hierarchy for each major program function. Vertical partitioning
suggests that control and work should be distributed top-down in the program
structure.

7. Data Structure - It is a representation of the logical relationships among
individual elements of data.

8. Software Procedure - It focuses on the processing of each module
individually.

9. Information Hiding - Modules should be specified and designed so that
information contained within a module is inaccessible to other modules that
have no need for such information.

4.2 UML DIAGRAM:

FIGURE: 4.2.1. UML DIAGRAM

Object diagram:

Class diagram:

[5] INPUT/OUTPUT DESIGN

5.1 INPUT DESIGN

Input design plays a vital role in the life cycle of software development and
requires very careful attention from developers. The goal of input design is to
feed data to the application as accurately as possible, so inputs should be
designed effectively to minimize the errors that occur while feeding data.
According to software engineering concepts, the input forms or screens are
designed to provide validation control over the input limit, range, and other
related validations. Input design is the process of converting user-created input
into a computer-based format. The goal of input design is to make data entry
logical and free from errors.

Validation is required for each piece of data entered. Whenever a user enters
erroneous data, an error message is displayed, and the user can move on to
subsequent pages only after completing all the entries on the current page.

FIGURE: 5.1.1. SELECTION BOX

5.2 OUTPUT DESIGN

The output from the computer is mainly required to communicate the results of
processing to the user in an efficient way. In this system, the output is the
real-time video window itself: the webcam feed is displayed with the detected
hand landmarks drawn on it, the tool-selection header at the top, the name of the
currently selected tool, and everything the user has drawn overlaid on the frame.
The drawing is kept on a separate mask image so that strokes persist from frame
to frame.

FIGURE: 5.2.1. HAND DETECTION

[6] IMPLEMENTATION

The most exciting part of this system is the workflow phase. Writing involves a
lot of functionality, so the number of gestures used for controlling this
particular system equals the number of actions involved. The basic
functionalities included in this system are:

1. Writing mode - In this state, the system traces the fingertip coordinates and
stores them.

2. Colour mode - The user can change the color of the text among the various
available colors.

3. Backspace - If the user goes wrong, we need a gesture for a quick backspace.

import mediapipe as mp
import cv2
import numpy as np
import time
import pandas  # imported in the original script, but not used below

# constants
ml = 150
max_x, max_y = 250 + ml, 50
curr_tool = "select tool"
time_init = True
rad = 40
var_inits = False
thick = 4
prevx, prevy = 0, 0

# get tools function
def getTool(x):
    if x < 50 + ml:
        return "line"
    elif x < 100 + ml:
        return "rectangle"
    elif x < 150 + ml:
        return "draw"
    elif x < 200 + ml:
        return "circle"
    else:
        return "erase"

def index_raised(yi, y9):
    # the index finger counts as "raised" when the middle-finger tip (yi)
    # is well above its knuckle (y9) in image coordinates
    if (y9 - yi) > 40:
        return True
    return False

hands = mp.solutions.hands
hand_landmark = hands.Hands(min_detection_confidence=0.6,
                            min_tracking_confidence=0.6, max_num_hands=1)
draw = mp.solutions.drawing_utils

# drawing tools
tools = cv2.imread("tools.png")
tools = tools.astype('uint8')

mask = np.ones((480, 640)) * 255
mask = mask.astype('uint8')

'''
tools = np.zeros((max_y+5, max_x+5, 3), dtype="uint8")
cv2.rectangle(tools, (0,0), (max_x, max_y), (0,0,255), 2)
cv2.line(tools, (50,0), (50,50), (0,0,255), 2)
cv2.line(tools, (100,0), (100,50), (0,0,255), 2)
cv2.line(tools, (150,0), (150,50), (0,0,255), 2)
cv2.line(tools, (200,0), (200,50), (0,0,255), 2)
'''

cap = cv2.VideoCapture(0)
while True:
    _, frm = cap.read()
    frm = cv2.flip(frm, 1)

    rgb = cv2.cvtColor(frm, cv2.COLOR_BGR2RGB)
    op = hand_landmark.process(rgb)

    if op.multi_hand_landmarks:
        for i in op.multi_hand_landmarks:
            draw.draw_landmarks(frm, i, hands.HAND_CONNECTIONS)
            # landmark 8 is the index fingertip
            x, y = int(i.landmark[8].x * 640), int(i.landmark[8].y * 480)

            # dwell over the toolbar for 0.8 s to select a tool
            if x < max_x and y < max_y and x > ml:
                if time_init:
                    ctime = time.time()
                    time_init = False
                ptime = time.time()

                cv2.circle(frm, (x, y), rad, (0, 255, 255), 2)
                rad -= 1

                if (ptime - ctime) > 0.8:
                    curr_tool = getTool(x)
                    print("your current tool set to : ", curr_tool)
                    time_init = True
                    rad = 40
            else:
                time_init = True
                rad = 40

            if curr_tool == "draw":
                xi, yi = int(i.landmark[12].x * 640), int(i.landmark[12].y * 480)
                y9 = int(i.landmark[9].y * 480)

                if index_raised(yi, y9):
                    cv2.line(mask, (prevx, prevy), (x, y), 0, thick)
                    prevx, prevy = x, y
                else:
                    prevx = x
                    prevy = y

            elif curr_tool == "line":
                xi, yi = int(i.landmark[12].x * 640), int(i.landmark[12].y * 480)
                y9 = int(i.landmark[9].y * 480)

                if index_raised(yi, y9):
                    if not var_inits:
                        xii, yii = x, y
                        var_inits = True
                    cv2.line(frm, (xii, yii), (x, y), (50, 152, 255), thick)
                else:
                    if var_inits:
                        cv2.line(mask, (xii, yii), (x, y), 0, thick)
                        var_inits = False

            elif curr_tool == "rectangle":
                xi, yi = int(i.landmark[12].x * 640), int(i.landmark[12].y * 480)
                y9 = int(i.landmark[9].y * 480)

                if index_raised(yi, y9):
                    if not var_inits:
                        xii, yii = x, y
                        var_inits = True
                    cv2.rectangle(frm, (xii, yii), (x, y), (0, 255, 255), thick)
                else:
                    if var_inits:
                        cv2.rectangle(mask, (xii, yii), (x, y), 0, thick)
                        var_inits = False

            elif curr_tool == "circle":
                xi, yi = int(i.landmark[12].x * 640), int(i.landmark[12].y * 480)
                y9 = int(i.landmark[9].y * 480)

                if index_raised(yi, y9):
                    if not var_inits:
                        xii, yii = x, y
                        var_inits = True
                    cv2.circle(frm, (xii, yii),
                               int(((xii - x)**2 + (yii - y)**2)**0.5),
                               (255, 255, 0), thick)
                else:
                    if var_inits:
                        cv2.circle(mask, (xii, yii),
                                   int(((xii - x)**2 + (yii - y)**2)**0.5),
                                   (0, 255, 0), thick)
                        var_inits = False

            elif curr_tool == "erase":
                xi, yi = int(i.landmark[12].x * 640), int(i.landmark[12].y * 480)
                y9 = int(i.landmark[9].y * 480)

                if index_raised(yi, y9):
                    cv2.circle(frm, (x, y), 30, (0, 0, 0), -1)
                    cv2.circle(mask, (x, y), 30, 255, -1)

    # apply the mask: pixels drawn on the mask (value 0) darken the
    # green and red channels of the frame, making the strokes visible
    op = cv2.bitwise_and(frm, frm, mask=mask)
    frm[:, :, 1] = op[:, :, 1]
    frm[:, :, 2] = op[:, :, 2]

    # overlay the toolbar image on the top of the frame
    frm[:max_y, ml:max_x] = cv2.addWeighted(tools, 0.7,
                                            frm[:max_y, ml:max_x], 0.3, 0)

    cv2.putText(frm, curr_tool, (270 + ml, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1, (0, 0, 255), 2)
    cv2.imshow("paint app", frm)

    if cv2.waitKey(1) == 27:  # Esc key exits
        cv2.destroyAllWindows()
        cap.release()
        break

[7] TESTING
Software testing is a critical element of software quality assurance and
represents the ultimate review of specification, design and code generation.

7.1 TESTING OBJECTIVES:

• To ensure that during operation the system will perform as per specification.
• To make sure that the system meets the user requirements during operation.
• To make sure that during operation, incorrect input, processing, and output
will be detected.
• To see that when correct inputs are fed to the system, the outputs are correct.
• To verify that the controls incorporated in the system work as intended.
• Testing is a process of executing a program with the intent of finding an
error.
• A good test case is one that has a high probability of finding an as-yet-
undiscovered error.

The software developed has been tested successfully using the following testing
strategies. Any errors that are encountered are corrected, and the part of the
program, procedure, or function concerned is put to testing again until all the
errors are removed. A successful test is one that uncovers an as-yet-undiscovered
error.

Note that the result of the system testing will prove that the system is working
correctly. It will give confidence to the system designers and the users of the
system, and prevent frustration during the implementation process.

7.2 TESTING METHODOLOGIES:

• White box testing
• Black box testing
• Unit testing
• Integration testing
• User acceptance testing
• Output testing
• Validation testing
• System testing
1) White Box Testing:

White box testing is a test-case design method that uses the control structure of
the procedural design to derive test cases. All independent paths in a module are
exercised at least once, all logical decisions are exercised at least once, all
loops are executed at their boundaries and within their operational bounds, and
internal data structures are exercised to ensure their validity. Here the customer
is given three chances to enter a valid choice from the given menu, after which
control exits the current menu.

2) Black Box Testing:

Black box testing attempts to find errors in the following areas or categories:
incorrect or missing functions, interface errors, errors in data structures,
performance errors, and initialization and termination errors. Here all the input
data must match the data type to become a valid entry.

3) Unit Testing:

Unit testing focuses verification effort on the smallest unit of software design,
that is, the module. Unit testing exercises specific paths in a module's control
structure to ensure complete coverage and maximum error detection. This test
focuses on each module individually, ensuring that it functions properly as a
unit; hence the name unit testing.

4) Integration Testing:

Integration testing addresses the issues associated with the dual problems of
verification and program construction. After the software has been integrated, a
set of high-order tests is conducted. The main objective of this testing process
is to take unit-tested modules and build a program structure that has been
dictated by design.

The following are the types of integration testing:

• Top-Down Integration:

This method is an incremental approach to the construction of program structure.
Modules are integrated by moving downward through the control hierarchy,
beginning with the main program module.

• Bottom-Up Integration:

This method begins the construction and testing with the modules at the lowest
level in the program structure. Since the modules are integrated from the bottom
up, processing required for modules subordinate to a given level is always
available, and the need for stubs is eliminated.

5) User acceptance Testing:

User Acceptance of a system is the key factor for the success of any system.
The system under consideration is tested for user acceptance by constantly
keeping in touch with the prospective system users at the time of developing and
making changes wherever required. The system developed provides a friendly
user interface that can easily be understood even by a person who is new to the
system.

6) Output Testing:

After performing the validation testing, the next step is output testing of the
proposed system, since no system can be useful if it does not produce the
required output in the specified format. The outputs generated or displayed by
the system under consideration are tested by asking the users about the format
they require. The output format is considered in two ways: one on screen and the
other in printed format.

7) Validation Testing:

Validation testing is generally performed on the following fields:

• Text Field:

The text field can contain only a number of characters less than or equal to its
size. The text fields are alphanumeric in some tables and alphabetic in other
tables. An incorrect entry always flashes an error message.

• Numeric Field:

The numeric field can contain only the numbers 0 to 9. An entry of any other
character flashes an error message. The individual modules are checked for
accuracy and for what they have to perform.

• Preparation of Test Data:

The above testing is done by taking various kinds of test data. Preparation of
test data plays a vital role in system testing. After preparing the test data,
the system under study is tested using that test data. While testing the system
using test data, errors are again uncovered and corrected by using the above
testing steps, and the corrections are also noted for future use.

 Using Live Test Data:

Live test data are those that are actually extracted from organization files.
After a system is partially constructed, programmers or analysts often ask users
to key in a set of data from their normal activities. Then, the systems person uses
this data as a way to partially test the system. In other instances, programmers or
analysts extract a set of live data from the files and have them entered themselves.

 Using Artificial Test Data:

Artificial test data are created solely for test purposes, since they can be
generated to test all combinations of formats and values. In other words, the
artificial data, which can quickly be prepared by a data generating utility program
in the information systems department, make possible the testing of all logic and control paths through the program.
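The kind of data-generating utility mentioned above can be approximated with a short sketch; the sample field values below are invented for illustration, chosen to cover valid, boundary, and invalid formats.

```python
from itertools import product

# Sample boundary values per field; these values are assumptions
# chosen to exercise valid, at-limit, and invalid formats.
text_samples = ["", "a", "x" * 10]
numeric_samples = ["0", "9", "12a"]

def generate_artificial_data():
    """Return every (text, numeric) combination so all format paths get exercised."""
    return list(product(text_samples, numeric_samples))
```

With three samples per field this yields nine records, one for each combination of formats.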

The most effective test programs use artificial test data generated by persons
other than those who wrote the programs. Often, an independent team of testers
formulates a testing plan, using the system's specifications.

3. USER TRAINING:

Whenever a new system is developed, user training is required to educate users about the working of the system so that it can be put to efficient use by those for whom it was primarily designed. For this purpose, the normal working of the project was demonstrated to the prospective users. Its working is easily understandable, and since the expected users are people with good knowledge of computers, the system is very easy to use.

4. MAINTENANCE:

This covers a wide range of activities, including correcting code and design errors. To reduce the need for maintenance in the long run, we defined the user's requirements more accurately during system development. The system has been developed to satisfy those requirements to the largest possible extent. As technology develops, it may be possible to add many more features based on future requirements. The coding and design are simple and easy to understand, which will make maintenance easier.

5. TESTING STRATEGY:

A strategy for system testing integrates system test cases and design techniques into a well-planned series of steps that results in the successful construction of software. The testing strategy must incorporate test planning, test-case design, test execution, and the resultant data collection and evaluation. A strategy for software testing must accommodate low-level tests that verify that a small source-code segment has been correctly implemented, as well as high-level tests that validate major system functions against user requirements. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding.

1. SYSTEM TESTING:

Software, once validated, must be combined with other system elements (e.g. hardware, people, databases). System testing verifies that all the elements mesh properly and that overall system function and performance are achieved. It also tests for discrepancies between the system and its original objectives, current specifications, and system documentation.
2. UNIT TESTING:

In unit testing, different modules are tested against the specifications produced during the design of those modules. Unit testing is essential for verification of the code produced during the coding phase, and hence the goal is to test the internal logic of the modules. Using the detailed design description as a guide, important control paths are tested to uncover errors within the boundary of the modules. This testing is carried out during the programming stage itself. In this testing step, each module was found to work satisfactorily with regard to the expected output. In due course, the latest technology advancements will be taken into consideration. As part of the technical build-up, many components of the networking system will be generic in nature so that future projects can either use or interact with them.
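A unit test in this spirit might look as follows. The module under test (`erase_point`) is a hypothetical stand-in, since the report does not list the actual module interfaces; the test exercises both control paths of the helper against its stated specification.

```python
import unittest

def erase_point(canvas: set, point: tuple) -> set:
    """Hypothetical module under test: remove a drawn point from the canvas."""
    return canvas - {point}

class EraseModuleTest(unittest.TestCase):
    """Check the module's control paths against its design specification."""

    def test_erases_existing_point(self):
        self.assertEqual(erase_point({(1, 2), (3, 4)}, (1, 2)), {(3, 4)})

    def test_ignores_missing_point(self):
        self.assertEqual(erase_point({(1, 2)}, (9, 9)), {(1, 2)})
```

Running `python -m unittest` on such a file would exercise each module boundary during the programming stage itself, as described above.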

[8].OUTPUT SCREENS

The following output figure shows the user using the eraser option with his fingertip, via the hand-detection algorithm.

FIGURE:8.1.USING ERASER

We can see the straight line, rectangle, draw, circle, and eraser options. Here the user wrote something with the draw option and is trying to erase it with his fingertip.

FIGURE:8.2.USING DRAW OPTION

Here the user is drawing the letters SUBS using the draw option; the picture shows the draw option enabled. As we can see, he used two fingertips to erase but only one fingertip to draw.
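The one-fingertip-draws, two-fingertips-erase behaviour shown in these screens can be sketched as a simple mode selector. Here `fingers` is a list of 0/1 flags per finger, the convention used by common hand-tracking helpers such as cvzone's `fingersUp`; the detection logic itself is assumed rather than taken from the project's code.

```python
def choose_mode(fingers):
    """Map the number of raised fingertips to a painter mode (assumed mapping)."""
    raised = sum(fingers)
    if raised == 1:
        return "draw"
    if raised == 2:
        return "erase"
    return "hover"

print(choose_mode([0, 1, 0, 0, 0]))  # index finger only -> draw
print(choose_mode([0, 1, 1, 0, 0]))  # index + middle -> erase
```

In the real application the selected mode would decide whether the fingertip position is painted onto the canvas or used to clear it.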

FIGURE.8.3.DRAWING A CIRCLE

Here the user drew a circle using the circle option and tried to create a meaningful figure. He used only one fingertip to draw.

FIGURE:8.4.DRAWING A STRAIGHT LINE IN A CIRCLE

Here we see that the user used his two fingers to draw a circle and then drew a straight line through the middle as its diameter.

FIGURE.8.5.USING TWO FINGER TIPS TO ERASE


He pointed out the center, indicated the radius, and attempted to erase the figure.

FIGURE:8.6.DRAWING A LINE

Here we can observe that before tracing the line is orange, and after tracing it turns blue.

[9] CONCLUSION & FUTURE SCOPE

CONCLUSION:

The system has the potential to challenge traditional writing methods. It eliminates the need to carry a mobile phone in hand to jot down notes, providing a simple on-the-go way to do the same. It will also serve a great purpose in helping differently abled people communicate easily. Even senior citizens or people who find it difficult to use keyboards will be able to use the system effortlessly.
Extending its functionality, the system could also be used to control IoT devices in the near future. Drawing in the air can also be made possible. The system would be excellent software for smart wearables, with which people could better interact with the digital world. Augmented Reality can make text come alive. There are some limitations of the system which can be improved in the future.

Firstly, using a handwriting recognizer in place of a character recognizer would allow the user to write word by word, making writing faster.

Secondly, hand gestures with a pause could be used to control the real-time system instead of using the number of fingertips.

Thirdly, our system sometimes recognizes fingertips in the background and changes their state.
Air-writing systems should only obey their master's control gestures and should
not be misled by people around. In the future, advances in Artificial Intelligence
will enhance the efficiency of air-writing.

This project demonstrates the image-processing capabilities of OpenCV. The ultimate goal is to create a computer-vision machine-learning application that promotes Human-Computer Interaction (HCI), also named Man-Machine Interaction (MMI), which refers to the relation between the human and the computer, or more precisely the machine. Since the machine is insignificant without suitable use by the human, two main characteristics should be considered when designing an HCI system: functionality and usability [5].
System functionality refers to the set of functions or services that the system provides to the users, while system usability refers to the level and scope at which the system can operate and perform specific user purposes efficiently.

SCOPE:

The scope of this system is mainly as a powerful means of communication for the deaf; implementing this project can help in:

1. An effective communication method that reduces mobile and laptop usage by eliminating the need to write.

2. Helping people with hearing impairments to communicate well.

3. Various purposes, such as sending messages and e-mails, since the generated text can also be used for that.

Some examples that could be implemented in the near future are as follows:

Video Processing using an Android Phone:
As mobile devices such as smartphones, iPads, and tablet PCs are equipped with cameras, the demand for image-processing applications has increased. These applications need to be faster and consume less power, because a mobile device is powered only by a battery. Rather than depending on advances in semiconductor hardware, we can use an efficient programming language to write image-processing applications for mobile devices [4].

Robot Control:
Controlling a robot using gestures is considered one of the interesting applications in this field. A proposed system uses counting of the five fingers to control a robot through hand-pose signs. Orders are given to the robot to perform particular tasks, where each sign has a specific meaning and represents a different function.
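The finger-count control scheme described above can be sketched as a simple lookup. The command names here are illustrative assumptions, not taken from the cited system.

```python
# Each raised-finger count maps to one robot command; the command
# names are assumptions for illustration, not from the cited system.
COMMANDS = {1: "forward", 2: "backward", 3: "turn_left", 4: "turn_right", 5: "stop"}

def command_for(finger_count: int) -> str:
    """Translate a detected finger count into a robot command."""
    return COMMANDS.get(finger_count, "idle")
```

A count outside 1-5 (e.g. no hand detected) falls back to an idle state so the robot is never given a spurious order.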

[10] BIBLIOGRAPHY

[1]. Gesture Controlled Computer, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 5, Issue 1, January 2015, ISSN: 2277 128X.

[2]. https://towardsdatascience.com/Painting-withopencv

[3]. https://www.geeksforgeeks.org/live-webcamdrawing-using-opencv

[4]. Hand Data Glove: A Wearable Real-Time Device for Human-Computer Interaction, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 43, Issue 1, June 2012, ISSN: 2277 128X.

[5]. Hand Gesture Recognition: A Literature Review, International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 3, No. 4, 2012, DOI: 10.5121/ijaia.2012.3412.

[6]. OpenCV for Computer Vision Applications, Proceedings of the National Conference on Big Data and Cloud Computing (NCBDC'15), March 20, 2015.

[7]. https://www.geeksforgeeks.org/live-webcamdrawing-using-opencv/

[8]. https://docs.opencv.org/master/d5/d54/group obj detect.html

[9]. https://circuitdigest.com/tutorial/real-life-objectdetection-using-opencv-python-detectingobjects-in-live-video

[10]. Real Time Object Detection and Tracking Using Deep Learning and OpenCV, Proceedings of the International Conference on Inventive Research in Computing Applications (ICIRCA 2018), IEEE Xplore Part Number: CFP18N67-ART, ISBN: 978-1-5386-2456-2.

[11]. Numpy.org, 2017. [Online]. Available: http://www.numpy.org

[12]. Object Detection (2017, January 17). [Online]. Available: http://en.m.wikipedia.org/wiki/Object_detection

[13]. Study on Object Detection using OpenCV – Python, International Journal of Computer Applications (0975 – 8887), Volume 162, No. 8, March 2017.

[14]. Nidhi, "Image Processing and Object Detection", Dept. of Computer Applications, NIT, Kurukshetra, Haryana.
