
IMAGE PROCESSING

J-Component Report

Virtual Mouse Pointer Using Color Detection

by

AKSHAT TONGIA
16BCE2050

Fall Semester 2018


TABLE OF CONTENTS

Chapter I     Introduction
Chapter II    Scope of the Project
Chapter III   System Working
Chapter IV    Task Analysis
Chapter V     Diagrams
Chapter VI    Code
Chapter VII   Screenshots
Chapter VIII  Conclusion

Chapter I

Introduction

In this project we apply image processing to control mouse cursor movement and
mouse click events using hand gestures. The hand gestures are acquired through a
camera and interpreted with a color detection technique, so the method relies only
on a standard web camera to build an interaction system based on image processing.
Intelligent machines that work alongside the computer and allow friendlier
interaction are now being developed, and in recent years several technologies have
been proposed for building a virtual mouse; this report also reviews some of them.
The mouse and keyboard are essential input devices for working with a computer,
but they tie the user to physical hardware, and the virtual mouse developed here is
intended to remove that dependency.

In our work, mouse cursor movement and click events are controlled using a camera
and a color detection technique. Real-time video is captured with a web camera,
and the user wears colored tapes on the fingertips to provide information to the
system. Individual frames of the video are processed separately: the processing
involves an image subtraction algorithm to detect the colors, and once the colors
are detected the system tracks the cursor and performs the control actions
described in the following chapters. No additional hardware is required beyond the
standard webcam built into most laptop computers.
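
As an illustration of the detection step, the following is a minimal sketch of
isolating a colored fingertip in a single frame with OpenCV. The HSV threshold
values are assumptions for a green marker (the same range appears in the full
listing in Chapter VI) and would need tuning for a different tape color.

    import cv2
    import numpy as np

    # Assumed HSV range for a green marker; tune for the tape color actually used.
    LOWER_GREEN = np.array([33, 80, 40])
    UPPER_GREEN = np.array([102, 255, 255])

    cam = cv2.VideoCapture(0)          # default webcam
    ret, frame = cam.read()            # grab a single frame
    if ret:
        frame = cv2.flip(frame, 1)                         # mirror the image
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)       # BGR -> HSV
        mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)  # binary mask of the marker
        cv2.imshow("marker mask", mask)
        cv2.waitKey(0)
    cam.release()
    cv2.destroyAllWindows()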

Chapter II

Scope of the Project

- To design a virtual mouse that detects hand-gesture patterns instead of relying
  on a physical mouse.
- Colored fingertips are used for detection and are captured by the webcam.
- The colored fingertip acts as the object that the webcam senses.
- The camera is positioned so that it recognizes the movement of the fingertips
  and performs the corresponding mouse operations.
- The virtual mouse is useful in space-constrained situations or when the user is
  on the move.

Chapter III

System Working

The following steps are involved in the working of the virtual mouse developed by
us (a short sketch of steps 7 and 8 follows this list):

1. Capturing real-time video using the web camera.
2. Processing each individual image frame.
3. Flipping each image frame.
4. Converting each frame to a grayscale image.
5. Detecting and extracting the different colors (RGB) from the flipped grayscale
   image.
6. Converting the detected image into a binary image.
7. Finding the region of the image and calculating its centroid.
8. Tracking the mouse pointer using the coordinates obtained from the centroid.
9. Simulating the left-click and right-click events of the mouse by assigning
   different color pointers.
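
The following is a minimal sketch of steps 7 and 8, under the assumption that a
binary mask of the colored tip is already available (for example from cv2.inRange
as in the Chapter VI listing): the centroid of the detected region is computed from
image moments and then scaled to screen coordinates. The function name and
parameters are illustrative only.

    import cv2
    from pynput.mouse import Controller

    mouse = Controller()

    def move_cursor_from_mask(mask, cam_w, cam_h, screen_w, screen_h):
        # Image moments of the binary mask give the centroid of the white region.
        m = cv2.moments(mask)
        if m["m00"] == 0:          # nothing detected in this frame
            return
        cx = m["m10"] / m["m00"]
        cy = m["m01"] / m["m00"]
        # Scale camera coordinates to screen coordinates; x is mirrored so the
        # cursor follows the hand naturally, as in the Chapter VI listing.
        mouse.position = (screen_w - cx * screen_w / cam_w, cy * screen_h / cam_h)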

Chapter IV

Task Analysis

1. Knowing the prerequisites, i.e. the hardware and software components required
   and the basic knowledge needed for the project.
   1.1 Hardware
       1.1.1 Webcam: a webcam is necessary for capturing the image.
   1.2 Software
       1.2.1 Python: Python and its necessary modules were used to develop this
             project.
       1.2.2 SDK tool and .NET Framework: these are required in order to create
             standalone applications for Windows-based systems.
2. System development.
   2.1 Developing the algorithm.
   2.2 Analyzing the steps to be followed after developing the algorithm.
3. Processing the individual image frames.
   3.1 Flipping each image frame.
   3.2 Converting each frame to a grayscale image.
4. Color detection: coding the color detection using Python built-in functions.
5. Filtering the images (a brief noise-removal sketch follows this list).
   5.1 Converting the detected image into a binary image.
   5.2 Removing noise from the image.
6. Moving the cursor.
   6.1 Finding the region of the image and calculating its centroid.
   6.2 Tracking the mouse pointer using the coordinates obtained from the
       centroid.
7. Mouse click events.
   7.1 Simulating the left-click and right-click events of the mouse by assigning
       different color pointers.
8. Testing: rigorous testing of the project is done using different gestures.
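
A minimal sketch of the filtering step, assuming the binary mask produced by the
color detection stage; the kernel sizes match those used in the Chapter VI listing.
Morphological opening removes small speckles, and closing fills holes inside the
detected region.

    import cv2
    import numpy as np

    def clean_mask(mask):
        # Opening removes small isolated noise pixels; closing fills gaps inside
        # the detected fingertip region. Kernel sizes follow the Chapter VI code.
        kernel_open = np.ones((5, 5), np.uint8)
        kernel_close = np.ones((20, 20), np.uint8)
        opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel_open)
        closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel_close)
        return closed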

Chapter V

Diagrams
Use Case Diagram

Architecture Diagram:

Chapter VI

Code (Python)

import cv2
import numpy as np
from pynput.mouse import Button, Controller
import wx

mouse = Controller()

# Screen resolution, used to map camera coordinates to screen coordinates
app = wx.App(False)
(sx, sy) = wx.GetDisplaySize()
(camx, camy) = (320, 240)

# HSV range for the green markers worn on the fingertips
lowerBound = np.array([33, 80, 40])
upperBound = np.array([102, 255, 255])

# HSV range for a red marker (alternative)
# lowerBound = np.array([170, 120, 150])
# upperBound = np.array([190, 255, 255])

cam = cv2.VideoCapture(0)

# Structuring elements for morphological noise removal
kernelOpen = np.ones((5, 5))
kernelClose = np.ones((20, 20))

pinchFlag = 0          # 1 while the two markers are pinched together (dragging)

while True:
    ret, img = cam.read()
    # Resize the frame to (camx, camy) so the screen mapping below is consistent
    img = cv2.resize(img, (camx, camy))

    # Convert BGR to HSV and threshold to create a binary mask of the markers
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(imgHSV, lowerBound, upperBound)

    # Morphology: opening removes speckle noise, closing fills holes
    maskOpen = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernelOpen)
    maskClose = cv2.morphologyEx(maskOpen, cv2.MORPH_CLOSE, kernelClose)
    maskFinal = maskClose

    # OpenCV 3.x returns three values; in OpenCV 4.x use:
    # conts, h = cv2.findContours(...)
    _, conts, h = cv2.findContours(maskFinal.copy(), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)

    if len(conts) == 2:
        # Two markers visible: fingers apart, so release any held click and
        # move the cursor to the midpoint between the two markers
        if pinchFlag == 1:
            pinchFlag = 0
            mouse.release(Button.left)
        x1, y1, w1, h1 = cv2.boundingRect(conts[0])
        x2, y2, w2, h2 = cv2.boundingRect(conts[1])
        cv2.rectangle(img, (x1, y1), (x1 + w1, y1 + h1), (255, 0, 0), 2)
        cv2.rectangle(img, (x2, y2), (x2 + w2, y2 + h2), (255, 0, 0), 2)
        cx1 = x1 + w1 / 2
        cy1 = y1 + h1 / 2
        cx2 = x2 + w2 / 2
        cy2 = y2 + h2 / 2
        cx = (cx1 + cx2) / 2
        cy = (cy1 + cy2) / 2
        cv2.line(img, (int(cx1), int(cy1)), (int(cx2), int(cy2)), (255, 0, 0), 2)
        cv2.circle(img, (int(cx), int(cy)), 2, (0, 0, 255), 2)
        # Map camera coordinates to screen coordinates (x is mirrored)
        mouseLoc = (sx - (cx * sx / camx), cy * sy / camy)
        mouse.position = mouseLoc

    elif len(conts) == 1:
        # One marker visible: the two fingers are pinched together, which is
        # interpreted as pressing and holding the left mouse button
        x, y, w, h = cv2.boundingRect(conts[0])
        if pinchFlag == 0:
            pinchFlag = 1
            mouse.press(Button.left)
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cx = x + w / 2
        cy = y + h / 2
        cv2.circle(img, (int(cx), int(cy)), int((w + h) / 4), (0, 0, 255), 2)
        mouseLoc = (sx - (cx * sx / camx), cy * sy / camy)
        mouse.position = mouseLoc

    cv2.imshow("cam", img)
    if cv2.waitKey(5) & 0xFF == 27:      # press Esc to quit
        break

cam.release()
cv2.destroyAllWindows()
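
The listing above moves the cursor and simulates the left click through the pinch
gesture. The right click mentioned in Chapters III and IV would be assigned to a
separate color pointer; the following is only a hypothetical sketch of that
extension, using the commented-out red HSV range from the listing, and is not part
of the code actually used in the project. The names handle_right_click, lowerRed,
upperRed and rightFlag are illustrative.

    import cv2
    import numpy as np
    from pynput.mouse import Button, Controller

    mouse = Controller()

    # Hypothetical red-marker range (the commented-out values from the listing above)
    lowerRed = np.array([170, 120, 150])
    upperRed = np.array([190, 255, 255])
    rightFlag = 0    # re-armed once the red marker disappears

    def handle_right_click(imgHSV, kernelOpen, kernelClose):
        # Fire a single right click whenever the red marker (re)appears.
        # imgHSV, kernelOpen and kernelClose are the objects built in the main loop.
        global rightFlag
        maskRed = cv2.inRange(imgHSV, lowerRed, upperRed)
        maskRed = cv2.morphologyEx(maskRed, cv2.MORPH_OPEN, kernelOpen)
        maskRed = cv2.morphologyEx(maskRed, cv2.MORPH_CLOSE, kernelClose)
        # OpenCV 3.x signature, as in the main listing
        _, conts, _ = cv2.findContours(maskRed.copy(), cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        if len(conts) >= 1 and rightFlag == 0:
            rightFlag = 1               # red marker just appeared: one right click
            mouse.click(Button.right)
        elif len(conts) == 0:
            rightFlag = 0               # red marker gone: re-arm for the next click

This function would be called once per frame inside the main loop, right after the
HSV image has been computed.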

Chapter VII

Screenshots

Chapter VIII

Conclusion

The proposed system architecture can change the way people use the computer. At
present the webcam, microphone and mouse are integral parts of the computer
system; this project aims to eliminate the need for a physical mouse.

Most comparable applications require additional hardware that is often very
costly. The motive here was to build this technology as cheaply as possible and on
a standard operating system. Various application programs can be written on top of
this technology, creating a wide range of applications with a minimal requirement
of resources.

