
Virtual Keyboard Using Machine learning

A project report submitted in partial fulfillment of the requirements for the

Award of degree of

Bachelor of Technology
In

Computer Science and Engineering

By

Imran Khan (2102920100044)

Under the supervision of

Prof. M I H Ansari

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

MEERUT INSTITUTE OF TECHNOLOGY, MEERUT

Affiliated To

Dr. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY

UTTAR PRADESH, LUCKNOW

FEB 2024
TABLE OF CONTENTS

CERTIFICATE
ACKNOWLEDGEMENT
ABSTRACT
LIST OF LIBRARIES
LIST OF FIGURES
LIST OF ABBREVIATIONS

CHAPTER 1 INTRODUCTION
1.1 PURPOSE OF VIRTUAL KEYBOARD
1.2 SCOPE OF VIRTUAL KEYBOARD
1.3 CHARACTERISTICS OF VIRTUAL KEYBOARD
1.4 DISADVANTAGES OF VIRTUAL KEYBOARD

CHAPTER 2 SOFTWARE REQUIREMENTS SPECIFICATIONS

CHAPTER 3 SNAPSHOTS

CHAPTER 4 CONCLUSION
Certificate
We hereby declare that the work which is being presented in the project report entitled,
“Virtual Keyboard using Machine Learning”, in partial fulfillment of the requirements for the
award of degree of Bachelor of Technology submitted in Computer Science and Engineering
of Meerut Institute of Technology, Meerut, is an authentic record of our own work carried
out under the supervision of Prof. M.I.H. Ansari and refers to other researchers' works, which
are duly listed in the reference section.

The matter presented in this Project has not been submitted for the award of any other
degree of this or any other university.

Imran Khan

2102920100044

This is to certify that the above statement made by the candidate is correct and true to the
best of my knowledge.

(Prof. M.I.H Ansari)

Supervisor

Meerut Institute of Technology, MEERUT.


ACKNOWLEDGEMENT
It gives us a great sense of pleasure to present the report of the B.Tech project undertaken
during the B.Tech third year. We owe a special debt of gratitude to Prof. M.I.H. Ansari,
Department of Computer Science & Engineering, Meerut Institute of Technology, Meerut, for
his constant support and guidance throughout the course of our work. His sincerity,
thoroughness and perseverance have been a constant source of inspiration for us. It is only
through his cognizant efforts that our endeavors have seen the light of day. We also take this
opportunity to acknowledge the contribution of the Head, Department of Computer Science &
Engineering, Meerut Institute of Technology, Meerut, for his full support and assistance during
the development of the project. We would also like to acknowledge the contribution of all
project coordinators and faculty members of the department for their kind assistance and
cooperation during the development of our project. Last but not least, we acknowledge our
friends and team members for their contribution to the completion of the project.

Signature:

Imran Khan

2102920100044

09.02.2024
Abstract
This paper presents the development and implementation of a virtual keyboard system
leveraging machine learning (ML) techniques. The traditional physical keyboard, while widely
used, presents limitations in terms of portability and accessibility. Virtual keyboards offer a
promising alternative, especially in scenarios where physical keyboards are impractical or
unavailable.

The proposed system utilizes ML algorithms to interpret user gestures and translate them
into corresponding keystrokes, effectively simulating the functionality of a physical keyboard.
Using libraries such as OpenCV, MediaPipe, and pynput, the system can accurately
recognize and classify hand movements captured by a camera or other sensor devices. Key
components of the virtual keyboard include gesture recognition, motion tracking, and
predictive text input. Gesture recognition enables the system to identify various hand
movements corresponding to specific keystrokes, while motion tracking ensures real-time
responsiveness and accuracy. Additionally, predictive text input enhances user experience by
suggesting and auto-completing words based on context and user behavior.
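The predictive text input mentioned above can be sketched as a simple prefix lookup. The vocabulary, function name, and ranking here are illustrative assumptions, not the project's actual implementation; a real system would rank candidates by frequency and context.

```python
# Minimal sketch of prefix-based word suggestion.
# The word list below is illustrative only.
WORDS = ["hello", "help", "hand", "keyboard", "key", "gesture"]

def suggest(prefix, words=WORDS, limit=3):
    """Return up to `limit` words starting with `prefix` (case-insensitive)."""
    p = prefix.lower()
    return [w for w in words if w.startswith(p)][:limit]
```

For example, typing "he" would surface "hello" and "help" as completions.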

The implementation of the virtual keyboard involves training the ML models on a dataset of
hand gestures and fine-tuning them to improve accuracy and efficiency. The system is
designed to be adaptable to different input modalities and user preferences, making it
suitable for a wide range of applications across various devices and platforms.

Overall, the virtual keyboard system presented in this paper demonstrates the potential of
ML techniques to enhance human-computer interaction and expand the capabilities of input
devices in the digital age.
List of Libraries and Modules

| S. No. | Library/Module  | Function                                                                                                    |
|--------|-----------------|-------------------------------------------------------------------------------------------------------------|
| 1      | opencv-python   | For computer vision tasks, including image and video processing.                                            |
| 2      | mediapipe       | Provides solutions for various perception tasks such as hand tracking, pose estimation, and face detection. |
| 3      | pynput.keyboard | The Controller class from the pynput.keyboard module is imported for controlling the keyboard input.        |
| 4      | time            | The sleep function from the time module is used for introducing delays in the program.                      |
| 5      | math            | The math module is imported for mathematical operations, particularly for calculating the hypotenuse in this program. |
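As the table notes, math is used to calculate a hypotenuse: the pixel distance between two fingertip landmarks. When that distance falls below a threshold, the gesture can be treated as a "click". A minimal sketch of that check follows; the threshold value and function name are illustrative assumptions, since the exact values depend on camera resolution and tuning.

```python
import math

CLICK_THRESHOLD = 40  # pixels; illustrative value, tuned per camera setup

def is_click(index_tip, middle_tip, threshold=CLICK_THRESHOLD):
    """Treat two fingertips moving close together as a 'click' gesture.

    Each fingertip is an (x, y) pixel coordinate from the hand tracker.
    """
    dx = index_tip[0] - middle_tip[0]
    dy = index_tip[1] - middle_tip[1]
    return math.hypot(dx, dy) < threshold
```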
Introduction
The virtual keyboard project utilizes computer vision and machine learning techniques to
enable users to interact with a keyboard interface using hand gestures. By tracking hand
movements through a webcam, the program identifies the position of fingers and maps them
to corresponding keys on a virtual keyboard displayed on the screen. This innovative approach
aims to provide an alternative input method, particularly beneficial for individuals with
mobility impairments or in situations where traditional keyboards are not accessible.
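Mapping finger positions to keys, as described above, presupposes a layout of key rectangles drawn on screen. A minimal sketch of such a layout follows; the key size, gap, and row strings are illustrative assumptions, not the project's exact values.

```python
# Sketch of laying out virtual keys as screen rectangles.
KEY_SIZE, GAP = 80, 10  # illustrative pixel dimensions
ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]

def build_layout(rows=ROWS, key_size=KEY_SIZE, gap=GAP):
    """Return a dict mapping each key label to its (x, y, w, h) rectangle."""
    layout = {}
    for r, row in enumerate(rows):
        for c, label in enumerate(row):
            x = c * (key_size + gap)
            y = r * (key_size + gap)
            layout[label] = (x, y, key_size, key_size)
    return layout
```

In the real program, each rectangle would be drawn onto the webcam frame with OpenCV so the user can see the keyboard they are "touching".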

Purpose
The purpose of the virtual keyboard project is to offer a hands-free and intuitive input method
for users, leveraging advancements in computer vision and machine learning technologies. By
interpreting hand gestures, users can type characters and control computer functions without
the need for physical keyboards. This project aims to enhance accessibility and usability,
catering to diverse user needs and environments.

Scope
The virtual keyboard project focuses on implementing real-time hand tracking and gesture
recognition functionalities using libraries such as OpenCV and Mediapipe. The scope includes
the development of a user-friendly interface that displays a virtual keyboard on the screen.
The program interprets hand movements to simulate key presses and performs
corresponding actions based on user input. The project can potentially be extended to
support additional features such as predictive text input, customizable layouts, and
integration with other applications.
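The key-press simulation described in this scope reduces to a hit test between the tracked fingertip and the key rectangles. The sketch below uses a tiny illustrative two-key layout; in the actual project, once a key is matched, pynput's keyboard Controller would press and release that key on the operating system (omitted here so the sketch stays self-contained).

```python
# Hit-testing a tracked fingertip against virtual-key rectangles.
# The two-key layout is illustrative only.
LAYOUT = {"A": (0, 0, 80, 80), "B": (90, 0, 80, 80)}

def key_at(point, layout=LAYOUT):
    """Return the key label whose rectangle contains `point`, else None."""
    px, py = point
    for label, (x, y, w, h) in layout.items():
        if x <= px < x + w and y <= py < y + h:
            return label
    return None
```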
Characteristics
- Real-time hand tracking: The project utilizes computer vision techniques to accurately track
the movement of hands in real time.

- Gesture recognition: Hand gestures are recognized and mapped to specific keys on the
virtual keyboard, allowing users to input characters and commands.

- Customizable interface: The virtual keyboard interface can be customized to accommodate
different keyboard layouts and user preferences.

- Accessibility: The project aims to enhance accessibility by providing an alternative input
method for individuals with disabilities or limitations in physical movement.
Disadvantages
- Accuracy limitations: The accuracy of hand tracking and gesture recognition may vary
depending on factors such as lighting conditions, camera quality, and hand movement speed.

- Learning curve: Users may need time to familiarize themselves with the hand gestures and
operation of the virtual keyboard, especially if they are accustomed to traditional keyboard
input.

- Dependency on hardware: The virtual keyboard project relies on the availability of a webcam
or camera-equipped device, which may not always be accessible or compatible with the
system.

- Limited functionality: While the virtual keyboard provides basic typing and control
capabilities, it may lack the advanced features and efficiency of physical keyboards,
particularly for tasks requiring extensive text input or specialized functions.

Overall, the virtual keyboard project represents an innovative approach to human-computer
interaction, offering a hands-free and accessible input method that complements traditional
keyboard interfaces.
Software Requirements and Specifications
To build the virtual keyboard using OpenCV, Visual Studio Code (VS Code) was used as the development environment.

Table 1: Details of Project Requirement


| Project Detail               | Value                     | Definition                                                                                                                                                                 |
|------------------------------|---------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Project Name                 | Virtual Keyboard using ML | A virtual keyboard application, built using machine learning and computer vision techniques, serves as a prime example of innovative human-computer interaction.             |
| Python Version (Recommended) | 3.11                      | Python 3.11 introduces some new syntax, a few small modifications to existing behavior and, most importantly, a slew of performance improvements, following in the footsteps of the previous 3.10 version. |
| Programming Language Used    | Python                    | Python is a general-purpose programming language, used here for its wide range of built-in and third-party libraries for computer vision and input control.                 |
| IDE Tool (Recommended)       | Visual Studio Code        | VS Code is used here for Python coding.                                                                                                                                     |
| Project Type                 | Virtual application       | A virtual application that demonstrates camera-based, innovative human-computer interaction.                                                                                |
SNAPSHOTS OF PROGRAM

Output

In the snapshot below, a hand gesture is shown on the hand-tracking interface.

The final output is shown in Notepad.
Conclusion

In conclusion, the development of the Gesture Type virtual keyboard application showcases
the transformative potential of machine learning and computer vision in human-computer
interaction. By harnessing advanced algorithms and techniques, Gesture Type offers users a
novel and intuitive way to type and control digital devices through hand gestures.

Throughout this report, we have explored the key features and functionalities of Gesture
Type, highlighting its real-time gesture recognition, customizable virtual keyboard interface,
predictive text input, accessibility features, and multi-platform support. We have also
discussed various use cases, including accessibility, hands-free typing, gaming, multitasking,
and language learning, demonstrating the versatility and applicability of the application
across different domains.

Gesture Type represents a significant advancement in the field of human-computer
interaction, offering a seamless and adaptable typing experience that caters to diverse user
needs and preferences. By providing users with an alternative input method that eliminates
the reliance on physical keyboards, Gesture Type enhances accessibility, productivity, and
user experience in various contexts.
