
A

Technical Seminar Report


On

TOUCHLESS TOUCHSCREEN TECHNOLOGY

Submitted to JNTU HYDERABAD


In Partial Fulfilment of the Requirements for the Award of the Degree of

BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY

Submitted
By

Bhati Dhruv (208R1A1266)

Under the Esteemed guidance of


Dr. X. S. Asha Shiny
Professor, Department of IT

Department of Information Technology

CMR ENGINEERING COLLEGE


(UGC AUTONOMOUS)

(Accredited by NAAC & NBA, Approved by AICTE NEW DELHI, Affiliated to JNTU, Hyderabad)
(Kandlakoya, Medchal Road, R.R. Dist. Hyderabad-501 401)

(2023-2024)
CMR ENGINEERING COLLEGE
(UGC AUTONOMOUS)
(Accredited by NAAC & NBA, Approved by AICTE NEW DELHI, Affiliated to JNTU, Hyderabad)
(Kandlakoya, Medchal Road, R.R. Dist. Hyderabad-501 401)

Department of Information Technology

CERTIFICATE

This is to certify that the seminar entitled “Touchless Touchscreen Technology”, submitted to JNTU
Hyderabad, is a bonafide work carried out by

Bhati Dhruv (208R1A1266)

in partial fulfilment of the requirement for the award of the degree of BACHELOR OF
TECHNOLOGY in INFORMATION TECHNOLOGY from CMR Engineering College,
affiliated to JNTU, Hyderabad, under our guidance and supervision.
The results presented in this seminar have been verified and are found to be satisfactory. The results
embodied in this seminar have not been submitted to any other university for the award of any other
degree or diploma.

Internal Guide                          Head of the Department

Dr. X. S. Asha Shiny                    Dr. Madhavi Pingili
Professor                               Professor and HOD
Department of IT                        Department of IT
CMREC, Hyderabad                        CMREC, Hyderabad
DECLARATION

This is to certify that the work reported in the present seminar entitled “Touchless
Touchscreen Technology” is a record of bonafide work done by me in the Department of
Information Technology, CMR Engineering College, JNTU Hyderabad. The reports are based
on the seminar work done entirely by me and not copied from any other source.

Bhati Dhruv (208R1A1266)


ACKNOWLEDGEMENT

I am extremely grateful to Dr. A. Srinivasula Reddy, Principal and Dr. Madhavi Pingili,
HOD, Department of IT, CMR Engineering College for their constant support.

I am extremely thankful to Dr. X. S. Asha Shiny, Professor and Internal Guide, Department
of IT, for her constant guidance, encouragement and moral support throughout the seminar.

I would be failing in my duty if I did not acknowledge with grateful thanks the authors of
the references and other literature referred to in this seminar.

I express my thanks to all staff members and friends for all the help and co-ordination
extended in bringing out this seminar successfully and on time.

Finally, I am very much thankful to my parents who guided me for every step.

Bhati Dhruv (208R1A1266)


CONTENTS

ABSTRACT

LIST OF FIGURES

1. INTRODUCTION

2. LITERATURE SURVEY

3. ARCHITECTURE

4. WORKING METHODOLOGY

5. TECHNOLOGY

6. ADVANTAGES

7. DISADVANTAGES

8. APPLICATIONS

9. FUTURE ENHANCEMENT

10. CONCLUSION

11. REFERENCES
ABSTRACT

It was the touch screen that initially created a great furore. Gone are the days when you had to fiddle
with a touch screen and end up scratching it. Touch screen displays are ubiquitous worldwide, and
frequently touching a touchscreen display with a pointing device such as a finger can result in the
gradual desensitization of the touchscreen to input and can ultimately lead to failure of the
touchscreen. To avoid this, a simple user interface for touchless control of electrically operated
equipment is being developed. Elliptic Labs' innovative technology lets you control gadgets such as
computers, MP3 players or mobile phones without touching them. Unlike other systems, which
depend on distance to the sensor or on sensor selection, this system depends on hand and/or finger
motions: a hand wave in a certain direction, a flick of the hand in one area, holding the hand in one
area, or pointing with one finger, for example. The device is based on optical pattern recognition
using a solid-state optical sensor.

LIST OF FIGURES

FIGURE NO    DESCRIPTION

3.1          Architecture
4.1          Working methodology
4.2          Explanation

1. INTRODUCTION
1.1 Introduction

The touchless touch screen sounds like it would be nice and easy; however, after closer examination it
looks like it could be quite a workout. This unique screen is made by TouchKo, White Electronics
Designs, and Groupe 3D. The screen resembles the Nintendo Wii without the Wii controller. With the
touchless touch screen your hand doesn't have to come in contact with the screen at all: it works by
detecting your hand movements in front of it. This is a pretty unique and interesting invention, until
you break out in a sweat. This technology doesn't compare to the hologram-like IO2 Technologies
Heliodisplay M3, but that is for anyone who has $18,100 lying around. Simply point your finger in the
air towards the device and move it accordingly to control the navigation in the device [8]. You
probably won't see this screen in stores any time soon.

Everybody loves a touch screen, and when you get a gadget with a touch screen the experience is
really exhilarating. When the iPhone was introduced, everyone felt the same. But gradually the
exhilaration started fading: while using the phone with the fingertip or with a stylus, the screen started
getting lots of fingerprints and scratches. Even when we use a screen protector, dirty marks over such
a beautiful glossy screen are a strict no-no. The same thing happens with the iPod Touch; most of the
time we have to wipe the screen to get a better, unobstructed view of it. Many personal computers
will likely have similar screens in the near future. But touch interfaces are nothing new; witness
ATMs.

How about getting completely out of touch? A startup called LM3Labs says it is working with major
computer makers in Japan, Taiwan and the US to incorporate touchless navigation into their laptops.
Called Airstrike, the system uses tiny charge-coupled device (CCD) cameras integrated into each side
of the keyboard to detect user movements. You can drag windows around or close them, for instance,
by pointing and gesturing in midair above the keyboard. You should be able to buy an Airstrike-
equipped laptop next year, with high-end stand-alone keyboards to follow. Any such system is
unlikely to replace typing and mousing, but that is not the point: Airstrike aims to give you an
occasional quick break from those activities.

Thanks to Elliptic Labs' innovative technology, you can also control gadgets such as computers, MP3
players or mobile phones without touching them: simply point your finger in the air towards the
device and move it accordingly to control its navigation. They term this a “touchless human/machine
user interface for 3D navigation”.

2. LITERATURE SURVEY

D. Wu proposes a semi-supervised ranked progressive framework based on the Hidden
Markov Model (HMM) [1].

E. introduced a novel framework based on synthesized pseudo Diffusion-Weighted Imaging
(DWI) from perfusion parameter maps to obtain better image quality for more accurate
segmentation. The proposed framework consists of three components, built on a
Gaussian-Bernoulli Deep Belief Network (DBN) to attend to the bony gesture, along with a
3D Convolutional Neural Network (3DCNN) [2].

The touchscreen era is coming to an end. HP began developing a massive touchscreen panel
for their Manhattan PR agency about seven years ago. The Wall of Touch, their final product,
was such a hit that it has now been installed in the offices of additional clients, with more on
the way. Despite its name, another feature that distinguishes the Wall is that users are not
required to contact it. The Wall of Touch is made up of up to nine 1080p 43-46 inch displays;
HP opted against a single large panel, since it would necessitate rear projection.

The objective of Nielsen was to validate deep learning-based Alberta Stroke Program Early
Computed Tomography Score (ASPECTS) calculation software that utilizes a three-
dimensional fully convolutional network-based brain hemisphere comparison algorithm (3D-
BHCA). The authors retrospectively collected head non-contrast computed tomography (CT)
data from 71 patients with acute ischemic stroke and 80 non-stroke patients. The results for
ASPECTS on CT assessed by five stroke neurologists and by the 3D-BHCA model were
compared with the ground truth by means of region-based and score-based analyses. They
conclude that the automated ASPECTS calculation software they developed using a deep
learning-based algorithm was superior or equal to stroke neurologists in performing
ASPECTS calculation in patients with acute stroke and non-stroke patients.

A Hidden Markov Model was needed to ascertain the gesture array, together with Deep
Neural Networks (DNNs) [3].

The authors proposed a multi-scale U-Net for faster ischemic stroke segmentation from
non-enhanced CT images. To achieve this, the researchers used a multi-scale U-Net deep
network model to segment image features of 30 stroke patients. They utilized the Dice loss
function to train the model and address the data-imbalance problem. The run time of
automatic segmentation was less than 20 ms, indicating that this method can meet the
real-time clinical needs for diagnosing acute ischemic stroke and providing thrombolytic
therapy. Because artificial intelligence technologies require large databases to function
effectively, imaging data must be collected systematically. Another U-Net stroke
segmentation, introduced by Soltanpour et al., was designed to segment objects at different
scales and with unusual appearances. The proposed method also used contra-lateral and
corresponding Tmax images to enrich the input CTP maps.

P. Molchanov employs convolutional deep neural networks to fuse data originating from
more than one sensor and also to segregate signalling [4].

The authors developed and evaluated a convolutional neural network (CNN) algorithm for
detecting and segmenting acute ischemic lesions from CT angiography (CTA) images of
patients with suspected middle cerebral artery stroke. The algorithm's performance was
compared with the volumes reported by the widely used CT perfusion-based RAPID software
(IschemaView). The gesture algorithm distinguishes efficiently between images captured
inside and outside the car during daylight and darkness, and it consumes less power than
other techniques. Hand gestures, used primarily in graphical interfaces such as those in cars,
can lessen visual and cognitive distraction and may also improve assurance and comfort. The
use of a small dataset is a critical limitation of this survey for external validation.

J. Donahue provides a family of architectures that promises to be of interest for a range of
vision problems; it covers images, activities and video obtained in vision tasks. He forms a
novel recurrent convolutional neural network [5].

Distracted gesturing can interfere with the driver and limit concentration on driving, which
can further result in casualties. Organizing multiple sensors for accurate and robust driver
hand-movement identification (a tracking system, a colour camera, and also a depth camera)
makes the combined process robust against fluctuating light settings.

The authors proposed a novel deep learning network to segment AIS lesions, which yields
promising segmentation accuracy and achieves semantic decoupling by processing different
parameter modalities separately. The significance of this study lies in proving the potential of
cross-modal attention interactions to assist in identifying new imaging biomarkers for more
accurately predicting AIS prognosis in future studies.

Spatio-temporal features are retrieved by recurrent convolutional models that are doubly
deep, in that they can be compositional in space and time [6].

The authors proposed a novel framework that automatically segments stroke lesions in DWI.
This framework consists of two convolutional neural networks (CNNs). The first CNN,
known as the EDD Net, is an ensemble of two DeconvNets and is responsible for detecting
the lesions. The second CNN, called the MUSCLE Net, evaluates the lesions detected by the
EDD Net and removes potential false positives. They attempted to solve this problem using
both CNNs, and the results demonstrate very good performance. The framework was
validated using a large dataset of DW images from 741 subjects. The mean accuracy,
measured by the Dice coefficient, is 0.67 overall. Additionally, the mean Dice scores for
subjects with small and large lesions were 0.61 and 0.83, respectively. The lesion detection
rate achieved was 0.94.
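For reference, the Dice coefficient quoted above measures the overlap between a predicted lesion
mask A and the ground truth B as 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch, with toy masks
that are purely illustrative and not data from the cited study:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy masks: prediction covers 2 of the 3 true foreground pixels, no extras
pred  = np.array([[1, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 1], [0, 0, 0]])
print(dice_coefficient(pred, truth))  # 0.8
```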

3. ARCHITECTURE

Fig: 3.1 Architecture

 The TouchPoint employs a novel haptic touch technique. It detects whether a finger is
contacting a given surface using 3D ultrasonic technology; this is how non-traditional
surfaces can be turned into touchscreens. The sound waves detect whether a finger is hitting
the surface and can distinguish between different touches, allowing for the possibility of
multipurpose gesturing in the future.
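As a rough illustration of the ultrasonic principle, the round-trip time t of a reflected pulse gives the
distance to the finger as d = v * t / 2, with v roughly 343 m/s in air. A minimal Python sketch,
assuming an invented 3 mm touch tolerance rather than any vendor specification:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to the reflecting finger: the pulse travels there and back."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def finger_on_surface(round_trip_s: float, surface_m: float,
                      tol_m: float = 0.003) -> bool:
    """Treat the finger as 'touching' when its echo distance matches the
    known surface distance to within a small tolerance (assumed 3 mm)."""
    return abs(echo_distance_m(round_trip_s) - surface_m) < tol_m

# A 1.166 ms round trip is about 0.20 m away, i.e. touching a surface at 0.20 m
print(echo_distance_m(0.001166))           # ~0.200
print(finger_on_surface(0.001166, 0.200))  # True
```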

4. WORKING METHODOLOGY

 Motion is sensed and processed into on-screen movements by interacting with the line of
sight of the sensors installed around the screen being used. A mechanism is in place to
prevent inadvertent motions from being taken as input, and the approach appears promising.
Without ever putting your fingertips on the screen, the device can detect movements in three
dimensions. With a touchless interface, we do not need to wear any special sensors on our
hands for navigation control; we can manipulate objects in 3D simply by pointing a finger at
the screen.
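To make the pipeline concrete, the sketch below maps a sensed fingertip position to screen
coordinates while ignoring both out-of-range hands and sub-threshold jitter, so inadvertent motions
are not taken as input. Every name and threshold here is an illustrative assumption, not a detail of an
actual product:

```python
# Hypothetical mapping from a sensed 3-D fingertip position to screen
# coordinates, with a dead zone to ignore inadvertent micro-motions.
SCREEN_W, SCREEN_H = 1920, 1080
DEAD_ZONE = 0.01          # normalized units; assumed jitter threshold

last = [0.5, 0.5]         # last accepted (x, y), normalized 0..1

def to_cursor(x: float, y: float, z: float, z_max: float = 0.30):
    """Map a fingertip at normalized (x, y), depth z metres, to pixels.
    Gestures farther than z_max from the screen are ignored entirely."""
    if z > z_max:
        return None                      # hand too far away: not input
    if abs(x - last[0]) < DEAD_ZONE and abs(y - last[1]) < DEAD_ZONE:
        return int(last[0] * SCREEN_W), int(last[1] * SCREEN_H)
    last[0], last[1] = x, y
    return int(x * SCREEN_W), int(y * SCREEN_H)

print(to_cursor(0.25, 0.40, 0.10))   # (480, 432)
print(to_cursor(0.25, 0.40, 0.50))   # None: beyond sensing range
```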

EXPLANATION:

Fig 4.1 : Working methodology

 The convolutional neural network is made up of four main parts, which help CNNs mimic
how the human brain operates to recognize patterns and features in images (a minimal sketch
follows the list):
Convolutional layers
Rectified Linear Unit (ReLU for short)
Pooling layers
Fully connected layers
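A minimal sketch of those four parts in PyTorch, with layer sizes invented purely for illustration and
sized for a 1 x 28 x 28 input:

```python
import torch.nn as nn

# The four parts named above, in order: convolution, ReLU, pooling,
# and a fully connected classifier head (for a 1 x 28 x 28 input).
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),                                   # rectified linear unit
    nn.MaxPool2d(2),                             # pooling layer: 28 -> 14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # fully connected layer
)
```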

WORKING OF ARTIFICIAL NEURAL NETWORK(ANN):

 Artificial Neural Network can be best represented as a weighted directed graph, where the
artificial neurons form the nodes.

 The associations between neuron outputs and neuron inputs can be viewed as the directed
edges, each carrying a weight.

 The Artificial Neural Network receives the input signal from the external source in the
form of a pattern or an image, expressed as a vector.

 These inputs are then mathematically denoted x(n) for each of the n inputs.
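Equivalently, each neuron computes a weighted sum of its inputs x(n) over the incoming edges, adds
a bias, and applies an activation function. A minimal NumPy sketch with made-up weights and
inputs:

```python
import numpy as np

def neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.1, 0.9])      # input pattern vector x(n)
w = np.array([0.4, -0.2, 0.7])     # edge weights into the neuron
print(neuron(x, w, b=0.05))        # ~0.70
```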
FUNCTION:

 Touch Wall's simple mechanical mechanism is substantially less expensive to manufacture.
Three infrared lasers scan a surface to create a touch wall; when something breaks through
the laser line, a camera records it and feeds it back to the Plex program. Earlier prototypes
were created on a cardboard screen: the Plex interface is displayed on the cardboard using a
projector, and the solution works perfectly. Touch Wall isn't the first multi-touch product; in
addition to the iPhone and Surface, there are several early prototypes in the works in this
arena. What Microsoft has accomplished with a few hundred dollars' worth of gear is nothing
short of amazing.

Fig 4.2 : Explanation

 As a result, the sensor produces electrical signals. These signals are analogue in nature, so
an analogue-to-digital converter (ADC) converts them into digital signals for further
processing. The host controller (HC) receives the digital output of the ADC and is in charge
of packet transmission on the bus, using 1 ms frames. A Start of Frame (SOF) packet is
generated by the host controller at the start of each frame; it synchronizes the start of the
frame and keeps track of the frame number. The host also manages the depth map, an image
that provides data about the distance between the surfaces of scene objects and a viewpoint.
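A toy sketch of that pipeline: an 8-bit ADC quantizes the analogue signal, and the host controller
emits a Start of Frame packet carrying a running frame number every 1 ms. The packet layout below
is invented for illustration and does not reflect any particular bus standard:

```python
def adc_8bit(voltage: float, v_ref: float = 3.3) -> int:
    """Quantize an analogue voltage into an 8-bit digital code."""
    code = round(voltage / v_ref * 255)
    return max(0, min(255, code))

def sof_packet(frame_number: int) -> bytes:
    """Start of Frame marker plus an 11-bit frame counter (toy layout)."""
    n = frame_number % 2048          # frame number wraps at 11 bits
    return bytes([0xA5, n >> 8, n & 0xFF])

# One SOF packet per 1 ms frame, followed by that frame's sample
for frame, volts in enumerate([0.0, 1.65, 3.3]):
    print(sof_packet(frame).hex(), adc_8bit(volts))
# a50000 0 / a50001 128 / a50002 255
```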

5. TECHNOLOGY

Early ischemic stroke detection using deep learning involves a sophisticated combination of
technologies and methodologies. By integrating them, such systems can achieve high
accuracy, sensitivity, and efficiency in clinical practice, facilitating timely intervention and
improving patient outcomes.

 Deep Learning Models:

Convolutional Neural Networks (CNNs): CNNs are the cornerstone of deep learning-based
image analysis. These networks consist of multiple layers of interconnected neurons that can
automatically extract hierarchical features from input images.
Recurrent Neural Networks (RNNs): RNNs, particularly Long Short-Term Memory (LSTM)
networks, are used for sequential data analysis, such as processing time-series imaging data or
clinical time-series data.
Deep Belief Networks (DBNs): DBNs are generative models composed of multiple layers of
probabilistic latent variables. They can be used for unsupervised feature learning and may
complement supervised CNN models.
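For the sequential case mentioned above, a minimal PyTorch LSTM classifier over a time series of
per-timepoint features; the feature, hidden, and class counts are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SeriesClassifier(nn.Module):
    """LSTM over a clinical/imaging time series, classifying the sequence."""
    def __init__(self, n_features: int = 8, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)        # h_n: final hidden state
        return self.head(h_n[-1])

logits = SeriesClassifier()(torch.randn(4, 20, 8))  # 4 series, 20 timepoints
print(logits.shape)                                 # torch.Size([4, 2])
```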

 Medical Imaging Modalities:

Computed Tomography (CT): CT scans provide detailed cross-sectional images of the brain,
allowing visualization of ischemic lesions, vascular structures, and tissue perfusion.
Magnetic Resonance Imaging (MRI): MRI offers superior soft tissue contrast and can detect
subtle changes associated with ischemic stroke, including diffusion-weighted imaging (DWI)
for detecting acute infarcts and perfusion-weighted imaging (PWI) for assessing tissue viability.
Angiography: Angiographic imaging techniques, such as CT angiography (CTA) or magnetic
resonance angiography (MRA), can visualize blood vessel anatomy and identify occlusions or
stenoses indicative of ischemic stroke.

 Image Preprocessing:

Normalization: Adjusting image intensity values to a standard range to enhance consistency
and comparability across different scans.
Registration: Aligning images from multiple modalities or time points to ensure spatial
coherence and facilitate feature extraction.
Noise Reduction: Filtering out noise and artifacts from images to improve the signal-to-noise
ratio and enhance feature clarity.
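A minimal sketch of the normalization step, using a simple min-max rescaling to [0, 1] (one of
several common schemes, chosen here purely for illustration):

```python
import numpy as np

def normalize(scan: np.ndarray) -> np.ndarray:
    """Min-max normalization of image intensities to the range [0, 1]."""
    lo, hi = scan.min(), scan.max()
    if hi == lo:
        return np.zeros_like(scan, dtype=np.float32)
    return ((scan - lo) / (hi - lo)).astype(np.float32)

ct = np.array([[-1000.0, 0.0], [40.0, 3000.0]])   # toy intensity values
print(normalize(ct))
```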

 Data Augmentation:
Geometric Transformation: Rotations, translations, scaling, and flipping of images to
increase dataset variability and robustness to spatial transformations.
Intensity Transformation: Adjusting image contrast, brightness, or gamma correction to
simulate variations in imaging conditions.
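A sketch of those augmentations with NumPy alone: a random flip, a rotation by a multiple of 90
degrees, and a small brightness jitter. The specific probabilities and ranges are illustrative
assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Randomly flip and rotate an image by a multiple of 90 degrees,
    then jitter its brightness slightly (intensity transformation)."""
    if rng.random() < 0.5:
        img = np.fliplr(img)                   # horizontal flip
    img = np.rot90(img, k=rng.integers(0, 4))  # 0/90/180/270 degree turn
    return img * rng.uniform(0.9, 1.1)         # +/- 10 % brightness jitter

slice_ = rng.random((64, 64))
print(augment(slice_).shape)   # (64, 64)
```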
 Transfer Learning:
Leveraging pre-trained deep learning models (often trained on large-scale natural image
datasets like ImageNet) and fine-tuning them on medical imaging data for ischemic stroke
detection. This approach helps overcome data scarcity issues and accelerates model
convergence.
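A common sketch of that recipe with torchvision (string-based weights API, available from
torchvision 0.13 onwards): load an ImageNet-pretrained backbone, freeze its features, and train only
a new classification head. The two-class stroke/no-stroke head is an assumption for illustration:

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet
model = models.resnet18(weights="IMAGENET1K_V1")

for p in model.parameters():
    p.requires_grad = False          # freeze the pretrained feature extractor

# Replace the final layer: 2 outputs, e.g. stroke vs. no stroke (assumed)
model.fc = nn.Linear(model.fc.in_features, 2)
# Only model.fc's parameters are now trainable (fine-tuning the head)
```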
 High-Performance Computing (HPC):
Utilizing powerful computing resources, such as GPUs or TPUs, to accelerate model training
and inference tasks, which involve processing large volumes of high-resolution medical
imaging data.
 Clinical Data Integration:
Integrating clinical information, such as patient demographics, medical history,
symptomatology, and laboratory results, with imaging data to enhance the diagnostic accuracy
and clinical relevance of deep learning-based stroke detection systems.

6. ADVANTAGES
• The device lasts a long time and is simple and easy to use.

• Because the screen is never touched, a clear image is always visible.

• Because commands are accepted via sensors (verbal or hand gestures), the GUI requires
less space; as a result, the touch area is reduced and the text quality on the screen improves.

• No screen desensitization occurs.

• Suitable for people with physical disabilities.

7. DISADVANTAGES

• High-resolution (HD) cameras are required.

• The public's interaction must be monitored.
• Image processing is extremely sensitive to noise (lens aberrations).
• The initial cost is very high.
• It must be used in a sophisticated environment.
• It needs very high-speed image processing (software and hardware).

8. APPLICATIONS

• The applications of touchless screen technology include:

• Touchless Monitor
• Touch Wall
• Touchless UI
• Touchless SDK

A. Touchless Monitor: the touch-less display is intended for uses where touching the screen or
operating a mouse is impractical. This TouchKo monitor was recently demonstrated at CeBIT by
White Electronic Designs. Capacitive sensors on the display can detect movements up to 15-20 cm
away from the screen, and the software converts these gestures into screen commands.

9. FUTURE ENHANCEMENTS

Future enhancements involve the integration of multimodal data fusion, combining imaging
and clinical data for a comprehensive view and aiding clinical decision-making. Enhanced
interpretability could provide clinicians with insight into the model's decisions. On the
hardware side, Touch Wall and Plex have a lot in common with Microsoft Surface, a
multitouch table computer that was first introduced in 2007 and has only lately gone
commercial; it is mostly available in select AT&T stores. The simple mechanical mechanism
is also substantially less expensive to manufacture. Three infrared lasers scan a surface to
create a touch wall; when something breaks through the laser line, a camera records it and
feeds it back to the Plex program. Earlier prototypes were created on a cardboard screen.

10. CONCLUSION
The likelihood of preventing stroke grows as technology becomes more advanced, particularly
in AI, deep learning, and machine learning. The purpose of this paper is to provide an
overview of touchless touch screen technology. By utilizing this technology, the user gains
flexibility in how they use the system, and the maintenance of touch screen displays can be
reduced. Because an external sensing element such as a camera is not required, it is an
extremely low-power solution. The fault detection rate is also quite low, making the method
compact and attractive. Because the market for touchless and gesture recognition is expected
to grow dramatically, implementation will be critical. Computers, cell phones, webcams,
laptops, and other electronic gadgets can all benefit from touchless screen technology.
Perhaps, after a few years, our bodies will be transformed into a virtual mouse, virtual
keyboard, or input device. While the gadget has potential, it appears that the API that supports
it is not yet able to comprehend the entire spectrum of sign language; the controller may now
be used to recognize basic signs with some effort.

The papers that applied the same analysis to the same data for predicting ischemic stroke
arrive at almost the same conclusions and accuracy. Most of the papers use images as the
dataset for ischemic stroke prediction, and most use the Convolutional Neural Network
(CNN) or one of its derivatives. This means that the methods and data used across most of
the papers converge to one conclusion. This paper can be referenced by other researchers or
anyone interested in this topic. We therefore hope this literature review can be useful for
other researchers and for innovation in medical technologies to create a better life. For future
research, we hope that deep learning models can be produced with higher accuracy and more
accurate application in the healthcare system.

11. REFERENCES

[1] S. Prakasam, M. Venkatachalam, M. Saroja, N. Pradheep, “Gesture Recognition Using a
Touchless Sensor to Reduce Driver Distraction,” International Research Journal of Engineering
and Technology (IRJET), e-ISSN: 2395-0056, p-ISSN: 2395-0072, Volume 03, Issue 09,
Sep 2022.
[2] Ms. Gayatree S. Nakhate, Prof. Anup A. Pachghare, “Touchless Screen Technology,” IETE
Zonal Seminar “Recent Trends in Engineering & Technology,” ISSN: 2277-9477, 2018.
[3] Special Issue of International Journal of Electronics, Communication & Soft Computing
Science and Engineering, 2017.
[4] Nilofar E. Chanda, N.B. Navale College of Engineering, Solapur, Maharashtra, India,
“Study of Touch Less Touch Screen Technology,” ISSN (Print): 2393-8374, (Online):
2394-0697, Volume 4, Issue 11, 2016.
[5] Athira M, Department of CSE, Musaliar College of Engineering and Technology,
Pathanamthitta, “Touchless Technology,” International Research Journal of Engineering and
Technology (IRJET), e-ISSN: 2395-0056, p-ISSN: 2395-0072, Volume 07, Issue 03, Mar 2020.
[6] Kalaiselvi N and Vengateshkumar S, “Touch Less Touch Screen,” International Journal of
Recent Scientific Research, Vol. 10, Issue 10(C), pp. 35353-35356, October 2019.
[7] Muneer Al-Hammadi, Ghulam Muhammad, Wadood Abdul, Mansour Alsulaiman,
Mohamed A. Bencherif, and Mohamed Amine Mekhtiche, “Hand Gesture Recognition for
Sign Language Using 3DCNN,” IEEE Access, ISSN: 2169-3536, Volume 8, April 2020.
[8] Y. Vamsi Krishna, Akshatha K P, Meghana B, “Machine Learning for Touchless Touch
Screen,” International Journal of Innovative Research in Science, Engineering and Technology,
ISSN (Online): 2319-8753, ISSN (Print): 2347-6710, Volume 7, Special Issue 6, May 2018.
[9] Deepak Chahal, Vidit Narang, “An Insight to TouchLess Touch-Screen,” International
Journal of Scientific Research in Computer Science Applications and Management Studies,
ISSN: 2319-1953, Volume 8, Issue 2, March 2019.
[10] Mr. Vrushabh Shivanand Saharkar, “Touchless Touch Screen Devices,” International
Journal of Multidisciplinary Research Professionals (IJMDRP), Volume 01, Issue 01,
November 2020.
[11] Gagana H, Mrs. Neha Singhal, Kavyashree S, “Innovation of Touchless Touchscreen
Technology in Automotive User Interface,” 4th National Conference on Advancements in
Information Technology, e-ISSN: 2456-9437, Volume 03, Issue 02, 2022.

