TECHNICAL SEMINAR
REPORT ON
“High Resolution Touch Screen Module”
Submitted in partial fulfillment for the award of the degree of
Bachelor of Engineering
In
Electronics and Communication Engineering
By
LEELA KUMARI S
1AT16EC068
Under the guidance of
CERTIFICATE
This is to certify that the report of seminar on the topic entitled
“High Resolution Touch Screen Module” has been
successfully carried out by LEELA KUMARI S (1AT16EC068) in
partial fulfilment for the award of the Bachelor Degree in Electronics
and Communication Engineering during the academic year 2019-
2020. It is certified that all corrections and suggestions indicated for
internal assessment have been incorporated in the report deposited in
the department library. The seminar report has been approved as it
satisfies the academic requirements in respect of seminar work
prescribed for the B.E Degree.
LEELA KUMARI S
1AT16EC068
ABSTRACT
Touch screen technologies have revolutionized the digital era with their
increasing demand. Touch screens are common in devices such as game
consoles, personal computers, tablet computers, electronic voting machines,
point of sale systems and smartphones. Though many technologies exist to
create touch screen modules, none fits a low budget. This work focuses on
converting any monitor or surface into a touch screen using object detection,
isolation and depth detection techniques. It is a low-cost alternative to
existing touch-screen modules, comprising finger detection, coordinate
fixation and implementation of basic single-touch mouse functions. This virtual
screen can be attached to a monitor or a projected screen and detect touch by
interacting with the main computer to make it work as a touch screen.
TABLE OF CONTENTS
SL NO. TITLE PAGE NO
Chapter 1 Introduction 1
Chapter 2 Literature Survey
Chapter 3 Technologies Used
Chapter 4 Implementation 11
Chapter 5 Proposed System
Chapter 6 Algorithm 18
Chapter 7 Results 20
Chapter 8 Advantages and Disadvantages 21
8.1 Advantages 21
8.2 Disadvantages 21
Chapter 9 Applications 22
Chapter 10 Conclusion and Future Work 23
10.1 Conclusion 23
References 24
TABLE OF FIGURES
FIGURE NO. TITLE PAGE NO
1.1 Touch screen technology 1
4 Experimental setup 11
High Resolution Touch Screen Module
Chapter 1
Introduction
Though many technologies exist to create touch screen modules, none fits a low
budget. Touch screens have done in the 21st century what the introduction of the mouse did
in the 20th: they created a new platform for interaction with computer software and made this
interaction easy to understand and convenient for the ordinary user. The skin is the first of all
human senses to develop. For this reason it is the most attuned sense and can be used
in a variety of ways to control a range of devices. It is also the largest organ of the human
body and can thus control devices more easily than the other sense organs.
A touch screen is an input and output device normally layered on the top of an
electronic visual display of an information processing system. The input is given by the user
to the information processing system through simple or multi-touch gestures by touching the
screen with a special stylus and/or one or more fingers. Some touch screens work with
ordinary or specially coated gloves, while others respond only to a special stylus/pen. The
user can control what is displayed, as well as its size, orientation, etc., through touch
functions.
In 2010, a Nokia Research Centre team from Finland [1] showed how a 2 m × 1.5 m ice
wall could be programmed to have touch capabilities, ushering in a new idea of the technology
and its applications. This showed how any surface could be programmed to act as a touch
screen and therefore allowed a whole new era of computing to begin. The touch screen enables
the user to interact directly with what is displayed, rather than using a mouse, touchpad, or
any other such device. Touch screens are extensively used in gaming consoles, e-readers,
handheld devices like mobiles etc. In addition, they can be attached as secondary devices or
alternate input methods to computer terminals. In fields where accurate, intuitive and rapid
user interaction is required, the keyboard and mouse would not provide the necessary
capabilities and thus make the use of touch screens necessary. Hence they are extensively
used in museums, kiosks, mall maps, medical equipment and ATMs.
Until 1988, touch screens had a reputation for being imprecise. They were
considered able to select only objects larger than the average
finger, as specified in most manuals of the time. The main reason for this was that
targets were selected as soon as the finger came over them and the actions were performed
immediately. Errors were common, due to parallax or calibration problems, leading to
frustration. A "lift-off" strategy was developed by researchers at the University of Maryland
Human-Computer Interaction Lab [2] and is still used today: when the user touches a target
the location is noted, but the click is registered only when the finger leaves the target point,
enabling selection of a single pixel on a Video Graphics Array (VGA) screen. Even so, no
technology exists that makes converting any screen into a touch screen possible without a
very high cost. A low-cost module developed with image processing would tick all the boxes
and can be made both feasible and scalable with slight adjustments.
● Touch Sensor
A touch screen sensor is a clear glass panel with a touch responsive surface. The touch
sensor/panel is placed over a display screen so that the responsive area of the panel covers the
viewable area of the video screen. There are several different touch sensor technologies on
the market today, each using a different method to detect touch input. The sensor generally
has an electrical current or signal going through it and touching the screen causes a voltage or
signal change. This change is used to determine the location of the touch on the
screen.
● Controller
The controller is a small PC card that connects between the touch sensor and the PC. It takes
information from the touch sensor and translates it into information that the PC can understand.
The controller is usually installed inside the monitor for integrated monitors or it is housed in
a plastic case for external touch add-ons/overlays. The controller determines what type of
interface/connection you will need on the PC. Integrated touch monitors will have an extra
cable connection on the back for the touchscreen. Controllers are available that can connect
to a Serial/COM port (PC) or to a USB port (PC or Macintosh). Specialized controllers are
also available that work with DVD players and other devices.
● Software Driver
The driver is a software update for the PC system that allows the touchscreen and computer
to work together. It tells the computer's operating system how to interpret the touch event
information that is sent from the controller. Most touch screen drivers today are mouse-
emulation drivers, which make touching the screen equivalent to clicking the mouse at the
same location on the screen. This allows the touchscreen to work with existing software and
allows new applications to be developed without the need for touchscreen specific
programming. Some equipment such as thin client terminals, DVD players, and specialized
computer systems either do not use software drivers or they have their own built-in touch
screen driver.
There are two main types of touchscreen products: touchscreen add-ons and integrated
touchscreen monitors. Touchscreen add-ons are touchscreen panels that fit over an existing
computer monitor, while integrated touchscreen monitors are computer displays with the
touchscreen built in. Both product types work in the same way, essentially as an input device
like a mouse or trackpad.
Chapter 2
Literature Survey
[1] Programmable of a Frequency for Concurrent Driving Signals of Touch
Screen Controller, 2019
The paper presents simultaneous-output sine wave generators that use an efficient
memory-access technique for large touch screen controllers. Sine waves of various
frequencies are applied simultaneously to the touch screen controller. Instead of the
sine-wave values themselves, address values are stored in memory as samples. The range of
sine-wave frequencies and the interval between frequencies offer greater flexibility than
conventional configuration methods, achieved by adjusting the address-calculation algorithm
that retrieves the sample values stored in memory. The design also minimises the amount of
memory required by feeding sample values from a single memory to each sine wave generator
to create all the necessary sine waves. The sine wave generator was verified using Vivado,
and the DAC built in the MagnaChip 130 nm CMOS process was verified using Cadence
Virtuoso and Spectre.
[3] Design and implementation for Tujia brocade cultural coordinate panorama
display system based on touch screen, 2018
The paper designs a panorama display system for Tujia brocade based on a touch
screen, and establishes panoramic scenes taking Lao Chehe village, the birthplace of Tujia
brocade, as its cultural coordinates. The system has ten panoramic scenes and implements
the following functions: viewpoint control, scene shifting, hotspot information, touch
operation and voice explanation. Users can interact in real time with the ecological panorama
of the village in the scene, which contributes to the digital protection and social
dissemination of Tujia brocade.
[7] Compensation Algorithm for Misrecognition Caused by Hard Pressure Touch in
Plastic Cover Capacitive Touch Screen Panels, 2016
When a plastic cover is used on a touch screen panel, IoT (Internet of Things)
equipment, medium-to-large touch screens and curved-surface touch screens become
practical to produce. This allows a 75% cost reduction compared to adopting a glass cover,
which in turn lets more users afford products that include touch screen panels. However,
when pressure is applied to a hard plastic cover, it takes more time to restore from the bent
state to the original state than with a glass-cover touch screen, and this effect worsens with
the strength of the applied pressure. In this paper, a correction algorithm that compensates
for this defect is suggested. With the proposed algorithm the retouching time can be reduced
to one-sixth.
[8] Comparison of two types of tactile sensing layer in touch screen panel
for force sensitive detection, 2015
Here we present two types of Touch Screen Panels (TSPs) consisting of silicone gel
and glycerin as the transparent Tactile Sensing Layer (TSL), measuring touch force (z-axis)
and touch position (x-y axes). The principle of the TSP is based on capacitive methods, in
which the distance between the top and bottom substrates is varied by the touch or interaction
force, leading to a capacitance change between the two substrates. Silicone gel as the TSL
showed a force-detection resolution of about 50 gf and a dynamic range of 0-500 gf. For the
same test, glycerin showed a detection resolution of 10 gf and a dynamic range of 200 gf.
These relatively good results stem from the permittivity and hardness of the two new TSL
materials.
[9] Distributed architecture of touch screen controller SoC for large touch
screen panels,2015
Currently large touch screen panels (TSP) tend to use projected capacitance
technology, which allows multi touch and high sensitivity. For large TSPs with a large
number of TX (driving) and RX (sensing) lines, however, it is increasingly challenging to
achieve high sensitivity, high detection rate, and multi-touch. In this paper, we propose a
distributed architecture of touch screen controller where multiple controller SoCs collaborate
in driving and sensing each section of a large TSP. We show that the proposed architecture
and SoC design can increase the detection rate without loss of sensitivity performance. It also
allows a smaller SoC implementation, while its chip expandability provides the flexibility of
supporting a wide range of TSP sizes. We implemented the proposed distributed SoC in a
TSMC 0.18 µm CMOS process with a low-power ARM core, an AHB-Lite bus, memories,
and embedded touch-algorithm software.
Chapter 3
Technologies Used
3.1 Depth Perception And Binocular Vision
Depth perception corresponds to the visual ability to measure the distance of an object and
hence perceive the world in three dimensions. Depth perception can be achieved using
binocular cues based on sensory information in three dimensions using both eyes and
monocular cues based on information in two dimensions using just one eye. Monocular cues
include deducing depth using size of the object (object size reduces with depth) and motion
parallax (apparent relative motion of several stationary objects against a background).
Binocular cues include stereopsis, eye convergence (kinaesthetic sensations from the extra-
ocular muscles as they stretch to focus on a distant object) and binocular parallax. In the case
of stereopsis, the distance to an object can be triangulated with a high degree of accuracy by
using two images of the same scene obtained from slightly different angles. Stereopsis is
achieved because each eye views the object from a different angle.
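For the common special case of two parallel, rectified cameras, stereoscopic triangulation reduces to the standard relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras and d the disparity. The sketch below illustrates that relation; it is not code from the report, and the numbers in the example are made up:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z of a point seen by two parallel (rectified) cameras.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centres in metres
    disparity_px -- horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point that shifts 40 px between images from cameras 6 cm apart,
# with an 800 px focal length, lies about 1.2 m away.
print(depth_from_disparity(800, 0.06, 40))
```

Note that a larger disparity means a closer object, which is the same cue the two-camera setup of Chapter 4 exploits.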
Finger detection involves skin colour detection where the HSV image is scanned for a
specific set of colours. Setting an HSV range for skin detection is a difficult process as skin
colour varies across individuals and for the same individual itself it could vary with respect to
the background lighting conditions. There can also be interference from other objects with
the same HSV values.
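The skin-colour scan described above amounts to checking every pixel against an HSV range. A minimal NumPy sketch follows; the bounds are illustrative placeholders, since, as noted, real skin thresholds must be tuned per person and per lighting condition:

```python
import numpy as np

# Illustrative HSV bounds for skin; real values must be tuned per setup.
LOWER = np.array([0, 40, 60], dtype=np.uint8)     # H, S, V lower bound (assumed)
UPPER = np.array([25, 180, 255], dtype=np.uint8)  # H, S, V upper bound (assumed)

def skin_mask(hsv: np.ndarray) -> np.ndarray:
    """Return a binary mask (255 = skin candidate) for an HxWx3 HSV image.

    Equivalent in effect to OpenCV's cv2.inRange(hsv, LOWER, UPPER),
    written with plain NumPy so the thresholding logic is explicit.
    """
    within = np.all((hsv >= LOWER) & (hsv <= UPPER), axis=-1)
    return (within * 255).astype(np.uint8)

# Two pixels: the first falls inside the range, the second (H=120) does not.
hsv = np.array([[[10, 100, 120], [120, 100, 120]]], dtype=np.uint8)
print(skin_mask(hsv))  # first pixel 255, second 0
```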
Convex hull or convex envelope of a set X of points in the Euclidean space is the smallest
convex set that contains X (fig.2). Computing the convex hull implies constructing an
unambiguous, efficient representation of the required convex shape. Sklansky [5] introduced
an O(n) convex hull algorithm based on an 8-connected concavity-tree technique, but it fails
on some self-intersecting polygons. Sklansky later introduced a modified version of the
algorithm that adds a preprocessing step to make the polygon monotonic in both the
horizontal and vertical directions before applying the concavity-tree technique.
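As a concrete illustration of convex-hull computation, here is Andrew's monotone chain algorithm, a standard O(n log n) method that works on unordered point sets (unlike Sklansky's scan, which assumes an ordered polygon). This is an illustrative implementation, not the one used in the report:

```python
def convex_hull(points):
    """Convex hull of 2-D points via Andrew's monotone chain, O(n log n).

    Returns the hull vertices in counter-clockwise order, starting from
    the lowest-leftmost point.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

# The interior point (1, 1) is discarded; the four corners remain.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```

In the actual pipeline, OpenCV's cv2.convexHull plays this role on the finger contour.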
Blob detection refers to identifying a portion of the picture captured by the cameras, using
image processing techniques that isolate colour, texture or both. Depending on the axis of
viewing, the coordinates of the blob are calculated. By using IR LEDs to illuminate the
screen, a finger placed on it appears as an isolated blob (fig. 3 shows the experimental setup).
This method gives the coordinates of the touch precisely, since it is independent of the
finger's skin tone, which varies from person to person, and of the natural lighting in the
background.
An inbuilt command in Windows can be used to move the computer's cursor to a pixel
location specified in the command. This function is called when the code returns the
coordinates of the finger on the virtual screen. Single click, double click and other
functionalities can be incorporated, and advanced cursor actions such as zoom and right
click can also be coded.
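The "inbuilt command" referred to here corresponds to the Win32 SetCursorPos call, reachable from Python through ctypes. A hedged sketch follows; it runs only on Windows, and on other platforms a library such as pynput would be needed instead:

```python
import ctypes
import sys

def move_cursor(x: int, y: int) -> None:
    """Move the cursor to screen pixel (x, y) using the Win32 SetCursorPos API."""
    if sys.platform != "win32":
        raise RuntimeError("SetCursorPos is only available on Windows")
    ctypes.windll.user32.SetCursorPos(int(x), int(y))

# Called with the fingertip coordinates returned by the detection code, e.g.:
# move_cursor(640, 360)
```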
Chapter 4
Implementation
The experimental setup (fig. 3) consists of a graphical model of the computer screen
placed vertically, with two cameras on two adjacent corners at a specific distance and a 45-
degree inclination. The distance at which the cameras should be placed so as to capture a
common area depends on the size of the touch screen area and the field of view of the
cameras. Each camera covers a conical region with the camera position as the vertex. Any
surface in the common region covered by the two cameras can be made into a touch screen.
The algorithm incorporates finger detection through skin colour detection and finger curve
detection. The image is converted into HSV and thresholded to obtain a black and white
image containing white patches of recognized colour and some noise. In order to eliminate
error due to other objects in the background which may interfere with the present estimation,
background subtraction is done using a set of images taken during the initialization stage.
This minimizes the need for a local light source for highlighting the finger and avoids false
detection. The image undergoes morphological manipulations like erosion and dilation to
remove small patches of noise. This image is then blurred to fill in gaps due to small shadows
and then contoured to obtain the edges. The object of maximum size is found and the
respective contour is taken. Then the image passes through a Convex Hull Detection
algorithm that scans the contour for fingertips (convex hull). The hull returns three points out
of which the first one points to the centre of the hull. The point nearer to the screen is taken
as the desired point and its y-coordinate is returned. This whole chain of steps, from skin
colour detection through convex hull analysis, can be wrapped in one function called
Improcess().
The function finds the convex hull of a 2D point set using Sklansky's algorithm, which has
O(N log N) complexity in the current implementation. The contours obtained in the previous
step are scanned to find the one with the largest area (i.e. the one corresponding to the
finger) while the others (false detections) are ignored. The convex hull of the largest contour
is found and stored in an empty hull created beforehand. This hull is scanned for convexity
defects (a convexity defect is a cavity in the contour segmented out from the image). The
algorithm returns three points for each defect; of these, the one nearest to the screen is taken
and given as output.
Fig 4.3, Coordinate calculation: points A and B are the positions of the cameras capturing the
screen. The cameras are placed at 45° to the screen corners.
The coordinates of the finger (lx, rx) are taken and, using the pixel width (pw) and field of
view (fov), the angles (ltheta, rtheta) spanned by the object from the left and right edges of
the image are calculated. The real angles (lphi, rphi) are found from these angles. From these
angles in fig. 6, the touch coordinates are found using (1).
The height and length (the y and x coordinates) of the object are calculated using a formula
derived from trigonometric identities:
h = c / (cot(α) + cot(β))
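Taking α and β as the real angles (lphi, rphi) subtended at cameras A and B, and c as the baseline between them, the touch point can be computed as below. This is an illustrative sketch of the triangulation; the variable names are ours, not the report's:

```python
import math

def touch_point(c: float, alpha_deg: float, beta_deg: float):
    """Touch coordinates from the two camera angles.

    c is the baseline between cameras A and B along the screen edge;
    alpha and beta are the angles the finger subtends at A and B.
    The perpendicular distance follows h = c / (cot(alpha) + cot(beta)),
    and x = h * cot(alpha) locates the foot of that perpendicular along
    the baseline, measured from camera A.
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    cot_a, cot_b = 1 / math.tan(a), 1 / math.tan(b)
    h = c / (cot_a + cot_b)
    x = h * cot_a
    return x, h

# Symmetric case: equal 45-degree angles put the finger midway along a
# baseline of 2 units, at a perpendicular distance of 1 unit.
print(touch_point(2.0, 45, 45))  # approximately (1.0, 1.0)
```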
The feed from each camera is taken by using the device id and capturing the input from the
device.
Further, real time tracking involves taking the video captured from these web cameras which
are then fed as input to the algorithm. The video actually comprises a series of frames
(images captured by the two webcams). The corresponding images from the two webcams
are passed to a function Improcess() that takes in an image and implements all the previously
mentioned processes and returns the coordinate of the finger in real time.
A narrow strip along the touch surface is defined as the active region, and the region beyond
it up to the hover limit is termed the hover region. Together, the active and hover regions
form the detection region. Whenever the fingertip enters the detection region the touch point
is calculated and the cursor moves to that point. If the fingertip is in the hover region, the mouse
hovers over the screen. As soon as the fingertip enters the active region, the mouse executes a
left button press down until the fingertip leaves this region. A left mouse button click can be
performed by moving the fingertip in and out of the active region.
Chapter 5
Proposed System
The proposed system ties the techniques of the previous chapters into a single pipeline.
Each frame is converted to HSV and thresholded for skin colour; background subtraction
and morphological operations (erosion and dilation) remove noise and false detections; the
largest contour is taken as the finger; and convex hull analysis with convexity-defect
detection locates the fingertip, whose point nearest to the screen is returned. All of these
steps are wrapped in the single function Improcess(), whose output coordinates drive the
cursor as described in Chapter 4.
Chapter 6
Algorithm
Fingertip in Active Region ⇒ measure the amount of time stayed in the active region (Active
time)
    LEFT_MOUSE_BUTTON_DOWN while the fingertip remains in the active region
Fingertip in Hover Region ⇒ measure the amount of time stayed in the hover region (Idle time)
    If Idle time exceeds the set limit:
        RIGHT_MOUSE_BUTTON_CLICK
Fingertip leaves the Active Region ⇒
    LEFT_MOUSE_BUTTON_UP (moving the fingertip in and out of the active region thus
    registers a LEFT_MOUSE_BUTTON_CLICK)
D Click time: time interval between two successive left clicks that triggers a double click.
The fig 6.1 shows the active and hover regions defined, together called the detect region.
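The region logic above can be sketched as a small per-frame state machine. The depth thresholds and the idle-frame limit below are illustrative assumptions, not values given in the report:

```python
LEFT_DOWN = "LEFT_MOUSE_BUTTON_DOWN"
LEFT_UP = "LEFT_MOUSE_BUTTON_UP"
RIGHT_CLICK = "RIGHT_MOUSE_BUTTON_CLICK"

class TouchStateMachine:
    """Per-frame sketch of the active/hover region logic.

    The fingertip's distance z from the surface decides its region, and
    transitions between regions emit mouse events. Thresholds and the
    idle-frame limit are illustrative placeholders.
    """

    def __init__(self, active_limit=1.0, hover_limit=3.0, idle_limit=5):
        self.active_limit = active_limit  # z at or below this: active region
        self.hover_limit = hover_limit    # z at or below this: hover region
        self.idle_limit = idle_limit      # hover frames before a right click
        self.pressed = False
        self.idle_frames = 0

    def update(self, z):
        """Feed one frame's fingertip depth; return the events to emit."""
        events = []
        in_active = z <= self.active_limit
        in_hover = self.active_limit < z <= self.hover_limit
        if in_active and not self.pressed:
            events.append(LEFT_DOWN)        # entering the active region
            self.pressed = True
        elif not in_active and self.pressed:
            events.append(LEFT_UP)          # leaving it completes a left click
            self.pressed = False
        if in_hover:
            self.idle_frames += 1
            if self.idle_frames == self.idle_limit:
                events.append(RIGHT_CLICK)  # long hover triggers a right click
        else:
            self.idle_frames = 0
        return events

sm = TouchStateMachine()
print(sm.update(0.5))  # ['LEFT_MOUSE_BUTTON_DOWN']
print(sm.update(2.0))  # ['LEFT_MOUSE_BUTTON_UP']
```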
Chapter 7
Results
Finger tracking and detection are illustrated in a GUI window (rendered with OpenCV). The
white dot in fig. 8 represents the finger in the hover region, and the red dot represents the
finger entering the active region, where touch is detected.
Chapter 8
Advantages and Disadvantages
8.1 Advantages
• Touch screens enable people to use computers without any training.
• Touch screens are becoming more popular because of their ease of use, proven reliability,
expanded functionality and decreasing cost.
• Touch screens virtually eliminate operator errors, because users select from a clearly
defined menu.
• Touch screens provide fast access to any and all types of digital media.
• It ensures that no space is wasted as the input device is completely integrated into the
monitor.
• The touch screen interface can be updated with simple software changes.
8.2 Disadvantages
• Although user friendly, touch screens cannot be used to enter large amounts of data.
• This technology has failed in some real-world applications because system designers did
not carefully consider how the system would function.
• Another failure of the industry has been not putting fast enough processing behind the
buttons.
• A touch screen system costs about two to three times as much as an equivalent keyboard-
based display.
• Touch screens and monitors together are expensive, ranging up to two and a half times the
price of a standard computer.
Chapter 9
Applications
Public computer systems are often designed around a touch screen, which is often
the only visible component. Automated Teller Machines (ATMs) are the most common
application, but falling prices for touch screen technology are making it available for other
applications such as museum exhibits, ticket sales in airports and movie theaters, and public
information kiosks. Touch screens are ideal for these applications because they provide input
and output capabilities. They are often the only part of the system contacted by the user and
are sturdier than many other input devices because they have no moving parts. These
qualities make touch screen-based systems easy and inexpensive to maintain and repair.
Touch screens are used, like mice, as pointing devices. Instead of moving a mouse to
activate and relocate the cursor, the user touches the screen to position the cursor. For
specifying precise location, a touch screen often works with a stylus—a device like a pencil
that has a rubber or plastic point. The user modifies what is seen on the screen by touching it,
rather than by manipulating a cursor or other on-screen component with a mouse, keyboard,
or joystick. Touch screens are invaluable to artists who have been trained to use pencils,
brushes, and other implements that effect change wherever they touch the canvas.
Touch screens have revolutionized personal digital assistants (PDAs). Older PDAs
required the user to enter data using an extremely small keyboard. Modern PDAs consist
almost entirely of a touch screen, which makes them substantially smaller and easier to use
because the user can "write" information directly into the device.
In the late twentieth century, companies began to integrate touch screen technology
with dry-erase boards (wall-mounted surfaces that allow the user to write with markers and
erase the markings with a cloth). With these devices, whatever a user writes on the board can
be simultaneously recorded and saved in a computer file.
Chapter 10
Conclusion and Future Work
10.1 Conclusion
In this work, the conversion of any monitor or surface into a touch screen using object
detection, isolation and depth detection techniques was carried out successfully. The process
described in this report was found to be successful in testing and offers an innovative
solution to the problem of making devices more user-friendly.
A wide range of applications awaits this technique in real time. Being both cost-efficient and
modular, it can be widely used; it requires minimal hardware, and the entire processing can
be concentrated on a single computer.
The future of this technique can involve generating multiple-touch gesture facilities for zoom
in, zoom out, scrolling up and scrolling down. The algorithm can be modified further so that
calibrations can be done automatically by detecting the edges of the screen.
References
[*] Ajin T Pullan, Irina Merin Baby, Arun Sasi, Mithun Krishnan, Divya Krishnan, Dhanaraj
K. J, “High Resolution Touch Screen Module”, 2018
[1] Jiun Hong, HyungWon Kim, UnSang Yu, Hongju Lee, “Programmable of a Frequency
for Concurrent Driving Signals of Touch Screen Controller”, 2019
[2] Probuddho Chakraborty, Anip Shah, “Interactive Touch Screen using Augmented
Reality”, 2018
[3] Zhao Gang, Di Bingbing, Zhu Wenjuan, Li Yaxu, He Hui, Zan Hui, “Design and
implementation for Tujia brocade cultural coordinate panorama display system based on
touch screen”, 2018
[4] Nokia, “World’s first ice touchscreen virtually burns”, 2018.
(https://www.newscientist.com/article/mg20827875-800-worlds-first-ice-
touchscreen-virtually-burns/#ixzz6MymY1Djw )
[5] Gözde Sari, M. Bahattin Akgül, Barbaros Kirişken, Ahmet Fatih Ak, Ahmet Alper Akış,
“An Experimental Study of a Piezoelectrically Actuated Touch Screen”, 8th International
Conference on Mechanical and Aerospace Engineering, 2017.
[6] Ahmet Fatih Ak, Gözde Sari, M. Bahattin Akgül, Barbaros Kirişken, Ahmet Alper Akiş,
“Numerical analysis of vibrating touch screen actuated by piezo elements”,8th International
Conference on Mechanical and Aerospace Engineering (ICMAE), 2017
[7] Jaewook Kim, Hyeokjin Lim, Sanghyun Han, Yunho Jung, Seongjoo Lee, “Compensation
Algorithm for Misrecognition Caused by Hard Pressure Touch in Plastic Cover Capacitive
Touch Screen Panels”, 2016
[8] Yeon Hwa Kwak, Wonhyo Kim, Sungkyu Seo, Kunnyun Kim, “Comparison of two types
of tactile sensing layer in touch screen panel for force sensitive detection”, 2015
[9] Gyeongseop Choi, M.G.A.Mohamed, HyungWon Kim, “Distributed architecture of touch
screen controller SoC for large touch screen panels”, 2015