
INTERACTIVE DEVICES
Objectives

- Understand what interactive devices are used for in human computer interaction.
- Learn about several known tools and how some have been developed recently or are concepts to be developed in the future.
- Discuss some new and old interactive devices.
Overview of Interactive Devices
There are many different types of interaction devices in use and being conceived today. Some are familiar tools from the past, while others are still distant concepts for the future. Some interactive devices were developed recently, and others were invented much earlier. This section describes some new and old interface devices.
As shown in the figure, although users physically interact with a device, what they really require is for it to execute a use case that accomplishes their need. Hence, users interact logically with the service. Software engineers define the service as a use case that is realized by a certain subsystem/component in the software, while the interface is treated as a boundary class during analysis and as the user interface during the design and implementation stages.
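To make this boundary/service split concrete, here is a minimal sketch in Python, with hypothetical class and method names, of a boundary class that forwards the user's physical interaction to a use case realized by a subsystem:

# Hypothetical example: the user touches or types at the boundary class,
# which forwards the request to a use case realized by a subsystem.

class CheckBalanceUseCase:
    """Service logic: the use case the user logically interacts with."""
    def __init__(self, accounts):
        self.accounts = accounts  # subsystem/component holding the data

    def execute(self, account_id):
        return self.accounts.get(account_id, 0.0)

class AccountScreen:
    """Boundary class: the user interface the user physically touches."""
    def __init__(self, use_case):
        self.use_case = use_case

    def on_button_press(self, account_id):
        # Physical interaction is translated into a logical service request.
        balance = self.use_case.execute(account_id)
        print(f"Balance for {account_id}: {balance:.2f}")

# Usage: the user presses a button, the boundary delegates to the service.
screen = AccountScreen(CheckBalanceUseCase({"alice": 42.50}))
screen.on_button_press("alice")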
Keyboard
A keyboard can be considered a primitive device familiar to all of us today. A keyboard uses an organized arrangement of keys/buttons that serve as a mechanical input device for a computer, with each key corresponding to a single written symbol or character.
It is the oldest and one of the most effective interactive devices between man and machine. It has inspired the development of many more interactive devices and has itself advanced, for example into soft on-screen keyboards for computers and mobile phones.
Touch Screen
The touch screen concept was envisioned decades ago, but a practical platform emerged only recently. Today many devices use touch screens. Gadgets like mobile phones, tablets, iPads, etc. use touch screen technology, which allows users to navigate the software installed on their devices with their fingertips.
Unlike earlier personal computer designs, touch screen technology does not need separate input devices such as a mouse and keyboard, as these are already built into the device. Through careful selection of these components, developers customize the touch screen experience. The cheapest and relatively easiest touch screens to manufacture are those using electrodes and an applied voltage.
Beyond hardware differences, software alone can create major differences between one touch device and another, even when the same hardware is used. With innovative designs and new hardware and software, touch screens are likely to grow substantially in the future. A further development would be to synchronize touch with other devices. In HCI, the touch screen can be considered a comparatively new interactive device.
Gesture Recognition
Gesture recognition is a subject in language technology whose objective is to understand human movement via mathematical procedures. Hand gesture recognition is currently the main field of focus. The technology is oriented toward the future: it establishes an advanced association between human and computer in which no mechanical devices are used.
This new interactive device might eventually displace older devices like keyboards and even press on newer ones like touch screens. The general definition of gesture recognition is the ability of a computer to understand gestures and execute commands based on those gestures. Most consumers are familiar with the concept through Wii Fit, Xbox and PlayStation games such as “Just Dance” and “Kinect Sports.”
How gesture recognition works
Gesture recognition is an alternative user interface for providing real-time data to a computer. Instead of typing with keys or tapping on a touch screen, a motion sensor perceives and interprets movements as the primary source of data input. The steps below describe what happens between the time a gesture is made and the computer's reaction. For instance, Kinect looks at a range of human characteristics to provide the best command recognition based on natural human inputs. It provides skeletal and facial tracking in addition to gesture recognition, voice recognition and, in some cases, the depth and color of the background scene. Kinect reconstructs all of this data into printable three-dimensional (3D) models. The latest Kinect developments include an adaptive user interface that can detect a user's height.
1. A camera feeds image data into a sensing device that is connected to a computer. The sensing device typically uses an infrared sensor or projector to calculate depth.
2. Specially designed software identifies meaningful gestures from a predetermined gesture library, where each gesture is matched to a computer command.
3. The software correlates each registered real-time gesture with the library, interpreting the gesture to identify a meaningful match.
4. Once the gesture has been interpreted, the computer executes the command correlated with that specific gesture.
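To make the matching step concrete, the following minimal Python sketch (hypothetical gesture names and a deliberately simple nearest-neighbor matcher) compares an observed hand trace against a small gesture library; a real system would use far richer features and classifiers:

import math

# Hypothetical gesture library: each gesture is a short trace of (x, y)
# hand positions, normalized to the range 0..1, paired with a command.
GESTURE_LIBRARY = {
    "swipe_right": [(0.1, 0.5), (0.5, 0.5), (0.9, 0.5)],
    "swipe_up":    [(0.5, 0.9), (0.5, 0.5), (0.5, 0.1)],
}
COMMANDS = {"swipe_right": "next_page", "swipe_up": "scroll_up"}

def distance(trace_a, trace_b):
    # Sum of point-to-point Euclidean distances; assumes equal-length traces.
    return sum(math.dist(p, q) for p, q in zip(trace_a, trace_b))

def recognize(observed_trace, threshold=0.5):
    # Nearest-neighbor match of the observed trace against the library.
    best_name, best_d = None, float("inf")
    for name, template in GESTURE_LIBRARY.items():
        d = distance(observed_trace, template)
        if d < best_d:
            best_name, best_d = name, d
    return COMMANDS[best_name] if best_d < threshold else None

# A noisy rightward swipe from the sensor should map to "next_page".
print(recognize([(0.12, 0.52), (0.48, 0.5), (0.88, 0.49)]))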
Who makes gesture recognition software?
Microsoft is leading the charge with Kinect, a gesture recognition platform that allows humans to communicate with computers entirely through speaking and gesturing. Kinect gives computers "eyes, ears, and a brain." There are a few other players in the space, such as SoftKinetic, GestureTek, PointGrab, eyeSight and PrimeSense, an Israeli company recently acquired by Apple. Emerging technologies from companies such as eyeSight go far beyond gaming to allow for a new level of small motor precision and depth perception.
Gesture recognition examples beyond gaming
Gesture recognition has huge potential for creating interactive, engaging live experiences. Here are several gesture recognition examples that illustrate its potential to educate, simplify user experiences and delight consumers.
- In-store retail engagement - Gesture recognition has the power to deliver an exciting, seamless in-store experience. One example uses Kinect to create an engaging retail experience by immersing the shopper in relevant content, helping her to try on products and offering a game that allows the shopper to earn a discount incentive.
- Changing how we interact with traditional computers - A company named Leap Motion introduced the Leap Motion Controller, a gesture-based computer interaction system for PC and Mac. A USB device roughly the size of a Swiss army knife, the controller allows users to interact with traditional computers through gesture control. It is easy to see the live experience applications of this technology.
- The operating room - Companies such as Microsoft and Siemens are working together to redefine the way that everyone from motorists to surgeons accomplishes highly sensitive tasks. These companies have focused on refining gesture recognition technology for fine motor manipulation of images, enabling a surgeon to virtually grasp and move an object on a monitor.
- Windshield wipers - Google and Ford are also reportedly working on a system that allows drivers to control features such as air conditioning, windows and windshield wipers with gesture controls. The Cadillac CUE system already recognizes gestures such as tap, flick, swipe and spread to scroll lists and zoom in on maps.
- Mobile payments - Seeper, a London-based startup, has created a technology called Seemove that goes beyond image and gesture recognition to object recognition. Ultimately, Seeper believes its system could allow people to manage personal media, such as photos or files, and even initiate online payments using gestures.
- Sign language interpreter - There are several examples of gesture recognition being used to bridge the gap between deaf and non-deaf people who may not know sign language. One example, by Dani Martinez Capilla, shows how Kinect can understand and translate sign language, exploring the notion of breaking down communication barriers through gesture recognition.
Speech Recognition
Speech recognition is the technology of transcribing spoken phrases into written text. Such technology can be used for advanced control of many devices, such as switching electrical appliances on and off. Only a small set of commands needs to be recognized for complete control, although this approach does not scale to large vocabularies. As an HCI device, speech recognition gives the user hands-free operation and keeps instruction-based technology in step with its users.
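As a minimal sketch of this small-vocabulary case (hypothetical command phrases; the transcript is assumed to come from any external speech-to-text engine), the recognizer only needs to map a handful of phrases to appliance actions:

# Hypothetical small-vocabulary command set for appliance control.
# The transcript is assumed to come from an external speech-to-text engine.
COMMANDS = {
    "lights on": ("lights", True),
    "lights off": ("lights", False),
    "fan on": ("fan", True),
    "fan off": ("fan", False),
}

def interpret(transcript):
    """Map a transcribed phrase to an (appliance, state) action, if any."""
    phrase = transcript.strip().lower()
    for command, action in COMMANDS.items():
        if command in phrase:  # tolerate surrounding words, e.g. "please"
            return action
    return None  # unrecognized: large vocabularies need a different approach

print(interpret("Please turn the lights on"))  # -> ('lights', True)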
Application of Gesture Recognition
Kinect: Kinect is a motion sensing console launched by Microsoft as an extension of the Microsoft Xbox 360 game console. Its main function is to let you control the Xbox through voice or gestures rather than physically using the controller. Kinect is based on technologies developed by Microsoft and PrimeSense. It makes use of an infrared projector that can read your gestures, enabling completely hands-free control of the gadget or game you are playing.
Microsoft has already sold more than 18 million Kinect units and plans to implement the same system and technology for its PC, with a release planned for February of this year.
Application of Gesture Recognition
Eon Interactive Mirror: The EON Interactive Mirror enables customers to virtually try on clothes, dresses, handbags and accessories using gesture-based interaction. Changing from one dress to another is just a 'swipe' away, offering endless possibilities for mixing designs and accessories in a fun, quick and intuitive way. Customers can snap a picture of their current selections and share it on Facebook or other social media to get instant feedback from friends.
The EON Interactive Mirror is growing in popularity in the amusement and retail industries, showcased for its ability to engage crowds through various interactive applications.
Response Time
Response time is the time taken by a device to respond to a request. The request can be anything from a database query to loading a web page. Response time is the sum of service time and wait time; transmission time becomes part of the response time when the response has to travel over a network. In modern HCI devices, several applications are installed and most of them run simultaneously or on demand, which lengthens response times. The increase comes from the wait time: each request must wait behind the requests already running and those queued ahead of it.
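The following minimal sketch (hypothetical service times) shows how wait time accumulates in a first-in-first-out queue: each request's response time is its own service time plus the wait inherited from everything queued ahead of it.

# Response time = wait time + service time (transmission ignored here).
# Hypothetical service times, in seconds, for requests queued in FIFO order.
service_times = [0.2, 0.5, 0.1, 0.3]

wait = 0.0
for i, service in enumerate(service_times):
    response = wait + service  # this request waits for everything before it
    print(f"request {i}: wait={wait:.1f}s service={service:.1f}s "
          f"response={response:.1f}s")
    wait += service  # the next request inherits this one's completion time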
It is therefore important that a device's response time be fast, which is why advanced processors are used in modern devices. According to Jakob Nielsen, the basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]:
- 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
- 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
- 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
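These thresholds translate naturally into a feedback policy. The sketch below (hypothetical function and feedback names) chooses a feedback style from an estimated delay:

def feedback_for(estimated_delay_seconds):
    """Choose UI feedback from Nielsen's 0.1s / 1.0s / 10s thresholds."""
    if estimated_delay_seconds <= 0.1:
        return "none"          # feels instantaneous; just show the result
    if estimated_delay_seconds <= 1.0:
        return "cursor"        # noticeable, but flow of thought survives
    if estimated_delay_seconds <= 10.0:
        return "spinner"       # keep attention on the dialogue
    return "progress_bar"      # indicate when the computer expects to be done

for delay in (0.05, 0.5, 5, 30):
    print(f"{delay:>5}s -> {feedback_for(delay)}")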
