
Augmented and Mixed Reality

By

Dr. Anil Mashalkar

SCHOOL OF MECHANICAL ENGINEERING


Contents
• Augmented and Mixed Reality Taxonomy
• Technology and features of augmented reality
• Challenges with AR
• AR systems and functionality
• Augmented reality methods
• Visualization techniques for augmented reality
• Wireless displays in educational augmented reality applications
• Mobile projection interfaces
• Marker-less tracking for augmented reality
• Enhancing interactivity in AR environments
• Evaluating AR systems
Augmented and Mixed Reality Taxonomy
Augmented reality combines a real scene viewed by a user with a virtual scene generated by a computer that augments the scene with additional information.
• With the recent growth in types of digital realities, it can be challenging to keep up because of their
subtle differences. The types of digital realities are:
• Augmented reality (AR)— designed to add digital elements over real-world views with limited
interaction.
• Virtual reality (VR)— immersive experiences helping to isolate users from the real world, usually via
a headset device and headphones designed for such activities.
• Mixed reality (MR)— combining AR and VR elements so that digital objects can interact with the
real world, which means businesses can design elements anchored within a real environment.
• Extended reality (XR)— covering all types of technologies that enhance our senses, including the
three types previously mentioned.
• As all these technologies blur the line between the real and the digital, determining a suitable use case for your business is
crucial. For many businesses, AR is usually the easiest to integrate into the company’s processes.

Augmented and Mixed Reality Taxonomy

• Usually, the term mixed reality is used interchangeably with augmented reality.
• However, mixed reality is a broader interpretation that consists of anything of
both the physical world and the digital world.
• Thus, an example of the GPS mapping described earlier would qualify as “mixed
reality” even though it is not considered an “augmented reality” application in
this book.
• It is mixed reality in that it is mixing real-world information (where I am) with
digital information (an abstract map display).
• Another type of mixed reality application is to use a real world object to interact
with a digital world, using that object in the way it is used in the real world.
• Note that all AR applications are mixed reality, but not all mixed reality
applications are AR.

Augmented Reality vs. Virtual Reality

Augmented Reality:
• Augments the real-world scene
• User maintains a sense of presence in the real world
• Needs a mechanism to combine the virtual and real worlds

Virtual Reality:
• Totally immersive environment
• Visual senses are under control of the system (sometimes aural and proprioceptive senses too)

Technology and Features of AR
Many industries and sectors already use AR for business processes, including:
• Retail. Employees can use AR for onboarding and training sessions. It helps new employees in their future
transactions, such as sales training, touring the sales floor, and preparing for a retail environment. AR can also
help customers test products before purchasing or learn how to use them within their environments. This can
create better engagement or help customers solve problems by providing actionable information in a real-
world context.
• Manufacturing. Technology can offer step-by-step instructions, allowing trainers to provide feedback during
practice for better retention. Using mixed reality also enables employees to learn while on the job, keeping
their hands free to perform work.
• Healthcare. Getting hands-on experience in performing procedures without risk is imperative for healthcare
professionals. AR provides the guidance to practically, yet safely, learn about anatomy and surgeries.
• Military. AR is integrated into combat training to simulate situational and operational environments so soldiers
have awareness of their time, space, and forces.
• Automobile. AR can help train and allow specialists to explore current and future models, along with their
internal systems.

Challenges In AR
1. Hardware issues
• Currently, every available AR headset is a bulky piece of hardware that may be too expensive for
the masses. Also, a majority of AR headsets need to be tethered to a computer, making the entire
experience limited and inconvenient. Alternatively, consumers can use their smartphones or tablets
for AR applications. However, mobile AR faces major issues in displaying visuals accurately. For
instance, mobile sensors such as accelerometers can be disturbed by electrical interference, which is
common in urban areas. Additionally, smartphone cameras are built for 2D image
capture and are incapable of rendering 3D images. Hence, the hardware required for AR
technology needs to be enhanced before mass adoption.
2. Limited content
• One of the major challenges with augmented reality is creating engaging content. The content
created for augmented reality devices consists of games and filters used in social networks such as
Instagram and Snapchat. However, creating content that can promote businesses can be extremely
complicated and expensive. Also, augmented reality developers have not created enough high-
functioning use cases that can be used by consumers on a daily basis.

Challenges In AR
3. Lack of regulations
• Currently, there are no regulations that help businesses and consumers understand which type of
AR applications can be used and how data can be processed. Hence, the technology can be used
with malicious intent. For instance, a cybercriminal can hijack personal accounts by mining data
output and manipulating AR content. In such cases, consumers may have questions like who could
be held accountable, which mitigation strategies can be used, and how to avoid such incidents in
the future. Hence, one of the significant challenges of augmented reality is creating regulations that
can ensure the privacy and security of consumer data as well as simplify mainstream adoption of
the technology.
4. Public skepticism
• Although augmented reality is a popular topic of discussion among tech experts, consumers are
unaware of the benefits of the technology. Consumers have only used the most popular
applications of augmented reality such as trying out glasses, wardrobe, and accessories. Therefore,
consumers need to be informed about various applications and benefits of augmented reality.
Additionally, a lack of awareness may lead to concerns about privacy and security while using
augmented reality technology. Hence, users’ concerns need to be addressed to accelerate the
mainstream deployment of augmented reality.

Challenges In AR
5. Physical safety risks
• Augmented reality applications can be immensely distracting and may lead to physical
injuries. For instance, many people were injured while playing Pokemon Go. Likewise,
augmented reality applications can lead to serious injuries in case they are used in
potentially risky environments such as busy roads, construction sites, and medical
institutions.
• Although augmented reality technology is still in its infancy, its existing applications have
shown that further research and development to address the challenges with augmented
reality can enable large scale deployment of the technology. And once that happens, the
implementation of augmented reality can be witnessed in law enforcement, healthcare,
finance, and other critical areas.

AR Systems and Functionality
• Organizations seeking to increase their efficiency by means of AR can begin today with little risk. Many of
the essential ingredients for success are already in place, and additional tools are available at comparatively low cost.
• It is helpful to think of AR as being a “plate” sitting upon three key elements: content (data), hardware and software.
1. Content:
▪ The spectrum of data that may be suitable for AR-assisted viewing ranges from small databases of enterprise
assets or resources and extends all the way to massive, continually expanding information repositories, often
referred to as “Big Data.”
▪ Sophisticated analyses must be performed on Big Data to extract benefits that can then be made accessible to AR
technology.
▪ For example, information about facilities or utilities typically includes a street address or latitude/longitude
coordinates that enable it to be correctly displayed in a Web browser or other software, when requested.
▪ A 3D model of a part in a power plant might display a barcode in a field on the screen that permits another
information retrieval system to associate that part with a particular pump or compressor in need of service or
replacement.

School of Mech Engg. M.Tech Design 10


AR Systems and Functionality
2. Hardware
▪ Producing an AR experience requires capturing the context of the user, performing transformations, comparing the
user context with triggers in a database and producing signals that present digital data, also known as
“augmentations,” to the user’s senses.
▪ Capturing the user context is the task of (hardware) sensors. Sensors come in many sizes and shapes. They can be
mounted on, or integrated and embedded into, many objects in the enterprise.
▪ Machine observations generated by sensors can be filtered and fused for use.
▪ Together or combined, such observations are real time inputs to either a processor on the local device or to a
network-connected server.
▪ Today, the hardware used for most enterprise AR projects consists of mass-market, consumer-grade
smartphones or tablets.
▪ These platforms provide integrated hardware and software that is inexpensive, familiar to end users and easily
managed. There are also devices designed specifically for enterprise AR use, such as commercially available
smart glasses and tablets that integrate AR technology



AR Systems and Functionality
3. Software
▪ All AR projects involve software in multiple ways. There is software used during the preparation
(design and publishing) of experiences.
▪ Publishing and delivering AR experiences is performed by systems controlled by software in and
across networks.
▪ Finally, AR experiences require software to:
➢ Detect patterns in sensor observations
➢ Interpret user context
➢ Track user changes with respect to the target and various triggers
➢ Produce hardware-generated sounds, tactile signals or visible augmentations
▪ An application may be dedicated to AR functions, or AR features can be embedded into another
enterprise application.



Augmented reality methods

Augmented Reality can be achieved using two approaches.


They are
1. Marker-based AR
2. Marker-less AR

• Marker-based AR works on the concept of target recognition. The target, called a marker, can be a 3D object, text, an image,
a QR code, or a human face.
• After the AR engine detects the target, one can embed a virtual object on it and display it on the
camera screen.
• An object can be recognized by extracting 2D features from an image captured by a camera. If the
shape or the physical structure of the object is known, the process is known as a model-based approach.



Augmented reality methods

Features of Digital Marker


i. Maximum Contrast Level:
The marker is designed in black and white color so as to achieve the maximum contrast level between the
background and the fixed elements, as well as the embedded bits.
ii. Two Orthogonal Guide bars:
The two guide bars which are very noticeable in the images are utilized to look for the position of the code
marker in the image. The relative angle between the two bars should be investigated to eliminate false positive
findings. Under the condition of the tilting, the two bars might not be perpendicular to each other, but they will
certainly lie in a range near ninety degrees.
iii. Three cornerstones:
In addition to the guide bars, there are three fixed elements on the three corners of the marker respectively.
These cornerstones mark the boundary of the marker and thus can be used to ensure a visual code marker has
been found, and eliminate false positives.
iv. Structural information:
The position of the guide bars and cornerstones provides important structural information for us to derive a
transform between image coordinates and standard marker plane.
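To make the guide-bar test concrete, here is a minimal sketch (the helper name and tolerance are hypothetical, not from the original marker system) of checking that two detected guide bars lie in a range near ninety degrees, eliminating false positives:

```python
import math

def bars_nearly_orthogonal(bar1, bar2, tol_deg=15.0):
    """Check that two guide-bar direction vectors meet at roughly 90 degrees.

    bar1, bar2: (dx, dy) direction vectors of the detected bars.
    tol_deg: allowed deviation from 90 degrees, accounting for tilt.
    """
    dot = bar1[0] * bar2[0] + bar1[1] * bar2[1]
    n1 = math.hypot(*bar1)
    n2 = math.hypot(*bar2)
    # Angle between the two lines, folded into [0, 90] degrees.
    angle = math.degrees(math.acos(min(1.0, abs(dot) / (n1 * n2))))
    return abs(angle - 90.0) <= tol_deg
```

A perpendicular pair passes, a near-parallel pair (a false positive such as two edges of the same stripe) is rejected, and a pair tilted to roughly 80 degrees still passes, matching the "range near ninety degrees" condition above.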



Augmented reality methods

• Marker-less AR, also known as location-based AR, uses the GPS of a mobile device to record the device
position and displays information relative to that location
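As an illustrative sketch of the location-based idea (the function names and radius are assumptions, not from any particular AR SDK), an app can decide whether to display an annotation by comparing the device's GPS fix against a point of interest:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_display(device, poi, radius_km=0.5):
    """Show the overlay only when the point of interest is within range."""
    return haversine_km(*device, *poi) <= radius_km
```

A real application would additionally use the compass heading to place the annotation at the correct bearing on screen; this sketch only covers the distance gate.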
• The object recognition process consists of two parts: 2D Vision and 3D Vision

1. 2D Vision
• It extracts 2D features of the objects to be searched
• The extracted and vectorized edges are matched with 2D views of the 3D object models. The pixel images
are pre-processed using a Sobel Filter and a Non-Maxima Elimination and finally vectorized using an Edge
detection algorithm
• Four steps of the 2D vision:
a) The vectorized edges
b) The virtually elongated edges
c) One match of the essential edges
d) One match of the 2D view including neighboring edges.
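A toy version of the pre-processing step (pure Python standing in for the real Sobel filter and non-maxima elimination; the suppression here is deliberately crude) illustrates how edge pixels are isolated before vectorization:

```python
def sobel_nms(img):
    """Sobel gradient magnitude followed by a simple non-maxima elimination.

    img: 2D list of grayscale values. Returns a magnitude map of the same
    size where non-maximal edge responses (along x) are suppressed.
    Borders are left at zero for simplicity.
    """
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = (img[r-1][c+1] + 2*img[r][c+1] + img[r+1][c+1]
                  - img[r-1][c-1] - 2*img[r][c-1] - img[r+1][c-1])
            gy = (img[r+1][c-1] + 2*img[r+1][c] + img[r+1][c+1]
                  - img[r-1][c-1] - 2*img[r-1][c] - img[r-1][c+1])
            mag[r][c] = (gx * gx + gy * gy) ** 0.5
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            # Keep a pixel only if it is at least as strong as its
            # horizontal neighbours (crude suppression along x).
            if mag[r][c] >= mag[r][c-1] and mag[r][c] >= mag[r][c+1]:
                out[r][c] = mag[r][c]
    return out
```

Running this on a synthetic vertical step edge leaves strong responses only at the edge columns, which are exactly the pixels a subsequent edge-vectorization pass would chain into line segments.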



Augmented reality methods

2. 3D Vision
• In the 3D vision part, the 2D features are compared with CAD data, containing highly visible
edges, faces and texture information
• With correspondences of image features and 3D-model features, hypotheses for the orientation of
the model relative to the camera are generated.
• Each generated hypothesis will be verified by projecting the model into the image plane. This
projection is compared with the extracted edge graph of the input image and the matching of both
graphs is evaluated.
• The best matching hypothesis is taken to determine the recognized object, its location, and its
orientation relative to the camera coordinate system
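The hypothesize-and-verify loop can be sketched as follows (an illustrative pinhole model with made-up intrinsics, not the actual system code): each candidate pose projects the model into the image plane, is scored against the extracted image features, and the best-scoring hypothesis wins:

```python
def project(point, t, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D model point translated by t."""
    x, y, z = (point[i] + t[i] for i in range(3))
    return (f * x / z + cx, f * y / z + cy)

def score(hypothesis, model, observed, f=800.0):
    """Mean reprojection distance between projected model and image features."""
    total = 0.0
    for p, o in zip(model, observed):
        u, v = project(p, hypothesis, f)
        total += ((u - o[0]) ** 2 + (v - o[1]) ** 2) ** 0.5
    return total / len(model)

def best_hypothesis(hypotheses, model, observed):
    """Verify each candidate pose; the best match determines the object pose."""
    return min(hypotheses, key=lambda h: score(h, model, observed))
```

The real system matches projected edge graphs rather than point sets, but the structure — generate hypotheses, project, evaluate the match, keep the best — is the same.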

Visualization Techniques for augmented reality
• Visualization can be described as the process of converting abstract data into a visual
representation that is comprehensible by a human observer.
• The visualization process itself is often described step-by-step in one of the various versions of the
visualization pipeline.
• This allows for subdividing visualization methods into sub methods and provides a better overview
and abstraction of these methods.
• Visualizations in real world environments benefit from the visual interaction between real and
virtual imagery. However, compared to traditional visualizations, a number of problems have to be
solved in order to achieve effective visualizations within Augmented Reality (AR).
• AR visualizations have a high potential, however, their success is dependent on their
comprehensibility.
• If heedlessly implemented, AR visualizations easily fail to visually communicate their information.
• The complex character of AR environments requires visualization techniques that neither
isolate certain structures nor generate ambiguous presentations.

Visualization Techniques for augmented reality
• AR visualization is a powerful tool for exploring real world structures along with additional
contextual information.
• E.g.: By augmenting textual annotations, AR displays are able to provide semantics to real world
objects or places.
• Data flow in a common AR system: real-world imagery is delivered by the system’s video feed
and processed by vision-based tracking algorithms. To align virtual and real data, the derived
tracking data is applied to transform the virtual content. Finally, the rendering is overlaid on
top of the video feed.

Visualization Techniques for augmented reality
• Data Integration:
▪ A simple overlay of hidden structure on top of the system’s video feed can cause a number of cognitive
problems rooted in the processes that create the impression of depth. Understanding these
causes allows developers to create rendering techniques which successfully add and preserve such information in
AR visualizations.
▪ Pictorial depth cues are those that can be found in a single image including:
➢ Occlusion: if the 2D projections of two objects in the environment overlap, objects which are closer
to the observer occlude objects which are further away.
➢ Relative size: more distant objects appear to be smaller than closer objects.
➢ Relative height: objects with bases higher in the image appear to be further away (compare the
stakes of the bridge).
➢ Detail: objects which are closer offer more detail.
➢ Atmospheric perspective: due to dust in the atmosphere, objects which are further away appear
more blurry than those which are nearby.
➢ Shadows: depending on the position of the light source, shadows can be cast from one object onto
another.
➢ Linear perspective: parallel lines converge with increasing distance. Notice how the sidewalks
seem to converge at some infinitely distant point although in reality they are approximately parallel.

Visualization Techniques for augmented reality
• Augmenting Pictorial Depth Cues:
▪ By rendering the virtual structure with a camera whose parameters reflect the characteristics of
the real camera, the fusion of virtual and real-world imagery will automatically provide pictorial depth
cues which match those present in the real-world environment.
▪ Synchronizing the parameters of the virtual and the real camera makes it possible to align real and virtual pictorial
depth cues. The virtual Lego figure in
(a) is correctly perceived next to the real figures, whereas the virtual one in
(b) is correctly perceived behind both. This effect is achieved by aligning depth cues such as perspective
distortion and relative size.
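The relative-size cue falls directly out of the pinhole model. A small sketch (the focal length is an arbitrary illustrative value) shows why giving the virtual camera the real camera's focal length keeps virtual objects scaled consistently with real ones:

```python
def projected_height_px(object_height_m, distance_m, focal_px):
    """Apparent height of an object under a pinhole camera model."""
    return focal_px * object_height_m / distance_m

# With the virtual camera using the real camera's focal length, a virtual
# object twice as far away is drawn half as tall -- the same relative-size
# depth cue that the real objects in the video exhibit.
near = projected_height_px(1.0, 2.0, 800.0)
far = projected_height_px(1.0, 4.0, 800.0)
```

If the virtual camera used a different focal length, the virtual figure's size would no longer match its real neighbours and the depth ordering in the composited image would be misread.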

Visualization Techniques for augmented reality
• Occlusion Handling:
▪ While renderings from synchronized real and virtual cameras are already able to align depth cues, as soon as
occlusions between real and virtual objects appear, those depth cues are no longer sufficient to produce believable
augmentations (Fig. a). Even though all other depth cues would have been added to the AR display, the virtual object
will be perceived as floating in front of the video image. A believable integration of virtual structure into the real
world environment becomes only possible if occlusions between real and virtual objects have been resolved (Fig. b).

▪ Importance of occlusion cues (a) Even though a number of different depth cues exist, depth order is ambiguous and
perception is wrong if occlusions have been ignored (b) The same rendering as in (a) with occlusion correctly
resolved. This visualization is able to communicate the spatial relationship between its real and virtual content.
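One common way to resolve such occlusions is a per-pixel depth test between the virtual rendering and a depth estimate of the real scene (from a phantom model or a depth sensor). A minimal compositing sketch over toy "images" (the data layout is an assumption for illustration):

```python
def composite(video, virtual, virt_depth, real_depth):
    """Per-pixel occlusion handling for a video-see-through AR display.

    video: background camera frame (2D list of pixel values).
    virtual: rendered virtual layer (None where nothing was drawn).
    virt_depth / real_depth: per-pixel depth; smaller means closer.
    A virtual pixel is shown only where it is closer than the real scene.
    """
    h, w = len(video), len(video[0])
    out = [row[:] for row in video]
    for r in range(h):
        for c in range(w):
            drawn = virtual[r][c] is not None
            if drawn and virt_depth[r][c] < real_depth[r][c]:
                out[r][c] = virtual[r][c]
    return out
```

Where the real scene is closer, the video pixel survives and the virtual object is correctly hidden behind it, instead of floating in front of the image.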

Visualization Techniques for augmented reality
• Image based X-Ray Visualization:
▪ AR scenes commonly suffer from incomplete virtual representations. Often only the video in combination with the
object of interest is available to the AR system.
▪ In such situations, the AR system requires knowledge about the organization of the scene in order to correctly sort
their elements.
▪ While this information is often difficult to acquire for a general visualization, in the case of applications using x-ray
visualization to “see through” real-world structure, the virtual data can often be assumed to be completely covered
by real-world objects.
▪ In this case, the depth order is known, and the AR system need analyze only the video stream in order to preserve
important depth cues.
▪ In the following, we will review the extraction and preservation of image features which have been used to aid depth
perception in x-ray visualizations in AR.

Visualization Techniques for augmented reality
• Scene manipulation:
▪ The limitations of the AR system itself have to be considered as well in order to generate comprehensible
visualizations.
▪ In addition, hardware restrictions such as small display sizes, narrow fields of view or the limitations
caused by the egocentric nature of AR influence the comprehension of their visualization.
▪ As a remedy, spatial rearrangements of the objects within the mixed environment have been
demonstrated to be effective.
▪ Techniques used to deliberately modify real world imagery in order to increase the information content
are:
✓ Rearranging Real World Objects
✓ Space Distortion Visualization

Visualization Techniques for augmented reality
• Rearranging Real World Objects:
▪ Rearranged AR scenarios consist of real, virtual and relocated real information.
▪ To correctly compose an image out of all three types of information, the rendering algorithm has to fulfill
three requirements.
➢ It must be able to convincingly relocate real-world structures. Therefore, visual information has to
be transferred from its original to the target location after the explosion was applied.
➢ New imagery has to be generated to fill the original locations.
➢ The rendering algorithm has to correctly resolve occlusions between all used data.
▪ Three types of rearranging:
o Dual Phantom Rendering
o Synchronized Phantom Rendering
o Restoration

Wireless displays in educational augmented reality applications
• AR is increasingly being adopted in educational settings, often to help students with complicated
subjects.
• For example, students struggling with geometry can use AR to see and manipulate 3D geometric
forms. Another application of augmented reality in education includes teaching global perspectives
through virtual field trips, enabling students to interactively engage with other cultures.
• Some educational applications are:
• Wireless Head Mounted Displays
• Wireless Handheld Display

• AR can have a significant impact on learning environments:
• Student engagement and interest: Student interest skyrockets with the opportunity to
engage in creating educational content. AR technologies can allow them to add to
curriculum content, create virtual worlds, and explore new interests.
• Learning environment: Classes that incorporate AR can help students become more
involved. An interactive learning environment provides opportunities to implement hands-
on learning approaches that can increase engagement, enhance the learning experience, and
get students to learn and practice new skills.
• Content understanding: Lack of quality content focused on education, rather than
entertainment, is a noted concern among teachers hesitant to use augmented reality in
education. However, existing AR technology enables teachers to create immersive
educational experiences on their own to help ensure their students understand curriculum
content.

• Collaboration: As AR content is digital, it is easily shared. For example, a group of teachers can
work with their students to continually refine the content. A collaborative learning environment
provides students with increased motivation to learn because they are actively engaged in the
educational content creation process.
• Memory: AR is an excellent tool for bringing lessons to life and helping students remember
essential details. For example, instead of just presenting photographs on a projector showcasing life
in Colonial America, a teacher can use AR technology to create memorable interactive stories.
• Sensory development: AR technology can help teachers create lesson plans with multisensory
experiences. Students benefit from immersive virtual content that incorporates an experiential
learning style in which students carry out physical activities instead of watching a demonstration.
This approach can help with sensory development.
• Cost-effectiveness: The cost of AR equipment is often cited as a barrier to adoption. However, as
smartphone use continues to rise among young Americans, and since smartphones are already
equipped with the hardware needed to run AR apps, augmented reality in education is increasingly
more cost-effective to implement. Additionally, AR can lower educational costs by replacing
expensive textbooks.

Mobile projection interfaces
• Projection-based AR is a video projection technique which extends and reinforces visual
data by projecting images onto the surface of 3D objects or spaces; in a broad sense it belongs to Spatial
Augmented Reality.
• Using projection-based AR, it is easy to implement graphical representations that ordinary lighting
techniques cannot express. Unlike general lighting techniques, it can project high-definition
images or video and visually change an object’s apparent shape over time.
• With the increase in processing power and memory the only bottleneck left is the small display size and
resolution. To keep these devices mobile the size of the screen is restricted and even though the
resolution of such displays is increasing, there is a limit to information presentable on the display.



Mobile projection interfaces
• Modern projectors have been miniaturized to the size of mobile phones and can be operated using
batteries. These projectors are called pico projectors.

• The next step is to integrate such a projector directly into a mobile phone.
• These devices are also called projector phones. Up to now several different prototypes both from
research and industry exist. First commercial projector phones are available on the mass market
• Such phones have the capability to overcome the problems that arise when exploring large-scale
information on the small display of a present-day mobile phone.
• With these devices one can explore information like maps or web pages without the need for zooming
or panning, but up to now the available devices only project a mirror image of the device’s display,
or images and videos.
Marker-less tracking for augmented reality

• Markerless augmented reality (AR) refers to a software application that doesn’t require prior
knowledge of a user’s environment to overlay virtual 3D content onto a scene and hold it to a
fixed point in space.
• Markerless AR experiences are possible because of advancements in cameras, sensors,
processors, and algorithms capable of accurately detecting and mapping the real world.

How Does Markerless Augmented Reality Work?


• Markerless AR merges digital data with input from real-time, real-world inputs registered to a
physical space. The technology combines software, audio, and video graphics with a
smartphone’s or headset’s cameras, gyroscope, accelerometer, haptic sensors, and location
services to register 3D graphics in the real world.
• Markerless AR detects objects or characteristic points of a scene, such as walls or intersection
points, without any prior knowledge of the environment.
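The gyroscope/accelerometer combination mentioned above is often reduced to a complementary filter: the gyroscope gives smooth short-term rotation, while the accelerometer gives a drift-free gravity reference. An illustrative one-axis sketch (not any vendor's actual fusion code; names and the alpha value are assumptions):

```python
def fuse_tilt(angle_deg, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """One step of a complementary filter for a single tilt axis.

    angle_deg: current estimate; gyro_rate_dps: gyro reading (deg/s);
    accel_angle_deg: tilt implied by the accelerometer's gravity vector;
    alpha: trust in the integrated gyro vs. the noisier accelerometer.
    """
    return alpha * (angle_deg + gyro_rate_dps * dt) + (1 - alpha) * accel_angle_deg

# With a stationary device (gyro reads zero, gravity implies a 10-degree
# tilt), repeated updates pull the estimate toward the accelerometer value,
# correcting gyro drift without inheriting accelerometer jitter.
angle = 0.0
for _ in range(300):
    angle = fuse_tilt(angle, 0.0, 10.0, 0.01)
```

Full markerless systems fuse many more inputs (camera tracking, depth, GPS), but this captures the basic idea of blending a fast-drifting sensor with a slow-but-stable one.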



Marker-less tracking for augmented reality

Different types of markerless AR

1. In its most basic form, markerless AR superimposes virtual objects onto a static, pre-captured 2D
image. It’s straightforward and easy to implement for apps that want to offer offline AR instead
of live experiences.

2. Markerless AR systems that use RGB-D SLAM and sensor-fusion approaches are at the
opposite end of the spectrum. Microsoft HoloLens is the most notable example. These systems
integrate information from standard red, green, and blue (RGB) cameras with state-of-the-art
infrared time-of-flight cameras to construct a 3D map of the user’s surroundings while they use
the application. This capability is a critical component of the SLAM tracking paradigm, as it
enables apps running on these devices to place virtual content concretely within the space.



Marker-less tracking for augmented reality

Pros and Cons of Markerless AR

Pros:
• Increased range of motion with AR
• Use a headset to initialize an AR app
• Share the experience
• Wider field of view for AR content

Cons:
• Depends on flat, textured surfaces
• Mobile apps consume a lot of power
• Slow adoption



Enhancing interactivity in AR environments

• A handheld AR system can support two interaction techniques, based on the understanding
that real and virtual objects should be manipulated in compatible ways.
• One is the ability to touch virtual objects with the user’s finger, just as with physical objects in the real
world. The virtual objects change their position, shape, and/or other graphical features according to
the user’s manipulation.
• The other concerns indirect manipulation of virtual objects, where motion of the handheld device
(or camera) serves to make a certain change in the geometrical and/or spatial properties of the virtual
objects.

1. Interaction with Virtual Object by Finger

2. Interaction with Virtual Object by Camera Motion

3. Interaction Using QR Code Tracking

4. Interaction Using Color-based Tracking



Interaction with Virtual Object by Finger
• Manipulation of a virtual object by the user’s finger is done as shown in Fig.
• When the user sees a fiducial marker through the camera, a computer-generated snowman appears on the
marker.
• In response to the user’s touch operation, the snowman is shaken as in the figure.
• Here we assume that, when specifying contact with the virtual object, the finger is inserted parallel
to the display; we ignore motion in the depth direction.
• This is because, with a handheld device, the user tends to put the device close to the target
object due to the small size of its display.
• If the user wants to view a different part of the object, it is natural to move the device and change
position relative to the target part. We believe this constraint on finger insertion does not lose generality.
It also makes the system implementation simpler and lowers the processing cost.



Interaction with Virtual Object by Camera Motion
• Meanwhile, to realize indirect manipulation of virtual objects, the system provides a facility to
recognize swing motions of the device. In response to the user’s motion, the virtual object is shaken as if it
were trying to keep its posture due to inertia.
• Motion estimation is carried out using the Lucas-Kanade optical flow function provided in OpenCV.
• If the acceleration of the motion is greater than a given threshold, the system executes the action of
tilting the snowman in the direction opposite to the motion.
• Figure shows a snapshot of the response. The left and right images of the figure correspond to the cases
before and after the device motion occurs, respectively.
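The system above uses OpenCV's Lucas-Kanade optical flow; the thresholding logic on top of it can be sketched separately (a hypothetical helper operating on a per-frame mean-flow history instead of raw images, so the camera-tracking part is assumed done):

```python
def swing_response(mean_flow_x, threshold):
    """Decide the snowman tilt from the device's horizontal flow history.

    mean_flow_x: cumulative mean horizontal optical flow per frame
    (e.g. averaged Lucas-Kanade tracks). Returns -1 or +1 to tilt the
    model opposite to the motion, or 0 when the acceleration of the
    motion stays below the threshold.
    """
    if len(mean_flow_x) < 3:
        return 0
    v1 = mean_flow_x[-2] - mean_flow_x[-3]   # previous frame velocity
    v2 = mean_flow_x[-1] - mean_flow_x[-2]   # current frame velocity
    accel = v2 - v1
    if abs(accel) <= threshold:
        return 0
    return -1 if accel > 0 else 1
```

A sudden rightward jerk of the device thus tilts the virtual object left, mimicking inertia, while smooth panning (constant velocity, zero acceleration) leaves it undisturbed.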



Interaction Using QR Code Tracking
• Recognition of QR codes is executed in two phases, with pose tracking executed separately from
code identification.
• The reason is the following: in order to scan the data pattern at the center of a QR code, the camera
must be positioned close to it; otherwise scanning fails. But this task is required only once
and need not continue after it succeeds.
• The user would rather manipulate the device to change his/her view of the virtual object
associated with the QR code, and may move the device (or the QR code) away. In this phase, all
the system needs is to track where the code is.
• For code identification, an input image area is recognized as a QR code if three special
corner symbols appear within it.
• Since each corner symbol is formed by a black square box inside a black square border, the system
recognizes a corner symbol when the centers of two black squares coincide.
• A square region organized by three corner symbols is finally determined to be a QR code.
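Given the three detected corner-symbol centers, the square region can be checked and completed geometrically. A sketch (illustrative only, assuming ideal fronto-parallel image coordinates; a tilted marker would need a looser tolerance):

```python
import math

def complete_qr_square(corners, tol=1e-6):
    """From three corner-symbol centers, find the right-angle corner and
    the missing fourth corner of the QR code's bounding square.

    Returns (apex, fourth) or None if the three points do not form the
    corner of a square (equal legs meeting at ~90 degrees).
    """
    for i in range(3):
        a = corners[i]                      # candidate right-angle corner
        b, c = [corners[j] for j in range(3) if j != i]
        ab = (b[0] - a[0], b[1] - a[1])
        ac = (c[0] - a[0], c[1] - a[1])
        dot = ab[0] * ac[0] + ab[1] * ac[1]
        # Orthogonal legs of equal length identify the square's corner.
        if abs(dot) < tol and abs(math.hypot(*ab) - math.hypot(*ac)) < tol:
            fourth = (b[0] + c[0] - a[0], b[1] + c[1] - a[1])
            return a, fourth
    return None
```

Degenerate configurations (e.g. three collinear false detections) return None, which is one more way false positives get eliminated before the region is accepted as a QR code.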



Interaction Using QR Code Tracking
• After identification of the QR code, the coordinate values of its four corner points are registered and
updated as their positions move.
• Figure shows snapshots of the system. A wireframe box model is positioned on the QR code which is
detected. As you see, the code tracking is maintained even though the camera is not close to the code.



Interaction Using Color-based Tracking
• Figure shows an interface for selecting the color to be tracked.
• When the system starts execution, a crossing marker appears, allowing the user to select a target color by
keeping its region centered for a few seconds.
• To initialize the color selection, the user tilts the phone up. After that, the user may change his/her
view as preferred.
• A pinwheel appears, and a region with a color similar to the selected one is treated as a
wing component of the pinwheel, as shown in Fig.2.
• When the camera position changes, the view changes accordingly. Multiple regions in
the real world may match the selected color.
• The largest one is chosen as the target in the current implementation.
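The "largest matching region" step can be sketched as connected-component labeling over a binary mask produced by color thresholding (a pure-Python stand-in for the actual implementation, which would threshold in a color space such as HSV):

```python
from collections import deque

def largest_color_region(mask):
    """Return the pixel set of the largest 4-connected region in a binary
    mask (1 = pixel matches the selected color). This region becomes the
    tracking target; smaller matching regions are ignored.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = set()
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                region, queue = set(), deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.add((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(region) > len(best):
                    best = region
    return best
```

Picking the largest component per frame is a simple disambiguation rule; more robust trackers would also weigh proximity to the region tracked in the previous frame.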





Evaluating AR systems
• Most AR user evaluations fit into one of four categories:
(1) Experiments that study human perception and cognition with low-level tasks.
(2) Experiments that examine user task performance.
(3) Experiments that examine collaboration between users.
(4) System usability, system design evaluation.

• A rough distinction can be made between quantitative methods, qualitative methods, non-user
based usability evaluation methods, and informal methods.



Evaluating AR systems
(1) Objective measurements
These should produce a reliable and repeatable assignment of numbers to quantitative observations. They can be taken automatically or by an experimenter. Typical measures include times (e.g. task completion times), accuracy (e.g. error rates), user or object position, or test scores.
(2) Subjective measurements
These rely on the subjective judgment of people and include questionnaires, ratings, rankings, or judgments (e.g. depth judgments).
(3) Qualitative analysis
Qualitative analysis is not concerned with putting results in numbers. Data is gathered through structured observations (direct observation, video analysis) or interviews (structured, unstructured).



Evaluating AR systems
(4) Non-user-based usability evaluation techniques
This includes non-user-based evaluation techniques such as cognitive walkthroughs or heuristic evaluations, as well as techniques that involve people who are not end-users (e.g. expert-based usability evaluations).
(5) Informal testing
Many published AR papers only report informal user observations or feedback (e.g. gathered during demonstrations). It is surprising that reporting such limited findings still seems to be very common and accepted in AR contexts. By contrast, in CHI publications informal evaluation has almost disappeared.



THANK YOU……
