
Vision

Describe how the brain works as a computer?

1. Optics: light projection on the inside of our eye ball

2. Photo-transduction: from photons to neural signals

3. Optic nerve: data transfer from eye to brain

4. Cortex: image processing, interpretation, selection

How many degrees is the human field of view?

190

How many degrees is the human stereo vision (overlapping)?

120

How does the retina work?

It converts light into electrical signals. The image on the retina is upside down because the lens bends (refracts) the incoming light.

How many types of receptor cells do we have and where are they located?

Two: rods (used for peripheral vision, black-and-white) and cones (color). They are located in the retina.

How is the receptive field of ganglion cells formed?

The receptive field of a ganglion cell is a weighted integration of receptor cells.
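A tiny numeric sketch of this weighting, modeling the classic center-surround receptive field as a difference of Gaussians (an illustrative model; the kernel size and sigmas are arbitrary choices, not values from the lecture):

```python
import numpy as np

def dog_kernel(size=15, sigma_center=1.0, sigma_surround=3.0):
    # Excitatory center minus inhibitory surround: receptors near the
    # middle contribute positively, receptors around them negatively.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

# The ganglion cell's response is then a weighted sum over a patch of
# receptor outputs: response = np.sum(dog_kernel() * receptor_patch)
```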

What range of the electromagnetic spectrum is visible to the human eye?

380-700 nm

What is the main fluid in the eye gel?

Water (99%)

Which parameters influence light absorption in a material?

· Atoms: absorb photons at discrete frequencies

· Molecules: absorb at many more frequencies than atoms


Which type of light is passed through water, and what is its connection to the electromagnetic spectrum?

· Water consists of H2O molecules

· Only visible light is passed

· Water is a band-pass wavelength filter

Which type of light is passed through glass?

Glass is a high-pass wavelength filter and blocks only UV light.

Which are the three types of cones in color vision?

Red-Green-Blue

What is the downside of RGB compared to human color vision?

The human color vision can perceive more colors.

Name the three color channels based on the three types of cone pigment?

· Achromatic

· Red-Green Channels

· Blue-Yellow channels

Which gender is more likely to have color blindness and why?

Men, because they have only one X chromosome while women have two; for a woman to be color blind, both X chromosomes must carry the faulty gene.

Name the three types of color blindness and describe their dysfunctions?

· Monochromacy: all three cone types are dysfunctional

· Dichromacy: one cone type is dysfunctional, usually red or green

· Anomalous trichromacy: the cones have altered spectral sensitivity

Key notions:

· What is human binocular vision? (the ability to use information from both eyes at once)

· Limited plasticity for optical deformations (upside-down images)

· Name the three color channels?

· What is the difference between human vision and RGB?

· The eyes adapt to color intensity, yielding color aftereffects!!! (how do the eyes adapt to colour intensity?)

Motion Vision

Why is motion vision useful?

· It allows depth perception

· Peripheral attention (attention is focused on dangers and opportunities)

· Figure-background segmentation (can separate figures from background)

· Anticipation of what happens

What is another name for motion perception?

Orientation detection in space-time:

· it accounts for apparent motion perception

· it explains the wagon-wheel effect, where a spoked wheel filmed or strobed at certain rates appears to stand still or rotate backwards because of temporal sampling
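A minimal sketch of such a space-time detector, loosely modeled on the classic Reichardt correlator (the function, signal and delay are illustrative assumptions, not from the lecture):

```python
import numpy as np

def reichardt(left, right, delay=1):
    # Delay one receptor's signal and correlate it with its neighbor;
    # subtracting the mirror-image subunit makes the output signed
    # (positive = motion from the left receptor towards the right one).
    return float(np.mean(np.roll(left, delay) * right
                         - np.roll(right, delay) * left))

t = np.arange(200)
signal = np.sin(2 * np.pi * t / 20)            # what the left receptor sees
print(reichardt(signal, np.roll(signal, 1)))   # > 0: rightward motion
print(reichardt(signal, np.roll(signal, -1)))  # < 0: leftward motion
```

The same correlator also responds to sampled (apparent) motion, which is why stroboscopic stimuli like the wagon wheel are perceived as moving.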

What are the differences between Parvo cells and magno cells?

Parvo cells: color sensitive, slow, high resolution. They can figure out shape and color/luminance.

Magno cells: color blind, fast, low resolution. They can figure out motion/depth and shape.

What can we say about local motion (its downside) and object motion?

Local motion is ambiguous and must be further integrated into object motion.

Key notes:

· why is motion perception useful?

· Retinal motion is combined with self motion to estimate world motion!!!

· Motion perception can be seen as orientation detection in space and time

· Local motion is ambiguous and must be further integrated into object motion

· Motion perception is color blind


Stereo Vision

What is Stereo Vision?

Stereo vision is the ability to perceive the depth at which objects appear by comparing the different views of the same scene (left and right eye).

How good is the sensitivity of depth at a distance of 1m?

Pretty good: about 0.1 mm.
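As a rough cross-check of this figure (a sketch assuming an interocular baseline of ~6.5 cm and a disparity threshold of a few arcseconds; both are textbook approximations, not lecture values):

```python
import math

b = 0.065                          # interocular baseline (m), assumed
z = 1.0                            # viewing distance (m)
dtheta = math.radians(5 / 3600)    # 5 arcsec disparity threshold, assumed

# Differentiating the disparity-depth relation (disparity ~ b/z) gives
# delta_z ~ z^2 * dtheta / b
delta_z = z**2 * dtheta / b
print(f"depth resolution at 1 m: {delta_z * 1000:.2f} mm")  # ~0.37 mm
```

With a hyperacuity-level threshold of 1-2 arcsec the same formula lands near the 0.1 mm figure quoted above.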

What happens in stereo blindness?

Stereo blindness is the inability to perceive stereoscopic depth by processing the disparities
between the images from both eyes.

What are the most common reasons for stereo blindness?

Medical disorders

Loss of vision in 1 eye

What are the similarities and differences between stereo and motion vision?

Stereo vision is similar to motion vision since both need two different views of the same scene. However, the main difference is that stereo vision obtains its two views simultaneously, so it can recover depth even when the scene is static.

Key notes:

· How can we reconstruct depth from two different views? (we can do this as long as
we know the relative viewing angles of both eyes)

· How good is stereoscopic depth sensitivity?

· What is the difference between stereo and motion vision?

Paper 1: Multiple senses are used to estimate a single cue. An MLE (maximum-likelihood estimation) model is used to reduce the variance of the combined estimate. The variance reduction was confirmed by having multiple senses predict one cue (predicting the size of an object using both visual and haptic cues).
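The standard MLE combination rule weighs each cue by its reliability (inverse variance); for visual and haptic size estimates ŝ_V and ŝ_H with variances σ_V² and σ_H²:

ŝ = w_V·ŝ_V + w_H·ŝ_H, with w_V = (1/σ_V²) / (1/σ_V² + 1/σ_H²) and w_H = 1 − w_V

The combined variance σ² = σ_V²·σ_H² / (σ_V² + σ_H²) is always smaller than either single-cue variance, which is exactly the variance reduction the paper tested.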

Paper 4:
· Emotional responses to an environment are context dependent and not dominated by a single sensory modality

· Multiple sensory modalities that are well combined can evoke positive emotions

· Multiple sensory modalities that are not well combined evoke negative emotions

All in all, the paper did not manage to reach a firm conclusion on how different environmental characteristics should be combined in order to enhance positive emotion.

Hearing

What is frequency coding?

Frequency coding is the local resonance of the basilar membrane: sound waves at specific frequencies trigger specific locations on the basilar membrane.

In what form does sound reach the brain?

As pulse trains with the same frequency as the sound waves.

How do we perceive a musical tone?

· From Physical properties: fundamental frequency, overtones, amplitude

· From Perceptual properties: pitch, timbre, loudness, duration

· Note: name of a pitch

Is the pitch of a sound related to the sound’s frequency?

The pitch of a sound is determined by the frequency of vibration of the sound waves that produce it. A high frequency (e.g., 880 Hz) is perceived as a high pitch, while a low frequency (e.g., 55 Hz) is perceived as a low pitch.

What is the difference and what are the similarities between harmonics 1 to 10 and pure tones?

· Similarities: same melody and pitch

· Difference: timbre

What is the difference and what is the similarity between harmonics 4 to 10 and pure
tones?

· Similarity: same sequence of pitches

· Difference: no frequency in common with pure tone


What is the main similarity of the pure tones, harmonics 1 to 10 and harmonics 4 to
10?

They all share the same pattern periodicity
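A small sketch illustrating why: harmonics 4 to 10 of a 110 Hz tone contain no energy at 110 Hz, yet the summed waveform still repeats every 1/110 s, so its periodicity (and hence the perceived pitch) matches the missing fundamental. All numbers here are illustrative:

```python
import numpy as np

fs, f0 = 44100, 110.0
t = np.arange(int(fs * 0.1)) / fs                  # 0.1 s of signal
signal = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(4, 11))

# The autocorrelation peaks at the period of the missing fundamental.
ac = np.correlate(signal, signal, mode='full')[len(signal) - 1:]  # lags 0..N-1
lag = np.argmax(ac[50:]) + 50                      # skip lags near zero
print(f"dominant period -> pitch of {fs / lag:.1f} Hz")  # ~110 Hz
```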

How do we measure sound loudness?

In decibels (dB).

What is a decibel?

The decibel (dB) is a logarithmic unit that indicates the ratio between:

· The power of a sound

· Human perception threshold at 2-4 kHz
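A quick numeric illustration of the logarithmic scale (a sketch; 10·log10 applies to power ratios, and every factor of 10 in power adds 10 dB):

```python
import math

def level_db(power, power_ref):
    # Sound level of 'power' relative to the reference threshold power.
    return 10 * math.log10(power / power_ref)

print(level_db(2, 1))     # doubling the power adds ~3.01 dB
print(level_db(10, 1))    # a 10x power ratio adds exactly 10 dB
print(level_db(1e12, 1))  # ~120 dB, roughly threshold-of-pain territory
```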

What is stereo hearing (what are the parameters that influence horizontal and vertical
direction of sound)?

Stereo is what gives sound directionality.

· Horizontal Direction is based on:

1. Difference in amplitude (loudness)

2. Difference in phase (arrival time)

· Vertical direction is based on:

1. Deformation of pinna (earflap)
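A rough sketch of the phase/arrival-time cue for horizontal direction (a simplified path-difference model with an assumed ear spacing; real heads add diffraction, and the brain also uses the amplitude difference):

```python
import math

def itd_seconds(azimuth_deg, ear_distance=0.21, c=343.0):
    # The far ear receives the wavefront later by the extra path
    # ear_distance * sin(azimuth) divided by the speed of sound.
    return ear_distance * math.sin(math.radians(azimuth_deg)) / c

print(f"{itd_seconds(90) * 1e6:.0f} microseconds")  # ~612 us, source at the side
print(f"{itd_seconds(0) * 1e6:.0f} microseconds")   # 0 us, source straight ahead
```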

Which compression techniques are used by MP3 in order to reduce the data of a song?

· Joint stereo: the left/right channels often carry the same information, so joint stereo reduces the final size by using fewer bits for the side channel

· Huffman encoding: creates variable-length codes on a whole number of bits. The most frequently occurring information gets the shortest code. Decoding is very fast, and it saves about 20% space on average

· Psycho-acoustic masking: this method filters out low-amplitude sounds that play alongside high-amplitude sounds. When this occurs in a song, the low-amplitude sounds cannot be heard by the human ear, so this method excludes them.
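A minimal Huffman-coding sketch to make the idea concrete (illustrative only; MP3 uses predefined Huffman tables rather than building a tree per song):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    # Repeatedly merge the two least frequent subtrees; symbols in the
    # lighter subtree get a '0' prepended, those in the heavier a '1'.
    heap = [[count, [symbol, ""]] for symbol, count in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

print(huffman_codes("aaaabbc"))  # most frequent 'a' gets the shortest code,
                                 # e.g. {'a': '1', 'b': '01', 'c': '00'}
```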

Key notes:

• A sinusoid (harmonic) triggers a specific location of the basilar membrane (frequency coding).

• Any periodic function can be expressed as a superposition of harmonics (Fourier's theorem).

• We can ‘perceive’ the missing fundamental of a tone!

• We can perceive the difference frequency (beat) of two tones!

• Loudness varies logarithmically with amplitude!

• Perceptual thresholds are lowest for voice frequencies!

• Direction in the horizontal plane is perceived based on amplitude and phase differences.

• Direction in the vertical plane is perceived based on (spectral) deformations by the earflap.

• MPEG compression makes use of perceptual properties (psycho-acoustic masking).

Touch

What is touch and what are its main features?

Touch is a somatosensory system:

· Proprioception (the ability to tell the relative position of body parts; for example, being able to touch your nose with your index finger while keeping your eyes closed)

· Cutaneous sense: any sense that depends on receptors in the skin, for example pain, temperature, vibration, etc.

Give some functions of touch?

· Feedback for motor coordination

· Warnings by touch and pain

· Motivation (sexual activity)


Name the systems that are used in haptic perception?

· Sensory system (touch, temperature)

· Motor system (moving fingers and hands)

· Cognitive system (processing information)

What is the LORM glove?

Communication and translation device for deafblind people.

Can we achieve navigation through touch ?

Yes

Key notes:

· Touch is important for motor coordination and pain warnings but can also be used to
communicate information

· Tactile interfaces can be used for spatial awareness, threat warnings, wayfinding and crew communication

Multimodal Perception:

What configurations of sensory information do we have?

· Complementary (color and weight of an apple) where the sensors do not directly depend
on each other.

· Redundant (size and weight of an apple) where the different sensors present the same
information.

How does optimal sensory integration work?

It weighs the individual sensory estimates such that the total variance is minimal.
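A minimal numeric sketch of this inverse-variance weighting (the estimates and variances are made-up numbers):

```python
import numpy as np

def integrate(estimates, variances):
    # Weight each sensory estimate by its reliability (1/variance).
    # The fused variance is smaller than any individual variance.
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.dot(w / w.sum(), estimates))
    fused_var = 1.0 / float(w.sum())
    return fused, fused_var

# Vision says 10 cm (variance 1), touch says 12 cm (variance 4):
print(integrate([10.0, 12.0], [1.0, 4.0]))  # (10.4, 0.8) -- variance drops below 1
```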

Do humans combine redundant sensory information optimally?

Yes! The human brain takes into consideration the reliability and uncertainty of each sensory modality. However, there are situations where this is not possible because the information provided by the senses conflicts. In these cases the brain may prioritize one sensory source over the other.

Give examples of when integration of incongruent sensory information can lead to illusions?

· Vision alters the perceived location of a sound source (screen-speaker fusion)

· Vision alters speech perception (lip movement causes /ba/ to be perceived as /da/, the McGurk effect)

· Visual location captures tactile location (rubber hand experiment with hammer)

· Sound alters the visual percept (two beeps can make people think a light flashed twice, the double-flash illusion)

· Sound influences motion direction (a moving sound captures a dynamic random-dot pattern; for more see the documentation)

What is the influence of vision on olfaction?

Shape and color (vision) can affect olfaction.

Do vision and touch take the same perception time?

No. The influence of vision is stronger.

How can memory (recognition and localization) be improved?

By using multisensory (visual-tactile) learning.

Key notes:

· Sensory information can be complementary (color and weight of an apple), or redundant (size and weight of an apple)

• Optimal sensory integration weighs the individual sensory estimates such that the total variance is minimal.

• The integration of incongruent sensory information can lead to illusions (e.g. rubber hand, flash-tap illusion, odor).

• Time (interval) perception can differ for vision and touch.

• Memory (recognition and localization) of objects improves with multisensory (visual-tactile) learning.

• Visual, vestibular and cognitive cues integrate into the percept of the subjective vertical.

Cross modal perception:

What is synesthesia?

Synesthesia is a naturally occurring condition where people experience information that is usually experienced in one modality in a different modality (e.g., Elizabeth Sulston, who tastes musical notes).

Give some examples where modern technology made synesthesia possible?

• Headset with camera example


• Blind mountain climber sees via video input fed to his tongue

VR Technology:

Why is VR useful?

· Natural multimodal perception

· Intuitive interaction, adaptive (familiar movements)

· Performance monitoring

· No real-world constraints (unlimited, no risk, remote)

· FUN

What is the interaction loop of VR?

· Tracking: head and hand position

· Virtual world simulation

· Sensory feedback

· User control

What sensors-systems are present in VR?

· Gyroscopes (angular velocity, orientation)

· Accelerometers (linear acceleration)

· Magnetometers (direction of earth’s magnetic field)
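These sensors are typically fused because they complement each other: the gyroscope is smooth but drifts, while the accelerometer's gravity reading is noisy but drift-free. A minimal complementary-filter sketch for one axis (the gain and variable names are illustrative assumptions, not from the lecture):

```python
import math

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    # Integrate the gyro (fast but drifting) and pull the result gently
    # towards the accelerometer's gravity-based pitch (noisy but stable).
    accel_pitch = math.atan2(accel[0], accel[2])   # ax vs az, in radians
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Per tracking update: pitch = complementary_filter(pitch, gyro_y, (ax, ay, az), dt)
```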

Give the similarities and differences between natural vision, flat world vision,
Stereoscopic displays (VR).

· Blur as a depth cue (the eye can focus on either object): natural vision ✓, 2D display –, 3D (stereoscopic) display –, 4D display ✓

· Stereoscopic disparities as depth cues: natural vision ✓, 2D –, 3D ✓, 4D ✓

· Vergence angle and accommodation covary and give depth cues: natural vision ✓, 2D –, 3D ✓ (but with fixed accommodation), 4D ✓

What are the two different displays of augmented reality? What are the advantages
and disadvantages of them?

· See-Through:

Advantages:

1. Real world resolution

Disadvantages:

1. Alignment between real and virtual world (hard depth extraction)

2. Occlusion (real behind virtual?)

· Digitize real world before combination:

Advantages:
1. Color change (DanKam smartphone app)

2. Occlusion (leave out camera pixels of real object)

Key notes:

· Virtual Reality technology offers: multimodal perception, intuitive interactions, performance monitoring, and safe environments for various applications (e.g. training)

· The interaction loop consists of : (head and hand) tracking, virtual world simulation,
sensory feedback and user control.

· Visual stereoscopic feedback requires the separation of left/right eye images through
color differences, polarization or multiplexing. (what is required in order for visual
stereoscopic feedback to work?)

· Veridical stereoscopic perception requires matching the simulated viewing position and
direction with the actual viewing position and direction (through head tracking, CAVE, or
HMD). (what is required for veridical stereoscopic perception?)

· ‘See-through displays’ using digital combination have the advantages of color manipulation and correct occlusion (but require real-world depth extraction).

· Proprioceptive feedback (for vestibular organ and muscles) requires motion platforms
and force feedback devices. (what is required to achieve proprioceptive feedback?)

VR Applications:

Give some VR applications?

· Education (virtual class room)

· Prototyping (architecture evaluation)

· Arts & leisure (games, virtual museums)

· Science (Molecular docking, simulating complex systems: supernovas, material testing)

· Training (military)

· Telepresence (Tele-operations, drone with eye camera)

What are the challenges and solution of the planetary rover example (Telepresence)?

Challenges: unknown environment, time delay, limited bandwidth

Solutions : reconstruct 3D environment, drive in VR, update model

Key notes: basically the same as the VR applications above.

What is usability in user interfaces and how is it measured?

Usability is the degree to which the design of a particular user interface takes into account human psychology and physiology, and makes the process of using the system effective, efficient and satisfying:

· Effectiveness (task performance, errors)

· Efficiency (effort, duration, mental load)

· Satisfaction (ease of use, attractiveness, trust)

What do we need to take into account while developing a user interface for VR?

· Interaction tasks (navigation, communication)

· Modulating factors (age, gender, psychological)

· Methodology (user safety studies, usability studies)

When does cyber sickness occur?

Cybersickness increases when there is congruency/matching between the iFOV and the eFOV.

What happens with incongruency between iFov and eFov?

A reduction in immersion, and sensory integration breaks down.

What kind of space do VR displays form?

Perceptual space, where position accuracy depends on the visualization (stereoscopic) and the interface (virtual hand).

Key notes: the three questions above.

Navigation

Name the two types of navigation and analyze them?

· Navigation based on route representations: involves sequences of local views that are associated with certain actions (e.g., turn right)

· Navigation based on cognitive maps: requires an understanding of the spatial relationships between spatial features (landmarks)

How is cognitive-map navigation formed?

· Based on self-localization: internal and external self-motion cues can be used to maintain a sense of position and orientation

· Based on spatial attributes: visual information about spatial attributes of the environment and the objects therein

Each gender has its own way of navigating. What is the main difference in navigation strategies between males and females?

Men tend to use geometric cues for navigation, while women tend to apply a landmark strategy (memorizing routes using landmarks).

Name the different navigation reference frames?

Allocentric directions (north, south, …)

Egocentric directions (up, down, left, right, …)

Are multiple senses used for navigation? Give an example where multisensory navigation is better.

Yes (example: an experiment with virtual mazes where audio-visual landmarks gave better results in terms of spatial memory and navigation performance compared to visual-only or audio-only landmarks).

Key notes:

• Spatial cognition reveals a variety of strategies

• Cognitive maps vs route representations

• Gender differences (natural strategies)

• Navigation requires reference frames

• 2D : egocentric versus allocentric

• Navigation uses multiple senses

• Head-slaved VR goggles facilitate vestibular information

• Memory processes benefit from audiovisual landmarks

Communication, emotions, embodiment

Can we improve persuasive force by selective gaze?

YES (looking at someone increases persuasive force)

What does the Uncanny hypothesis say?

When a robot is made more humanlike in its appearance and motion, the emotional response of a human being to the robot becomes increasingly positive and empathetic. However, once a certain point is reached, going beyond it creates strong repulsion.
Give the model for affective appraisal?

What is embodiment?

Ownership of simulated body.

Why is embodiment important in VR?

It enhances control, realism and presence. (more interactive)

Name the different types of presence?

· Physical presence (being in another physical world than where your body is located)

· Social Presence (experience the presence of another intelligence)

· Self presence (mental model of your body in VR)

Which sensations are needed to experience embodiment?

· Body ownership

· Agency (sensation of controlling body)

· Self-location (sensation that locations of you and your body coincide in space)

How can we measure embodiment?

· Questionnaires

· Self-location or proprioceptive drift (the perceived position of one's own (hidden) hand drifts towards the rubber hand)

· Physiological responses (heart rate, blood pressure)

Which factors influence embodiment?

· Anthropomorphism

· Synchronisation (movement, sensory)

· Visual discontinuity (full limb, wire hand, m-wrist)

Key notes:

• Communication in VR: gaze matters (selective gaze increases persuasive force).

• The Uncanny Valley : the bandwidth of near-realistic humans can yield strong repulsion.

• Affective appraisal: validation required


• Manipulation of visual perspective and correlated multisensory information is sufficient to
trigger the illusion that another person’s body is one’s own (subjective and physiological
evidence).

What are the two types of brain computer interfaces BCI?

· Invasive BCI(Electrocorticography)

· Non-invasive BCI (electroencephalogram)

Give examples of brain machine interaction (BMI)?

· Cybernetic binoculars tap your brain (EEG) for unconscious threats

· Moving chess pieces by thought

· Reading intentions (EEG)

· Brain control robot arm (invasive)

· Brain control own hand

· Writing to the visual cortex (a way to encode pictures so that blind people can see)

· Rat to rat communication (invasive)

· Brain to brain communication (non-invasive)

Key notes:

• BCIs are (non)invasive systems that measure CNS activity and convert it into artificial
output that replace, restore, enhance, supplement, or improve CNS output. (what are
BCIs?)

• Today, the most promising for use in (mobile) human computer interaction are non-invasive EEG-based BCIs, which have high temporal resolution (good for control) but low spatial resolution (it is still hard to extract meaningful signals).

• Examples of reading the brain (non-invasive): attentional state, emotions, what someone
sees, (motoric) intentions.

• Examples of writing the brain: TMS (non-invasive), visual prosthesis (invasive).

• Examples of connecting brains: control someone else’s hand.

Cyborgs
See prosthetic memory!!

Key notes:

Google glass / Hololens: Augmented reality with internet connectivity. Far future: visual
prosthesis?

Exosomatic memories: capturing, retrieving and sharing experiences (AV & EEG/physiological recordings).

Sharing through networks (BCI, brain-brain): team members tap each other’s sensors and
experiences.

A human evolution towards symbiotic distributed human organisms.

Individual human identities may dissolve.

Lecture 7

Give interaction tasks of AR?

· Perception (recognition)

· Manipulation (selection, grabbing)

· Navigation (wayfinding, manoeuvring)

· Communication (verbal)

What is the difference between input devices and interaction design?

Input devices are unique while interaction design is flexible

Give different categories of input devices in AR?

· AR devices (handheld (phone))

· Body parts (mostly hand tracking; also called natural interaction) (downsides: the gorilla-arm effect; it is sometimes hard to perceive the depth of an object; it does not feel real) (pros: gestures allow you to reach objects far away)

· Special devices (handheld or worn)

· Natural devices (a physical form that translates to digital information, e.g. a paddle for operations on a catalog book)

What is the main difference between AR and VR?

In AR you also see a part of the real world.

Give pros and cons of special devices and natural hand tracking methods in AR?
Special devices: + tactile sensation, + accurate; - special hardware needed (unnatural), - difference between interacting with real and virtual objects

Natural hand tracking: + feels real, + the user can interact with virtual objects everywhere in space; - usually no tactile feedback, - memorization, feedback and accuracy issues (for gesture interaction)

Give common problems in AR interface design?

· Distracting, overwhelming

· Social, ethical aspects

· Performance

· Make it natural (naturalism) or not (Magic)

· User engagement experience

What do we need to take into consideration when creating interaction design for AR?

Human factors (e.g., the hyper-reality information-overload example).

What is the best interaction approach in AR?

There is no one ideal interaction. However, usually the best approach is for the design to feel
natural.

Give some perceptual issues in AR interaction?

· Static and dynamic registration mismatch

· Restricted FOV

· Mismatch of resolution and image clarity

· Luminance mismatch

· Contrast mismatch

· Limited depth resolution

· Size and distance mismatch

Give some manipulation issues in AR interaction?

· Position

· Rotation

· Scaling

Give some selection issues in AR interaction?

· Make object active

· Indicate action to an object

· Travel to object location

What are the two different approaches in manipulation of AR design?

· Isomorphic (feels more natural)

· Non-isomorphic ('magic' virtual tools that extend the working volume of the arms; application dependent)

What variables can affect user performance in AR selection?

· Object distance from user

· Object size

· Density of objects in the area

Give some examples of VR selection design?

· Ray casting

· Cone casting

· Touching with real or virtual hand

· Aperture (basically cone casting with adjustable cone width)

· Image plane (manipulating a 2D projection of an object that you see in the 3D environment)
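A minimal sketch of the ray-casting idea from the list above (hypothetical scene format: each object as a name with a bounding sphere; standard ray-sphere math, not code from the lecture):

```python
import numpy as np

def ray_cast_select(origin, direction, objects):
    # Return the nearest object whose bounding sphere the ray hits.
    d = direction / np.linalg.norm(direction)
    hit, hit_t = None, np.inf
    for name, center, radius in objects:
        oc = np.asarray(center, dtype=float) - origin
        t = float(np.dot(oc, d))               # closest approach along the ray
        if t < 0:
            continue                            # object is behind the controller
        miss2 = float(np.dot(oc, oc)) - t * t   # squared ray-to-center distance
        if miss2 <= radius**2 and t < hit_t:
            hit, hit_t = name, t
    return hit

scene = [("cube", (0, 0, -2), 0.3), ("sphere", (1, 0, -3), 0.5)]
print(ray_cast_select(np.zeros(3), np.array([0.0, 0.0, -1.0]), scene))  # cube
```

Cone casting relaxes the hit test: instead of requiring the ray to pierce the sphere, it accepts objects whose direction lies within a fixed (or, for aperture, adjustable) angle of the ray.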

Novel selection techniques?

· Depth rays (deals with object occlusion)

· Bubble cursor (easy to select object near you)

What are the two different types of classification interaction metaphor?

· Egocentric metaphors (ray casting, aperture, etc.; interactions that happen within the virtual world)

· Exocentric metaphors (world in miniature: you get a miniature copy of the virtual world (within the virtual environment), and by interacting with it you make changes to the virtual world; this gives more control)

Which parameters influence user performance on Manipulation of objects?


· Depth information

· Precision placement

· Amount of rotation

Key notes:

EXAM BASED

Different input device categories (know them including TUIs and characteristics, be
able to make relations between technology aspects & usage (sensors for tracking &
how they are used for interaction))

Interaction design & tasks (comparison of characteristics, pros and cons, relation
between technology and possible usage)

LECTURE 6

Augmented Reality Displays


● Handheld AR (phone, tablet etc)
○ Live feed from camera (reality)
○ Added 3d graphics (AR)

○ Could use compass, accelerometer and/or GPS (ex. Pokemon Go)


○ Tracking by corners, edges or markers optimized for the application.
● TrueDepth cameras on iPhones get depth information too
○ easy 3d recreation of objects in photos

● Field of view
Extra hardware and computation are needed to get the user's perspective on screen, because of the camera lens and distortion.

● Eye focus
The human eye can focus at only one depth at a time.

3D Perception: Depth cues

Oculomotor cues:
● Accommodation
● Convergence

Binocular depth cues:


● Binocular disparity (stereopsis)

Motion related cues


● Motion parallax
● Kinetic depth

3D perception depth cues examples

See-through HMDs
AR head-mounted displays; basically AR glasses. The lecture covers the problem of depth focus, since we project onto just a single display, and a workaround now being investigated: a camera that, when you take a picture, captures multiple 'versions' for all depths (by recording all the light that comes towards it from all angles). With better hardware you could then use that alongside eye trackers to automatically adjust what should be in focus in real time.
Lecture 5

Give AR characteristics?

· Real and virtual environments combined in real time

· The combination “matches” perfectly

· Interaction with virtual elements in real time

Give the different AR incarnations?

· Hand-held

· Spatial

· Head-worn

What is AR according to Azuma?

He defines AR as systems with three characteristics:

· Combines the real and virtual worlds

· Is interactive in real time

· Is registered in 3 dimensions

According to Milgram et al., what is the definition of mixed reality?

Mixed reality spans the region of the reality-virtuality continuum between the fully real environment and the fully virtual environment.

What is AV in mixed reality?

Real elements added to virtual world

What is AR in mixed reality?

Virtual elements added to real world

What are the main problems in Milgram’s definition?

· Multimodality?

· Application/usage?

· Multi-users?

Give an example where, for two different systems at the same spot on the continuum, the user experience would differ a lot?

A head-mounted display and a handheld display with the same visuals, audio and everything else exactly the same: a phone vs a head-mounted display are completely different in terms of user experience.

Exam based: know what type of AR we can create with each incarnation?
What is the AR interaction loop?

Calibration, registration & tracking >> simulation >> feedback >> control.

What is important in head tracking of AR?

· Head orientation (for correct perspective)

· Real-world information (for accurate placement)

Analyze calibration, registration and tracking?

· Registration: alignment of spatial properties (basically aligning objects of the virtual and the real world)

· Calibration: offline adjustment of measurements (to check and adjust a sensor's accuracy)

· Tracking: continuous real-time updating of the position and orientation of the AR display relative to real objects

VR/AR comparison :

· AR needs less rendering (only parts of the world)

· Accurate visualization is much harder than in VR (real-world influences, occlusion)

· Tracking and sensing are usually much harder (high precision needed)

Key notes:

AR: know what to use when, and what you can or can't do given certain circumstances and goals

Differences between AR & VR

Which characteristics do sensors have that are used for tracking in AR?

· The number of DOF (degrees of freedom) the sensor provides

· Global vs local (whether the sensor senses changes relative to the whole AR environment or only locally)

· Absolute vs relative (whether the sensor gives absolute or relative measurements)

What are the requirements for successful tracking in AR?

· High accuracy and precision, no jitter (noise, instability)

· Low latency (delay)

· High resolution and a high update rate (min. 10 fps)

· Low price

Name the tracking approaches in AR?

· Sensor-based (e.g., GPS tracking using accelerometers and gyroscopes)

· Optical (computer vision to see the environment)

· Other (e.g., tracking via markers)

What is the use of an accelerometer?

It measures linear acceleration and orientation with respect to gravity.


What is the use of a gyroscope?

It measures the rotation rate; it tracks rotation better than accelerometers but drifts over time.

What is the use of magnetometer?

It measures the strength and direction of the earth's magnetic field.
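As a small illustration of how raw magnetometer readings become usable tracking data, a compass-heading sketch (assumes the device is held level; axis conventions vary per device, and real systems tilt-compensate with the accelerometer):

```python
import math

def heading_degrees(mx, my):
    # Horizontal magnetic field components -> heading, measured
    # clockwise from magnetic north (level-device assumption).
    return math.degrees(math.atan2(my, mx)) % 360

print(heading_degrees(0.2, 0.0))  # 0.0  -> facing magnetic north
print(heading_degrees(0.0, 0.2))  # 90.0 -> facing magnetic east
```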

Marker tracking vs natural tracking in optical tracking?

Markers: must be installed, must be fully visible

Natural tracking : must be partly visible, natural features must be known

(Marker tracking usually uses black-and-white features because of their contrast, while natural tracking needs texture; when enough of the same texture appears on the screen, the object is recognized as a flat surface.)
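A minimal marker-detection sketch using OpenCV's ArUco module (assumes opencv-contrib-python and the OpenCV >= 4.7 API; "frame.png" is a placeholder input):

```python
import cv2

# Detect ArUco markers in a camera frame; each fully visible marker
# yields an ID plus corner points usable for pose estimation.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")
corners, ids, rejected = detector.detectMarkers(frame)
print(ids)  # None if no marker is fully visible -- the limitation noted above
```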

What are the issues of marker and natural tracking?

They don’t adapt well in size/distance and angle changes and they have high latency.

Which are the most common forms of tracking in AR?

· Inertial (gyroscope, accelerometer)

· Optical (marker-based, natural-feature-based)

· Structured-light cameras

· Others
