Eye Cancer Research Progress
Edwin B. Bospene (Editor)
2008. ISBN: 978-1-60456-045-9

Non-Age Related Macular Degeneration
Enzo B. Mercier
2008. ISBN: 978-1-60456-305-4

Optic Nerve Disease Research Perspectives
Benjamin D. Lewis and Charlie James Davies (Editors)
2008. ISBN: 978-1-60456-490-7
2008. ISBN: 978-1-60741-938-9 (E-book)

New Topics in Eye Research
Lauri Korhonen and Elias Laine (Editors)
2009. ISBN: 978-1-60456-510-2

Eye Infections, Blindness and Myopia
Jeffrey Higgins and Dominique Truax (Editors)
2009. ISBN: 978-1-60692-630-7

Eye Research Developments: Glaucoma, Corneal Transplantation, and Bacterial Eye Infections
Alan N. Westerhouse (Editor)
2009. ISBN: 978-1-60741-1772

Retinal Degeneration: Causes, Diagnosis and Treatment
Robert B. Catlin (Editor)
2009. ISBN: 978-1-60741-007-2
2009. ISBN: 978-1-60876-442-6 (E-book)

Binocular Vision: Development, Depth Perception and Disorders
Jacques McCoun and Lucien Reeves (Editors)
2010. ISBN: 978-1-60876-547-8

Understanding Corneal Biomechanics through Experimental Assessment and Numerical Simulation
Ahmed Elsheikh
2010. ISBN: 978-1-60876-694-9

Retinitis Pigmentosa: Causes, Diagnosis and Treatment
Michaël Baert and Cédric Peeters (Editors)
2010. ISBN: 978-1-60876-884-4

Color: Ontological Status and Epistemic Role
Anna Storozhuk
2010. ISBN: 978-1-61668-201-9
2010. ISBN: 978-1-61668-608-6 (E-book)

Coherent Effects in Primary Visual Perception
V.D. Svet and A.M. Khazen
2010. ISBN: 978-1-61668-143-2
2010. ISBN: 978-1-61668-496-9 (E-book)

Conjunctivitis: Symptoms, Treatment and Prevention
Anna R. Sallinger
2010. ISBN: 978-1-61668-321-4
2010. ISBN: 978-1-61668-443-3 (E-book)

Novel Drug Delivery Approaches in Dry Eye Syndrome Therapy
Slavomira Doktorovová, Eliana B. Souto, Joana R. Araújo, Maria A. Egea and Marisa L. Garcia
2010. ISBN: 978-1-61668-768-7
2010. ISBN: 978-1-61728-449-6 (E-book)

Pharmacological Treatment of Ocular Inflammatory Diseases
Tais Gratieri, Renata F. V. Lopez, Elisabet Gonzalez-Mira, Maria A. Egea and Marisa L. Garcia
2010. ISBN: 978-1-61668-772-4
2010. ISBN: 978-1-61728-470-0 (E-book)

Cataracts: Causes, Symptoms, and Surgery
Camila M. Hernandez (Editor)
2010. ISBN: 978-1-61668-955-1
2010. ISBN: 978-1-61728-312-3 (E-book)

EYE AND VISION RESEARCH DEVELOPMENTS

BINOCULAR VISION: DEVELOPMENT, DEPTH PERCEPTION AND DISORDERS

JACQUES MCCOUN AND LUCIEN REEVES
EDITORS

Nova Science Publishers, Inc.
New York

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.

Copyright © 2010 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise, without the written permission of the Publisher. For permission to use material from this book please contact us: Telephone 631-231-7269; Fax 631-231-8175. Web Site: http://www.novapublishers.com

NOTICE TO THE READER

The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers' use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works.

Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication.

This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

Binocular vision : development, depth perception, and disorders / editors, Jacques McCoun and Lucien Reeves.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-61761-957-1 (eBook)
1. Binocular vision. 2. Depth perception. 3. Binocular vision disorders. 4. Computer vision. I. McCoun, Jacques. II. Reeves, Lucien.
[DNLM: 1. Vision, Binocular--physiology. 2. Dominance, Ocular--physiology. 3. Pattern Recognition, Visual--physiology. 4. Vision Disparity--physiology. WW 400 B6145 2009]
QP487.B56 2009
612.8'4--dc22
2009038663

Published by Nova Science Publishers, Inc. New York

CONTENTS

Preface (ix)

Chapter 1. New Trends in Surface Reconstruction Using Space-Time Cameras: Fusing Structure from Motion, Silhouette, and Stereo
Hossein Ebrahimnezhad and Hassan Ghassemian (1)

Chapter 2. Ocular Dominance within Binocular Vision
Jonathan S. Pointer (63)

Chapter 3. Three-Dimensional Vision Based on Binocular Imaging and Approximation Networks of a Laser Line
J. Apolinar Muñoz-Rodríguez (81)

Chapter 4. Eye Movement Analysis in Congenital Nystagmus: Concise Parameters Estimation
Pasquariello Giulio, Cesarelli Mario, La Gatta Antonio, Bifulco Paolo and Fratini Antonio (107)

Chapter 5. Evolution of Computer Vision Systems
Vladimir Grishin (125)

Chapter 6. Binocular Vision and Depth Perception: Development and Disorders
Ken Asakawa and Hitoshi Ishikawa (139)

Chapter 7. Repeatability of Prism Dissociation and Tangent Scale Near Heterophoria Measurements in Straightforward Gaze and in Downgaze
David A. Goss, Douglas K. Penisten, Kirby K. Pitts and Denise A. Burns (155)

Chapter 8. Temporarily Blind in One Eye: Emotional Pictures Predominate in Binocular Rivalry
Georg W. Alpers and Antje B.M. Gerdes (161)

Chapter 9. Stereo-Based Candidate Generation for Pedestrian Protection Systems
David Geronimo, Angel D. Sappa and Antonio M. López (189)

Chapter 10. Development of Saccade Control
Burkhart Fischer (209)

Short Commentary. Ocular Dominance
Jonathan S. Pointer (247)

Index (249)

PREFACE

"Binocular vision" literally means vision with two eyes, and refers to the special attributes of vision with both eyes open, rather than one eye only. Our perception under binocular conditions represents a highly complex coordination of motor and sensory processes, and is markedly different from, and more sophisticated than, vision with one eye alone. This book reviews our ability to use both eyes, while also providing basic information on the development of binocular vision and on the clinical disorders that interfere with our depth perception, such as strabismus and amblyopia. In addition, the authors of this book review the phenomenon of ocular dominance (OD) in the light of the types of test used to identify it, question whether inter-test agreement of OD in an individual might be anticipated, and address some practical implications of OD as demonstrated in healthy eyes and in cases where there is compromised binocular function. Other chapters disclose new methodologies in the analysis of congenital nystagmus eye movements and evaluate heterophoria as an important element of the assessment of binocular vision disorders. The book also describes the development of eye movement control, particularly those aspects that are important for reading.

Chapter 1 is intended to present an overview of new trends in three-dimensional model reconstruction using multiple views of an object. Three-dimensional model reconstruction from image sequences has been used extensively in recent years; the most popular method is known as structure from motion, which employs feature and dense point matching to compute motion and depth. However, even at pixel-level accuracy, simple motion estimation methods suffer from statistical bias due to quantization noise, measurement error, and outliers in the input data set. A robust curve matching method for stereo cameras, developed by the authors, extracts unique space curves: the curves are constructed from plane curves in stereo images based on curvature and torsion consistency. The recovered space curves are employed to estimate robust motion by minimizing the curve distance in the next pair of stereo images, which extremely reduces the influence of outliers on motion estimation. Moreover, the curve matching method deals with pixel-range information and does not require sub-pixel accuracy to compute structure and motion; it finds correspondences based on curve shape and does not use any photometric information, which makes the matching process very robust against the color and intensity maladjustment of the stereo rigs. An efficient structure of stereo rigs, the perpendicular double stereo, is presented to increase the accuracy of motion estimation. Altogether, by fusing several constraints, the system overcomes the bias problem. Using the robust motion information, a set of exactly calibrated virtual cameras, which the authors call space-time cameras, is constructed; the visual hull of the object is then extracted from the intersection of the silhouette cones of all virtual cameras. Finally, color information is mapped to the reconstructed surface by inverse projection from the two-dimensional image sets to three-dimensional space. The result is a completely automatic and practical system of three-dimensional model reconstruction, from raw images of an arbitrarily moving object captured by fixed, calibrated, perpendicular double stereo rigs, to surface representation. Experimental results demonstrate the strong performance of the system for a variety of object shapes and textures.

Ocular dominance can be defined and identified in a variety of ways. It might be the eye used to sight or aim, the eye whose functional vision appears superior on a given task or under certain conditions, or the eye whose input is favoured when competing information is presented to the two eyes. The concept, which has been the subject of much discussion and revision over the past four centuries, continues to excite controversy today. Chapter 2 will review the phenomenon of OD in the light of the types of test used to identify it, question whether inter-test agreement of OD in an individual might be anticipated, briefly consider the possibility of any relationship between OD and limb or cortical laterality, and speculate whether OD is essentially the product of forced monocular viewing conditions and habitual use of one or the other eye. What is becoming evident is that even in its most direct and behaviourally significant manifestation, sighting preference, OD must be regarded as a flexible laterality within binocular vision, influenced by the physical circumstances and viewing constraints prevailing at the point of testing. The chapter will conclude with remarks addressing some practical implications of OD, as demonstrated in healthy eyes and in cases where there is compromised binocular function.
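Chapter 1's curve matching rests on the fact that curvature and torsion are invariant under rigid motion. As a rough illustration of how these two invariants can be estimated from a sampled 3D curve, here is a finite-difference sketch; the helix test curve, the step size, and the estimator choices are this editor's assumptions, not the chapter authors' implementation.

```python
import math

def derivatives(curve, i, h):
    """Central finite-difference estimates of r', r'', r''' at sample i."""
    r = lambda j: curve[j]
    d1 = [(r(i + 1)[k] - r(i - 1)[k]) / (2 * h) for k in range(3)]
    d2 = [(r(i + 1)[k] - 2 * r(i)[k] + r(i - 1)[k]) / h ** 2 for k in range(3)]
    d3 = [(r(i + 2)[k] - 2 * r(i + 1)[k] + 2 * r(i - 1)[k] - r(i - 2)[k]) / (2 * h ** 3)
          for k in range(3)]
    return d1, d2, d3

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def curvature_torsion(curve, i, h):
    """kappa = |r' x r''| / |r'|^3,  tau = (r' x r'') . r''' / |r' x r''|^2."""
    d1, d2, d3 = derivatives(curve, i, h)
    c = cross(d1, d2)
    curv = math.sqrt(dot(c, c)) / dot(d1, d1) ** 1.5
    tors = dot(c, d3) / dot(c, c)
    return curv, tors

# Unit-pitch helix: curvature and torsion both equal 0.5 everywhere.
h = 0.01
helix = [(math.cos(t * h), math.sin(t * h), t * h) for t in range(1000)]
print(curvature_torsion(helix, 500, h))  # ≈ (0.5, 0.5)
```

Because both quantities are preserved by rotation and translation, two image curves whose reconstructed space curves disagree in curvature or torsion profiles cannot be projections of the same rigid surface curve, which is the consistency test the chapter exploits.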

The study of Chapter 3 involves laser metrology, binocular image processing, neural networks, and computer vision parameters. The authors present a review of their computer vision algorithms and binocular imaging for shape-detection optical metrology. In this technique, the object shape is recovered by means of laser scanning and binocular imaging. A Bezier approximation network, which performs the shape reconstruction, computes the object surface based on the behavior of the laser line: the binocular images of the laser line, which vary with the object surface, are processed by the network to compute the object topography. The parameters of the binocular imaging are also computed from the Bezier approximation network, so direct measurement of the binocular geometry is avoided. The performance and accuracy of the binocular imaging are thereby improved, because measurement errors are not added to the computational procedure; the binocular imaging also avoids occlusions. To describe the accuracy, a mean square error is calculated. This procedure represents a contribution to stripe projection methods and to binocular imaging. The processing time is also described. The technique is tested with real objects, and its experimental results are presented.

Along with other diseases that can affect binocular vision, congenital nystagmus (CN) is of peculiar interest. CN is an ocular-motor disorder characterized by involuntary, conjugated ocular oscillations, mainly horizontal; this kind of nystagmus is termed congenital (or infantile) since it can be present at birth or arise in the first months of life. Although identified more than forty years ago, its pathogenesis is still under investigation. The majority of CN patients show a considerable decrease of their visual acuity: image fixation on the retina is disturbed by the continuous nystagmus oscillations, reducing the visual quality of the subject. However, the image of a given target can still be stable during short periods in which eye velocity slows down while the target image is placed onto the fovea (the so-called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyze its main features, such as waveform shape, amplitude and frequency. The use of eye movement recordings also allows the computation of "estimated visual acuity" predictors: analytical functions that estimate expected visual acuity using signal features such as foveation time and foveation position variability. It is therefore fundamental to develop robust and accurate methods to measure both of those parameters in order to obtain reliable values from the predictors. Chapter 4 aims to disclose new methodologies in the analysis of congenital nystagmus eye movements, in order to identify nystagmus cycles and to evaluate foveation time, reducing the influence of repositioning saccades and data noise on the critical parameters of the estimation functions. The current methods to record eye movements in subjects with congenital nystagmus are discussed, and the present techniques to accurately compute foveation time and eye position are presented. The use of those functions extends the information acquired with typical visual acuity measurements (e.g., the Landolt C test) and could be a support for treatment planning or therapy monitoring.

In Chapter 5, applications of computer vision systems (CVS) in the flight control of unmanned aerial vehicles (UAV) are considered. CVS are used for precision navigation, angular and linear UAV motion measurement, landing (in particular shipboard landing), homing guidance and other tasks. All these tasks have been successfully solved separately in various projects; the necessary technologies exist, and their degree of maturity is high. The development of perspective CVS can be divided into two stages. The first stage is the realization of all the above tasks in a single full-scale universal CVS with acceptable size, weight and power consumption; in many projects, all UAV flight control tasks can be performed in automatic mode on the basis of information delivered by the CVS. The second stage is the integration of CVS with control systems based on artificial intelligence (AI). This integration will bring two great benefits. Firstly, it will allow a considerable improvement of CVS performance and reliability, due to the accumulation of additional information about the environment. Secondly, the AI control system will obtain a high degree of awareness about the state of the environment, which allows a high degree of control effectiveness of the autonomous AI system in a fast-changing and hostile environment.

What, then, is the reason for, and the advantage of, having two eyes? From our visual information input we can perceive the world in three dimensions, even though the images falling on our two retinas are only two-dimensional. How is this accomplished? Moreover, the use of a pair of eyes can be disrupted by a variety of visual disorders: incorrect coordination between the two eyes, for example, can produce strabismus with its associated sensory problems of amblyopia, suppression and diplopia. Chapter 6 is a review of our ability to use both eyes, rather than one eye only, while also providing basic information on the development of binocular vision and on the clinical disorders that interfere with our depth perception, such as strabismus and amblyopia.
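Chapter 4's foveation intervals, the stretches where eye velocity drops while gaze sits near the target, can be illustrated with a toy detector. The thresholds, sampling rate, and synthetic position trace below are invented for illustration; the chapter's actual estimation methods are considerably more elaborate.

```python
def foveation_intervals(positions, dt, vel_limit=4.0, pos_limit=0.5):
    """Return (start, end) sample-index pairs during which the eye velocity
    magnitude stays below vel_limit (deg/s) and the gaze stays within
    pos_limit (deg) of the target at 0 deg. Thresholds are illustrative."""
    intervals, start = [], None
    for i in range(1, len(positions)):
        vel = (positions[i] - positions[i - 1]) / dt
        ok = abs(vel) < vel_limit and abs(positions[i]) < pos_limit
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(positions)))
    return intervals

# Synthetic trace: 200 ms of slow drift near the target, then a fast
# oscillation phase standing in for the nystagmus beats.
dt = 0.002  # 500 Hz sampling (assumed)
trace = [0.001 * i for i in range(100)] + [2.0 * (-1) ** i for i in range(100)]
print(foveation_intervals(trace, dt))  # → [(1, 100)]
```

Summing the durations of the returned windows gives a crude foveation-time estimate of the kind the "estimated visual acuity" predictors consume.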

The evaluation of heterophoria is an important element of the assessment of binocular vision disorders. Chapter 7 examined the interexaminer repeatability of two heterophoria measurement methods, in a gaze position with no vertical deviation from the straightforward position and in 20 degrees of downgaze. The two procedures were the von Graefe prism dissociation method (VG) and the tangent scale method commonly known as the modified Thorington test (MT). Serving as subjects were 47 young adults, 22 to 35 years of age; the testing distance was 40 cm. A coefficient of repeatability was calculated by multiplying the standard deviation of the difference between the results from the two examiners by 1.96. Coefficients of repeatability in prism diopter units were: VG, straightforward, 6.6; VG, downgaze, 6.8; MT, straightforward, 2.3; MT, downgaze, 3.6. The results show a better repeatability for the tangent scale procedure than for the von Graefe prism dissociation method.

The preferential perception of visual emotional cues is apparent under conditions where different cues compete for perceptual dominance. Existing data support this hypothesis by demonstrating that emotional cues are more quickly detected among neutral distractors; little data is available, however, to demonstrate that emotional stimuli are also preferentially processed during prolonged viewing. As explained in Chapter 8, when two incompatible pictures are presented to one eye each, the result is a perceptual alternation between the pictures, such that only one picture is visible while the other is suppressed. This so-called binocular rivalry involves different stages of early visual processing and is thought to be relatively independent of intentional control. Several studies from the authors' laboratory showed that emotional stimuli predominate over neutral stimuli in binocular rivalry, and data from this paradigm demonstrate that emotional pictures are perceived more intensively. These findings can be interpreted as evidence for preferential processing of emotional cues within the visual system, which extends beyond initial attentional capture. Taken together, preferential perception of emotional cues may help an individual to respond quickly and effectively to relevant events.

Chapter 9 describes a stereo-based algorithm that provides candidate image windows to a later 2D classification stage in an on-board pedestrian detection system. The proposed algorithm, which consists of three stages, is based on the use of both stereo imaging and scene prior knowledge (i.e., pedestrians are on the ground) to reduce the candidate search space. First, a road surface fitting algorithm provides estimates of the relative ground-camera pose. This stage directs the search toward the road area, thus avoiding irrelevant regions like the sky. Then, three different schemes are used to scan the estimated road surface with pedestrian-sized windows: (a) uniformly distributed through the road surface (3D); (b) uniformly distributed through the image (2D); (c) not uniformly distributed, but according to a quadratic function (combined 2D-3D). Finally, the set of candidate windows is reduced by analyzing their 3D content. Experimental results of the proposed algorithm, together with statistics on the search space reduction, are provided.

Chapter 10 describes the development of eye movement control; the authors consider, however, only those aspects of eye movements that are important for reading: stability of fixation and control of saccades (fast eye movements from one object of interest to another). The saccadic reflex, the control of saccades by voluntary conscious decision, and their role in the optomotor cycle are explained on the basis of reaction times and neurophysiological evidence. The age curves of the different variables show that the development of the voluntary component of saccade control lasts until adulthood. The diagnostic methods used in the next part of the book are also explained in this chapter.

Finally, the Short Commentary discusses ocular dominance and the rationale behind this phenomenon.
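Chapter 7's statistic is simple enough to compute in a few lines: the coefficient of repeatability is 1.96 times the standard deviation of the between-examiner differences. The phoria readings below are made-up illustrative values, not data from the study.

```python
import statistics

def coefficient_of_repeatability(first_exam, second_exam):
    """CR = 1.96 x sample standard deviation of the paired differences
    between two examiners' measurements (prism diopters)."""
    diffs = [a - b for a, b in zip(first_exam, second_exam)]
    return 1.96 * statistics.stdev(diffs)

# Hypothetical near-phoria readings (prism diopters) from two examiners:
examiner_1 = [2.0, 4.5, 1.0, 3.5, 2.5, 5.0]
examiner_2 = [2.5, 4.0, 1.5, 3.0, 3.5, 4.5]
print(round(coefficient_of_repeatability(examiner_1, examiner_2), 2))  # → 1.3
```

A smaller coefficient means that a repeated measurement is expected to fall within a tighter band around the first, which is why the lower MT values indicate the better-repeating procedure.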

In: Binocular Vision
Editors: J. McCoun et al., pp. 1-62
ISBN: 978-1-60876-547-8
© 2010 Nova Science Publishers, Inc.

Chapter 1

NEW TRENDS IN SURFACE RECONSTRUCTION USING SPACE-TIME CAMERAS: FUSING STRUCTURE FROM MOTION, SILHOUETTE, AND STEREO

Hossein Ebrahimnezhad (1) and Hassan Ghassemian (2)

(1) Computer Vision Research Lab, Dept. of Electrical Engineering, Sahand University of Technology, Tabriz, Iran. E-mail address: ebrahimnezhad@sut.ac.ir; Web address: http://ee.sut.ac.ir/showcvdetail.aspx?id=5
(2) Dept. of Electrical and Computer Engineering, Tarbiat Modaress University, Tehran, Iran. E-mail address: ghaasemi@modares.ac.ir

Abstract

Three-dimensional model reconstruction from image sequences has been used extensively in recent years. The most popular method is known as structure from motion, which employs feature and dense point matching to compute motion and depth. However, even at pixel-level accuracy, simple motion estimation methods suffer from statistical bias due to quantization noise, measurement error, and outliers in the input data set. This chapter presents an overview of new trends in three-dimensional model reconstruction using multiple views of an object, and introduces a complete, automatic and practical system of three-dimensional model reconstruction, from raw images of an arbitrarily moving object captured by fixed, calibrated, perpendicular double stereo rigs, to surface representation. A robust curve matching method in stereo cameras for the extraction of unique space curves, which has been developed by the authors [43], is explained. Unique space curves are constructed from plane curves in stereo images based on curvature and torsion consistency. The recovered space curves are employed to estimate robust motion by minimizing the curve distance in the next pair of stereo images; the influence of outliers on motion estimation is thereby extremely reduced. Moreover, the curve matching method deals with pixel-range information and does not require sub-pixel accuracy to compute structure and motion; it finds correspondences based on curve shape and does not use any photometric information, which makes the matching process very robust against the color and intensity maladjustment of the stereo rigs. An efficient structure of stereo rigs, the perpendicular double stereo, is presented to increase the accuracy of motion estimation. Altogether, by fusing several constraints, the system overcomes the bias problem. Using the robust motion information, a set of exactly calibrated virtual cameras, which we call space-time cameras, is constructed. Then, the visual hull of the object is extracted from the intersection of the silhouette cones of all virtual cameras. Finally, color information is mapped to the reconstructed surface by inverse projection from the two-dimensional image sets to three-dimensional space. Experimental results demonstrate the strong performance of the system for a variety of object shapes and textures.

Keywords: 3D model reconstruction, structure from motion, structure from silhouette, visual hull, space curves, unique points, perpendicular double stereo, space-time cameras.

1. Introduction

Reconstruction of a surface model for a moving rigid object, through a sequence of photo images, is a challenging problem and an active research topic in computer vision. In recent years, there has been extensive focus in the literature on recovering three-dimensional structure and motion from image sequences [1-6]; this problem is also known as bundle adjustment [7]. Different types of algorithms are used because of the wide range of options: availability of camera calibration, the image projection model, the number of cameras and available views, the feature types, and the model of the scene. For a fixed object with a moving camera (or a moving rigid object with a fixed camera), the shape and motion recovery problem can be formulated as finding the 6 motion parameters of the object, i.e., its position and orientation displacement, together with accurate 3D world coordinates for each point.
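The 6-parameter motion just described (three rotation angles plus a translation vector) can be made concrete with a small sketch. The Euler-angle convention and the numeric values are illustrative assumptions, not the chapter's formulation.

```python
import math

def rotation_matrix(rx, ry, rz):
    """Rotation about the x, y, z axes (radians), composed as Rz @ Ry @ Rx."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(Rz, matmul(Ry, Rx))

def transform(point, params):
    """Apply the 6-parameter rigid motion (rx, ry, rz, tx, ty, tz): p' = R p + t."""
    rx, ry, rz, tx, ty, tz = params
    R = rotation_matrix(rx, ry, rz)
    x, y, z = point
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t
                 for i, t in zip(range(3), (tx, ty, tz)))

# Quarter-turn about z plus a unit shift along x:
print(transform((1.0, 0.0, 0.0), (0.0, 0.0, math.pi / 2, 1.0, 0.0, 0.0)))  # ≈ (1, 1, 0)
```

Estimating these six numbers per frame pair, given noisy 2D observations, is exactly the problem the bundle-adjustment and curve-tracking machinery below addresses.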

The standard method of rigid motion recovery has been developed in the last decade based on sparse feature points [8-9]. The sparse method typically assumes that correspondences between scene features, such as corners or surface creases, have been established by a tracking technique. The process of structure and motion recovery usually consists of the minimization of some cost function, and there are two dominant approaches to choosing that function. The first approach is based on epipolar geometry, leading to a decoupling of the shape and motion recovery: the cost function reflects the amount of deviation from the epipolar constraint caused by noise and other measurement errors. In this method, the motion information can be obtained as the solution to a linear problem; however, it can compute only the traveling camera positions, and it is not sufficient for modeling the object, as it only reconstructs sparsely distributed 3D points. The second approach directly minimizes the difference between the observed and predicted feature coordinates using the Levenberg-Marquardt algorithm [18]. This method is marked by a high-dimensional search space (typically n+6 for n image correspondences) and, unlike the epipolar constraint-based approach, it does not explicitly account for the fact that a one-parameter family of solutions exists; such methods are initialized with the output of the linear algorithms. To improve the solution, some methods minimize the cost function using nonlinear iterative methods like the Levenberg-Marquardt algorithm [8].

In general, motion estimation methods suffer from instability due to quantization noise, measurement errors, and outliers in the input data sets. Even small pixel-level perturbations can make the image-plane information ineffective and cause wrong motion recovery. The presence of statistical bias in estimating the translation [15-17], as well as the sensitivity to noise and pixel quantization, is the conventional drawback. Outliers occur in the feature matching process due mostly to occlusions, and they introduce additional error into the linear solution. Different robust estimation techniques have been proposed to handle outliers: RANdom SAmple Consensus (RANSAC) is known as a successful technique to deal with outliers [10], M-estimators reduce the effects of outliers by applying weighted least squares [11], and many other similar methods are also available [12-13].

Another standard method of shape recovery from motion has been developed in the last decade based on optical flow [1, 2, 14]. Actually, structure and motion from monocular image sequences is inherently a knotty problem and has its own restrictions, as the computations are very sensitive to noise and to the quantization of image points. Typically, the motion and structure computations are highly dependent on each other, and any ambiguity in the structure computation propagates to the motion computation, and vice versa.
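The RANSAC idea cited above [10] can be sketched compactly: hypothesize a motion from a minimal random sample, count how many correspondences agree, and keep the hypothesis with the largest consensus set. For brevity the motion model here is a pure 3D translation rather than the full rigid motion, and the points, threshold, and iteration count are invented for illustration.

```python
import random

def ransac_translation(src, dst, iters=200, tol=0.1):
    """Minimal RANSAC sketch: one correspondence fully determines a pure
    translation, so each iteration samples one pair, hypothesizes t, and
    scores it by the number of correspondences it explains within tol."""
    best_inliers = []
    for _ in range(iters):
        i = random.randrange(len(src))
        t = tuple(d - s for s, d in zip(src[i], dst[i]))  # hypothesis
        inliers = [j for j in range(len(src))
                   if all(abs(dst[j][k] - src[j][k] - t[k]) < tol for k in range(3))]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the consensus set by averaging per-axis displacements.
    n = len(best_inliers)
    return tuple(sum(dst[j][k] - src[j][k] for j in best_inliers) / n
                 for k in range(3))

random.seed(0)
src = [(float(i), float(i % 3), 0.0) for i in range(20)]
dst = [(x + 1.0, y - 2.0, z + 0.5) for x, y, z in src]
dst[3] = (99.0, 99.0, 99.0)   # gross outlier, e.g. an occlusion mismatch
dst[11] = (-50.0, 0.0, 0.0)   # gross outlier
print(ransac_translation(src, dst))  # → (1.0, -2.0, 0.5)
```

A least-squares fit over all 20 pairs would be dragged far off by the two bad matches; the consensus step simply never lets them vote, which is why RANSAC tolerates outlier rates that break weighted least squares.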

On the other hand, calibrated stereo vision directly computes the structure of feature points, and some works in the literature therefore fuse stereo and motion for rigid scenes to get better results; the combined form of motion and stereo reasonably enhances the computations [25]. Young et al. [19] computed the rigid motion parameters assuming that depth information had already been computed by stereo vision. Weng et al. [20] derived a closed-form, approximate, matrix weighted least squares solution for motion parameters from three-dimensional point correspondences in two stereo image pairs. Li et al. [21] proposed a two-step fusing procedure: first, translational motion parameters were found from optical flows in binocular images; then, the stereo correspondences were estimated with the knowledge of the translational motion parameters. Dornaika et al. [22] recovered the stereo correspondence using motion of a stereo rig in two consecutive steps: the first step uses metric data associated with the stereo rig, while the second step employs feature correspondences only. Ho et al. [23] combined stereo and motion analyses for three-dimensional reconstruction when a mobile platform was captured with two fixed cameras. Park et al. [24] estimated the object motion directly through the calibrated stereo image sequences. Although integrating stereo and motion can reasonably improve the structure and motion procedure, the presence of statistical bias in estimating the motion parameters still has a destructive effect on the estimation.

In this chapter, a constructive method is presented to moderate the bias problem, using curve-based stereo matching and robust motion estimation by tracking the projection of space curves in perpendicular double stereo images. A robust edge point correspondence with match propagation along the curves is presented to extract unique space curves with an extremely reduced number of outliers. The proposed curve tracking method works with pixel-accuracy information and does not require the complicated process of position computation at sub-pixel accuracy. Moreover, the curve matching scheme is very robust against color maladjustment of the cameras and against the shading problem during object motion. A space curve tracking algorithm is presented that minimizes the geometric distance of the moving curves in the camera planes. An efficient stereo setup, the perpendicular double stereo, is presented to get as much accuracy as possible in the motion estimation process: any large error in the depth direction of stereo rig 1 is restricted by minimizing the error in the parallel direction of stereo rig 2, and vice versa. In addition, the perpendicular double stereo setup appears to be more robust against perturbation of the edge points. We prove mathematically and demonstrate experimentally that the presented method can increase motion estimation accuracy and reduce the problem of statistical bias.
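The geometric intuition behind the perpendicular double stereo setup can be shown numerically: a displacement along one rig's viewing direction, which that rig's images barely register, produces a large, well-measured image shift in the perpendicular rig. The pinhole model below and all camera numbers are invented for illustration; they are not the chapter's calibrated configuration.

```python
f = 1000.0  # focal length in pixels (illustrative)

def project(point, cam_center, axes):
    """Pinhole projection; `axes` = (right, up, forward) unit vectors."""
    rel = [p - c for p, c in zip(point, cam_center)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    depth = dot(rel, axes[2])
    return (f * dot(rel, axes[0]) / depth, f * dot(rel, axes[1]) / depth)

# Rig 1 sits at the origin looking along +z; rig 2 views from the side along -x.
rig1 = ((0.0, 0.0, 0.0), ((1, 0, 0), (0, 1, 0), (0, 0, 1)))
rig2 = ((20.0, 0.0, 10.0), ((0, 0, 1), (0, 1, 0), (-1, 0, 0)))

P  = (0.0, 0.0, 10.0)   # true point on rig 1's optical axis
Pe = (0.0, 0.0, 10.5)   # same point with a 0.5-unit depth error for rig 1

print(project(P, *rig1), project(Pe, *rig1))  # identical: rig 1 cannot see the error
print(project(P, *rig2), project(Pe, *rig2))  # 25 px apart: rig 2 sees it clearly
```

Minimizing reprojection error jointly over both rigs therefore pins down the component of motion that either rig alone would estimate with strong bias, which is the property the chapter proves for the perpendicular arrangement.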

ordering. object's visual hull is recovered as fine as possible by intersecting the large number of cones established by silhouettes of multiple views.Object tracking and rigid motion estimation by curve distance minimization in projection of space curves to the image planes of perpendicular double stereo rigs Step3.g.Texture mapping to any point of visual hull through the visible virtual cameras 2. In section 5. . This goal is achieved by assuming the moving object as a fixed object and the fixed camera as a moving camera in opposite direction.Reconstruction of the object's visual hull by intersecting the cones originated from silhouettes of object in virtual cameras across time (Space-Time Cameras) Step5.Making virtual calibrated cameras as many as required from fixed real camera information and rigid motion information Step4. Components in the left image should be matched to those of the right one to compute disparity and thus depth. Then there is need to deal e. So. experimental results with both synthetic and real objects are presented. the moving object is supposed to be fixed and the camera is moved in the opposite direction. a set of calibrated virtual cameras are constructed from motion information. Step2. A hierarchical method is presented to extract the visual hull of the object as bounding edges. To utilize the benefits of multiview reconstruction. depth continuity and local orientation differences have been proposed to improve the matching process. We conclude in section 7 with a brief discussion.Extraction of unique space curves on the surface of rigid object from calibrated stereo image information based on curvature and torsion consistency of established space curves during rigid motion. There are many situations. In section 6. Reconstruction of Space Curves on the Surface of Object The problem of decision on the correspondence is the main obscurity of stereo vision. 
the new virtual cameras are constructed as the real calibrated cameras around the object. epipolar geometry.New Trends in Surface Reconstruction Using Space-Time Cameras 5 are discussed and proven mathematically. where it is not possible to find point-like features as corners or wedges. The total procedure of three dimensional model reconstruction from raw images to fine 3d model is done in the following steps: Step1. Several constraints such as intensity correlation. but their success has been limited. In section 4.

They apply epipolar constraint between two sets of curves and compute corresponding points on the curves.e. Assuming that the corresponding curves in stereo images are rather similar. Z ) be a point whose position in space is given by the equations X = f ( s).6 Hossein Ebrahimnezhad and Hassan Ghassemian with silhouette of the object instead of sparse local features. From the initial epipolar constraints obtained from corner point matching. Both these approaches apply many heuristics. space curves provide such geometrical cues. it is possible to predict the curvature in the third one and use that as a matching criterion. candidate curves are selected according to the epipolar geometry. In general. Besides. more accurate and robust than those from a sparse set of local features. and curve distance measures. By minimizing the potential associated to prior knowledge of space curves. g. Instead of first seeking a correspondence of image structure and then computing 3D structure. Han and Park developed a curve-matching algorithm based on geometric constraints [28].Y . average curvature. P traces a curve in space. lines. Moreover. The differential geometry of curves traditionally begins with a vector R ( s) = X ( s )i +Y ( s ) j+ Z ( s )k that describes the curve parametrically as a function of s that is at least thrice differentiable. Robert and Faugeras presented an edge-based trinocular stereo algorithm using geometric matching principles [26]. this assumption is not true. we focus on the space curves to develop an inverse problem approach. . as it will be discussed in section 2. 2. Schmid and Zisserman offered an extension to Robert method by fusing the photometric information and edge information to reduce the outlier matches [27]. there exist objects. they apply curve distance measure as a constraint of curve matching. they look for the candidate space curves. i. curve-end constraints. 
Kahl and August developed an inverse method to extract space curves from multiple view images [29]. they seek the space curve that is consistent with the observed image curves. They showed that given the image curvature of corresponding curve points in two views. Differential Geometry of Space Curves Let P = ( X . pose estimations of global object descriptions are.2. and h are differentiable functions of s. statistically. As s varies continuously. and potential associated to the image formation model. Y = g ( s) and Z = h( s) where f.1. Therefore. Whereas point features reveal little about surface topology. or circles. The main deficiency of this method is that the relative motion of the cameras is assumed to be known. which cannot be represented adequately by primitive object features as points.

Then the tangent vector T(s) is well-defined at every point R(s), and we may choose two additional orthogonal vectors in the plane perpendicular to T(s) to form a complete local orientation frame (see figure 1). We can choose this local coordinate system to be the Frenet frame consisting of the tangent T(s), the principal normal N(s), and the binormal B(s), which are given in terms of the curve itself:

T(s) = R′(s) / |R′(s)|,   B(s) = (R′(s) × R″(s)) / |R′(s) × R″(s)|,   N(s) = B(s) × T(s)   (1)

Differentiating the Frenet frame yields the classic Frenet equations:

⎡T′(s)⎤           ⎡  0     κ(s)    0  ⎤ ⎡T(s)⎤
⎢N′(s)⎥ = |R′(s)| ⎢−κ(s)    0    τ(s) ⎥ ⎢N(s)⎥   (2)
⎣B′(s)⎦           ⎣  0    −τ(s)    0  ⎦ ⎣B(s)⎦

Here, κ(s) and τ(s) are the curvature and torsion of the curve, respectively, which may be written in terms of the curve itself:

κ(s) = |dT/ds| = |R′(s) × R″(s)| / |R′(s)|³   (3)

τ(s) = |dB/ds| = ( (R′(s) × R″(s)) ⋅ R‴(s) ) / |R′(s) × R″(s)|²   (4)

Considering the vector R(s) = X(s)i + Y(s)j + Z(s)k and writing derivatives with respect to s as primes, Eq. 3 and Eq. 4 can be modified as:

κ = √[ (Y′Z″ − Z′Y″)² + (Z′X″ − X′Z″)² + (X′Y″ − Y′X″)² ] / (X′² + Y′² + Z′²)^(3/2)   (5)

τ = [ X‴(Y′Z″ − Z′Y″) + Y‴(Z′X″ − X′Z″) + Z‴(X′Y″ − Y′X″) ] / [ (Y′Z″ − Z′Y″)² + (Z′X″ − X′Z″)² + (X′Y″ − Y′X″)² ]   (6)
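As a concrete illustration of Eqs. 5 and 6, the curvature and torsion of a densely sampled space curve can be approximated by replacing the derivatives of R(s) with finite differences. A minimal sketch with NumPy; the helix and the sample count are illustrative assumptions, not from the chapter:

```python
import numpy as np

def curvature_torsion(points):
    """Approximate kappa and tau of a sampled 3D curve (Eqs. 5-6)
    using finite-difference estimates of R', R'' and R'''."""
    d1 = np.gradient(points, axis=0)                  # R'
    d2 = np.gradient(d1, axis=0)                      # R''
    d3 = np.gradient(d2, axis=0)                      # R'''
    cross = np.cross(d1, d2)                          # R' x R''
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(d1, axis=1)
    kappa = cross_norm / speed**3                     # |R' x R''| / |R'|^3
    tau = np.einsum('ij,ij->i', cross, d3) / cross_norm**2  # (R' x R'').R''' / |R' x R''|^2
    return kappa, tau

# A circular helix has constant curvature 1/(1+b^2) and torsion b/(1+b^2),
# so both invariants should come out flat along the curve (here b = 0.5).
s = np.linspace(0.0, 4.0 * np.pi, 400)
helix = np.column_stack([np.cos(s), np.sin(s), 0.5 * s])
kappa, tau = curvature_torsion(helix)
```

Because both formulas are invariant to the choice of regular parameterization, the result does not depend on the sampling step; for this helix the interior samples approach κ = 0.8 and τ = 0.4, which is exactly the property the fundamental theorem below relies on.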

Figure 1. Space curve geometry in the Frenet frame.

2.2. Inverse Problem Formulation

Given a sequence of edge curves of a moving object in the calibrated stereo rig, the problem is to extract the space curves on the surface of the object. It is obvious that a plane curve is established by projection of a space curve to the camera plane. The inverse problem of determining the space curve from plane curves is, in general, an ill-posed problem: for any curve pair in two camera planes we can find one space curve by intersecting the rays projected from the plane curves into space through the camera centers. As shown in figure 2, projections of a space curve in two different camera planes do not necessarily have the same shape. In the small base line stereo setup, however, the assumption of shape similarity for correspondent curves will be reasonable because of the small variation of viewpoint. To find a way out of this problem, we consider the fundamental theorem of space curves:

Theorem 1. If two single-valued continuous functions κ(s) and τ(s) are given for s > 0, then there exists exactly one space curve, determined except for orientation and position in space, i.e., up to a Euclidean motion, where s is the arc length, κ is the curvature, and τ is the torsion [30].

The fundamental theorem illustrates that the parameters κ(s) and τ(s) are intrinsic characteristics of the space curve that do not change when the curve moves

in the space. Based on the fundamental theorem, we present a new method to determine the true pair match between different curves. As illustrated in figure 2, any pair of curves in the left and right camera images can make one space curve: we can establish one space curve by projecting the curves into 3D space through the related camera centers and intersecting the projected rays by triangulation. However, between the different pairs of curves in the left and right images of the calibrated stereo rig, for any curve in the left camera there is only one true match in the right camera. In other words, only one of the established space curves fits the surface of the object, and the others are outliers (or invalid space curves). The space curves that fit the surface of the rigid object must be consistent in curvature and torsion during the object movement. This cue leads us to propose a new method of space curve matching based on curvature and torsion similarity through the curve length.

Figure 2. A space curve is established by intersection of the rays projected from the image curves into the space through the camera centers.

Proposition 1. Between different pairs of curves in the left and right images of a calibrated stereo rig, only that pair is a true match whose associated space curve is consistent in curvature and torsion during motion of the curve (or of the stereo rig).

Proof: Each pair of curves in the stereo cameras defines one space curve. All points of a rigidly moving space curve should have the same motion parameters; i.e., we can find a fixed motion matrix, R and T, which transforms all points of the space curve to their new positions, and, by the fundamental theorem, the internal characteristics of the space curve, i.e., κ(s) and τ(s), are then consistent during the movement. Hence, if we show that it is impossible to find such a fixed motion matrix for all points of an invalid space curve during motion, the proof will be completed. To simplify the proof, we deal with the rectified stereo configuration (figure 3); any other configuration can easily be converted to this configuration.

Applying the constraint of the horizontal scan line, the following equations can be easily derived:

P^(i) = [X^(i), Y^(i), Z^(i)]^t = ( T_x / (x_L^(i) − x_R^(i)) ) ⋅ [x_L^(i), y_L^(i), 1]^t   (7)

P^(ij) = [X^(ij), Y^(ij), Z^(ij)]^t = ( T_x / (x_L^(i) − x_R^(j)) ) ⋅ [x_L^(i), y_L^(i), 1]^t   (8)

x_L^(i) = X^(i)/Z^(i) = X^(ij)/Z^(ij),   x_L^(j) = X^(j)/Z^(j) = X^(ji)/Z^(ji),
x_R^(i) = (X^(i) − T_x)/Z^(i) = (X^(ji) − T_x)/Z^(ji),   x_R^(j) = (X^(j) − T_x)/Z^(j) = (X^(ij) − T_x)/Z^(ij),
y_L^(i) = y_R^(i) = y_L^(j) = y_R^(j)  →  Y^(i)/Z^(i) = Y^(ij)/Z^(ij) = Y^(j)/Z^(j) = Y^(ji)/Z^(ji)   (9)

Figure 3. Reconstructed 3D points from different pairs of points in the left and right camera images in the rectified stereo configuration. The horizontal scan line is the epipolar line.
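Equations 7 and 8 are plain disparity triangulation, so the valid point P^(i) and an invalid point P^(ij) can be formed with a few lines of code. A sketch assuming unit focal length and the rectified geometry of figure 3; the sample point and the mismatch offset are arbitrary:

```python
import numpy as np

def triangulate(x_l, y_l, x_r, Tx=1.0):
    """Eqs. 7-8: 3D point from a (left, right) column pair on one scan line
    of a rectified rig with baseline Tx and unit focal length."""
    return (Tx / (x_l - x_r)) * np.array([x_l, y_l, 1.0])

Tx = 1.0
P_true = np.array([0.2, -0.1, 4.0])
x_l, y_l = P_true[0] / P_true[2], P_true[1] / P_true[2]   # left image point
x_r = (P_true[0] - Tx) / P_true[2]                        # right image point
P_i = triangulate(x_l, y_l, x_r)           # true pair  (x_L^(i), x_R^(i)): recovers P^(i)
P_ij = triangulate(x_l, y_l, x_r + 0.05)   # false pair (x_L^(i), x_R^(j)): wrong depth
```

P_i reproduces the original point exactly, while the false pairing lands on the same left ray (Eq. 9) but at a different depth; this is the outlier P^(ij) that the consistency test of Proposition 1 must reject.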

Suppose that P^(i) is the valid three-dimensional point, which has been reconstructed from the true match pair (x_L^(i), x_R^(i)), and P^(ij) is the invalid three-dimensional point, which has been reconstructed from the false match pair (x_L^(i), x_R^(j)). For the valid space curve we can find fixed matrices R and T which transform any point P^(i) of the curve to its new position P_m^(i) after movement:

∀ s ∈ SpaceCurve^(i)  →  P_m^(i)(s) = R ⋅ P^(i)(s) + T   (10)

In the next step, we should inspect whether or not there is another pair of fixed matrices R′ and T′ which transforms any point P^(ij) of the invalid space curve to its new position P_m^(ij) after movement. The following equations can easily be extracted:

P_m^(ij) = [X_m^(ij), Y_m^(ij), Z_m^(ij)]^t = ( T_x / (x_Lm^(i) − x_Rm^(j)) ) ⋅ [x_Lm^(i), y_Lm^(i), 1]^t
         = [ T_x / ( 1 − (Z_m^(i)/X_m^(i)) ⋅ (X_m^(j) − T_x)/Z_m^(j) ),
             T_x / ( X_m^(i)/Y_m^(i) − (Z_m^(i)/Y_m^(i)) ⋅ (X_m^(j) − T_x)/Z_m^(j) ),
             T_x / ( X_m^(i)/Z_m^(i) − (X_m^(j) − T_x)/Z_m^(j) ) ]^t   (11)

Writing the rows of R as R_1^t, R_2^t, R_3^t and T = [T_1, T_2, T_3]^t:

X_m^(i) = R_1^t ⋅ P^(i) + T_1,   Y_m^(i) = R_2^t ⋅ P^(i) + T_2,   Z_m^(i) = R_3^t ⋅ P^(i) + T_3   (12)

and similarly for the components of P_m^(j). Combining Eq. 11 and Eq. 12 results in:

P_m^(ij) = [ T_x / ( 1 − ((R_3^t P^(i) + T_3)/(R_1^t P^(i) + T_1)) ⋅ (R_1^t P^(j) + T_1 − T_x)/(R_3^t P^(j) + T_3) ),
             T_x / ( (R_1^t P^(i) + T_1)/(R_2^t P^(i) + T_2) − ((R_3^t P^(i) + T_3)/(R_2^t P^(i) + T_2)) ⋅ (R_1^t P^(j) + T_1 − T_x)/(R_3^t P^(j) + T_3) ),
             T_x / ( (R_1^t P^(i) + T_1)/(R_3^t P^(i) + T_3) − (R_1^t P^(j) + T_1 − T_x)/(R_3^t P^(j) + T_3) ) ]^t   (13)

Here P^(i) and P^(j) can be written as functions of P^(ij) using Eq. 9:

P^(i) = [X^(i), Y^(i), Z^(i)]^t = Z^(i) ⋅ [X^(ij)/Z^(ij), Y^(ij)/Z^(ij), 1]^t = (Z^(i)/Z^(ij)) ⋅ P^(ij)   (14)

P^(j) = [X^(j), Y^(j), Z^(j)]^t = (Z^(j)/Z^(ij)) ⋅ P^(ij) + T_x (1 − Z^(j)/Z^(ij)) ⋅ [1, 0, 0]^t   (15)

Substituting Eq. 14 and Eq. 15 in Eq. 13, i.e., replacing each R_k^t P^(i) + T_k by (Z^(i)/Z^(ij)) R_k^t P^(ij) + T_k and each R_k^t P^(j) + T_k by (Z^(j)/Z^(ij)) R_k^t P^(ij) + R_k1 T_x (1 − Z^(j)/Z^(ij)) + T_k for k = 1, 2, 3, we obtain:

P_m^(ij) = f( P^(ij), R, T, Z^(i)/Z^(ij), Z^(j)/Z^(ij) )   (16)

Eq. 16 clarifies that P_m^(ij) is a nonlinear function of P^(ij), so we cannot find fixed rotation and translation matrices that transform all points of P^(ij) to P_m^(ij): the elements of P_m^(ij) depend on the ratio Z^(i)/Z^(ij), which may vary for any point of the curve.

Therefore, we cannot find a fixed motion matrix for all points of the invalid space curve, and the proof is completed. In the special situation where Z^(i) = Z^(ij), Eq. 16 can be modified as:

P_m^(ij) = [ T_x / ( 1 − (R_1^t P^(ij) + T_1 − T_x)/(R_1^t P^(ij) + T_1) ),
             T_x / ( (R_1^t P^(ij) + T_1)/(R_2^t P^(ij) + T_2) − (R_1^t P^(ij) + T_1 − T_x)/(R_2^t P^(ij) + T_2) ),
             T_x / ( (R_1^t P^(ij) + T_1)/(R_3^t P^(ij) + T_3) − (R_1^t P^(ij) + T_1 − T_x)/(R_3^t P^(ij) + T_3) ) ]^t
         = [ R_1^t P^(ij) + T_1, R_2^t P^(ij) + T_2, R_3^t P^(ij) + T_3 ]^t = R P^(ij) + T   (17)

Referring to figure 3, this condition can occur if and only if curve i and curve j in both stereo images are identical (i = j).

In the proof, we assumed that the space curve is not occluded during movement. This assumption is achievable for curves of proper length with a small amount of motion. Note that the presented curve matching method, which applies shape consistency of the space curve during motion, does not consider any shape similarity between the plane curves in the stereo images. Consequently, it can be used effectively to extract space curves from a wide base line stereo setup, where the projections of the space curve in the stereo cameras do not have similar shapes; with a wide base line, we can get more precise depth values, as illustrated in figure 5. On the other hand, the wide base line stereo setup intensifies occlusion, which can make the curve matching inefficient. So, there is a tradeoff between occlusion and depth accuracy in choosing the proper length of the base line.

Figure 4 illustrates the proof graphically. Two fixed space curves are captured by a moving stereo rig in the 3D Studio Max environment. In parts (a) and (b), the stereo images are shown before and after motion of the rig. Parts (c) and (d) display the space curves established by different pairs of curves before and after the motion. Part (e) illustrates that the valid space curve established by the true match is consistent in shape during the movement, but the invalid space curve established by the false match is not.

Figure 4. Stereo curve matching by curvature and torsion consistency of the established space curve during motion: (a) stereo images before motion; (b) stereo images after motion; (c) established space curves from different curve pairs before motion; (d) established space curves from different curve pairs after motion; and (e) determining the true matches from the space curve that is consistent in curvature and torsion during motion.
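The proof can also be checked numerically: a rigid motion preserves pairwise distances, so the curve reconstructed from the true pairing keeps its shape across the motion, while the curve reconstructed from a false pairing (Eq. 8) does not. A small sketch under the rectified geometry of figure 3; the unit focal length, the particular curve, the motion, and the mismatch shift are arbitrary choices:

```python
import numpy as np

Tx = 1.0  # baseline of the rectified rig, unit focal length

def project(P):
    X, Y, Z = P.T
    return X / Z, Y / Z, (X - Tx) / Z            # x_L, y_L, x_R

def reconstruct(xl, yl, xr):
    return (Tx / (xl - xr))[:, None] * np.column_stack([xl, yl, np.ones_like(xl)])

def pairwise(P):                                 # rigid motions preserve these distances
    return np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

# a space curve with varying depth, and a rigid motion (rotation about Y + translation)
s = np.linspace(0.0, 1.0, 50)
curve = np.column_stack([0.3 * np.sin(3 * s), 0.3 * s, 3.0 + 0.5 * np.cos(2 * s)])
a = 0.2
R = np.array([[np.cos(a), 0.0, np.sin(a)], [0.0, 1.0, 0.0], [-np.sin(a), 0.0, np.cos(a)]])
T = np.array([0.05, -0.02, 0.1])
moved = curve @ R.T + T

(xl0, yl0, xr0), (xl1, yl1, xr1) = project(curve), project(moved)
valid0, valid1 = reconstruct(xl0, yl0, xr0), reconstruct(xl1, yl1, xr1)
invalid0 = reconstruct(xl0, yl0, np.roll(xr0, 5))    # false pairing: P^(ij) of Eq. 8
invalid1 = reconstruct(xl1, yl1, np.roll(xr1, 5))

valid_shape_change = np.abs(pairwise(valid0) - pairwise(valid1)).max()      # ~ 0
invalid_shape_change = np.abs(pairwise(invalid0) - pairwise(invalid1)).max()  # clearly > 0
```

The valid reconstruction moves rigidly (its distance matrix, and hence its curvature and torsion, is unchanged), while the invalid one deforms, which is what Proposition 1 exploits.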

Figure 5. Stereo triangulation and uncertainty in depth. The true point can lie anywhere inside the shaded uncertainty region. Uncertainty in depth ΔΗ is reduced by increasing the length of the base line b (in the formulation, it is assumed that the maximum matching error is one pixel in every camera image). The right configuration is employed to make efficient use of the camera plane.

2.3. Reconstruction of Unique Space Curves

Here, we describe the different steps involved in the curve matching process to extract the 3D positions of unique feature points by forcing the constraint of space curves as global object descriptions. The unique points are defined as the points in three-dimensional space that are matchless after forcing all the constraints, i.e., edge positioning, the epipolar line, the capability of forming a curve of proper length by joining the neighboring points, curvature and torsion consistency of the relevant space curve in two or more consecutive frames, and the uniqueness of such a space curve. The unique space curves are then composed from an adequate number of continuous adjacent unique points. As illustrated in figure 6, to check the uniqueness of any edge point in the left camera, all potential matches are labeled in the right camera by intersecting all existing curves with the epipolar line. Then, the curves associated with the test point and with each labeled point are projected to space through their

camera centers. Each intersection generates a potential space curve with a different shape, which should be checked for curvature and torsion consistency during the object motion. This check is done by intersecting the moved version of the curves in the next frame, as illustrated in the lower part of figure 6. Only one of the candidate points is the true match; the other points are outliers. Details of the presented algorithm are given in the following.

Algorithm:

Step 1. At any instance of time, the moving object is captured by two calibrated fixed cameras. It is assumed that the frame rate is adjusted so that consecutive images are captured with a small amount of object motion. The edge curves of the image are extracted and thinned by the Canny method, and the small edge regions are removed to reduce the effect of noise and get more descriptive curves. Edges are then linked into chains, jumping up to a one-pixel gap. The extracted image curves are never perfect in practice: there will be missing segments, erroneous additional segments, branching, etc. Therefore, to improve robustness to these imperfections, we begin with one such segment in the left image and seek confirming evidence in the right one.

Step 2. To extract the unique points, the left curve image resulting from step 1 is scanned and all edge points are checked one by one for uniqueness. Each examined edge point c_L(s_0), which is considered as an initial point, is grown n points from two sides to form the curve. The curves with length smaller than 2n and the branched curves are discarded.

Step 3. For each examined edge point c_L(s_0), the corresponding epipolar line in the right curve image is computed. Intersections of the epipolar line with the edge curves are labeled as candidate match points. One or more match candidates c_R^(i)(s_0), i = 1, 2, ..., may be found in this step (see figure 6).

Step 4. To distinguish the true match from the other points, the next sequence of stereo images is also considered. The neighborhood of c_L(s_0) is inspected in the next sequence of the left camera to find the shifted version of the curve c_L(s) as c_Lm(s). The key property of the curve with small movement is its proximity and similarity to the main curve. Therefore, we define a correlation function as a combination of curve distance and curvature difference along the curves:

cor( c_L(s_0), c_Lm(s_j) ) = 1 / ( ‖c_L(s_0) − c_Lm(s_j)‖ + α ⋅ Σ_{k=−n..n} |κ_{c_L}(s_k) − κ_{c_Lm}(s_{k+j})| )   (18)

where:

‖c_L(s_0) − c_Lm(s_j)‖ = Σ_{k=−n..n} [ (x_{c_L}(s_k) − x_{c_Lm}(s_{k+j}))² + (y_{c_L}(s_k) − y_{c_Lm}(s_{k+j}))² ]   (19)

κ = (ẋÿ − ẏẍ) / (ẋ² + ẏ²)^(3/2)   (20)

The shift of c_L(s) is selected as the argument c_Lm(s) that maximizes the correlation function cor(c_L(s), c_Lm(s)), i.e., cor(c_L(s_0), c_Lm(s)) = max_j [ cor(c_L(s_0), c_Lm(s_j)) ]. The center of c_Lm(s) is also determined, as the argument s_j.

Step 5. The epipolar line of c_Lm(s_j) is computed in the next sequence of the right image. Intersections of the curves with this epipolar line are labeled as c_Rm^(i)(s′_j), i = 1, 2, ..., according to their proximity to c_R^(i)(s_0).

Step 6. The space curves SC^(i)(s_0), i = 1, 2, ..., corresponding to { c_L(s_0), c_R^(i)(s_0) }, and the space curves SC_m^(i)(s_j), i = 1, 2, ..., corresponding to { c_Lm(s_j), c_Rm^(i)(s′_j) }, are established by projecting the two-dimensional curves into the space and intersecting the rays. For each space curve, the curvature and torsion are computed from Eq. 5 and Eq. 6, and the correlation between the two space curves before and after motion is computed from Eq. 21:

cor( SC^(i), SC_m^(i) ) = 1 / ( Σ_{k=−n..n} |κ_{SC^(i)}(s_k) − κ_{SC_m^(i)}(s_{k+j})| + Σ_{k=−n..n} |τ_{SC^(i)}(s_k) − τ_{SC_m^(i)}(s_{k+j})| )   (21)

The space curve i = q that maximizes the correlation function is selected as the consistent space curve, and the pair c_L(s_0) and c_R^(q)(s_0) are selected as unique points with a determined depth value. If there were more than one solution, because of close values of the correlation function, the third sequence is also inspected to find a more confident answer. If there was only one solution, the point c_L(s_0) would be selected as a point with known depth value.
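The correlation measures of Eqs. 18 to 21 are straightforward to sketch; in the code below the image curvature comes from Eq. 20 via finite differences, and the weight α and the test curves are hypothetical choices, not values from the chapter:

```python
import numpy as np

def plane_curvature(x, y):
    """Eq. 20: curvature of a 2D image curve from finite differences."""
    xd, yd = np.gradient(x), np.gradient(y)
    xdd, ydd = np.gradient(xd), np.gradient(yd)
    return (xd * ydd - yd * xdd) / (xd**2 + yd**2) ** 1.5

def image_correlation(cL, cLm, alpha=1.0):
    """Eqs. 18-19: inverse of (squared point distance + alpha * curvature difference)."""
    dist = np.sum((cL - cLm) ** 2)
    dkappa = np.abs(plane_curvature(cL[:, 0], cL[:, 1])
                    - plane_curvature(cLm[:, 0], cLm[:, 1])).sum()
    return 1.0 / (dist + alpha * dkappa)

def space_correlation(kappa, tau, kappa_m, tau_m):
    """Eq. 21: inverse of summed curvature and torsion differences."""
    return 1.0 / (np.abs(kappa - kappa_m).sum() + np.abs(tau - tau_m).sum())

# The shifted copy of the left curve should win: same shape, only displaced.
t = np.linspace(0.0, 1.0, 21)
cL = np.column_stack([t, t**2])                      # 2n+1 samples around c_L(s0)
shifted = cL + np.array([0.5, 0.0])                  # same shape, displaced
different = np.column_stack([t, np.sin(3.0 * t)])    # different shape
best = max([shifted, different], key=lambda c: image_correlation(cL, c))
```

Because the curvature term of the displaced copy vanishes exactly, the correlation of Eq. 18 prefers it over the differently shaped candidate even when both lie at comparable image distances.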

point c_L(s_0) would be selected as a point with known depth value. Otherwise, it would be labeled as a non-unique point and rejected.

Step 7. Go back to step 2 and repeat the procedure for all edge points, to find an adequate number of unique points.

At the end, the unique space curves are composed from continuous adjacent unique points.

Figure 6. Curve stereo matching to find the unique points of the object.

Shape descriptiveness, branching, and occlusion are considered as three factors in choosing the proper length of the curves in the matching process. The short curves are less descriptive in shape and result in low-confidence matching, as the number of detected similar curves will be increased. On the contrary, the long curves are more descriptive in shape and result in high-confidence matching, as the number of detected similar curves will be decreased. Occlusion and branching are the other factors that restrict lengthening of the curves: the number of appropriate curves reduces by increasing the length of the curves, and hence, for too long a curve, the uniqueness-checking process will fail to find the best match. Our experiments show that a curve length between 20 and 40 points (for a 480×640 image size) provides good results. Of course, the proposed length is a representative value. Depending on the texture of the object, the number of similar

curves on the surface of the object, and the range of depth variation across the curves, the representative curve length may vary.

3. Rigid Motion Estimation by Tracking the Space Curves

The movement of a rigid object in space can be expressed by six rotation and translation parameters. Suppose that we have extracted a set of points on the surface of an object, and the goal is to estimate the 3D motion of the object across time. To estimate the motion matrices R and T, an error function which describes the difference of the points before and after motion should be minimized. To get rid of photometric information, we define the error function as the distance of the unique points from the nearby curves after movement: the error component for each unique point in each projected camera image is defined as the minimum distance of that point from the nearby curves in that camera, and the total error is calculated by summing the error components over all unique points and all cameras. To explain the problem mathematically, suppose that W_i is the i-th unique point. Then:

e_i^k = min_m { distance( P_k(R ⋅ W_i + T), contour_k^(m) ) },   e = Σ_{i=1..N_u} Σ_{k=1..K} e_i^k,   (R, T) = arg min { e }   (22)

where K is the total number of cameras (K = 2 for the single stereo rig), N_u is the total number of unique points, P_k(R ⋅ W_i + T) is the projection of W_i in camera plane k after movement, and contour_k^(m) is curve number m in camera plane k. R and T are parameterized as Θ = [φ_x, φ_y, φ_z, t_x, t_y, t_z], where φ_x, φ_y, φ_z are the Euler angles of rotation and t_x, t_y, t_z are the x, y, z components of the translation vector. To find the minimum distance of each point from the nearby curve in the camera image, we use a circle-based search area with increasing radius (figure 7); the minimum distance is determined as the radius of the first circle osculating an adjacent curve. The total error function defined in Eq. 22 can be minimized by an iterative method similar to the Levenberg-Marquardt algorithm [8]:

Figure 7. To find the minimum distance of a point from the adjacent curves in the camera image, a circle-based search window with increasing radius is considered. The minimum distance is determined as the radius of the first circle touching an adjacent curve.

1. With an initial estimate Θ̂, calculate the Hessian matrix H and the difference vector d. For each error component e_i^k, the 6×6 matrix H_i^k is built from the products of its partial derivatives with respect to the parameters Θ = [φ_x, φ_y, φ_z, t_x, t_y, t_z]:

H_i^k = (∂e_i^k/∂Θ) ⋅ (∂e_i^k/∂Θ)^t,   i.e.,   (H_i^k)_{ab} = (∂e_i^k/∂Θ_a)(∂e_i^k/∂Θ_b),   Θ_a, Θ_b ∈ {φ_x, φ_y, φ_z, t_x, t_y, t_z}

H = Σ_{i=1..N_u} Σ_{k=1..K} H_i^k   (23)

d_i^k = [ e_i^k ∂e_i^k/∂φ_x, e_i^k ∂e_i^k/∂φ_y, e_i^k ∂e_i^k/∂φ_z, e_i^k ∂e_i^k/∂t_x, e_i^k ∂e_i^k/∂t_y, e_i^k ∂e_i^k/∂t_z ]^t,
d = −2 ⋅ Σ_{i=1..N_u} Σ_{k=1..K} d_i^k   (24)

2. Update the parameter Θ̂ by an amount ΔΘ:

Θ̂^(n+1) = Θ̂^(n) + ΔΘ = Θ̂^(n) + (1/λ) H^(−1) ⋅ d   (25)

where λ is a time-varying stabilization parameter.

3. Go back to step 1 until the estimate of Θ̂ converges.

Unless the object has periodic edge curves, the error function in Eq. 22 usually has one minimum, and convergence of the algorithm will be guaranteed. However, outlier points have a destructive effect on the convergence of the algorithm: the projection of an outlier point in the camera planes will not be close to the tracking curves, such points will not join the tracking curves during convergence, and minimization of the error function cannot be accomplished accurately. To explain the problem mathematically, consider the unique points in two groups, inliers and outliers. The error function can be rearranged as:

e = Σ_{i=1..N_inlier} e_i + Σ_{j=1..N_outlier} e_j,   where N_inlier + N_outlier = N_u   (26)

Provided that N_outlier is very small compared with N_inlier, the error component Σ_{j} e_j has a negligible effect compared to Σ_{i} e_i, and the estimation of the motion will go the true way. To make the algorithm more efficient, the minimum distance of each unique point from the nearby curve is checked after an adequate number of iterations, and the points whose distance is much greater than the average distance (i.e., e_i >> e/N_u) are distinguished as outliers. Such points are excluded from the calculation of the error function; hence the remaining unique points lie closer to the tracking curves, and more precise motion parameters can be achieved.
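The minimization of Eq. 22 with the update rule of Eqs. 23 to 25 can be sketched end to end. The example below tracks points on a synthetic space curve seen by two cameras; the derivatives of the error components are taken numerically, the "tracking curves" are dense samples of the moved curve, and λ = 2 makes Eq. 25 an exact Gauss-Newton step. All of these are illustrative assumptions, not the chapter's implementation:

```python
import numpy as np

def euler(phi):                       # Eq. 28 rotation: Rz(phi_z) Ry(phi_y) Rx(phi_x)
    (cx, cy, cz), (sx, sy, sz) = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, sz, 0.0], [-sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, -sy], [0.0, 1.0, 0.0], [sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, sx], [0.0, -sx, cx]])
    return Rz @ Ry @ Rx

def project(P, cam_x):                # unit-focal pinhole, camera shifted along X
    return np.column_stack([(P[:, 0] - cam_x) / P[:, 2], P[:, 1] / P[:, 2]])

def errors(theta, W, contours, cams):  # the components e_i^k of Eq. 22
    moved = W @ euler(theta[:3]).T + theta[3:]
    return np.concatenate([
        np.linalg.norm(project(moved, c)[:, None, :] - contours[k][None, :, :],
                       axis=-1).min(axis=1)
        for k, c in enumerate(cams)])

# unique points on a space curve; the tracking curves are dense samples of the
# same curve after an unknown rigid motion, projected into each camera
u = np.linspace(0.0, 1.0, 12)
W = np.column_stack([0.5 * np.sin(2 * u), 0.5 * u, 4.0 + 0.3 * np.cos(3 * u)])
theta_true = np.array([0.03, -0.02, 0.04, 0.05, -0.03, 0.08])
ud = np.linspace(-0.1, 1.1, 3000)
dense = np.column_stack([0.5 * np.sin(2 * ud), 0.5 * ud, 4.0 + 0.3 * np.cos(3 * ud)])
moved_dense = dense @ euler(theta_true[:3]).T + theta_true[3:]
cams = [0.0, 0.5]                                     # K = 2 cameras, baseline 0.5
contours = [project(moved_dense, c) for c in cams]

theta, lam, h = np.zeros(6), 2.0, 1e-6
e_init = errors(theta, W, contours, cams)
for _ in range(15):
    e = errors(theta, W, contours, cams)
    J = np.column_stack([(errors(theta + h * np.eye(6)[a], W, contours, cams) - e) / h
                         for a in range(6)])
    H = J.T @ J                                       # Eq. 23
    d = -2.0 * J.T @ e                                # Eq. 24
    theta = theta + np.linalg.solve(H + 1e-8 * np.eye(6), d) / lam   # Eq. 25
e_final = errors(theta, W, contours, cams)
```

After a few iterations the summed squared error drops by several orders of magnitude as the projected points settle onto the tracking curves; a production version would add the outlier rejection described above and a schedule for λ.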

Once the six motion parameters have been estimated for two consecutive sequences, the motion matrix can be constructed as:

M = ⎡ R(φ_x, φ_y, φ_z)   T ⎤
    ⎣ 0    0    0        1 ⎦   (27)

where:

R(φ_x, φ_y, φ_z) = ⎡ cos φ_z   sin φ_z  0 ⎤ ⎡ cos φ_y  0  −sin φ_y ⎤ ⎡ 1     0          0      ⎤
                   ⎢ −sin φ_z  cos φ_z  0 ⎥ ⎢ 0        1   0       ⎥ ⎢ 0   cos φ_x   sin φ_x   ⎥ ,   T = ⎡ t_x ⎤
                   ⎣ 0         0        1 ⎦ ⎣ sin φ_y  0   cos φ_y ⎦ ⎣ 0  −sin φ_x   cos φ_x   ⎦        ⎢ t_y ⎥   (28)
                                                                                                        ⎣ t_z ⎦

The new position of each point in the next frame can be calculated by multiplying the motion matrix by the position vector:

W^(n+1) = M_n ⋅ W^(n),   where W^(1) = [X_w, Y_w, Z_w, 1]^T   (29)

4. Motion Estimation Using Double Stereo Rigs

In this section, we present a double stereo configuration to get as much accuracy as possible in the estimation of the motion parameters. The basic idea to achieve this end is to find an arrangement of stereo cameras in which the sensitivity of image pose variation to space pose variation is maximized. At first, the single stereo setup is investigated; then, a perpendicular double stereo configuration is presented and its dominance over the single stereo is demonstrated.

4.1. Single Stereo Rig

As mentioned in section 2, the base line of the stereo rig is adjusted neither small nor wide, to compromise between depth uncertainty and occlusion. Moreover, to utilize the linear part of the camera lens and to get rid of the complex computations of nonlinear distortion, the view angle is chosen as small as possible. In practice, the size of the object is usually very small compared with its distance from the camera center, i.e., t_z >> 2r (see figure 8).
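Eqs. 27 to 29 translate directly into code (same Euler-angle convention as Eq. 28; the sample point and angles are arbitrary):

```python
import numpy as np

def motion_matrix(phi_x, phi_y, phi_z, tx, ty, tz):
    """Eqs. 27-28: 4x4 homogeneous motion matrix from Euler angles and translation."""
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    Rz = np.array([[cz, sz, 0.0], [-sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, -sy], [0.0, 1.0, 0.0], [sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, sx], [0.0, -sx, cx]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = [tx, ty, tz]
    return M

# Eq. 29: propagate a homogeneous world point to the next frame
W1 = np.array([0.2, -0.1, 3.0, 1.0])
M1 = motion_matrix(0.0, np.pi / 2, 0.0, 0.0, 0.0, 1.0)   # quarter turn about Y, +1 in Z
W2 = M1 @ W1
```

Chaining the per-frame matrices, W^(n+1) = M_n ⋅ W^(n), accumulates the object trajectory across the sequence; the rotation block stays orthonormal by construction.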

Figure 8. Single stereo setup with small view angle (t_z >> 2r).

For the sake of simplicity, we assume that the optical axis of camera 1 is in the depth direction of the world coordinate (i.e., Z_w). Now, we would like to answer the question of how much accuracy is achievable in space motion estimation by tracking the projection of points in the camera planes. The projection of any point (X_w, Y_w, Z_w) in the image plane of camera 1 is computed as:

( x_im1, y_im1 ) = ( −f_x1 ⋅ X_w/(Z_w + t_z), −f_y1 ⋅ Y_w/(Z_w + t_z) )   (30)

By differentiating, we can write:

Δx_im1 = (∂x_im1/∂X_w) ΔX_w + (∂x_im1/∂Y_w) ΔY_w + (∂x_im1/∂Z_w) ΔZ_w
Δy_im1 = (∂y_im1/∂X_w) ΔX_w + (∂y_im1/∂Y_w) ΔY_w + (∂y_im1/∂Z_w) ΔZ_w   (31)

Δx_im1 = ( f_x1/(Z_w + t_z) ) ⋅ ( −ΔX_w + (X_w/(Z_w + t_z)) ΔZ_w )
Δy_im1 = ( f_y1/(Z_w + t_z) ) ⋅ ( −ΔY_w + (Y_w/(Z_w + t_z)) ΔZ_w )   (32)

For the provision of a small view angle (i.e., t_z >> 2r) and assuming X_w, Y_w, Z_w ≤ r, we have:

ΔZ w ≈ 0 ⎪ Yw Δy im 1 ≈ 0 → ⎨ ΔY ≈ ΔZ w →ΔZ w >> ΔY w ⎪ w Z w +t z ⎩ (34) This equation reveals that the inverse problem of 3D motion estimation by tracking the points in camera plain is an ill posed problem and does not have one solution. we present a combined double stereo setup. This combination is composed of two single stereo rigs in which they make angle θ from each other (see figure 9). Hence. ΔZ w >>ΔX w or ΔZ w >>ΔYw ). the assumption of Δxim1 ≈ 0 and Δyim1 ≈ 0 will be reasonable for each tracking point after convergence of motion estimation algorithm. Zw + tz Yw << 1 Zw + tz (33) Therefore.24 Hossein Ebrahimnezhad and Hassan Ghassemian Xw << 1.e. To take the advantages of both small and wide base line stereo cameras. Δ Z w ≈ 0 ⎪ Xw Δx im 1 ≈ 0 → ⎨ ΔX ≈ ΔZ w →ΔZ w >> ΔX w ⎪ w Z w +t z ⎩ or ⎧ ΔY w .33 can be resulted in: or ⎧ ΔX w .2. xim1 and yim1 are very sensitive to Xw and Yw compared to Zw. both stereo cameras have approximately the same effect in motion estimation process. 4. . the six motion parameters are adjusted in which the distance error in image planes to be minimized. As we explained in section 3.32 and Eq. Double Stereo Rigs Due to the limitation of large base line selection in single stereo rig. Combination of this assumption with Eq. the total 3D positional error 2 2 2 ΔX w +ΔYw +ΔZ w will be notably increased and the inaccurate 3D motion parameters will be estimated. Therefore. ΔX w ≠ 0 or ΔYw ≠ 0 ) imposes a large estimation error of Zw (i. Any small estimation error of Xw or Yw (i.e.
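The contrast between the depth-axis camera of Eq. (32) and a camera rotated by 90°, whose differentials follow Eq. (38) below, can be illustrated numerically. The focal length, camera distance and point coordinates here are assumed example values only, not measurements from the chapter:

```python
# Sensitivities of image coordinates to world-coordinate errors for a
# depth-axis camera (Eq. 32) and a perpendicular one (Eq. 38).
f = 1000.0            # focal length in pixels (assumed)
t_z = 2000.0          # camera distance, t_z >> 2r (assumed)
Xw = Yw = Zw = 100.0  # a point on the object, coordinates <= r (assumed)

# camera1 (optical axis along Zw): sensitive to Xw, weakly to Zw
dx1_dXw = -f / (Zw + t_z)
dx1_dZw = f * Xw / (Zw + t_z) ** 2

# camera3 rotated 90 deg: sensitive to Zw, weakly to Xw
A = Xw + t_z
dx3_dZw = f / A
dx3_dXw = -f * Zw / A ** 2

ratio1 = abs(dx1_dXw / dx1_dZw)   # = (Zw + t_z) / Xw = 21
ratio3 = abs(dx3_dZw / dx3_dXw)   # = (Xw + t_z) / Zw = 21
```

With these values, a one-pixel residual in camera1 constrains X_w about 21 times more tightly than Z_w, while the perpendicular camera constrains Z_w by the same factor; this is exactly the complementarity the double rig exploits.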

Figure 9. Structure of double stereo setup: (a) double stereo setup with angle θ; (b) perpendicular double stereo setup.

Since x_im1 and y_im1 are mainly sensitive to X_w and Y_w, we can minimize the 3D motion estimation errors ΔX_w and ΔY_w by minimizing Δx_im1 and Δy_im1. By choosing a proper value of θ, it is possible to make x_im3 and y_im3 as sensitive as possible to Z_w, and hence to minimize the estimation error ΔZ_w by minimizing Δx_im3 and Δy_im3. Similar to the single stereo setup, and considering the rotation angle θ for camera3, it can easily be shown that:

\[ x_{im3} = -f_{x3}\,\frac{X_w\cos\theta - Z_w\sin\theta}{X_w\sin\theta + Z_w\cos\theta + t_z} + x_{o3}, \qquad y_{im3} = -f_{y3}\,\frac{Y_w}{X_w\sin\theta + Z_w\cos\theta + t_z} + y_{o3} \tag{35} \]

\[ \Delta x_{im3} = \frac{f_{x3}}{A^2}\left( -(Z_w + t_z\cos\theta)\,\Delta X_w + (X_w + t_z\sin\theta)\,\Delta Z_w \right), \qquad \Delta y_{im3} = \frac{f_{y3}}{A^2}\left( Y_w\sin\theta\,\Delta X_w - A\,\Delta Y_w + Y_w\cos\theta\,\Delta Z_w \right) \tag{36} \]

where:

\[ A = X_w\sin\theta + Z_w\cos\theta + t_z \tag{37} \]

It can be verified that the maximum sensitivity is achieved for θ = 90°. For θ = 90°, Eq. 36 simplifies to:

\[ \Delta x_{im3} = \frac{f_{x3}}{X_w+t_z}\left( -\frac{Z_w}{X_w+t_z}\,\Delta X_w + \Delta Z_w \right), \qquad \Delta y_{im3} = \frac{f_{y3}}{X_w+t_z}\left( \frac{Y_w}{X_w+t_z}\,\Delta X_w - \Delta Y_w \right) \tag{38} \]

Similar to the single stereo case, we can assume (Δx_im1, Δy_im1 ≈ 0) for each tracking point in camera1 and (Δx_im3, Δy_im3 ≈ 0) for each tracking point in camera3 after convergence of the motion estimation algorithm. Hence:

\[ \Delta x_{im3}\approx 0 \;\Rightarrow\; \{\Delta X_w,\,\Delta Z_w \approx 0\} \;\;\text{or}\;\; \left\{\Delta Z_w \approx \frac{Z_w}{X_w+t_z}\Delta X_w \;\Rightarrow\; \Delta X_w \gg \Delta Z_w \right\} \]
\[ \Delta y_{im3}\approx 0 \;\Rightarrow\; \{\Delta Y_w,\,\Delta Z_w \approx 0\} \;\;\text{or}\;\; \left\{\Delta Y_w \approx \frac{Y_w}{X_w+t_z}\Delta X_w \;\Rightarrow\; \Delta X_w \gg \Delta Y_w \right\} \tag{39} \]

The combination of Eq. 34 and Eq. 39 results in ΔX_w, ΔY_w, ΔZ_w ≈ 0. Therefore, the total 3D positional error \( \sqrt{\Delta X_w^2 + \Delta Y_w^2 + \Delta Z_w^2} \) will be notably decreased in the perpendicular double stereo setup, and more precise motion parameters will be obtained.

5. Shape Reconstruction from Object Silhouettes Across Time

Three-dimensional model reconstruction by extracting the visual hull of an object has been used extensively in recent years [31-34], and it has become a standard and popular method of shape estimation. The visual hull is defined as a rough model of the object surface, which can be calculated from different views of the object's silhouette.

The silhouette of an object in an image refers to the curve which separates the object from the background. The visual hull cannot recover concave regions, regardless of the number of images used. To moderate this first drawback of the visual hull, combination with stereo matching can be employed. To get rid of the second drawback, more silhouettes of the object can be captured by a limited number of cameras across time. Cheng et al. presented a method to enhance the shape approximation by combining multiple silhouette images captured across time [34]. Employing a basic property of the visual hull, which affirms that each bounding edge must touch the object in at least one point, they use multi-view stereo to extract these touching points, called Colored Surface Points (CSPs), on the surface of the object. These CSPs are then used in a 3D image alignment algorithm to find the six rotation and translation parameters of the rigid motion between two visual hulls. They utilize the color consistency property of the object to align the CSP points. Once the rigid motion across time is known, all of the silhouette images are treated as being captured at the same time instant and the shape of the object is refined.

Motion estimation by the CSP method suffers from some drawbacks. Inaccurate color adjustment between cameras is one problem that introduces error into the color-consistency test. Moreover, variation of the light angle while the object moves around the light source produces additional error. Our presented method of motion estimation, which uses only the edge information in the form of space curves, is very robust against color maladjustment of the cameras and against shading during the object motion. Moreover, it can be effectively used to extract the visual hull of poorly textured objects.

In the remainder of this section, it is assumed that the motion parameters are known for multiple views of the object, and the goal is to reconstruct the 3D shape of the object from silhouette information across time.

5.1. Space-Time or Virtual Camera Generation

Let P, defined in Eq. 40, be the projection matrix of the camera, which translates the 3D point W in the world coordinate to (x_im, y_im) in the image coordinate of the camera plane:

\[ P = \begin{bmatrix} -f_x & 0 & x_0 \\ 0 & -f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} (R_c^w)^{-1} & \; -(R_c^w)^{-1}\cdot T_c^w \end{bmatrix} \tag{40} \]

\[ [x_{im}, y_{im}, 1]^T \propto P\,[X_w, Y_w, Z_w, 1]^T \tag{41} \]

where R_c^w and T_c^w are the rotation and translation of the camera coordinate system with respect to the world coordinate system, f_x and f_y are the focal lengths in the x and y directions, and x_0, y_0 are the coordinates of the principal point in the camera plane. From Eq. 29, we can get:

\[ W^{(n)} = M_{n-1}\cdot W^{(n-1)} = M_{n-1}\cdots M_2\, M_1\, W^{(1)} \tag{42} \]

By multiplying the projection matrix by the motion matrices, a new projection matrix is deduced for any sequence. This matrix defines a calibrated virtual camera for that sequence:

\[ P^{(n)} = P\cdot \left( M_{n-1}\cdots M_2\, M_1 \right) \tag{43} \]

The matrix P^(n) projects any point in the world coordinate to the image plane of virtual camera n:

\[ [x_{im}^{(n)}, y_{im}^{(n)}, 1]^T \propto P^{(n)}\,[X_w, Y_w, Z_w, 1]^T \tag{44} \]

In fact, the moving object and fixed camera system is substituted by a fixed object and a moving camera that moves in the opposite direction, which produces a new silhouette of the object.

5.2. Visual Hull Reconstruction from Silhouettes of Multiple Views

In recent years, shape from silhouette has been widely used to reconstruct the three-dimensional shape of an object. Each silhouette, together with the camera center, makes one cone in space, and the visual hull is defined as the shared volume of these cones. Using more cameras at different views of the object, more cones are constructed. There are two conventional approaches to extract the visual hull: voxel carving [35, 36] and view ray sampling [37].
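The virtual-camera construction of Eqs. (40), (43) and (44) can be sketched numerically. The intrinsics below (focal length, principal point) and the identity initial pose are assumed example values; the check at the end shows that projecting a fixed point through the virtual camera equals projecting the moved point through the original camera:

```python
import numpy as np

# Assumed intrinsics, following the sign convention of Eq. (40)
K = np.array([[-1000.0, 0.0, 320.0],
              [0.0, -1000.0, 240.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])  # identity camera pose
P = K @ Rt                                     # 3x4 projection matrix

def virtual_camera(P, motions):
    """P^(n) = P @ (M_{n-1} ... M_2 M_1), Eq. (43)."""
    M_acc = np.eye(4)
    for M in motions:          # motions = [M_1, M_2, ..., M_{n-1}]
        M_acc = M @ M_acc      # accumulate in temporal order
    return P @ M_acc

def project(P, Xw):
    """Homogeneous projection of Eq. (44)."""
    x = P @ np.append(Xw, 1.0)
    return x[:2] / x[2]

# Object translates by +10 along Z between frames 1 and 2
M1 = np.eye(4)
M1[2, 3] = 10.0
P2 = virtual_camera(P, [M1])

p_a = project(P2, np.array([50.0, 0.0, 100.0]))   # virtual camera, fixed point
p_b = project(P, np.array([50.0, 0.0, 110.0]))    # original camera, moved point
```

This equivalence is precisely the "fixed object, moving camera" substitution described in the text.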

In the voxel carving method, a discrete number of voxels are constructed around the volume of interest; each voxel is then checked against all silhouettes, and any voxel that projects outside the silhouettes is removed from the volume. The task is to find which voxels belong to the surface of the 3D object. To make the projection and intersection test more efficient, most methods use an octree representation and test voxels in a coarse-to-fine hierarchy. In view ray sampling, a sampled representation of the visual hull is constructed: for each viewing ray in some desired view, the intersection points with all surfaces of the visual hull, corresponding to the intersection of the back-projected silhouette cones, are computed; the visual hull is thus sampled in a view-dependent manner.

Many algorithms have been developed to construct volumetric models from a set of silhouette images [35, 36, 37, 39]. Moezzi et al. [38] construct the visual hull using voxels in an off-line processing system. Cheung et al. [39, 40] show that the voxel method can achieve interactive reconstruction results. The polyhedral visual hull system developed by Matusik et al. [41] also runs at an interactive rate.

In this section, two efficient algorithms are presented to improve the speed of computation in visual hull extraction. The first algorithm accelerates the voxel carving method: the octree division method is optimized by minimizing the number of check-points, the points are checked on the edges of the octree cubes rather than inside the volume, and the points are checked hierarchically, with their number changed according to the size of the octree cubes. The second algorithm employs the ray sampling method to extract the bounding edge model of the visual hull: to find the segments of any ray which lie inside the other silhouette cones, the points of the ray are checked hierarchically. This algorithm reduces the number of check-points in the intersection test procedure.

5.2.1. Volume Based Visual Hull

Voxel carving can be accelerated using an octree representation, which employs a coarse-to-fine hierarchy, starting from a bounding volume that is known to surround the whole scene and dividing the volume into voxels.

5.2.1.1. Intersection Test in Octree Cubes

The most important and time-consuming part of octree reconstruction is the intersection check of the cubes with the silhouette images. All algorithms use one common rule to decide whether or not an intersection has occurred between a cube and the object.

An intersected cube is a cube which has at least two differently colored points; a cube is known as outside if all checked points are "1", and as inside if all checked points are "0". To find the intersection between a cube and the silhouette images, the 8 corners of each cube are first projected to all the silhouettes. If there are at least two differently colored corners, the occurrence of an intersection is inferred and the process for this cube is terminated; otherwise, more points in the cube should be checked. If a color difference is found during this check, the cube is marked as an intersected cube and the process is terminated. After checking all points, if there is no color difference, the cube is identified as outside (or inside) according to the color of its points, "1" (or "0"). Different methods of point checking are classified in figure 10. The number of check points may be constant for all sizes of cubes, or may change dynamically based on the size of the cube.

To compare the complexity of the different types of intersection check in the octree cubes, the following parameters are considered: L = level of octree division; S = number of silhouettes; C_L = number of grey (intersected, or surface) cubes in level L; N_L = maximum number of checking points needed to identify the mark of a cube in level L. Since each grey cube is divided into 8 sub-cubes in the octree division, the number of grey cubes in level L-1 will be equal to or greater than 1/8 of the grey cubes in level L, according to the number of child grey cubes. The total number of point projections to the silhouette images in the worst case is:

\[ N_{tot}(\max) = S\left( N_L C_L + N_{L-1} C_{L-1} + N_{L-2} C_{L-2} + N_{L-3} C_{L-3} + \cdots \right) \;\geq\; S\, C_L \left( N_L + \frac{N_{L-1}}{8} + \frac{N_{L-2}}{64} + \frac{N_{L-3}}{512} + \cdots \right) \tag{45} \]

Figure 10. Checking methods in octree cubes: a) sequential check in volume; b) random check in volume; c) sequential check on edges; d) hierarchical check on edges.
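The corner-based marking rule above can be sketched as follows. The silhouette here is a toy binary image (1 = background, 0 = object) and the orthographic `project` is an illustrative stand-in for the chapter's calibrated cameras:

```python
import numpy as np
from itertools import product

def classify_cube(corners, silhouette, project):
    """'intersected' if the projected sample points carry two different
    colors; otherwise 'outside'/'inside' by the single color found."""
    colors = set()
    for c in corners:
        u, v = project(c)
        colors.add(int(silhouette[v, u]))
        if len(colors) == 2:
            return "intersected"   # grey cube: subdivide at the next level
    return "outside" if colors == {1} else "inside"

silhouette = np.ones((8, 8), dtype=int)
silhouette[2:6, 2:6] = 0                    # a 4x4 object region
project = lambda c: (int(c[0]), int(c[1]))  # toy projection: drop z

cube = lambda a, b: list(product((a, b), (a, b), (0, 1)))  # 8 cube corners
r_in = classify_cube(cube(3, 4), silhouette, project)    # fully inside
r_out = classify_cube(cube(0, 1), silhouette, project)   # fully outside
r_mix = classify_cube(cube(1, 4), silhouette, project)   # straddles the boundary
```

In the full algorithm the same test runs against every silhouette and, for grey cubes, additional edge points are checked before subdividing.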

Obviously, the total number of check points will be smaller than N_tot(max), because intersected cubes will normally be recognized in the early checks. Another approach to decreasing the number of check points is to change the number of points dynamically at each level: large cubes may intersect with small parts of the silhouette, which requires checking more points to identify the intersection, while in small cubes this situation cannot occur and there is no need to check more points. By choosing N_L = 8 (checking only the corners of the cube at the last level) and increasing the checked points by a factor of k at the lower levels, we can minimize N_tot(max) as below:

\[ N_{tot}(\max) \;\geq\; S\, C_L \left( 8 + \frac{8k_1}{8} + \frac{8k_2}{64} + \frac{8k_3}{512} + \cdots \right) = 8\, S\, C_L \left( 1 + \frac{k_1}{8} + \frac{k_2}{64} + \frac{k_3}{512} + \cdots \right) \tag{46} \]

Edge-based checking is one approach to decreasing the number of checking points needed to identify the mark of a cube without loss of accuracy: for one-piece objects, any intersection between the cube and the silhouette must occur through the edges. There is, of course, one exception, when the object is small and posed inside the cube so that there is no intersection between the object and the cube through the edges; in this case, the edge-based method cannot decide whether the object is inside the cube or intersects with the cube through a face, and checking some points inside the volume is inevitable. If the size of the bounding cube is selected properly, comparable to that of the object, the cube will be larger than the object only in the first level of division. Since the octree division is always performed at the first level without checking the occurrence of intersection, the ambiguity of the edge-based intersection test remains only in the first level; thereafter, the edge-based intersection test can be applied to a one-piece object with certainty.

The final approach to increasing the speed is to check the edge points hierarchically; in this way, the chance of finding two differently colored points in the early checks is increased.

5.2.1.2. Synthetic Model Results

To determine the capability of the presented algorithm and to quantify its performance, we have tested it on the synthetically generated images named Bunny and Horse. The simulation was run on a Pentium-III 933 MHz PC using Matlab and C++-generated Mex files. In this analysis, 18 silhouettes of Bunny from equally spaced angle viewpoints have been used.

These silhouettes are shown in figure 12-a. The synthetic object has been captured from different views, and its 3D shape has been reconstructed using the DNHE algorithm. The different levels of octree division are illustrated in figure 12-b, and the depth-map of the reconstructed 3D model is shown in figure 12-c.

Figure 11 shows the result of the simulation for the different methods. To compare the efficiency of the methods, the computing time for a fixed number of recovered cubes (voxels) in the last level is evaluated for the different types of intersection check. In the figure, 'CN' and 'DN' mean that a Constant Number or a Dynamic Number of points is checked in the different sizes of cubes; 'S', 'H' and 'R' mean the Sequential, Hierarchical and Random methods of checking the points, respectively; and the last letter, 'V' or 'E', means that the check points are selected inside the Volume or on the Edges of the cube. As is clear from the figure, the DNHE method gives the best result and the CNRV method the worst. The computing time for the random check method is high because some check points may be chosen near each other, as illustrated in figure 10-b.

Figure 11. Computation time for different types of intersection check (computing time in seconds versus the number of recovered voxels of the visual hull, for the CNSV, DNSV, CNSE, CNHE, DNHE and CNRV methods).

Figure 12. Three-dimensional shape reconstruction of Bunny from 18 silhouettes using the DNHE algorithm: a) different silhouettes of the object from 18 view angles; b) different levels of octree division using the DNHE algorithm; c) depth-map of the reconstructed 3D model at different view angles.

Figure 13 shows the result of shape reconstruction for another synthetic object, named Horse, from different views.

Figure 13. Three-dimensional shape reconstruction of Horse from 18 silhouettes using the DNHE algorithm: a) different silhouettes of the object from 18 view angles; b) different levels of octree division using the DNHE algorithm; c) depth-map of the reconstructed 3D model at different view angles.

5.2.2. Edge Based Visual Hull

Cheng et al. suggested a representation of the visual hull that uses a one-dimensional entity called the bounding edge [34]. To reconstruct the bounding edges, a ray is formed by projecting a point on the boundary of each silhouette image into space through the camera center. This ray is then projected onto the other silhouette images, and those segments of the ray whose projections remain inside all the other silhouettes are selected as bounding edges. By unifying the bounding edges of all rays of all silhouettes, the visual hull of the object is reconstructed. Note that a bounding edge is not necessarily a continuous line: it may consist of several segments if any of the silhouette images is not convex. This is illustrated in figure 14-a, where ray1i consists of two shared segments in S2, S3 and S4.

To represent each bounding edge, it is enough to find the start and end positions of each segment. In the ordinary method, the points on the surface of each cone are projected to all the other silhouettes, and a decision is made as to which points are inside all the silhouettes. Instead of checking all the points of the ray, we employ a hierarchical checking method to extract the start-end points very fast: the points are checked in a coarse-to-fine manner. The idea is illustrated in figure 14-b. First, the middle point of the ray is projected to all the silhouette images and its status is checked. If it is inside all the silhouettes, the quarter points on both sides must be checked; otherwise, only the quarter point on the left side is checked. This procedure is repeated hierarchically to find the start point of the bounding edge with the desired accuracy. Once the start point is found, hierarchical checking is restarted to find the end of the segment, and the process is repeated until the end of the ray is reached.

Still, it is unclear how to find the starts and ends when the ray consists of more than one segment (in concave parts). To resolve this ambiguity, the points are first checked in large steps along the ray; upon a change of status, hierarchical checking is applied to find the exact position of the start-end points of each segment. In this way, the positions of the start-end points of multi-segment bounding edges can definitely be determined. This concept is illustrated in the right part of figure 14-b.

To get a sense of the computational complexity of the hierarchical method and its efficiency, suppose that m_j is the number of points on the boundary of silhouette j, N_ray is the number of points on each ray, and N_sil is the number of silhouettes. The total number of projections is given by:

\[ N_{tot}(\max) = \sum_{j=1}^{N_{sil}} m_j \cdot N_{ray} \cdot (N_{sil}-1) \tag{47} \]

It is possible to decrease this number by removing the points that are identified as outside in one silhouette, since it is not necessary to project such points to the other silhouettes.
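The coarse-step search with hierarchical refinement described above can be sketched as follows. Here `inside(t)` abstracts the test "the ray point at parameter t projects inside all other silhouettes", the coarse steps play the role of the large steps of the text, and the two-segment example imitates a concave region; all of it is an illustrative assumption, not the chapter's implementation:

```python
def find_segments(inside, n_coarse=32, n_refine=20):
    """Return [(t_start, t_end), ...] for a ray parameterized on [0, 1]."""
    def refine(lo, hi):
        # bisection between two coarse samples with different status
        for _ in range(n_refine):
            mid = 0.5 * (lo + hi)
            if inside(mid) == inside(lo):
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    ts = [i / n_coarse for i in range(n_coarse + 1)]
    segments, start = [], (0.0 if inside(0.0) else None)
    for a, b in zip(ts, ts[1:]):
        if inside(a) != inside(b):        # status change inside this step
            t = refine(a, b)
            if start is None:
                start = t                 # entering the object: segment start
            else:
                segments.append((start, t))  # leaving: segment end
                start = None
    if start is not None:
        segments.append((start, 1.0))
    return segments

# A ray crossing a concave region produces a two-segment bounding edge:
segs = find_segments(lambda t: 0.1 <= t <= 0.4 or 0.6 <= t <= 0.9)
```

The coarse pass costs n_coarse checks and each transition only log-many refinements, which is the behavior quantified by Eqs. (49) and (51).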

Therefore, the total number of projections will be:

\[ N_{tot} = \sum_{j=1}^{N_{sil}} m_j \cdot N_{ray} \cdot \left( 1 + k_{1j} + k_{1j}k_{2j} + \cdots + k_{1j}k_{2j}\cdots k_{(N_{sil}-2)j} \right) \tag{48} \]

where 0 < k_ij < 1 is the fraction of points on cone(j) which project to the inside of silhouette i; the value of k_ij depends on the shapes of cone(j) and silhouette i. For the hierarchical checking method, consider that each ray is divided into n parts at first; to find the exact position of each start and end point, log₂(N_ray/n) points should be checked. In convex parts, bounding edges are formed in one segment; therefore, the number of checking points on each ray is:

\[ NH_{ray} = n + 2\,\log_2\!\left( N_{ray}/n \right) \tag{49} \]

Figure 14. (a) Projection of a ray to the silhouette images to extract the bounding edges; (b) hierarchical method to find the start and the end of segments.

And the total number of projections for a convex object is:

\[ NH_{tot} = \sum_{j=1}^{N_{sil}} m_j \cdot NH_{ray} \cdot \left( 1 + k_{1j} + k_{1j}k_{2j} + \cdots + k_{1j}k_{2j}\cdots k_{(N_{sil}-2)j} \right) \tag{50} \]





In concave parts, the bounding edges are formed in two or more segments. For a q-segment bounding edge, the total number of checking points on the ray is given by:

\[ NH_{ray}(q) = n + 2\,q\,\log_2\!\left( N_{ray}/n \right) \tag{51} \]

5.2.2.1. Synthetic Model Results

To show the capability of the presented algorithm and to quantify its performance, we have tested it on a synthetically generated image named Bunny. In this analysis, 18 silhouettes of Bunny from equally spaced angle viewpoints have been captured and used. Table 1 shows the result of the simulation for the different methods. To compare the efficiency of the methods, the computing time for a fixed number of recovered points on the visual hull can be evaluated for the different types of intersection check. Table 2 shows the result of voxel-based DNHE extraction for the same silhouettes. As is clear, the hierarchical method of bounding edge extraction gives very good results compared to the ordinary bounding edge method and the voxel-based DNHE method, especially for a high number of recovered points, which corresponds to high accuracy.

Table 1. Computing time to extract bounding edges of Bunny

  Points on each ray   Recovered points     Ordinary       Hierarchical,   Hierarchical,
  of bounding cone     on visual hull       method (sec)   n=10 (sec)      n=30 (sec)
  55                   7192                 0.25           0.1             0.17
  80                   10353                0.33           0.11            0.18
  150                  19421                0.59           0.112           0.19
  250                  32351                0.95           0.113           0.191
  500                  64764                1.83           0.115           0.195
  1000                 129578               3.52           0.12            0.21
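A quick arithmetic check of Eq. (49) shows why the hierarchical columns of Table 1 are almost insensitive to the ray resolution:

```python
import math

# Eq. (49): with N_ray = 1000 points per ray and n = 10 coarse steps,
# a one-segment bounding edge needs only about n + 2*log2(N_ray/n)
# checks instead of N_ray.
N_ray, n = 1000, 10
checks_hierarchical = n + 2 * math.log2(N_ray / n)   # about 23.3
checks_ordinary = N_ray                              # 1000
```

Doubling N_ray adds only two checks per transition, which matches the nearly flat hierarchical timings in Table 1.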


Table 2. Computing time for voxel-based visual hull extraction

  Voxels in each edge      Recovered voxels    Computing time (sec)
  of bounding cube         on visual hull      (DNHE method)
  2^7 = 128                6890                2.2
  2^8 = 256                29792               12.4

Table 3. Computing time to extract bounding edges of Bunny from different numbers of silhouettes

  Number of       Points on      Hierarchical,   Hierarchical,
  silhouettes     each ray       n=10 (sec)      n=30 (sec)
  48              500            0.22            0.39
  18              500            0.115           0.195
  8               500            0.05            0.09
  4               500            0.03            0.05

The computing time of 0.12 or 0.21 sec to extract the object from 18 silhouettes at a resolution of 1000 points per ray is very low, and it makes it possible to use the algorithm for real-time extraction purposes. Table 3 shows the result of the simulation for different numbers of silhouettes of Bunny.




Figure 15. Start-end points, bounding edges and depth-map for three models, named Bunny, Female and Dinosaur, which have been extracted through 18 silhouettes from different views using the hierarchical algorithm.

In figure 15 we have demonstrated the start-end points of the edges, the bounding edges and the depth maps of the reconstructed 3D models for the synthetic objects named Horse, Bunny, Female and Dinosaur, extracted from 18 silhouettes by the hierarchical method.

Implementation and Experimental Results
To evaluate the efficiency of our approach to 3D model reconstruction by tracking the space curves using the perpendicular double stereo rigs, there are two questions that we need to consider. First, how good is the estimation of the motion parameters? Second, how efficient is our method in practice, and how robust is it against disturbing effects like noise? In this section, we present the results of experiments designed to address these two concerns. Experiments were conducted with both synthetic and real sequences containing differently textured objects. Two synthetic objects named Helga and Cow were captured by perpendicular double stereo cameras in the 3D-StudioMax environment; the objects were moved with arbitrary motion parameters and 36 sequences were captured by each camera. Figure 16 shows the camera setup used to capture the Helga and Cow models, along with their extracted space curves. It is clear that the number of outlier points is significantly smaller than the number of valid unique points. Figure 17 illustrates the curve tracking and motion estimation process by minimizing the geometric distance of curves in the camera images: two sets of space curves, which have been extracted by the two distinct stereo rigs, are projected to the camera planes in the next sequence, and the motion parameters are adjusted so that the projections of the space curves in the related camera planes are as close as possible to the nearby curves. Figures 18 and 19 demonstrate how the projections of unique points become closer and closer to the edge curves in each iteration, minimizing the geometric distance error. To evaluate the estimation error of the motion process, the variation of the six motion parameters across time has been plotted in figures 20 and 21 for the Cow and Helga sequences. Comparing the diagrams of true motion with the diagrams of motion estimated by the single stereo and perpendicular double stereo setups reveals the superiority of the perpendicular double stereo over the single stereo. The assessment is also given numerically in tables 4 and 5.
Figures 22 and 23 show the temporal sequence of Helga and the result of virtual camera alignment across time. In addition, figure 23 illustrates how the silhouette cones of the virtual cameras are intersected to construct the object's visual hull. Figure 24 compares the quality of the reconstructed Helga model using true motion information and motion information estimated by the single and perpendicular double stereo setups. Figures 25 to 27 demonstrate the results for the Cow sequences. To evaluate the robustness of motion estimation against noise and color maladjustment, comparisons between single stereo and perpendicular double stereo are given both quantitatively and qualitatively in table 6 and figure 28. To obtain qualified edge curves in a noisy image, it is necessary to smooth the image before the edge detection process; at the same time, smoothing the image introduces small perturbations in the positions of edge points. The perpendicular double stereo setup appears to be more robust against the perturbation of edge points. Figures 29 to 34 demonstrate the results of implementation for the real object sequences named Buda, Cactus and Head, captured with the perpendicular double stereo setup. All objects have been captured through turntable sequences. Notice the small perturbations from the circular path in the alignment of virtual cameras for the Head sequence: these perturbations are caused by the non-rigid motion of the body in the neck region, and in this experiment the motion estimation has been accomplished based only on the head (not body) region information. Both the synthetic and real experimental results demonstrate the good performance of the presented method for a variety of motions, object shapes and textures.

Figure 16. Reconstructed space curves on the surface of the synthetic models Helga and Cow (captured in 3D Studio Max).

Figure 17. Motion estimation by tracking the projections of space curves in perpendicular double stereo images.

Figure 18. Motion estimation by minimizing the geometric distance of space curves from adjacent curves in four camera images. Convergence of the algorithm is shown for different numbers of iterations.

Figure 19. Motion estimation by minimizing the geometric distance of space curves from adjacent curves in the projected camera images. Convergence of the algorithm is shown for different numbers of iterations for (a) Cow and (b) Helga.

28 0.24 1.True and estimated motion parameters for Cow sequences by single and perpendicular double stereo setup.34 0.41 0.08 0.51 0.72 0.07 1.17 0.27 0.97 1.13 0.02 1.02 0.New Trends in Surface Reconstruction Using Space-Time Cameras 45 Figure 20.10 0. Estimation error of motion parameters for Cow sequences by single and perpendicular double stereo setup Motion Parameter Mean of Abstract Error Double Single Perpendicular Stereo Rig Stereo Rigs 0.61 0.39 0.53 0.08 1.97 1.50 1.46 0.18 Maximum of Abstract Error Double Single Perpendicular Stereo Rig Stereo Rigs 1.14 0.02 Δφx (deg) Δφy (deg) Δφz (deg) Δφtotal (deg) ΔT x (mm) ΔT y (mm) ΔT z (mm) ΔT total (mm) .31 1.30 0.12 0.61 0. Table 4.13 0.15 0.23 0.

28 0.28 0.24 0.14 0.54 0.96 0.18 0.46 0. Estimation error of motion parameters for Helga sequences by single and perpendicular double stereo setup Motion Parameter Mean of Abstract Error Double Single Perpendicular Stereo Rig Stereo Rigs 0.14 0.75 0.20 Maximum of Abstract Error Double Single Perpendicular Stereo Rig Stereo Rigs 2.19 0.40 0. Table 5.61 0.53 1.23 0.61 0.75 0.19 0.44 1.92 0.32 0.92 2.25 0.20 0. True and estimated motion parameters for Helga sequences by single and perpendicular double stereo setup.17 2.46 Δφx (deg) Δφy (deg) Δφz (deg) Δφtotal (deg) ΔT x (mm) ΔT y (mm) ΔT z (mm) ΔT total (mm) .20 0.43 0.57 0.46 Hossein Ebrahimnezhad and Hassan Ghassemian Figure 21.91 1.34 0.

Figure 22. Different views of Helga in the 36 sequences of its motion.



Figure 23. Three-dimensional model reconstruction from multiple views for the Helga sequences: (top) trajectory of the virtual cameras estimated by the perpendicular double stereo rigs, along with the intersection of two silhouette cones; (middle) extraction of the visual hull by the intersection of all silhouette cones; (bottom) color mapping from the visible cameras.

Figure 24. Reconstructed model of Helga, including the bounding-edge visual hull, depth-map and texture-mapped 3D model: (a) estimated motion with single stereo, (b) estimated motion with perpendicular double stereo, and (c) true motion.



Figure 25. Different views of Cow in the 36 sequences of its motion.



Figure 26. Trajectory of the virtual cameras estimated by the perpendicular double stereo rigs, and extraction of the visual hull by silhouette cone intersection (Cow sequences).

Figure 27. Reconstructed model of Cow, including the bounding-edge visual hull, depth-map and texture-mapped 3D model: (a) estimated motion with single stereo, (b) estimated motion with perpendicular double stereo, and (c) true motion.



Table 6. Estimation error of motion parameters for noisy sequences of Helga by single and perpendicular double stereo setup (σn² = 0.1)

Motion Parameter  |  Mean of Absolute Error         |  Maximum of Absolute Error
                  |  Single Rig | Perp. Double Rigs |  Single Rig | Perp. Double Rigs
Δφx (deg)         |  1.02       | 0.29              |  5.80       | 1.41
Δφy (deg)         |  1.89       | 1.43              |  10.30      | 3.93
Δφz (deg)         |  0.71       | 0.21              |  5.12       | 0.61
Δφtotal (deg)     |  1.21       | 0.64              |  10.30      | 3.93
ΔTx (mm)          |  0.33       | 0.24              |  0.54       | 0.38
ΔTy (mm)          |  0.38       | 0.30              |  0.94       | 0.54
ΔTz (mm)          |  0.31       | 0.21              |  0.68       | 0.51
ΔTtotal (mm)      |  0.34       | 0.25              |  0.94       | 0.54

("Single Rig" = single stereo rig; "Perp. Double Rigs" = perpendicular double stereo rigs.)
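The two statistics tabulated here are simple summaries of the per-frame differences between true and estimated motion parameters. A small illustration in Python (the parameter values below are made up for the example, not the chapter's measurements):

```python
# Mean and maximum of absolute error, as reported in Tables 5 and 6.
# true_params / estimated are illustrative per-frame values of one motion
# parameter (e.g. Δφtotal in deg); they are not the chapter's data.
true_params = [0.0, 1.5, -2.0, 3.2, -0.7, 2.1]
estimated   = [0.1, 1.3, -2.4, 3.0, -0.5, 2.6]

abs_err = [abs(e - t) for e, t in zip(estimated, true_params)]
mean_abs_err = sum(abs_err) / len(abs_err)   # "Mean of Absolute Error" column
max_abs_err = max(abs_err)                   # "Maximum of Absolute Error" column
print(round(mean_abs_err, 3), round(max_abs_err, 3))   # 0.267 0.5
```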

Figure 28. Effect of noise and color unbalance in 3D reconstruction: (a) noisy images of different cameras with σ² = 0.1, (b) reconstructed model with single stereo rig and (c) reconstructed model with perpendicular double stereo rigs.
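Noisy test images such as those in Figure 28 can be simulated by adding zero-mean Gaussian noise of the stated variance to intensities normalized to [0, 1]. A minimal sketch (the pixel values and helper name are illustrative, not from the chapter):

```python
import random

def add_gaussian_noise(pixels, variance=0.1):
    # Add zero-mean Gaussian noise of the given variance to intensities
    # normalized to [0, 1], clamping the result back into the valid range.
    sigma = variance ** 0.5
    return [min(1.0, max(0.0, p + random.gauss(0.0, sigma))) for p in pixels]

random.seed(42)                      # repeatable noise for the illustration
noisy = add_gaussian_noise([0.0, 0.5, 1.0])
print(all(0.0 <= p <= 1.0 for p in noisy))   # True
```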


Hossein Ebrahimnezhad and Hassan Ghassemian

Figure 29. Different views of Buda in 36 sequences of its motion.



Figure 30. Reconstruction of Buda statue by perpendicular double stereo rigs (circular motion with turntable).

Figure 31. Different views of Cactus in 36 sequences of its motion.

Figure 32. Reconstruction of Cactus by perpendicular double stereo rigs (circular motion with turntable).

Figure 33. Different views of Head (the picture of the author) in 36 sequences of its motion.

Figure 34. Reconstruction of Head by perpendicular double stereo rigs (non-circular motion).

Conclusions

In this chapter, an efficient method has been presented to reconstruct the three-dimensional model of a moving object by extracting space curves and tracking them across time using perpendicular double stereo rigs. A new method of space curve extraction on the surface of the object was presented by checking the consistency of torsion and curvature through motion. Projection of the space curves in the camera images was employed for tracking the curves and for robust motion estimation of the object. The nature of space curves makes the extraction very robust against poor color adjustment between cameras and changes of the light angle during the object motion. Moreover, the double perpendicular stereo setup has been presented as a way to reduce the effect of statistical bias in motion estimation and to enhance the quality of 3D reconstruction. The concept of virtual cameras, which are constructed from motion information, was introduced. Constructing the virtual cameras makes it possible to use a large number of silhouette cones in the structure-from-silhouette method. Experimental results show that the presented edge-based method can be effectively used in reconstruction of the visual hull for poorly textured objects. In addition, it does not require accurate color adjustment during camera setup and provides a better result compared to the other methods, which use the color consistency property. Furthermore, the presented method is not limited to the circular turntable rotation. Quantitatively, the average of absolute error in Δφtotal is reduced from 0.51 deg with the single stereo rig to 0.14 deg with the perpendicular double stereo rigs, and the average of absolute error in ΔTtotal is reduced from 0.23 mm to 0.18 mm for the Cow sequence. The respective values for the Helga sequence are Δφtotal 0.34-0.28 deg and ΔTtotal 0.24-0.20 mm. These values increase for the noisy sequence of Helga to Δφtotal 1.21-0.64 deg and ΔTtotal 0.34-0.25 mm. Finally, the main part of this chapter has been published previously in [43] by the authors.

Acknowledgment

This research was supported in part by ITRC, the Iran Telecommunication Research Center, under grant no. TMU 85-05-33.

References

[1] Y. Han, Geometric Algorithms for Least Squares Estimation of 3-D Information from Monocular Image, IEEE Trans. Circuits and Systems for Video Technology, (15)(2)(2005), pp. 269-282.

[2] T. Papadimitriou, K.I. Diamantaras, M.G. Strintzis, and M. Roumeliotis, Robust Estimation of Rigid-Body 3-D Motion Parameters Based on Point Correspondences, IEEE Trans. Circuits and Systems for Video Technology, (10)(4)(2000), pp. 541-549.
[3] Y. Zhang and C. Kambhamettu, On 3-D Scene Flow and Structure Recovery From Multiview Image Sequences, IEEE Trans. Systems, Man, and Cybernetics - Part B, (33)(4)(2003), pp. 592-606.
[4] E. Steinbach, P. Eisert, and B. Girod, Automatic Reconstruction of Stationary 3-D Objects from Multiple Uncalibrated Camera Views, IEEE Trans. Circuits and Systems for Video Technology, (10)(2)(2000), pp. 261-277.
[5] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Second Edition, Cambridge Univ. Press, 2003.
[6] Z. Zhang, Determining the Epipolar Geometry and its Uncertainty: a Review, Int'l J. Computer Vision, (27)(2)(1998), pp. 161-195.
[7] P. Torr and D. Murray, The Development and Comparison of Robust Methods for Estimating the Fundamental Matrix, Int'l J. Computer Vision, (24)(3)(1997), pp. 271-300.
[8] M. Fischler and R. Bolles, Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography, Comm. ACM, (24)(6)(1981), pp. 381-395.
[9] P. Torr and A. Zisserman, MLESAC: A New Robust Estimator with Application to Estimating Image Geometry, Computer Vision and Image Understanding, (78)(2000), pp. 138-156.
[10] B. Triggs, P. McLauchlan, R. Hartley, and A. Fitzgibbon, Bundle Adjustment - A Modern Synthesis, in Vision Algorithms: Theory and Practice, Lecture Notes on Computer Science, (1883)(2000), pp. 298-372.
[11] M. Pollefeys, R. Koch, M. Vergauwen, and L. Van Gool, Metric 3D Surface Reconstruction from Uncalibrated Image Sequences, Proc. European Workshop 3D Structure from Multiple Images of Large-Scale Environments, Springer-Verlag, (1998), pp. 139-154.
[12] M. Lhuillier and L. Quan, A Quasi-Dense Approach to Surface Reconstruction from Uncalibrated Images, IEEE Trans. Pattern Analysis and Machine Intelligence, (27)(3)(2005), pp. 418-433.
[13] Y. Furukawa, A. Sethi, J. Ponce, and D. Kriegman, Robust Structure and Motion from Outlines of Smooth Curved Surfaces, IEEE Trans. Pattern Analysis and Machine Intelligence, (28)(2)(2006), pp. 302-315.

362-382. Image Representation. pp. 1713-1728. [19] G. Faugeras. [26] L. Chowdhury and R. IEEE Trans. [17] A. 3D Shape Reconstruction of Moving Object by Tracking the Sparse Singular Points. Chellappa. Li and J.K. . Machine Intell. IEEE Trans. (5) (1) (1994). Pattern Anal. Machine Intell. Kang. 215–220. Pattern Analysis and Machine Intelligence. IEEE Conf. pp. B. Ho and R. Robust and direct estimation of 3-D motion and Scene depth from Stereo image sequences. Press. Rebibo. Szeliski and S. pp. Image Processing. Kweon." IEEE Trans. "Motion and Structure Estimation from Stereo Image Sequences . pp. [23] P. Int’l J. pp. P. (8) (3) (1992). pp. [25] H. 70–75. 562-574. 657–667. Recovering 3D shape and motion from image streams using nonlinear least squares. (12) (1990). Robotics and Automation. pp. (1991). 39–62. Chellappa. and uniqueness results. Visual Commun. Stereo-Motion with Stereo and Motion in Complement. (22) (2) (2000). [21] L. [15] K. Young and R.K. (15) (1993). 10–28.K. 1057-1062. Jepson and D. Dornaika and R. 267–282. Cohen. (34) (9) (2001). Calway. (1999). Weng. Heeger. Proc. Machine Intell. [20] J.R. 192-197. [18] R. 57–62. Stereo correspondence from motion correspondence. IEEE Trans. Recursive Estimation of 3D Motion and Surface Structure from Local Affine Flow Parameters. IEEE Trans.D. Statistical Bias in 3-D Reconstruction from Monocular Video. Pattern Anal. Robert and O. (11) (3) (1993). D. Pattern Anal. [16] A. in Spatial Vision in Humans and Robots. pp. pp. J. pp. 3-D motion estimation using a sequence of noisy stereo images: Models. 735–759. Computer Vision Pattern Recognition. (14) (8) (2004). IEEE International workshop on Multimedia Signal Processing. H. Chung. Duncan. Computer Vision Pattern Recognition. 3-D translational motion and structure from binocular image flows. (1993).S. Ebrahimnezhad. Chung.. J. pp. pp. IEEE Trans. [22] F. 3D interpretation of optical flow by renormalization. Cambridge Univ. Ghassemian.. Kanatani. 
Curve-based stereo: figural continuity and curvature.. pp. estimation. [24] S. Computer Vision. Proc. IEEE Conf.60 Hossein Ebrahimnezhad and Hassan Ghassemian [14] A. Pattern Recognition. and N. Park and I. Linear subspace methods for recovering translation direction. (27) (4) (2005). (2006).

[27] C. Schmid and A. Zisserman, The Geometry and Matching of Lines and Curves Over Multiple Views, Int'l J. Computer Vision, (40)(3)(2000), pp. 199-233.
[28] J. Han and J. Park, Contour Matching Using Epipolar Geometry, IEEE Trans. Pattern Analysis and Machine Intelligence, (22)(4)(2000), pp. 358-370.
[29] F. Kahl and J. August, Multiview Reconstruction of Space Curves, In Int. Conf. Computer Vision, (2003), pp. 181-186.
[30] A. Mulayim, U. Yilmaz, and V. Atalay, Silhouette-Based 3-D Model Reconstruction from Multiple Images, IEEE Trans. on Systems, Man and Cybernetics, Part B, (33)(4)(2003), pp. 582-591.
[31] A. Gray, Modern Differential Geometry of Curves and Surfaces with Mathematica, Second Edition, CRC Press, (1997).
[32] Yang Liu, George Chen, Nelson Max, Christian Hofsetz, and Peter McGuinness, Visual Hull Rendering with Multi-view Stereo, Journal of WSCG, (12)(1-3)(2004), pp. 219-222.
[33] A. Bottino and A. Laurentini, Introducing a New Problem: Shape-from-Silhouette When the Relative Positions of the Viewpoints is Unknown, IEEE Trans. Pattern Analysis and Machine Intelligence, (25)(11)(2003), pp. 1484-1492.
[34] G. Cheung, T. Kanade, J.-Y. Bouguet, and M. Holler, A Real Time System for Robust 3D Voxel Reconstruction of Human Motions, In Proceedings of the 2000 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), June 2000, pp. 714-720.
[35] M. Potmesil, Generating Octree Models of 3D Objects from their Silhouettes in a Sequence of Images, CVGIP 40, (1987), pp. 1-29.
[36] R. Szeliski, Rapid Octree Construction from Image Sequences, CVGIP: Image Understanding 58, 1, July 1993, pp. 23-32.
[37] W. Matusik, C. Buehler, R. Raskar, S. Gortler, and L. McMillan, Image-Based Visual Hulls, SIGGRAPH 2000, July 2000, pp. 369-374.
[38] S. Moezzi, A. Katkere, D.Y. Kuramura, and R. Jain, Reality Modeling and Visualization from Multiple Video Sequences, IEEE Computer Graphics and Applications, 16(6), November 1996, pp. 58-63.
[39] G. Cheung, S. Baker, and T. Kanade, Shape-from-Silhouette Across Time Part I: Theory and Algorithms, Int'l J. Computer Vision, (62)(3)(2005), pp. 221-247.
[40] G. Cheung, S. Baker, and T. Kanade, Visual Hull Alignment and Refinement Across Time: A 3D Reconstruction Algorithm Combining Shape-from-Silhouette with Stereo, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition 2003, June 2003, pp. 375-382.

H. and J. Journal of Image and Vision Computing. Ponce. pp. pp. [41] W. E. Lazebnik. In Proceedings of 12th Euro graphics Workshop on Rendering. pp.62 Hossein Ebrahimnezhad and Hassan Ghassemian In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition 2003. [43] H. 1397-1420.10. Ebrahimnezhad. Matusik. June 2003. "Polyhedral visual hulls for real-time rendering". Elsevier. “Robust Motion from Space Curves and 3D Reconstruction from Multiviews Using Perpendicular Double Stereo Rigs. Kauai HI.Vol 26. Ghassemian. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR'01). et all. "On computing exact visual hulls of solids bounded by smooth surfaces". No. Vol. 2. December 2001. . 375-382. [42] S. 2008. Boyer. June 2001. 115–125. Oct.

In: Binocular Vision
Editors: J. McCoun et al., pp. 63-80
ISBN: 978-1-60876-547-8
© 2010 Nova Science Publishers, Inc.

Chapter 2

OCULAR DOMINANCE WITHIN BINOCULAR VISION

Jonathan S. Pointer
Optometric Research, 4A Market Square, Higham Ferrers, Northamptonshire NN10 8BP, UK

Abstract

Ocular dominance (OD) can be defined and identified in a variety of ways. It might be the eye used to sight or aim, or whose input is favoured when there is competing information presented to the two eyes, or the eye whose functional vision appears superior on a given task or under certain conditions. The concept, which has been the subject of much discussion and revision over the past four centuries, continues to excite controversy today. What is becoming evident is that even in its most direct and behaviourally significant manifestation – sighting preference – it must be regarded as a flexible laterality within binocular vision, influenced by the physical circumstances and viewing constraints prevailing at the point of testing. This chapter will review the phenomenon of OD in the light of the types of test used to identify it, question whether inter-test agreement of OD in an individual might be anticipated, briefly consider the possibility of any relationship between OD and limb or cortical laterality, and speculate whether OD is essentially the product of forced monocular viewing conditions and habitual use of one or other eye. The chapter will conclude with remarks addressing some practical implications of OD as demonstrated in healthy eyes and in cases where there is compromised binocular function.

Introduction: Ocular Dominance

Walls (1951: p. 394) has observed: "… dominance is a phenomenon of binocular vision: an eye does not become dominant only when the other eye is out of action". So what is 'ocular dominance'? And what conceivable benefit might such a preferent facility – apparently demonstrable in the majority of binocularly sighted persons (Ehrenstein et al., 2005; Miles, 1930) – bestow upon the individual? This chapter will describe and review the phenomenon of ocular dominance. But while its existence has been acknowledged for over 400 years, are we any nearer to understanding this putative lateral oculo-visual preference?

The human eyes, although naturally paired, not infrequently manifest functional asymmetries. The apparent behavioural performance superiority of one eye is recognised by a variety of terms: these include ocular dominance, eye preference, sighting dominance or, in terminology analogous to 'handedness' or 'footedness' (motor preference demonstrated by an upper or lower limb, respectively), eyedness (Porac & Coren, 1976). Ocular dominance (OD) is the term that will be used preferentially throughout this chapter to embrace this concept, although it should be noted that this choice is not intended to imply that any dominance is of 'ocular' origin or even a unitary concept (Warren & Clark, 1938).

Historically, discussion of eye dominance has occurred in the context of theories of binocular visual function (Wade, 1998). The lay person might perhaps encounter the phenomenon when aligning objects in space during DIY tasks or when threading a needle, when participating in aiming sports activities (eg, clay pigeon shooting), or if engaged in specific occupations or pastimes that require monocular use of an optical (usually magnification) aid (eg, microscopy, astronomy). All sighting tasks determine OD on the basis of the alignment of two objects presented at a stereo-disparity sufficiently far outside Panum's area such that fusion is denied (Kommerell et al., 2003): the subject is forced to choose between one or the other image (ie, eye). Under these circumstances one eye is unconsciously chosen (when viewing in binocular free space) or consciously selected (when using a gun or monocular instrument) to undertake the task (Miles, 1929; Porac & Coren, 1976): unconscious and conscious sighting choices as regards right or left eye use are reportedly in agreement approximately 92% of the time (Coren et al., 1979).

The clinical research scientist might consider criteria other than a sighting (motor) preference as providing a more appropriate indication of ocular laterality preference under particular circumstances. These alternatives are likely to be

performance-related measures of sensory origin: two longstanding popular examples are the eye with apparently better visual acuity (van Biervlet, 1901; Miles, 1930), or that eye which shows greater facility to suppress a rivalrous retinal image under conditions of binocular viewing (Washburn et al., 1934).

The first recorded description of what we now call OD is usually attributed to Giovanni Battista della Porta (ca1535-1615) [figure 1] in Book 6 of his treatise De Refractione (Porta, 1593: Wade, 1998). Porta (1593, pp. 142-143) described a pointing test to determine the preferred eye: a rod is held directly in front of the body and, with both eyes open, the viewer aligns the tip of the rod with a defined object in the mid-distance – the eye which retains an aligned view of the rod and the fixation object when the eyes are alternately closed is the preferred (sighting) eye. Translations of Porta's text, originally published in Latin, have been provided by Durand & Gould (1910) and Wade (1998). It has also been suggested (Wade, 1987, p. 793) that the Flemish artist Peter Paul Rubens (1577-1640) might have made an engraving depicting Porta's sighting test.

Figure 1. Giovanni Battista (sometimes Giambattista) della Porta (born probably late 1535, deceased 4 February 1615): Neapolitan scholar, polymath and playwright. Portrait in profile: frontispiece engraving to expanded 20-volume edition of Magiae Naturalis (Naples, 1589).

It might also be noted that around 330BC Aristotle (ca384-322BC) described (without appreciating the significance of his observation) the ability of most individuals to wink or temporarily close one eye despite both eyes having similarly acute vision (Ross, 1927).

Over the years numerous techniques have been devised to define OD (Crider, 1944). Walls (1951) compiled a list of 25 tests (and indicated that his inventory could not be regarded as exhaustive), Gronwall & Sampson (1971) compared 18 techniques, and Coren & Kaplan (1973) analysed 13 measures. The issue of test appropriateness and comparative inter-technique agreement will be addressed below. Suffice to say here that on the grounds of the results of Coren & Kaplan (1973) if one were to choose a single technique to predict eye laterality it could reasonably – as four hundred years ago – be that of sighting alignment. Furthermore, subsequent research has suggested that the sighting eye thus determined seems to extract and process visual spatial information more efficiently than its fellow (Porac & Coren, 1977; Shneor & Hochstein, 2006 and 2008). For the interested reader Wade (1998) provides a wide historical survey of studies of eye dominances: his narrative places discussion of reported oculo-visual preferences in the context of evolving theories of human binocular visual function.

Demography of Ocular Dominance

The majority of individuals can record a sighting (motor) dominant eye. This laterality is apparently established in early life, possibly around four years of age (Barbeito, 1983; Dengis et al., 1993), and becomes stable by the middle of the human development period (Dengis et al., 1996). Any contribution of genetic factors to an individual's OD has been little explored. Porac & Coren (1981) have concluded that familial traits are absent; Reiss (1997) has expressed equivocation, although admitting that in an examination of fresh family data OD failed to align with any direct recessive or dominant Mendelian model of genetic transfer.

A number of studies throughout the twentieth century have explored the laterality distribution of OD in large samples of normally sighted human populations. Porac & Coren (1976: Table 1, p. 884) have surveyed much of this work, drawing on studies published between 1929-1974, and undertaken in North America, the UK, Japan, Australia, and Africa. The broad conclusion was that approximately 65% of persons sighted with their right eye, 32% with the left and

only 3% demonstrated no consistent preference. Chronological age and country of origin appeared to be negligible influences in this compilation.

Subsequent to this summary, the same authors (Porac & Coren, 1981) published the results of a substantial questionnaire-based population study of eye (also ear and limb) laterality. The survey was completed by 5147 individuals (representing an approximately 25% response rate to the mailing) across North America. Results indicated that 71.1% of respondents were right sighting preferent; the balance (28.9%) of respondents were left sighting preferent; this apparently included a tiny proportion of persons who were unable to indicate a preference. Adults in the North American postal survey of Porac & Coren (1981) were possibly more dextral than children, but the trend with advancing chronological age was weak and not statistically significant. Replicating the outcome of a previous study (Porac et al., 1980), males (at 72.9%) were revealed in this survey as being statistically significantly more right eyed than females (69.1%): this gender imbalance has recently been reported again in an independent study (Eser et al., 2008). Male subjects have also been shown (Porac & Coren, 1975) to be statistically significantly more consistent than females (81% versus 63%, respectively) in their sighting preferences (regardless of whether laterality was dextral or sinistral).

Suggestions that refractive error and OD might be associated have not been substantiated in a recent large population study of adult subjects (Eser et al., 2008). Furthermore, over a two-year longitudinal study (Yang et al., 2008) the development of childhood myopia has been shown to be free of the influence of OD.

A Taxonomy of Ocular Dominance

Over the four centuries subsequent to Porta's (1593) description of sighting preference the bibliography of the topic covering theoretical, practical and conjectural issues has expanded to perhaps 600 or more articles (Coren & Porac, 1975; updated by Mapp et al., 2003). Unfortunately this burgeoning literature has not produced a consistent or unifying theory of OD. As others have voiced previously (Flax, 1966; Warren & Clark, 1938) we can still legitimately ask: "What is the purpose of ocular dominance?" Controversially, does it have a purpose or – given that the eye that is considered dominant in a given person might vary with the task and circumstances (see below) – might the phenomenon be considered an artefact resulting from a particular test format or approach?

A way forward might be provided by considering a taxonomy of eye dominance based on whether OD might be regarded as a unitary concept or a multi-factorial phenomenon. There have been those who claim a generalised laterality (examined by Porac & Coren, 1975), with OD matching hand/foot preferences: an early supporter of absolute unilateral superiority of sensori-motor function was the anatomist G. M. Humphrey (1861, pp. 167-169). A weakness of many of these claims is that essentially they are based on individual observation or founded on theoretical considerations associated with the extant literature. Others (Berner & Berner, 1953; Cohen, 1952) considered OD to be composed of two factors, namely sighting dominance and rivalry dominance. Jasper & Raney (1937) added acuity dominance and a dominant cerebral hemisphere to motor control. Walls (1951) regarded OD as a composite of sensory and motor (eye movement) dominance. Lederer (1961) contemplated five varieties of OD (a proposal examined further by Gilchrist, 1976): monocular sighting dominance, motor dominance of one eye in binocular circumstances, sensory dominance of one eye, dominance of the lateral (right or left) hemi-field, and orientational dominance (as discussed by Woo & Pearson, 1927). More recently Gronwall & Sampson (1971) and Coren & Kaplan (1973) have both brought some enlightenment to a reappraisal of the issue by actually undertaking fresh examinations and applying modern comparative statistical analyses to the results. The outcome of the study by Coren & Kaplan (1973) in particular provides us with an entry into our probing of the form and function of OD.

Is Ocular Dominance Test Specific?

Coren & Kaplan (1973) assessed a group of fifty-seven normally sighted subjects using a battery of thirteen OD tests that intentionally covered the broad range of sensori-motor approaches suggested by other investigators over the years. The test scoring methodology indicated the strength as well as the laterality of dominance. A factor analysis of the results identified three orthogonal determinants of OD: (i) sensory – as revealed by (form) rivalry of stereoscopically presented images; (ii) acuity – a visual functional superiority indicated directly by the comparative inter-eye level of Snellen visual acuity; and (iii) sighting – ocular preference indicated by an aiming, viewing or alignment type of task. These three alternative bases for a dominant eye will each be considered further.

I. Tests of Asymmetry

Historically (as summarised by Wade, 1998) Aristotle in the third century BC contended that, although both eyes possessed equal visual acuity, superior accuracy was achieved using one eye; this opinion prevailed for nearly two millennia. Only in the eighteenth century was the possibility of an inter-ocular acuity difference given wider credence. Unfortunately many investigators have been tempted to claim the better-sighted eye as the dominant one (Duke-Elder, 1952; Pickwell, 1986; Mallett, 1988a). The problem is that the evidence base for such a supposition is weak (Pointer, 2001 and 2007). Nowadays it is recognised that in normally sighted (non-amblyopic) individuals the level of visual acuity is demonstrably similar in the two eyes (Lam et al., 1996), but not atypically one eye might perform very slightly better than its fellow; laterality correlations with other recordable ocular asymmetries (eg, vergence and version saccades: Barbeito et al., 1972) are absent or unexplored, throwing into question reliance upon oculo-visual functional asymmetries as indicators of OD either in the individual or on a universal scale.

II. Tests of Rivalry

Suppression of rivalrous stimuli has been a consistently recognised feature of eye dominance since the earliest writing on the phenomenon (Porta, 1593: Wade, 1998). The viewing of superficially similar but subtly different stimulus pairs, separated using different coloured optical filters or cross-polarised lenses, can be used to quantify the proportion of time that the view of one or the other eye holds sway in the binocular percept (Washburn et al., 1934). But this inter-eye rivalry is only possible when the stimuli are small and discrete (Levelt, 1968). Suppression of competing stimuli is demonstrably not limited to one eye, being usually in a state of flux, immediately questioning the ability of such a rivalry technique to provide a wider 'global' indication of OD.

III. Sighting Tests

Probably the most direct and intuitive demonstration of OD originates with the pointing/alignment test described by Porta (1593). Test variations are many but include viewing a discrete distant target either through a circular hole cut in a piece of card or through a short tube (Durand & Gould, 1910), or through the circular aperture created at the narrow end of a truncated cone when the wider end is held up to the face (A-B-C Test: Miles, 1928 and 1929; also Henriques et al., 2002). Prima facie OD thus determined is usually clearly defined and consistent within (Miles, 1929; Gronwall & Sampson, 1971; Porac & Coren, 1976) and – importantly (see below) – between tests (Coren & Kaplan, 1973; Porac & Coren, 1981). Sighting dominance in the study by Coren & Kaplan (1973) accounted for the greatest proportion (67%) of the variance among all of the tests analysed. Unfortunately, it has come to be realised that even sighting dominance is not entirely robust, being subject to possible corrupting influences which include: observer (subjective) expectation and test knowledge (Miles, 1929); aspects of operational detail for even this simplest of tests (Ono & Barbeito, 1982); and not least (for sighting tests which have to be held) the potentially modulating influence of hand use (Carey, 2001). Also, the fact that dominance has been shown to cross between eyes at horizontal viewing angles as small as 15 degrees eccentricity (Khan & Crawford, 2001), possibly as a result of relative retinal image size changes (Banks et al., 2004). The dilemma that arises when OD appears to be test specific is simply where to place one's reliance: has dominance switched between eyes or is the outcome an artefact of the testing technique? This rather begs the question: what is the purpose of OD?

Some Misconceptions

It remains a belief in some quarters that OD is a fixed characteristic in a given individual (a claim disputed by Mapp et al., 2003) or even, in an unwarranted leap of reasoning, displays the same laterality as limb (hand/foot) preferences (Delacato, 1959). Apparently one has only to apply one's choice of test(s) as summarised in the previous section (or identify the writing hand or the ball-kicking foot) and the laterality of the dominant eye is established. But as we have just discussed, these several tests unfortunately indicate that more often than not OD appears to vary with test selection or circumstances. Furthermore, the majority of studies investigating OD in tandem with hand (and rarely, foot) preference have failed to show a congruent relationship: selected examples include Annett (1999), Coren & Kaplan (1973), Gronwall & Sampson (1971), McManus et al. (1999), Merrell (1957), Pointer (2001), Porac & Coren (1975), Quartley & Firth (2004), and Snyder & Snyder (1928).

The content of the previous section of this chapter was based on the results of the modern analytical comparative study of OD tests undertaken by Coren & Kaplan (1973). Three statistically significant but independent features were common in analysis: viz, the eye which most dominated during binocular rivalry tests, the eye with the better visual acuity and (most significantly) the eye used for sighting. This outcome formalised the several results reported by many investigators before and since: in fact Mapp et al. (2003) have listed 21 such relevant studies published between 1925 (Mills) and 2001 (Pointer). Put succinctly, OD measured in the individual with one test format does not necessarily agree with that determined using an alternative approach.

Hemispheric cortical specialisation has frequently been claimed as the causal factor underlying all dominances of paired sensory organs or motor limbs. However as long as seventy years ago Warren & Clark (1938) disputed any relation between OD and cortical laterality, but still speculation has not been entirely silenced. Undaunted, a neuro-anatomical explanation for this functional inconsistency has been essayed: suggestions continue to be made that a greater proportion of the primary visual cortex is activated by unilateral stimulation of the dominant eye than by that of the companion eye (eg, Menon et al., 1997; Rombouts et al., 1996). The justification for this assertion, linking a monocular task with a binocular system, is doubtful (Gilchrist, 1976). Quite simply, what must be remembered in the specific case of human ocular neuro-anatomy is that there is semi-decussation of the optic nerve fibres at the optic chiasma (Wolff, 1968), which results in the unique bi-cortical representation of each eye (Duke-Elder, 1952). This situation is quite unlike the straightforward contra-lateral cortical representation pertaining for the upper or lower limbs: ocular neuro-anatomy denies any unifying concept of laterality.

With an equal longevity to misunderstandings surrounding a claimed cortical basis for OD is the suggestion that sighting laterality provides the reference frame for the spatial perception of visual direction (Khan & Crawford, 2001; Porac & Coren, 1976; Sheard, 1926; Walls, 1951). It has been convincingly argued by Mapp et al. (2003, pp. 313-314), drawing on both their own research and independent evidence in the literature, that both eyes participate in determining visual direction (Barbeito, 1981 and 1986; Flax, 1966) and not the sighting dominant eye alone (eg, Khan & Crawford, 2001). This paired influence is also of course in accord with Hering's (1868/1977) concept of binocular vision, wherein the two eyes are composite halves of a single organ.

Resolving the Paradox of Ocular Dominance

From the foregoing discussion of the phenomenon of OD and attempts to define and measure it, we have apparently arrived at a paradoxical position. On the one hand the majority of binocularly sighted persons would quite reasonably claim to have experienced a demonstration of OD; on the other hand we are unable to confirm uniformity of laterality in the individual, or identify a clearly defined oculo-visual role for the phenomenon.

It is evident that the sighting (syn. aiming or alignment) test format is the only technique that apparently has the potential to identify OD consistently in a binocularly sighted individual (Coren & Kaplan, 1973). This form of task specifically allows the selection of only one eye to accomplish its demands (Miles, 1928); whether by ease or habit, most individuals usually perform reliably and repeatedly in their choice of eye under these constrained circumstances (Smith, 1970; Porac & Coren, 1976). Thus the result when undertaking a sighting task, as we have touched on in the previous section of this chapter and also stated when discussing tests of rivalry, is that the image in the dominant eye prevails. However, in the individual with normal (ie. non-amblyopic) sight, suppression of competing or rivalrous stimuli is not limited to one eye and one eye alone but rather is fluid from one moment to the next.

But how reasonable is it to suggest that the highly evolved human visual system, which normally seeks to maintain binocular single vision, will 'naturally' occasionally resort to monocularity? This has led Porac & Coren (1976) to speculate that perhaps OD is important to animals (including man) where the two eyes are located in the frontal plane of the head: this anatomical arrangement means that the left and right monocular visual fields display a substantial binocular overlap, with the functional consequence of enhanced depth perception through binocular disparity. Of the non-human species, only primates have been considered to show characteristics consistent with having a sighting dominant eye (Cole, 1957). And in this regard, given that there is fusion only for objects stimulating corresponding retinal points, perhaps suppression of the image in the non-dominant eye removes the interference arising from disparate and non-fusible left and right images that would otherwise confuse the visual percept. The apparent consistency of right and left OD proportions across the human population, and the almost universal demonstration of OD from an early (preschool) age, have inclined Porac & Coren (1976, p. 892) to the opinion that: "… monocular viewing via the dominance mechanism is as natural and adaptive to the organism as binocular fusion".

It is precisely when the restricted condition of monocular sighting is denied that the consistency of eye choice deteriorates. For example, Barbeito (1981) reported that when performing the hole-in-the-card test with the hole covered (or the Porta pointing test with the view of one's hand obscured) the imagined hole (or unseen finger or rod) is located on a notional line joining the target and a point between the eyes. Interestingly, a similar effect is observed in infants: when asked to grasp and look through a tube, they will invariably place the tube between rather than in front of one or other eye (the Cyclops effect: Barbeito, 1983). Dengis et al. (1998) have replicated this latter outcome in binocular adult subjects: an electronic shutter obscured the view of the target as a viewing tube was brought up to the face, resulting in failure to choose a sighting eye but rather placement of the tube at a point on or either side of the bridge of the nose.

Gathering these strands together, perhaps it is possible to reconcile the conflicting views of OD by considering it to be (in older children and adults) a phenomenon demonstrated under constrained viewing conditions (Mapp et al., 2003), with convenience or personal habit (Miles, 1930) forcing the (likely consistent) choice of one or the other eye. Consequently, the functional significance of OD simply extends to identifying which of a pair of eyes will be used for monocular sighting tasks (Mapp et al., 2003). The elaboration of convoluted, parsimonious or test-specific explanations of OD in an attempt to reconcile observed phenomena with the known facts regarding binocular vision then becomes unnecessary.

Some Clinical Implications of Ocular Dominance

The concept of OD as a phenomenon identified under circumstances where monocular forced-choice viewing conditions prevail does not exist in a void. While the normally sighted binocular individual may not substantially depend on a dominant eye for the majority of daily activities, the identification of a preferred (sighting) eye could become functionally beneficial under specific circumstances. In an optometric context, for example, for the maximum relief of symptoms (including blurred vision and headaches) associated with uncompensated heterophoria (a tendency for the two eyes to deviate from the intended point of fixation), Mallett (1988b) advised that the greater part of any prescribed prism should be incorporated before the non-dominant eye; clinical experience suggests that inattention to this point might fail to resolve the patient's asthenopia or visual discomfort. In summary, this chapter will conclude with a consideration of the clinical implications of OD in patients for whom unilateral refractive, pathological or physiological (usually age-related) changes might impact on their binocular status.

A specialist clinical area that has come to the fore in recent years is that of monovision, a novel method of correcting presbyopia (the physiological age-related deterioration in near vision as a consequence of impaired accommodative ability). In this approach (Evans, 2007), one eye is optically corrected for distance viewing and its companion focused for near. Prospective candidates for this procedure include middle-aged (usually longstanding) contact lens wearers, and persons electing to undergo laser refractive surgery. Appropriate patient selection and screening are essential features (Jain et al., 1996). Medico-legal, occupational and vocational caveats surround such a specific approach to visual correction, and subjects usually require a variable period of spatial adaptation due to compromised binocular function and the evident necessity for visual suppression as viewing circumstances demand. These and related features associated with this slightly controversial clinical modus operandi have been well reviewed as experience with the procedure has evolved (McMonnies, 1974; Erickson & Schor, 1990; Evans, 2007) so will not be considered further here.

Typically in monovision it is the dominant eye (usually identified by a sighting test) that is provided with the distance refractive correction, ie. that eye which is likely to take on the distance-seeing role; near focus tasks are allotted to the companion eye. This allocation recognises (Erickson & Schor, 1990) that performance with the dominant eye is superior for spatio-locomotor tasks (including ambulatory activities and driving a vehicle), such actions relying on an accurate sense of absolute visual direction. In addition, it is claimed that this 'dominant/distance' clinical approach produces better binocular summation at middle distance and reasonable stereo-acuity at near (Nitta et al., 2007). Others have disputed the necessity to adhere to such a rigid rule. While the visual sensori-motor system shows great adaptability and, as we have seen, OD can switch between eyes depending upon circumstances, great care should be taken when practitioners choose to prescribe monovision. Given the wealth of stimuli and changeable viewing circumstances in the 'real world', a fundamental area of concern remains centred on the identification or the procedural choice of the 'dominant' eye, not only when fitting contact lenses (Erickson & McGill, 1992) but also when undertaking refractive surgery (Jain et al., 2001). However, even sighting tests may not reliably indicate which eye should have the distance correction; furthermore, such tests cannot accurately predict or guarantee the success of monovision in an individual.

A naturally occurring version of monovision might temporarily appear in patients affected by physiological age-related lens opacities (cataracts). The visual deterioration usually occurs in both eyes, but often to differing degrees, and progresses at different rates, eventually requiring first then frequently second eye surgery with artificial lens implantation to restore reasonable visual acuity. It has been reported (Talbot & Perkins, 1998) that following the improvement in binocular function after second eye treatment the laterality of OD may change. It has also been recorded (Waheed & Laidlaw, 2003) that the debilitating effects of monocular injury or pathology may more markedly impair mobility or quality of life if it is the performance of the hitherto sighting dominant eye that is primarily compromised. Allied to this, the possibility of sighting dominance switching in patients with unilaterally acquired macular disease cannot be discounted (Akaza et al., 2007).

Conclusion

Since classical times the paradoxical question of why, in binocular vision, unilateral sighting might be considered an advantage compared to the continued use of two eyes has continued to be asked (Wade, 1998). Unfortunately, the years of burgeoning knowledge have perhaps tended to obscure rather than clarify many issues surrounding OD and its relation to oculo-visual performance. While the functional basis of OD remains uncertain in a species with a highly evolved binocular visual system, its demonstrable existence in the majority of normally sighted individuals has been linked to a number of perceptual and clinical phenomena. However, what could be the oculo-visual purpose or benefit to the individual of a dominant eye whose laterality can apparently be modified by test conditions (Carey, 2001), by vision training (Berner & Berner, 1953) and by attentional factors (Ooi & He, 1999)? Perhaps OD is no more than a "demonstrable habit" (Miles, 1930), adopted when viewing circumstances demand that only one eye can conveniently be used (Porac & Coren, 1976). What can be stated is that OD must be recognised as a dynamic concept, fluid and deformable in the context of specific viewing conditions and with regard to the methods used to identify it. It might be that a conclusion drawn by Warren & Clark (1938: p. 302) seventy years ago remains pertinent today: "eye dominance as a single unitary factor does not exist".

J. not eye position. A. R828-R830. Am. & Yuzawa. Banks. C. Ghose. H. Annett. 44. 31. 327-331. Percept. Barbeito. J. The Treatment and Prevention of Reading Problems (The Neuropsychological Approach). Optom. M. . 50. J. The Cyclops effect in adults: sighting without visual feedback. J. Coren. van Biervlet.Opt. Vision Res. Cole. (1959). W. M. Belgique. Ocular dominance: an annotated bibliography. M.. (Ms. determines eye dominance switches. (1952). 50. D. 561-564. Am. Optom. H. J. H. (1901). Illinois: Charles C. S. J.. 11. Laterality. Arch. Eye dominance. (1944). 855-860. P. Crider.. 65. S. Acad. Two factors affecting saccadic amplitude during vergence: the location of the cyclopean eye and a left-right bias. and eye in monkeys. T. 3.. 55-64. Springfield. Eye dominance in families predicted by the right shift theory. Berner. Coren. (1981). T. Nippon Ganka Gakkai Zasshi. 296-299. 38.. & Kaplan. (2001). (1975). Barbeito. Vision Res. Steinbach. 167-172. C. L. E. 229-234. Bull. S. Sci. 33. C. (1998). Shimada. & Berner. 4. & Ono. (1953). 179-190. A behaviourally validated self-report inventory to assess four types of lateral preference. Am. l’Acad. & Ono. C.. No. J. Barbeito. Psychol. Ophthalmol. (1973). G. foot. 679-694. Gen.. P. 322-325 [in Japanese]. R. M. J. Thomas. D.. (1983). 922). C. J. Biol. Psychol. 5. Tam.. (2004).. E.. Curr. 111. (1979). (1957). Comp.. 6. Laterality in the use of the hand. Coren. Neuropsychol. Roy. Relation of ocular dominance. 634-636. R. B. 229-230. E. Ophthalmic Physiol. P. 283-292. Arch.. & Porac. Porac. A battery of tests for the dominant eye. Dengis.. & Duncan. (1999). J.76 Jonathan S. A. 50.. Carey. Clin. (2007). Sighting from the cyclopean eye: the Cyclops effect in preschool children. Psychol. Relative image size. 603-608.. Cohen. Nouvelle contribution a l’étude de l’asymetrie sensorielle. M. Patterns of ocular dominance. & Hillis. S. handedness and the controlling eye in binocular vision. A. Physiol. J. J. 
Sighting dominance in patients with macular disease. K. 201-205. M. Psychophys. 1. Vision research: Losing sight of eye dominance. Sighting dominance: an explanation based on the processing of visual direction in tests of sighting dominance. (1986). JSAS Catalogue of Selected Documents in Psychology. Delacato. R... 21. Pointer References Akaza. Vision Res. Fujita. H. Simpson.

Med. 32-39. C. Ophthalmic Physiol.. & Azar.. M. P. (1992).. Prog. Duke-Elder. A. (2007). G. Sci. 55. Erickson. E. H. Ophthalmol... S. (1977). and ocular dominance in monovision patient success. London: Henry Kimpton. . Eser. H. Durrie. Am.. Refract.. Pediatr. Optom. W. Vision Res. P. Dengis. Surg. Role of visual acuity. 36. E. 43. C. D. D. Bridgeman & L. D. Gunther. N. Flax. Ehrenstein. C. Visuomotor transformation for eye-hand coordination. Optom.. & Stager.. Monovision: a review. New York: Plenum. D. and trans.. Acad. B. Psychol. B. Fanfarillo. (1861). 201 et seq. A method of determining ocular dominance. (2005). (1993). 69. J. & Gould. Visual function with presbyopic contact lens correction.. 926-932. Exp. 22-28. Clin. 491-499. 24. and monocularly enucleated children. (1910). 369-370. E.. Vis. (1996). J. (1996). & Schor. C. Medendorp. Br. Evans. stereoacuity. Eye preference within the context of binocular functions. The Theory of Binocular Vision (B.Ocular Dominance within Binocular Vision 77 Dengis. Surv. M. 323-326. & Sampson. 62. J. Erickson. A. H. Am. Textbook of Ophthalmology. & Stahl. C.). & McGill. P.. Optom. 140. The clinical significance of dominance. Arch. Goltz. R. I. S. (1971). J. & Jaschinski. 4.. W.. Brain Res. 3237-3242. Graefes Arch. (Originally published 1868). 417439. T. Vol. Strabismus. 685-689. 243. A. Cambridge: Cambridge University Press. J. J. J. Jain. Stark. J. J.. pp. M. Visual alignment from the midline: a declining developmental trend in normal. 27. Opt. Am. M.. (2008). 30. 175-185. M. Optom. Humphrey. 566-581. J.. strabismic.. M. G. & Crawford. Henriques. Assoc. J. Association between ocular dominance and refraction. Ocular dominance: a test of two hypotheses. Physiol. C. Arora. Ophthalmol. (1990). Steinbach. E. Learning to look with one eye: the use of head turn by normals and strabismics. (1952). Khan. W. Vis. Schwendeman. 329-340. K. A. Durand. Y. 
Success of monovision in presbyopes: review of the literature and potential applications to refractive surgery. Opt. Eds.. The Human Foot and the Human Hand. I. Dominance in the visual system. 40. W. Steinbach.C. Ophthalmol. N... S. Z. Sci. 67. Steeves. Arnold-Schulz-Gahmen. H. Br. 31. J. J. Gilchrist. Hering.. 761764. (1966). & Postiglione. (2002). Ono... (1976). L. D. F. Gronwall. S.

Optometry (p.. S. C. Ogawa.. & Boucher. 2780-2787.. (1997). (1929). 1397-1403. 189-195. 65. D. E. & Man. R. In K.. 57.. The phi test of lateral dominance. writing hand.. R. Lederer. Monovision outcomes in presbyopic individuals after refractive surgery. & Azar. Psychol. (2003). B. Effect of naturally occurring visual acuity differences between two eyes in stereoacuity... Mallett. H. Y. R. (1999). Menon. Percept. K. Aust.. M. McManus. J. (2001). T. Ocular dominance demonstrated by unconscious sighting. S. On Binocular Rivalry. Levelt. 44. & Crawford. Ophthalmol. J. 108. 113-126. & Raney. (2001). (1937). Vision Res. 281-282). Bull. Llewellyn (Eds. (1961). J. J. W.. (1925). What does the dominant eye dominate? A brief and somewhat contentious review. J.. Y. R. 8 (Series 3). Ocular dominance. M. Optometry (pp. S. G. J. A. 12. S. London: Butterworths. 310-317.. Gen. W. 29. A.). M. Miles. & Bach. Mills.. (1968).. Llewellyn (Eds. Miles. D. Edwards & R. J.. 43. 25. 77. J. Laterality. 1743-1748. H. J. 450-457. Schmitt. R. Jasper. (1996). (1988b). Ocular dominance: methods and results. D. Ophthalmic Physiol. Optom. Ono. H. Paris: Mouton.78 Jonathan S. Techniques of investigation of binocular vision anomalies. 933941.). O. Pointer Jain. Opt. Ocular dominance reverses as a function of horizontal gaze angle. 155-156.. J. P. G. Z. J. 28-32. McMonnies. Lam.. (2003). 49. Am. Leung. 570-574. H. Merrell. Lam. W.. Optom.. T.. 173-192. Ocular dominance in human adults. C. Chau. The management of binocular vision anomalies. Vision Res. P. Psychophys. Ocular dominance in human V1 demonstrated by functional magnetic resonance imaging. In K. 1430-1433. J. K. Aust.. & Ugurbil. W. (1988a). R. (1930). (1928). Am.. P. Edwards & R. Ou. Mallett. Psychol. Eye dominance. Y. S. Kromeier. L. London: Butterworths. Strupp. 4. J. Monocular fogging in contact lens practice. Ophthalmology. M. Bryden. A. Mapp. Kommerell. Psychol. R. F. Human Biology. W. 16. Porac. C. I. Psychol. C. J. Exp. F.. 
Dominance of eye and hand. Miles. W. . R. & Barbeito. 412420. R. Ocular prevalence versus ocular dominance. and throwing hand. (1974). A. Khan. 41. (1957). 314-328. Eyedness and handedness. 3. C. 266). Neurophysiol. 531-539.

Y. J. Psychophys. Rombouts. Laterality. Percept. handedness. Porac.. J. 715-721. Pickwell. (1593). & He. Vision Res. . 35.. Perception. (1986). S. W.. (1975). 1709-1713. 111. Nippon Ganka Gakkai Zasshi. Binocul. Porac. & Barbeito. and visual acuity preference: three mutually exclusive modalities? Ophthalmic Physiol. S. Lateral Preferences and Human Behaviour. H.. 83. Sighting dominance. & Coren. Sighting dominance and egocentric localization. (1997). (1982). B. The absence of lateral congruency between sighting dominance and the eye with better visual acuity. 880-897.. 341-346. C. S. J. (1996). J. Ono. Pointer. The dominant eye. (1927). Shimizu. The functional basis of ocular dominance: functional MRI (fMRI) findings. Mot. A. & Firth. Percept.. Porac. Porac. C. (1981). 117-126. the sighting-dominant eye as the centre of visual direction. Opt. 201-210. 2530. 21. (1972). M. 40. Ross. Bull. The Works of Aristotle: Volume 3 (Ed. L. S.. (1976). S. T.). Lett. & Coren. Coren. 32. Ooi. & Coren.. (1977). 7-16. 1499-1507. 26. C. (2004). (2007). Oxford: Clarendon Press... A. & Scheltens. Valk.. Vision Res. Libri Novem. The cyclopean eye vs. L. 28. Reiss. 221. 1-4. S. J. Quartley. 551-574. R. New York: Springer-Verlag. Psychol. C. J. 19. 106-110. M.. S.. Porta. Binocular rivalry and visual awareness: the role of attention. G. Psychophys. (1999). Z. Optices Parte. Sprenger. Is eye dominance a part of generalized laterality? Percept. Binocular sighting ocular dominance changes with different angles of horizontal gaze. & Niida. P. Vis.Ocular Dominance within Binocular Vision 79 Nitta. T. Life-span age trends in laterality. Strabismus Q. De Refractione. D. Naples: Salviani. 21. C. Ocular dominance: some family data. F. S. The influence of ocular dominance on monovision – the interaction between binocular visual functions and the state of dominant eye’s correction. Hering’s law of equal innervation and the position of the binoculus. 763-769. Ophthalmic Physiol. Gerontol. Neurosci. 
12. (2007). K. M. & Coren. Porac. P. (2001). & Coren. 434-440 [in Japanese]. Opt. Porac. D. 2. Barkhof. Pointer. 27. Skills. (1980). C. The assessment of motor control in sighting dominance using an illusion decrement procedure. & Duncan. S.. R.

A... Early studies of eye dominances. (2003). Z.. J. The benefit of second eye cataract surgery. L. E. Ophthalmol. (1926). S. (1987). 12. 3. 626-628. N. Wade. A. (1970). Woo. Eye. 785818. 341-344). W. N. Chen. L. 45. J. A. & Clark. Wolff. Psychol. W.. Liu. 558-567. 19. Snyder. 87. M. 983-989. E. Invest. J. M. 97-108. & Perkins. E. 48.. A theory of ocular dominance. & Pearson. Vision Res. & Hochstein. 7. J. Lewis. On the late invention of the stereoscope.. 4258-4269... H. 633-636. 657-658. L. 431-433. Ophthalmol. J. Eye dominance in a monkey. J. Walls. Mot. 4779-4783. Anatomy of the Eye and Orbit (6th edition. Shneor. The Visual Pathway. Vision Res. Ophthalmol. and visual handicap in patients with unilateral full thickness macular holes. (1928). C. T. Am. Optom. Talbot. Eye preference tendencies. N. X. Nie. . 49. 16. In R. H. Optom. 19. 31. Last (Ed. Dextrality and sinistrality of hand and eye.).. Yu. Arch. A.. Br. J. Vis. (2006). Warren. Am. R. B.. D. K. C. & Laidlaw. & Scott. & Snyder. London: H. 46. Wade. E. 387-412. Biometrika. Pointer Sheard. Eye dominance effects in conjunction search. Smith. Washburn. 35. Waheed. J. Shneor. Ed. Bull.. M. M. Psychol. Association of ocular dominance and myopia development: a 2-year longitudinal study. 165-199. (1968). pp. S. Disease laterality. Psychol. M. (2008). Unilateral sighting and ocular dominance. A consideration of the use of the term ocular dominance. Am. A. A. (2008). Percept. Laterality. 46. Eye dominance effects in feature search. (1927). Arch. (1998). Faison. 298-304. L. & Hochstein.. A comparison between the Miles A-B-C method and retinal rivalry as tests of ocular dominance.80 Jonathan S.. Perception. (1998). eye dominance. (1934). 1592-1602. M. G. Yang. (1951). Skills. K. Acad. & Ge. Sci. F. (1938).. Lan.

In: Binocular Vision, Editors: J. McCoun et al., pp. 81-105
ISBN: 978-1-60876-547-8 © 2010 Nova Science Publishers, Inc.

Chapter 3

THREE-DIMENSIONAL VISION BASED ON BINOCULAR IMAGING AND APPROXIMATION NETWORKS OF A LASER LINE

J. Apolinar Muñoz-Rodríguez*
Centro de Investigaciones en Optica, A. C., Leon, Gto., 37150 Mexico

Abstract

We present a review of our computer vision algorithms and binocular imaging for shape detection in optical metrology. In this technique, the object shape is recovered by means of laser scanning and binocular imaging. A Bezier approximation network, which performs the shape reconstruction, computes the object surface based on the behavior of the laser line: the binocular images of the laser line are processed by the network to compute the object topography. By applying Bezier approximation networks, the measurements of the binocular geometry are avoided; the parameters of the binocular imaging are computed based on the Bezier approximation network. The binocular imaging avoids occlusions, which appear due to the variation of the object surface. Thus, the performance of the binocular imaging and the accuracy are improved, because the errors of the measurement are not added to the computational procedure. To describe the accuracy, a mean square error is calculated. This procedure represents a contribution to the stripe projection methods and the binocular imaging. The study of this chapter involves: laser metrology, binocular image processing, neural networks, and computer vision parameters.

* E-mail address: munoza@foton.cio.mx. Tel: (477) 441 42 00.
This technique is tested with real objects and its experimental results are presented. Also, the time processing is described.

Keywords: Binocular imaging, Bezier approximation networks, laser line.

1. Introduction

In computer vision, optical systems have been applied for shape detection. The use of structured illumination makes the system more reliable and the acquired data are easier to interpret. A particular technique is the laser line projection [1-3]. The main aim of the line projection is the detection of the line behavior in the image [4-6]. When a laser line is projected on an object, the line position is shifted in the image due to the surface variation and the camera position. Based on the line behavior in the image, a mathematical relationship between the line position and the object depth is generated. From the line position and the geometry of the optical setup, the object contour is deduced. Typically, the geometry of the setup is measured to obtain the object topography. Also, the disparity is needed to retrieve the object depth in a binocular system [9]. The disparity is detected by measuring the position of the line displacement in the image. When the surface variation produces an occlusion, the line position can not be detected. Therefore, in the area of the line occlusion the topographic data are not retrieved [7-8]. When the surface produces a line occlusion, the occluded area appears in only one of the two images. Thus, the data missed in each image can be retrieved from each other. In this manner, the line occlusion is avoided by the binocular imaging.

In the proposed technique, the object is moved in the x-axis and scanned by a laser line. In each step of the scanning, a pair of binocular images of the line is captured. The line position is detected by means of Bezier curves with a resolution of a fraction of a pixel. Thus, the disparity is determined by detecting the line position in each pair of the binocular images. This position is processed by the network to determine the object shape. The Bezier network is constructed using the disparity of the line, which is projected on objects with known dimensions. By means of a Bezier network, the object contour is deduced: the network calculates the depth dimension for a given position of the stripe displacement. In the proposed setup, the position of the line disparity is proportional to the object depth. In this manner, the measurements of the focal length and the distance between the two cameras are avoided. Also, occluded areas are retrieved by the binocular imaging. Thus, the object shape is retrieved. Also, all steps of the proposed technique are performed automatically by
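In outline, the per-step scanning procedure just described can be sketched as below. This is only an illustrative sketch, not the authors' code: `detect_position` stands in for the sub-pixel line detector and `network_depth` for the trained Bezier network, and both names are hypothetical placeholders.

```python
def reconstruct_surface(scan_steps, detect_position, network_depth):
    """Scanning loop sketch: each step of the x-axis scan yields one
    binocular image pair; the line position is detected in both images
    and mapped to a depth profile (one transverse section)."""
    surface = []
    for img_a, img_e in scan_steps:          # one binocular pair per step
        pos_a = detect_position(img_a)       # line position, camera a
        pos_e = detect_position(img_e)       # line position, camera e
        section = [network_depth(pa, pe) for pa, pe in zip(pos_a, pos_e)]
        surface.append(section)              # store the transverse section
    return surface
```

Stacking the per-step sections in an array reproduces the complete object shape, as the chapter describes.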

computational algorithms. This kind of computational process improves the performance and the accuracy. Thus, the physical measurements on the setup are avoided, and a contribution is provided in the binocular imaging methods. The results obtained in this technique are achieved with very good repeatability.

2. Basic Theory

The proposed setup, figure 1, consists of a line projector, two CCD cameras, an electromechanical device and a computer. In this system, the object is fixed on a platform of the electromechanical device. The platform moves the object in the x-axis. A laser stripe is projected onto the object surface by a laser diode to perform the scanning. In each step of the scanning, the laser line is digitized by the two CCD cameras. The profilometric method is based on the line deformation: in each image, the line is deformed in the x-axis according to the object surface. In this arrangement, the line position is the main parameter to perform the object contouring. By detecting the position of the line deformation, the object depth is determined. The object contour is broken when an occlusion appears. To retrieve the complete object contour, the binocular imaging is applied. In each step of the scanning, a set of binocular images of the line is captured. Each one of these images is processed by the Bezier curves to determine the position of the line disparity. This position is processed by the network to determine the object depth. The occluded line in the first camera can be observed by the second camera; also, the occluded line in the second camera can be observed by the first camera. Thus, the binocular system overcomes the occlusion in the line projection, and the object contour can be computed completely. The produced information by a pair of the binocular images corresponds to a transverse section of the object. The data of transverse sections are stored in an array memory to obtain the complete object shape.

The structure of the network consists of an input vector, a hidden layer and an output layer. The input vector includes: object dimensions, line position and parametric data. The hidden layer is constructed by neurons of Bezier basis functions, which are multiplied by a weight. The output layer is formed by the summation of the weighted neurons.

The contouring is described based on the geometry shown in figure 2. On the reference plane are located the x-axis and the y-axis, and the object depth is indicated by h(x, y) in the z-axis. The points o and p correspond to the line projected on the reference plane and on the object surface, respectively. The focal length is indicated by F, d is the distance between the two cameras, and the image center of each
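As a rough illustration of the network structure just described (a hidden layer of Bezier, i.e. Bernstein, basis neurons whose weighted outputs are summed), here is a minimal sketch. The weights are placeholders; in the chapter they would be determined from calibration objects of known dimensions.

```python
from math import comb

def bezier_basis(n, u):
    """Hidden layer: the n+1 Bezier (Bernstein) basis functions at u,
    for 0 <= u <= 1."""
    return [comb(n, i) * (1 - u) ** (n - i) * u ** i for i in range(n + 1)]

def network_output(u, weights):
    """Output layer: summation of the hidden neurons, each multiplied
    by its weight."""
    n = len(weights) - 1
    return sum(w * b for w, b in zip(weights, bezier_basis(n, u)))
```

Because the basis functions sum to one for any u, a network with equal weights returns that constant value, which is a quick sanity check on an implementation.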

Also. (3) from this equation.y) using as(x.y)= ep . δ+ε is the disparity. In our contouring system.y). The line displacement in the image plane for each camera is represented by as(x. the line position is moved from ao to ap in the image plane of the camera a. the ep of the line disparity is used to compute the object depth h(x. the object depth h(x. the parameters d and F are obtained by an external procedure to the contouring.y) is missing.(3).(3). The distances of the disparity are deduced by δ = ac.y). Thus. When a line occlusion appears in the camera a. Based on the pinhole camera model [10].y) for the camera e. Then. as(x. the line displacement respect to the reference point “o” is given by as(x. the performance for object contouring is improved. To compute the surface zi by means of Eq. the focal lens. d. To detect the disparity. Apolinar Muñoz-Rodríguez camera is indicated by ac and ec. es(x. When a laser line is projected on the surface. this network provides information of the camera orientation. ec and camera orientation should be known.y)= ao . the distance between tow cameras. respectively. the line position is measured in every row of the binocular images.84 J. can not be computed in the image processing of the laser line.ap. In this procedure. ac. the surface zi in a binocular system is deduced by zi = d F δ +ε . the line position is moved from eo to ep in the camera e. respectively. Then. This means that Eq. This position is computed by detecting the maximum of the line intensity in each .y) is computed directly in the image processing of line position. In this manner. these parameters are given to the computer system to reconstruct the object shape. the occlusion problem of the laser line is solved by the binocular imaging. (1) (2) By means of the positions ep and ap. the center coordinates.y) for the camera a and es(x.ap and ε = ec – ep. disparity and center coordinates. 
a Bezier network provides a function that computes the object depth based on the line displacement. the constants F.e0. This means that the network produces the depth h(x. The image processing provides a least one of the two displacements. For the proposed technique. the object depth is computed by the network based on the line position using only one camera of the binocular system. At the same time. Typically.y) or es(x.
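The displacement and disparity relations of Eqs. (1)-(3) are straightforward to express directly; the numeric positions and camera constants in the comment below are invented for illustration only.

```python
def line_displacements(a_o, a_p, e_o, e_p):
    """Eqs. (1)-(2): line displacements relative to the reference
    positions a_o and e_o of cameras a and e."""
    return a_o - a_p, e_p - e_o

def depth_from_disparity(a_c, a_p, e_c, e_p, d, F):
    """Eq. (3): surface depth z_i = d * F / (delta + eps), where
    delta = a_c - a_p and eps = e_c - e_p."""
    delta = a_c - a_p
    eps = e_c - e_p
    return d * F / (delta + eps)

# Hypothetical values: image centers at 320 px, line detected at
# 300 px and 310 px, camera separation d = 100 mm, focal length F = 8 mm.
z = depth_from_disparity(320.0, 300.0, 320.0, 310.0, 100.0, 8.0)
```

This is exactly the computation that the Bezier network replaces: the network maps the measured line position to depth without d, F or the center coordinates being measured.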

The intensity projected by a laser diode is a Gaussian distribution in the x-axis [11]. To detect the line position, Bezier curves and peak detection are used. The intensity in every row of the image is represented by the pixels (x0, z0), (x1, z1), (x2, z2), …, (xn, zn), where xi is the pixel position and zi is the pixel intensity. The nth-degree Bezier function is determined by n+1 pixels [12] and is described by two parametric equations:

x(u) = Σ_{i=0}^{n} C(n,i) (1 − u)^{n−i} u^i xi,  0 ≤ u ≤ 1,    (4)

z(u) = Σ_{i=0}^{n} C(n,i) (1 − u)^{n−i} u^i zi,  0 ≤ u ≤ 1,    (5)

where C(n,i) is the binomial coefficient. Eq. (4) represents the pixel position and Eq. (5) represents the pixel intensity. To fit the Bezier curve shown in figure 3, these equations are evaluated in the interval 0 ≤ u ≤ 1. The maximum intensity is detected by the first derivative equal to zero, z′(u) = 0 [13], which is solved via the bisection method; at the maximum, the curve is concave and the second derivative satisfies z″(u) < 0. Beginning with the pair of values ui = 0 and us = 1, u* is taken halfway between ui and us. If z′(u) evaluated at u = u* is positive, then ui = u*; if z′(u) evaluated at u = u* is negative, then us = u*. The procedure is repeated, and u* is taken as the midpoint of the last pair of values that converges to the root. The value u* where z′(u) = 0 is substituted into Eq. (4) to obtain the maximum position x*. In this case, the result is x* = 34.274 pixels, and the stripe position is ap = 34.274. The procedure of stripe detection is applied to all rows of the image. Then, the line position is processed by the network to obtain the object contour.
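The peak-detection procedure above (fit Eqs. (4)-(5), then bisect on z′(u) = 0) can be sketched as follows; the sample pixel intensities and the numeric-derivative step are illustrative assumptions, not values from the chapter.

```python
from math import comb

# Sub-pixel stripe-peak detection with a Bezier curve (Eqs. 4-5).
def bezier(u, pts):
    """Evaluate an nth-degree Bezier curve at parameter u for control values pts."""
    n = len(pts) - 1
    return sum(comb(n, i) * (1 - u)**(n - i) * u**i * p for i, p in enumerate(pts))

def peak_position(xs, zs, iters=50):
    """Find x* where z'(u) = 0 by bisection on a numeric derivative of z(u)."""
    lo, hi, h = 0.0, 1.0, 1e-6
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        dz = (bezier(mid + h, zs) - bezier(mid - h, zs)) / (2 * h)  # z'(u)
        if dz > 0:       # still climbing toward the peak
            lo = mid
        else:            # past the peak
            hi = mid
    u_star = 0.5 * (lo + hi)
    return bezier(u_star, xs)  # substitute u* into x(u), Eq. (4)
```

For a symmetric intensity profile the recovered peak falls at the central pixel, as expected for a Gaussian-like stripe.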

Figure 1. Experimental setup.

Figure 2. Geometry of the experimental setup.

Figure 3. Maximum position from a set of pixels fitted to a Bezier curve.

3. Bezier Networks for Surface Contouring

From the binocular images, a Bezier network is built to compute the object depth h(x,y) based on the displacement as(x,y) or es(x,y); when a line is projected on the object, the line displacement is proportional to the object depth. This network is constructed based on a line projected on objects with known dimensions. The network structure consists of an input vector, a parametric input, a hidden layer and an output layer; this network is shown in figure 4. The input includes the object dimensions hi, the stripe displacements as and es, and the parametric values u and v. The input data as0, as1, as2, …, asn and es0, es1, es2, …, esn are the stripe displacements obtained by the image processing described in section 2. By means of these displacements, a linear combination LCa and LCe is determined [14] to compute u and v. Each layer of the network is deduced as follows. The relationship between the displacements and the parametric values is described by

u = b0 + b1 as,    (6)

v = c0 + c1 es,    (7)

where bi and ci are the unknown constants. The inputs h0, h1, h2, …, hn are obtained from the pattern objects, whose dimensions are known. For the depth h0, the displacements as0 and es0 are produced, with u = 0 for as0 and v = 0 for es0; for hn, the displacements asn and esn are produced, with u = 1 for asn and v = 1 for esn. Substituting the values (as0, u = 0) and (asn, u = 1) in Eq. (6), two equations with two unknown constants are obtained; solving these equations, b0 and b1 are determined and Eq. (6) is completed. Substituting the values (es0, v = 0) and (esn, v = 1) in Eq. (7), again two equations with two unknown constants are obtained; solving these equations, c0 and c1 are determined and Eq. (7) is completed. Thus, for the displacements as and es, the parametric values u and v are computed via Eq. (6) and Eq. (7).

The hidden layer is constructed by the Bezier basis function [16], which is described by

Bi(u) = C(n,i) u^i (1 − u)^{n−i},  with C(n,i) = n! / (i!(n − i)!),    (8)

where C(n,i) denotes the binomial coefficient. The Bezier curves are defined in the intervals 0 ≤ u ≤ 1 and 0 ≤ v ≤ 1 [17]. The output layer is the summation of the neurons of the hidden layer, which are multiplied by a weight. The output is the depth h(x,y), which is represented by ah(u) and eh(v). These two outputs are described by the next equations:

ah(u) = Σ_{i=0}^{n} wi Bi(u) hi,  0 ≤ u ≤ 1,    (9)

eh(v) = Σ_{i=0}^{n} ri Bi(v) hi,  0 ≤ v ≤ 1,    (10)

where wi and ri are the weights. To obtain the network of Eq. (9) and Eq. (10), the suitable weights wi and ri should be determined. To carry this out, the network is forced to produce the outputs h0, h1, h2, …, hn by means of an adjustment mechanism, where each output corresponds to the known dimension hi.
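The forward pass described by Eqs. (6), (8) and (9) can be sketched for one camera as follows; the calibration displacements, weights and depths are illustrative assumptions.

```python
from math import comb

# Forward pass of the Bezier network for camera a (Eqs. 6, 8, 9).
def linear_map(as0, asn):
    """Solve u = b0 + b1*as from the pairs (as0, u=0) and (asn, u=1), Eq. (6)."""
    b1 = 1.0 / (asn - as0)
    b0 = -as0 * b1
    return b0, b1

def network_depth(as_disp, as0, asn, weights, h):
    """Depth ah(u) = sum_i w_i * B_i(u) * h_i, Eq. (9)."""
    b0, b1 = linear_map(as0, asn)
    u = b0 + b1 * as_disp
    n = len(h) - 1
    B = [comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)]  # Eq. (8)
    return sum(w * b * hi for w, b, hi in zip(weights, B, h))
```

With unit weights this reduces to the Bezier curve through the calibration depths, which is the behavior the weight adjustment below is meant to refine.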

To obtain the weights w0, w1, w2, …, wn, for each input u an output ah is produced. Thus, the next equation system is obtained:

ah0 = h0 = w0 C(n,0)(1−u0)^n u0^0 h0 + w1 C(n,1)(1−u0)^{n−1} u0 h1 + … + wn C(n,n)(1−u0)^0 u0^n hn
ah1 = h1 = w0 C(n,0)(1−u1)^n u1^0 h0 + w1 C(n,1)(1−u1)^{n−1} u1 h1 + … + wn C(n,n)(1−u1)^0 u1^n hn    (11)
…
ahn = hn = w0 C(n,0)(1−un)^n un^0 h0 + w1 C(n,1)(1−un)^{n−1} un h1 + … + wn C(n,n)(1−un)^0 un^n hn

with 0 ≤ ui ≤ 1. This system can be rewritten as

h0 = w0 β0,0 + w1 β0,1 + … + wn β0,n
h1 = w0 β1,0 + w1 β1,1 + … + wn β1,n    (12)
…
hn = w0 βn,0 + w1 βn,1 + … + wn βn,n,

which is the linear system represented by the matrix equation

[ β0,0  β0,1  …  β0,n ] [ w0 ]   [ h0 ]
[ β1,0  β1,1  …  β1,n ] [ w1 ] = [ h1 ]    (13)
[  …                  ] [ …  ]   [ …  ]
[ βn,0  βn,1  …  βn,n ] [ wn ]   [ hn ]

This equation can be represented as the product between the matrix of the input data and the matrix of the corresponding output values: βW = H. The linear system of Eq. (13) is solved by the Cholesky method [17]. Thus, the weights w0, w1, w2, …, wn are calculated and Eq. (9) is completed.
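The weight calibration βW = H of Eqs. (11)-(13) can be sketched as follows; the calibration pairs (ui, hi) are synthetic, and a general linear solver stands in for the Cholesky factorization mentioned in the text.

```python
import numpy as np
from math import comb

# Build beta with entries beta[i][j] = B_j(u_i) * h_j and solve beta * w = h.
def bezier_basis(u, n):
    return [comb(n, j) * u**j * (1 - u)**(n - j) for j in range(n + 1)]

def calibrate_weights(us, h):
    """Solve sum_j w_j * B_j(u_i) * h_j = h_i, one row per calibration sample."""
    n = len(h) - 1
    beta = np.array([[B * hj for B, hj in zip(bezier_basis(u, n), h)] for u in us])
    return np.linalg.solve(beta, np.array(h, dtype=float))
```

Note that a zero calibration depth h_j would zero out a column of β and make the system singular, so the pattern-object depths must be nonzero.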

To determine the weights r0, r1, r2, …, rn, the network is again forced to produce the outputs h0, h1, h2, …, hn, where each output corresponds to the dimension hi. The value v is calculated via Eq. (7) using the displacement es. For each input v, an output eh is produced and the next equation system is obtained:

eh0 = h0 = r0 C(n,0)(1−v0)^n v0^0 h0 + r1 C(n,1)(1−v0)^{n−1} v0 h1 + … + rn C(n,n)(1−v0)^0 v0^n hn
eh1 = h1 = r0 C(n,0)(1−v1)^n v1^0 h0 + r1 C(n,1)(1−v1)^{n−1} v1 h1 + … + rn C(n,n)(1−v1)^0 v1^n hn    (14)
…
ehn = hn = r0 C(n,0)(1−vn)^n vn^0 h0 + r1 C(n,1)(1−vn)^{n−1} vn h1 + … + rn C(n,n)(1−vn)^0 vn^n hn

with 0 ≤ vi ≤ 1. Eq. (14) can be represented as the product between the input matrix and the matrix of the corresponding output values: ℑR = H, which is the linear system

[ ℑ0,0  ℑ0,1  …  ℑ0,n ] [ r0 ]   [ h0 ]
[ ℑ1,0  ℑ1,1  …  ℑ1,n ] [ r1 ] = [ h1 ]    (15)
[  …                  ] [ …  ]   [ …  ]
[ ℑn,0  ℑn,1  …  ℑn,n ] [ rn ]   [ hn ]

This linear system of Eq. (15) is solved and the weights r0, r1, r2, …, rn are determined. In this manner, Eq. (10) has been completed. Thus, the network produces the shape dimension via Eq. (9) and Eq. (10) based on the line displacements as and es, respectively.

This network is applied to the binocular images shown in figure 5(a) and figure 5(b), which correspond to a line projected on a dummy face. From these images, the stripe positions ap and ep are computed along the y-axis, and the displacements as(x,y) and es(x,y) are computed via Eq. (1) and Eq. (2), respectively. In this manner, the network provides the object depth by means of h(x,y) = ah(u) and h(x,y) = eh(v).

The contours provided by the network are shown in figure 6(a) and figure 6(b), respectively. The contour of figure 6(a) is not complete due to the occlusion in figure 5(a). But figure 5(b) does not contain line occlusions, and the complete contour is achieved in figure 6(b).

Figure 4. Structure of the proposed Bezier network.

Figure 5(a). First line captured by the binocular system.

Figure 5(b). Second line captured by the binocular system.

Figure 6(a). Surface profile computed by the network from figure 5(a).

Figure 6(b). Surface profile computed by the network from figure 5(b).

4. Parameters of the Vision System

In optical metrology, the object shape is reconstructed based on the parameters of the camera and the setup. The camera parameters include the focal distance, the pixel dimension, the image center coordinates, the distortion and the camera orientation. Usually, these parameters are computed by a procedure external to the reconstruction system. In the proposed binocular system, the camera parameters are determined based on the data provided by the network and the image processing. The camera parameters are determined based on the pinhole camera model, which is shown in figure 7. In the binocular system, the optical axis of the cameras is perpendicular to the reference plane. The camera orientation in the x-axis is determined by means of the geometry of figure 8(a).

For this geometry, the line projected on the reference plane and on the object is indicated by ao and ap at the image plane, respectively. The object dimension is indicated by hi, and D = zi + hi. The distance between the image center and the laser stripe in the x-axis is indicated by a. From figure 8(a), the object depth hi has a projection ki in the reference plane, which is computed by

ki = F hi / (si + xc − ao),    (16)

where the displacement is defined as si = (xc − ap) − (xc − ao). In Eq. (16), F, xc and ao are constants, and hi is computed by the network based on si; thus, ki is computed from hi provided by the network. For an optical axis perpendicular to the reference plane, ki is a linear function of si and the derivative of ki with respect to si, dk/ds, is a constant. The other configuration is an optical axis not perpendicular to the reference plane; in this case, si does not produce a linear function ki, and the derivative dk/ds is not a constant. Due to the distortion, dk/ds is in practice slightly different from a constant. In figure 8(b), the dashed line is dk/ds for β less than 90° and the dotted line is dk/ds for β greater than 90°. For the orientation in the x-axis, the camera orientation is performed based on the si and hi provided by the network, choosing the orientation for which this derivative is the most similar to a constant. In this manner, the generated network corresponds to an optical axis aligned perpendicularly to the x-axis.

The orientation of the camera in the y-axis is performed based on the geometry of figure 8(c). In this case, the object is moved in the y-axis over the line stripe by the electromechanical device, and the object position is detected in the line at each movement. When the object is moved, the pattern position changes from ayp to ayi in the laser stripe, and the displacement at the image plane is defined as t = (ayc − ayp) − (ayc − ayi). The position qi is provided by the electromechanical device, and t is obtained by image processing. For an optical axis perpendicular to the y-axis, a linear movement q produces a linear displacement t at the image plane, and the derivative dt/dq is a constant. Due to the distortion, dt/dq is not exactly a constant; for the orientation in the y-axis, the camera orientation is chosen so that this derivative is the most similar to a constant. Based on these criteria, the camera orientation is performed by means of dk/ds = constant for the x-axis and dt/dq = constant for the y-axis. Thus, the network and image processing provide an optical axis aligned perpendicularly to the reference plane, and the camera parameters can then be obtained.
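The x-axis orientation criterion above can be sketched numerically: for the correct orientation, ki computed via Eq. (16) grows linearly with si, so the finite-difference dk/ds has (near-)zero spread. All numeric values here are assumptions for illustration.

```python
# Spread of dk/ds over calibration samples; near zero for a well-oriented
# camera (Eq. 16), larger when the optical axis is tilted or distorted.
def dk_ds_spread(s, h, F, xc, ao):
    """Return max - min of the finite-difference slopes of k_i versus s_i."""
    k = [F * hi / (si + xc - ao) for si, hi in zip(s, h)]   # Eq. (16)
    slopes = [(k2 - k1) / (s2 - s1)
              for (s1, k1), (s2, k2) in zip(zip(s, k), zip(s[1:], k[1:]))]
    return max(slopes) - min(slopes)
```

In an orientation search, the candidate with the smallest spread would be retained as the perpendicular configuration.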

The geometry of the setup in figure 8(a) is described by

zi = aF / (η(xc − ap)),    (17)

where η is the scale factor that converts pixels to millimeters. Using D = zi + hi and xc − ap = si + xc − ao, Eq. (17) is rewritten as

hi = D − aF / (η(si + xc − ao)),    (18)

where D is the distance from the lens to the reference plane. From Eq. (18), the constants D, F, η, a, xc and ao should be determined. To carry this out, the network produces the depth hi based on si for the calibration, and Eq. (18) is rewritten as the equation system

h1 = D − aF / (η(s1 + xc − ao))
h2 = D − aF / (η(s2 + xc − ao))
h3 = D − aF / (η(s3 + xc − ao))    (19)
h4 = D − aF / (η(s4 + xc − ao))
h5 = D − aF / (η(s5 + xc − ao))
h6 = D − aF / (η(s6 + xc − ao))

The values h1, h2, …, h6 are computed by the network according to s1, s2, …, s6. These values are substituted in Eq. (19) and the equation system is solved. Thus, the constants D, F, η, a, xc and ao are determined.

The coordinate ayc is computed from the geometry of figure 8(c), which is described by

ti = (ayc − ayp) − F(D − hi) / (η(b − qi)),    (20)

where the positions qi are provided by the electromechanical device and ti is obtained by image processing. From Eq. (20), the constants D, F, η and hi are known, and ayc, ayp and b should be determined. To carry this out, Eq. (20) is rewritten as an equation system for a constant hi:

t1 = (ayc − ayp) − F(D − h1) / (η(b − q0))
t2 = (ayc − ayp) − F(D − h1) / (η(b − q1))    (21)
t3 = (ayc − ayp) − F(D − h1) / (η(b − q2))

The values t1, t2 and t3 are taken from the orientation in the y-axis, q0 = 0, and the values q1 and q2 are provided by the electromechanical device. These values are substituted in Eq. (21) and the equation system is solved; thus, the constants ayc, ayp and b are determined. In this manner, the camera parameters are calibrated based on the network and the image processing of the laser line.

The distortion is observed by means of the line position ap in the image plane. According to Eq. (17), the behavior of ap with respect to hi is described by

ap = xc − aF / (η(D − hi)).    (22)

However, due to the distortion, the real data ap deviate from this behavior, which is shown in figure 8(d). The network is constructed by means of the real data using the displacement si = (xc − ap) − (xc − ao); therefore, the distortion is included in the network, which computes the object depth of the imaging system from the real, non-linear data.
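The system of Eq. (19) can be solved in a grouped form: writing c = aF/η and k = xc − ao (these groupings, the linearization and all numeric values are my illustration, not the chapter's procedure), the model hi = D − c/(si + k) becomes exactly linear in the unknowns after multiplying through by (si + k).

```python
import numpy as np

# Fit h_i = D - c/(s_i + k) via the linearization
# h_i*s_i = D*s_i - k*h_i + (D*k - c), solved by linear least squares.
def fit_calibration(s, h):
    s = np.asarray(s, float)
    h = np.asarray(h, float)
    A = np.column_stack([s, h, np.ones_like(s)])
    u1, u2, u3 = np.linalg.lstsq(A, h * s, rcond=None)[0]
    D, k = u1, -u2
    c = D * k - u3
    return D, c, k
```

With noise-free data the three grouped parameters are recovered exactly, which is enough to evaluate the calibrated model of Eq. (18) at any displacement.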

Figure 7. Geometry of the pinhole camera model.

Figure 8(a). Geometry of an optical axis perpendicular to the x-axis.

Figure 8(b). Derivative dk/ds for an optical axis perpendicular to the x-axis and for an optical axis not perpendicular to the x-axis.

Figure 8(c). Geometry of an optical axis perpendicular to the y-axis.

5. Experimental Results

The aim of the binocular imaging in this technique is to avoid the line occlusions. Figure 1 shows the experimental setup. A laser line is projected on the target by a 15 mW laser diode to perform the scanning. The line is captured by two CCD cameras and digitized by a frame grabber of 256 gray levels. The object is moved in the x-axis by means of the electromechanical device in steps of 1.27 mm, and binocular images of the line are captured. By means of image processing, the parameters of the binocular setup are computed and physical measurements are avoided; in this manner, the computational process provides all parameters to reconstruct the object shape.

To perform the contouring, the stripe displacement as or es is calculated from each pair of binocular images via Eq. (1) or Eq. (2), respectively. Then, the values u and v are deduced via Eq. (6) and Eq. (7), and the object depth h(x,y) is computed via Eq. (9) or Eq. (10). The first image is used to detect the stripe displacement as; if the first image contains line occlusions, the second image is used to detect the stripe displacement es. A line occlusion is detected based on the pixels of the stripe: if fewer than three pixels of high intensity appear in a row, a line occlusion is detected in the image. When an occlusion appears, the object contour is not complete with a single camera; in binocular imaging, however, one of the two images provides the occluded line and the object contour is completed. Thus, the object reconstruction can be done by means of the binocular imaging. From each pair of binocular images, the network produces a transverse section of the object. The information of all transverse sections is stored in an array memory to construct the complete object shape.

The experiment is performed with three objects. The first object to be profiled is a dummy face, see figures 9(a) and 9(b). The object surface produces stripe occlusions, shown in figure 9(a); the second image, figure 9(b), provides the area of the occluded stripe. To perform the contouring, the dummy face is scanned in the x-axis in steps of 1.27 mm. To know the accuracy of the data provided by the network, the object is also measured by a coordinate measuring machine (CMM) as a contact method, and the root mean squared error (rms) is calculated [18].

The rms is described by the next equation:

rms = √( (1/n) Σ_{i=1}^{n} (hoi − hci)² ),    (30)

where hoi is the data measured by the CMM, hci is the calculated data h(x,y) provided by the network, and n is the number of data. For the dummy face, the rms was computed using n = 1122 data, which were provided by the network and by the CMM as reference; the result is an rms of 0.155 mm. Sixty-eight lines were processed to determine the complete object shape shown in figure 9(c). The scale of this figure is in mm.

The second object to be profiled is a metallic piece, figure 10(a). The contouring is performed by scanning the metallic piece in steps of 1.27 mm. Fifty-eight lines were processed to determine the complete object shape shown in figure 10(b). The metallic piece was measured by the CMM to determine the accuracy provided by the network. In this case, the rms was computed using n = 480 data, and the rms calculated for this object is 0.114 mm.

Figure 9(a). First image of the dummy face with line occlusion.
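The accuracy measure of Eq. (30) is straightforward to compute; the sketch below assumes paired lists of CMM reference depths and network depths.

```python
from math import sqrt

# Root mean squared error between CMM reference data and network output, Eq. (30).
def rms_error(h_cmm, h_net):
    n = len(h_cmm)
    return sqrt(sum((ho - hc) ** 2 for ho, hc in zip(h_cmm, h_net)) / n)
```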

Figure 9(b). Second image of the dummy face without line occlusion.

Figure 9(c). Three-dimensional shape of the dummy face.

Figure 10(a). Metallic piece to be profiled.

Figure 10(b). Three-dimensional shape of the metallic piece.

The value n has an influence on the confidence level with respect to the precision of the calculated error. To determine whether the value n is in accordance with the desired precision, the confidence level is calculated by the next relation [19]:

n = (zα σx / e)²,    (31)

where zα is the desired confidence coefficient, e is the error expressed in percentage, and σx is the standard deviation. The desired confidence level is 95 %, which corresponds to zα = 1.96 according to the confidence table [19]. To know if the chosen value n is in accordance with the desired confidence level, Eq. (31) is rearranged so that the confidence level according to the data n can be described by

zα = e √n / σx.    (32)

The average height of the face surface is 19.50 mm; therefore, using the rms, the error is 0.155/19.50 = 0.0079, which represents a 0.79 % error. Substituting the values n = 1122, e = 0.79 and σx = 7.14 in Eq. (32), the result is zα = 3.7061. It indicates a confidence level greater than 95 %. Also, the confidence level is greater than 95 % for the metallic piece. A good repeatability is achieved in the experiments, with a standard deviation of +/− 0.01 mm.

The employed computer in this process is a PC at 1 GHz. Each stripe image is processed in 0.011 sec; this processing time is achieved because the data of the image are extracted with few operations via Eq. (4) and Eq. (5). The capture velocity of the camera used in this system is 34 fps, and the electromechanical device is also moved at 34 steps per second. The complete shape of the dummy face is profiled in 4.22 sec, and the metallic piece is profiled in 3.18 sec.

Conclusions

A technique for shape detection performed by means of line projection, binocular imaging and approximation networks has been presented. In this technique, the parameters of the setup are obtained automatically by a computational process using a Bezier network; the distances of the geometry of the setup are not used to obtain the object shape, as is common in the methods of laser stripe projection. Therefore, the procedure is easier than in those techniques that use the distances of the components of the optical setup. The automatic technique avoids the physical measurements of the setup, and it improves the accuracy of the results, because measurement errors are not introduced to the system for the shape detection.
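The confidence check of Eqs. (31)-(32) reported in the results can be reproduced as follows; it is assumed that e and σx are expressed in the same (percentage) units.

```python
from math import sqrt

# Confidence coefficient of Eq. (32): z_alpha = e * sqrt(n) / sigma_x;
# compare against 1.96, the tabulated value for a 95 % confidence level.
def confidence_coefficient(n, e, sigma_x):
    return e * sqrt(n) / sigma_x

z = confidence_coefficient(1122, 0.79, 7.14)
meets_95 = z > 1.96  # exceeds the 95 % threshold
```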

By using this computational-optical setup, a good repeatability is achieved in every measurement. In this technique, the ability to measure the stripe behavior with a sub-pixel resolution has been achieved by Bezier curves, and it is achieved with few operations. Therefore, this technique is performed in a good manner. The described technique provides a valuable tool for industrial inspection and reverse engineering.

References

[1] L. Zagorchev and A. Goshtasby, "A paintbrush laser range scanner", Computer Vision and Image Understanding, Vol. 101, p. 65-86 (2006).
[2] Z. Wei, G. Zhang and Y. Xu, "Calibration approach for structured-light-stripe vision sensor based on the invariance of double cross-ratio", Optical Engineering, Vol. 42, No. 10, p. 2956-2966 (2003).
[3] A. McIvor, "Nonlinear calibration of a laser profiler", Optical Engineering, Vol. 41, No. 1, p. 205-212 (2002).
[4] W. C. Tai and M. Chang, "Non contact profilometric measurement of large form parts", Optical Engineering, Vol. 35, No. 9, p. 2730-2735 (1996).
[5] M. Baba, T. Konishi and N. Kobayashi, "A novel fast rangefinder with non-mechanical operation", Journal of Optics, Vol. 29, p. 241-249 (1998).
[6] J. A. Muñoz Rodríguez and R. Rodríguez-Vera, "Evaluation of the light line displacement location for object shape detection", Journal of Modern Optics, Vol. 50, p. 137-154 (2003).
[7] Q. Luo, J. Zhou, S. Yu and D. Xiao, "Stereo matching and occlusion detection with integrity and illusion sensitivity", Pattern Recognition Letters, Vol. 24, p. 1143-1149 (2003).
[8] H. Mitsudo, S. Nakamizo and H. Ono, "A long-distance stereoscopic detector for partially occluding surfaces", Vision Research, Vol. 46, p. 1180-1186 (2006).
[9] H. J. Andersen, L. Reng and K. Kirk, "Geometric plant properties by relaxed stereo vision using simulated annealing", Computers and Electronics in Agriculture, Vol. 49, p. 219-232 (2005).
[10] R. Klette, K. Schluns and A. Koschan, Computer Vision: Three-Dimensional Data from Images, Springer, Singapore, 1998.
[11] F. Causa and J. Sarma, "Realistic model for the output beam profile of stripe and tapered superluminescent light-emitting diodes", Applied Optics, Vol. 42, No. 21, p. 4341-4348 (2003).
[12] Peter C. Gasson, Geometry of Spatial Forms, John Wiley and Sons, 1989.

[13] F. S. Hillier and G. J. Lieberman, Introduction to Operations Research, McGraw-Hill, 1982.
[14] Robert J. Schalkoff, Artificial Neural Networks, McGraw-Hill, 1997.
[15] Y. J. Ahn, Y. S. Kim and Y. Shin, "Approximation of circular arcs and offset curves by Bezier curves of high degree", Journal of Computational and Applied Mathematics, Vol. 167, p. 405-416 (2004).
[16] G. Chen and G. Wang, "Optimal multi-degree reduction of Bézier curves with constraints of endpoints continuity", Computer Aided Geometric Design, Vol. 19, p. 365-377 (2002).
[17] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes in C, Cambridge University Press, 1993.
[18] T. Masters, Practical Neural Network Recipes in C++, Academic Press, 1993.
[19] J. E. Freund, Modern Elementary Statistics, Prentice Hall, 1979.


In: Binocular Vision, Editors: J. McCoun et al., pp. 107-123. ISBN: 978-1-60876-547-8. © 2010 Nova Science Publishers, Inc.

Chapter 4

EYE MOVEMENT ANALYSIS IN CONGENITAL NYSTAGMUS: CONCISE PARAMETERS ESTIMATION

Pasquariello Giulio 1, Cesarelli Mario 1, La Gatta Antonio 2, Bifulco Paolo 1 and Fratini Antonio 1

1 Dept. of Biomedical, Electronic and Telecommunication Engineering, University "Federico II" of Naples, Via Claudio 21, 80125, Napoli, Italy
2 Math4Tech Center, University of Ferrara, via Saragat, 44100, Ferrara, Italy

Abstract

Along with other diseases that can affect binocular vision, Congenital Nystagmus (CN) is of peculiar interest. CN is an ocular-motor disorder characterized by involuntary, conjugated ocular oscillations, mainly horizontal; while identified more than forty years ago, its pathogenesis is still under investigation. This kind of nystagmus is termed congenital (or infantile) since it could be present at birth or it can arise in the first months of life. The majority of CN patients show a considerable decrease of their visual acuity: image fixation on the retina is disturbed by the continuous nystagmus oscillations. However, the image of a given target can still be stable during short periods in which eye velocity slows down while the target image is placed onto the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyze the main features of nystagmus such as waveform

shape, amplitude and frequency. Use of eye movement recordings, opportunely processed, allows computing "estimated visual acuity" predictors, which are analytical functions that estimate expected visual acuity using signal features such as foveation time and foveation position variability. Hence, it is fundamental to develop robust and accurate methods to measure both those parameters in order to obtain reliable values from the predictors. This study aims to disclose new methodologies in congenital nystagmus eye movement analysis, in order to identify nystagmus cycles and to evaluate foveation time, reducing the influence of repositioning saccades and data noise on the critical parameters of the estimation functions. Use of those functions extends the information acquired with typical visual acuity measurement (e.g., the Landolt C test) and could be a support for treatment planning or therapy monitoring. In this chapter, the current methods to record eye movements in subjects with congenital nystagmus will be discussed, and the present techniques to accurately compute foveation time and eye position will be presented.

Introduction

Congenital nystagmus (CN) is an ocular-motor disorder that appears at birth or during the first few months of life, characterized by involuntary, conjugated, bilateral to-and-fro ocular oscillations. CN is predominantly horizontal, with some torsional and, rarely, vertical motion [1]. Estimates of the prevalence of infantile nystagmus range from 1 in 1000 to 1 in 6000 [23,26,34,38]. Nystagmus oscillations can persist also with closed eyes; moreover, they tend to damp in the absence of visual activity. Both nystagmus and the associated ocular alterations can be genetically transmitted. Nystagmus can be idiopathic or associated with alterations of the central nervous system and/or the ocular system, such as achromathopsia, aniridia and congenital cataract. The pathogenesis of congenital nystagmus is still unknown; dysfunctions of at least one of the ocular stabilization systems have been hypothesized, but no clear evidence was reported.

In vertebrates, eye movements are controlled by the oculomotor system in a complex manner, depending on the stimuli and viewing conditions. In the human eye, the little portion of the retina which allows the maximal acuity of vision is called the fovea. An attempt to bring the image of a target onto the fovea can involve up to five oculomotor subsystems: the saccadic, smooth pursuit, vestibular, optokinetic and vergence systems. The vestibular system is driven by non-visual signals from the semicircular canals, while the other systems are mainly driven by visual signals encoding target information.
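A minimal illustration (not the authors' algorithm) of how foveation time could be extracted from an eye-position recording: samples are counted as foveation when eye velocity and position error stay below thresholds. The 4 deg/s and 0.5 deg limits and the sampling rate are assumptions for the example.

```python
# Total foveation time from an eye-position trace (degrees) sampled at fs_hz.
def foveation_time(position_deg, fs_hz, vel_thr=4.0, pos_thr=0.5):
    """Seconds with |velocity| < vel_thr (deg/s) and |position| < pos_thr (deg)."""
    dt = 1.0 / fs_hz
    count = 0
    for i in range(1, len(position_deg)):
        vel = (position_deg[i] - position_deg[i - 1]) / dt  # finite-difference velocity
        if abs(vel) < vel_thr and abs(position_deg[i]) < pos_thr:
            count += 1
    return count * dt
```

A robust implementation would additionally low-pass filter the trace and exclude saccadic repositioning, which is precisely the refinement this chapter addresses.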

Terminology (Definitions)

Efforts are being made to add precision and uniformity to nystagmus terminology. The terms congenital nystagmus (CN), infantile nystagmus and idiopathic motor nystagmus have become synonymous with the most common form of neonatal nystagmus [4,17,30,31,42]. However, the term infantile nystagmus syndrome (INS) is a broader and more inclusive term that we prefer not to use, since it refers to the broad range of neonatal nystagmus types, including those with identifiable causes. According to the bibliography, idiopathic nystagmus can be classified in different categories depending on the characteristics of the oscillations [2]. According to the nystagmus waveform characterization by Dell'Osso [20], in CN eye movement recordings it is typically possible to identify, for each nystagmus cycle, the slow phase and the fast (or slow) return phase: if the return phase is fast, the waveform is defined as jerk (unidirectional or bidirectional); in case the return phase is slow, the nystagmus cycle is pendular or pseudo-cycloid. The CN waveform has an increasing-velocity exponential slow phase [2], taking the target away from the fovea.

The cause or causes and pathophysiological mechanisms of CN have not been clarified. In general, CN may result from a primary defect of ocular motor calibration: this can occur from conception due to a primary defect (e.g., familial X-linked), during embryogenesis due to a developmental abnormality (e.g., optic nerve hypoplasia), or after birth during infancy (e.g., retinal dystrophy, congenital cataracts). Some authors (e.g., Hertle, 2006) hypothesized that CN may also result from abnormal cross-talk from a defective sensory system to the developing motor system at any time during the motor system's sensitive period. Although the set of physiological circumstances may differ, the final common pathway is abnormal calibration of the ocular motor system during its sensitive period. This theory of the genesis of CN incorporates a pathophysiological role for the sensory system in its genesis and modification. CN is present in most cases of albinism. CN occurrence associated with total bilateral congenital cataract is of 50-75%, while this percentage decreases in case of partial or monolateral congenital cataract.

Children with this condition frequently present with a head turn, which is used to maintain the eyes in the position of gaze in which the nystagmus is minimum. This happens more often when the child is concentrating on a distant object, as this form of nystagmus tends to worsen with attempted fixation. The head turn is an attempt to stabilize the image under these conditions.

Pasquariello Giulio, Cesarelli Mario, La Gatta Antonio et al.

Other clinical characteristics, not always present, include: increased intensity with fixation and decreased intensity with sleep or inattention; decreased intensity (damping) with convergence; variable intensity in different positions of gaze, changing direction in different positions of gaze (about a so-called neutral position); strabismus and an increased incidence of significant refractive errors.

Such so-called 'null zones' (or null positions) correspond to a particular gaze angle in which a smaller nystagmus amplitude and a longer foveation time can be obtained, thus reaching a better fixation of the visual target onto the retina. CN patient visual acuity reaches a maximum when the eyes are in the position of least ocular instability; hence, in many cases, a compensatory head malposition is commonly achieved in order to bring the zone of best vision into the straight-ahead position. Abnormal head posture could be alleviated by surgery (mainly translating the null zone to the straight-ahead position).

In normal subjects, not affected by nystagmus, visual acuity and contrast sensitivity decrease when the velocity of the image projected on the retina increases by a few degrees per second. In CN patients, fixation is disrupted by the nystagmus rhythmical oscillations, which result in rapid movements of the target image onto the retina [6]. In general, CN patients show a considerable decrease of visual acuity, since image fixation on the fovea is reduced by the nystagmus continuous oscillations.

A schematic illustration of a unidirectional jerk nystagmus waveform (pointing to the left) is presented in figure 1; in the picture, various nystagmus features are depicted, such as: fast and slow phase components, nystagmus period and amplitude.

Figure 1. A schematic illustration of a jerk nystagmus waveform (bold line) with fast phase pointing to the left. The baseline oscillation is shown as a dashed line, and its amplitude is also shown; the grey box on each cycle represents the foveation window.
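The jerk waveform sketched in Figure 1, with its increasing-velocity exponential slow phase reset by a fast phase, is easy to synthesize. The snippet below is a minimal NumPy sketch of such a toy waveform; the amplitude, period and time-constant values are illustrative assumptions, not values from the chapter, and the fast phase is idealised as an instantaneous reset.

```python
import numpy as np

def jerk_waveform(n_cycles=5, period=0.35, amp=2.0, fs=1000.0, tau=0.12):
    """Toy jerk-left waveform: an accelerating exponential slow phase
    (eye drifting off target) per cycle, reset by an idealised
    instantaneous fast phase. All parameter values are illustrative."""
    n = int(period * fs)
    t = np.arange(n) / fs
    # normalised so each cycle starts at 0 deg and peaks near `amp`
    slow = amp * (np.exp(t / tau) - 1.0) / (np.exp(period / tau) - 1.0)
    return np.tile(slow, n_cycles)

w = jerk_waveform()
```

Plotting `w` against time reproduces the characteristic sawtooth-like shape; a dashed sinusoid could be added to mimic the baseline oscillation of the figure.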

Ocular stabilization is achieved during foveation periods [2], in which eye velocity slows down (to less than 4 degrees/s) while the visual target crosses the foveal region (± 0.5 degree); in this short time interval, called 'foveation window', it is said that the subject 'foveates'. Visual acuity was found to be mainly dependent on the duration of the foveation periods [2,35], but the exact repeatability of eye position from cycle to cycle and the retinal image velocities also contribute to visual acuity [1,31,42].

Clinical Assessment

The clinical examination of a subject affected by congenital nystagmus is a complex task; eye movement recording and analysis are indispensable for diagnosis, but a general physical examination and the assessment of vision are usually preliminarily performed. In addition, an adequate fundus examination is often carried out, in order to assess eventual prechiasmal visual disorders. During the first examination, the physicians can assess the most important features of nystagmus, such as: direction of eyes' beating, movement intensity, conjugacy, gaze effects, and presence of anomalous head positions while viewing distant or near objects.

Accurate, uniform, and repeatable classification and diagnosis of nystagmus in infancy as CN is best accomplished by a combination of clinical investigations and motility analysis; if a subject is diagnosed with CN, eye movement recording is often one of the necessary steps. Ocular motility analysis in CN subjects is also the most accurate method to determine nystagmus changes with gaze (null and neutral zones) and, in some cases, the ocular motility study can also be helpful in determining visual status. Presence of pure pendular or jerk waveforms without foveation periods is an indicator of poorer vision, whereas waveforms of either type with extended periods of foveation are associated with good vision; moreover, significant interocular differences in a patient reflect similar differences in vision between the two eyes. Analysis of binocular or monocular differences in waveforms and foveation periods could be important information in therapy planning, or can be used to measure the outcome of (surgical) treatment. Complete clinical evaluation of the ocular oscillation also includes identification of fast-phase direction. Numerous studies of CN in infants and children confirm an age-dependent evolution of waveforms during infancy from pendular to jerk [4,17,20,22]; this concept is consistent with the theory that jerk waveforms reflect modification of the nystagmus by growth and development of the visual system [28,29].
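The foveation criterion quoted above (eye velocity below 4 deg/s while the eye lies within ±0.5 deg of the target) translates directly into a sample-wise test. The sketch below assumes a uniformly sampled position trace in degrees; `foveation_windows` is a hypothetical helper written for illustration, not code from the cited studies.

```python
import numpy as np

def foveation_windows(t, eye_pos, target_pos=0.0,
                      vel_limit=4.0, pos_limit=0.5):
    """Mark samples meeting the quoted foveation criteria: |velocity|
    below 4 deg/s while the eye is within +/-0.5 deg of the target."""
    vel = np.gradient(eye_pos, t)              # deg/s
    return (np.abs(vel) < vel_limit) & (np.abs(eye_pos - target_pos) < pos_limit)

# toy jerk-like trace: 100 ms foveation plateau, slow drift, fast reset
fs = 1000.0
cycle = np.concatenate([np.zeros(100),                 # foveation at 0 deg
                        np.linspace(0.0, 2.0, 120),    # slow phase ~17 deg/s
                        np.linspace(2.0, 0.0, 30)])    # fast phase ~69 deg/s
pos = np.tile(cycle, 8)
t = np.arange(pos.size) / fs
mask = foveation_windows(t, pos)
total_foveation_time = mask.sum() / fs         # seconds spent foveating
```

Summing the mask per cycle would give the per-cycle foveation durations on which acuity was found to depend.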

Visual acuity of the patient is habitually tested with both eyes open (binocular viewing) and with one eye covered (monocular). These two are often very different in patients with nystagmus, and both have to be tested in CN subjects: it is important not to forget that binocular acuity is the "person's" acuity and monocular acuity is the "eye's" acuity. Among the various available tests, the best choice to assess visual acuity in an older child and cooperative adult is the ETDRS chart, since it provides LogMar evaluation of all acuities, especially those between 20/400 and 20/100 [29].

Examination Techniques: Motility

However, it is well documented that differentiating true nystagmus from saccadic oscillations and intrusions is sometimes impossible clinically. Measurement of the nystagmus waveform, using reliable methodology, is often helpful in securing a diagnosis; such measurements help differentiate acquired nystagmus from congenital forms of nystagmus and from other saccadic disorders that lead to instability of gaze [36]. As stated above, nystagmus is caused by disorders of the mechanisms responsible for holding gaze steady: the vestibular system, the gaze-holding mechanism, the visual stabilization system, and the smooth pursuit system. Thus, evaluation of a patient's nystagmus requires a systematic examination of each functional class of eye movements; changes in the character of the nystagmus with convergence or monocular viewing are often evaluated, including convergence effects and the effect of monocular cover.

Ocular Motility Recordings

Qualitative or quantitative analysis of eye movements has been attempted since the early twentieth century, with the primitive electronic technology available at that time, from biopotential recording up to high-speed photographic methods. Nowadays, more complex and less invasive methods are available, and recent advances in eye movement recording technology have increased its application in infants and children who have disturbances of the ocular motor system [1,4]. Various techniques are currently in use to record eye movements: electro-oculography (EOG), infrared oculography (IROG), magneto-oculography (MOG), also known as scleral search coil system (SSCS), and video-oculography (VOG).

The first technique relies on the fact that the eye has a standing electrical potential between the front and the back: as the eye turns, a proportional change in the electrode potential is measured. Horizontal EOG is measured by placing electrodes on the nasal and temporal boundaries of the eyelids.

Bilateral temporal and nasal electrode placement is useful for gross separation of fast and slow phases, but is limited by nonlinearity, drift, and noise. Infrared reflectance solves these problems and can be used in infants and children, but it is limited by difficulty in calibration. The IROG approach relies on measuring the intensity of an infrared light reflected back from the subject's eye: infrared emitters and detectors are located in fixed positions around the eye, and the amount of light reflected back to a fixed detector varies with eye position.

The scleral search coil method is based on electromagnetic interaction at radio frequencies between two coils, one (embedded in a contact lens) fixed on the eye sclera and the other external.

The VOG approach relies on recording eye position using a video camera, often an infrared device coupled with an infrared illuminator (in order to avoid disturbing the subject), and applying image processing techniques. In addition, video-oculography is not at all invasive, making it suitable for children younger than 10 years old. IR video systems have become increasingly popular in research laboratories and in the clinical setting; hence the comparison between IR and the scleral search coil method has become an actual issue. Different studies analyzed this subject, reporting a good performance of video-oculography compared with scleral search coils. Van der Geest and Frens [43] compared the performance of a 2D video-based eye tracker (Eyelink I, SR Research Ltd., Mississauga, Ontario, Canada) with 2D scleral search coils; they found a very good correspondence between the video and the coil output, with a high correlation of fixation positions (average discrepancy, +/-1° over a tested range of 40 by 40° of visual angle) and linear fits near one (range, 0.994 to 1.096) for saccadic properties. Houben, Goumans, and van der Steen [33] found that lower time resolution, possible instability of the head device of the video system, and inherent small instabilities of pupil tracking algorithms still make the coil system the best choice when measuring eye movement responses with high precision or when high-frequency head motion is involved; moreover, the quality of torsion of the infrared video system is less compared with scleral search coils and needs further technological improvement. However, for less demanding and for static tests, and for measurements longer than half an hour, the latest generation infrared video system is a good alternative to scleral search coils.

CN eye movement recordings are often carried out only on the horizontal axis, during continuous periods of time, and display the data, by convention, with up being rightward eye movements and down being leftward eye movements. Figure 2 reports, as example, a signal tract recorded from actual CN patients; position and velocity traces are clearly marked.

Figure 2. An example of eye movement recording, jerk left. The eye velocity is also depicted, underneath the eye movement signal, with a 0 °/s threshold. It is possible to identify some nystagmus characteristics, such as nystagmus amplitude, frequency and the fast and slow phases; the figure also shows (between 26.5 s and 27.6 s) a saccade of about ten degrees, corresponding to a voluntary gaze angle shift.

Semiautomatic Analysis of Eye Movement Recordings

Congenital nystagmus is a rhythmic phenomenon, and researchers have tried to analyze eye movement signals using methodologies specific for frequency analysis, such as spectral and wavelet analysis (Reccia et al., 1989, 1990; Hosokawa et al., 2004; Miura et al., 2003; Clement et al., 2002). Fewer authors (e.g., Hosokawa et al., 2004) applied the Short Time Fourier Transform (STFT) to congenital nystagmus recordings, in order to highlight modifications in the principal component and in the harmonics during time; however, the resolution of this technique is limited by the duration of the windows in which the signal is divided [3]. Wavelet analysis seems more useful, since it is able to assess how much the signal in study differs in time from a specific template adopted as a reference, and it is able to localize a brief transient intrusion into a periodic waveform [37]; it has been used with success to separate fast and slow phases in caloric nystagmus.
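As a rough illustration of the STFT approach mentioned above, the sketch below applies `scipy.signal.stft` to a synthetic two-component trace and reads off the dominant frequency in each window. The sampling rate, window length and signal parameters are all assumed for the example; they are not taken from the cited studies.

```python
import numpy as np
from scipy.signal import stft

fs = 500.0                                   # sampling rate (assumed)
t = np.arange(0.0, 20.0, 1.0 / fs)
# synthetic trace: 3.5 Hz 'nystagmus' plus a 0.4 Hz baseline drift
eye = 1.5 * np.sin(2 * np.pi * 3.5 * t) + 0.7 * np.sin(2 * np.pi * 0.4 * t)

# 2 s Hann windows with 50% overlap -> 0.5 Hz frequency resolution
f, frames, Z = stft(eye, fs=fs, nperseg=int(2 * fs), noverlap=int(fs))
dominant = f[np.abs(Z).argmax(axis=0)]       # principal component per frame
```

The window-length choice is exactly the trade-off noted in the text: a 2 s window limits frequency resolution to 0.5 Hz while blurring anything faster than the window in time.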

Usually, the outcome of this kind of analysis is a time-frequency plot or a coefficient sequence, which are difficult to relate to a subject's visual ability; moreover, as stated by Abel [3], the foveation time modification in each cycle and the variability of position between successive foveations can hardly be highlighted using STFT and wavelet analysis. Indeed, visual acuity increases in people suffering from congenital nystagmus if the foveation time increases and the signal variability decreases, as demonstrated by Dell'Osso et al., who defined an analytic function to predict visual acuity (NAFX) [15], and by Cesarelli et al., who defined a similar function (NAEF) [10]. An analysis of the signal characteristics near the desired position (target position) can easily take place in the time domain. On the contrary, time domain analysis techniques, such as velocity thresholds, region-based foveation identification, syntactic recognition or time series analysis, have been routinely employed in the last decades to analyse nystagmus, either congenital or vestibular.

Figure 3. An example of slow phase and fast phase (bold) separation in CN eye movement recordings.

Time domain analysis of congenital nystagmus is the most used technique and, in our opinion, the best option so far: it is able to estimate the visual ability of a subject at different gaze angles, and it allows separating the effects of changes in foveation time and alteration in eye position on visual acuity [10]. The analysis of these two separate effects is of strong importance due to the presence of a slow 'periodic' component in the eye movement signal, which we called baseline oscillation (BLO) [7,39]. However, its application with semi-automatic methods still needs improvement, both in performance and reliability.

The first step of each algorithm for the time analysis of rhythmic eye movements is the identification of cycles: in congenital nystagmus, the most common waveforms are jerk and jerk with extended foveation, followed by pendular and pseudo-cycloid [1]; however, only the first waveforms allow foveation times which ensure a good visual ability. The CN jerk waveforms can be described as a combination of two different actions: the slow phase, taking the eye away from the desired target, followed by the fast, corrective phase. The foveation takes place when eye velocity is small, which happens after the fast phase; hence, a local foveation window can be defined, located at the end of each fast phase and at the beginning of the slow phase.

Figure 4. The local foveation windows identified for each nystagmus cycle.
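A minimal velocity-threshold implementation of the slow/fast phase separation of Figure 3, together with the cycle identification described above, might look as follows. The 20 deg/s threshold and the sawtooth test signal are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def split_phases(t, pos, vel_threshold=20.0):
    """Label fast-phase samples by an absolute velocity threshold
    (deg/s) and return the indices ending each fast phase, i.e. the
    cycle boundaries used for per-cycle analysis."""
    vel = np.gradient(pos, t)
    fast = np.abs(vel) > vel_threshold
    ends = np.where(fast[:-1] & ~fast[1:])[0] + 1   # fast -> slow edges
    return fast, ends

# toy jerk recording: 150 ms slow drift (~13 deg/s), 30 ms reset (~69 deg/s)
fs = 1000.0
cycle = np.concatenate([np.linspace(0.0, 2.0, 150),
                        np.linspace(2.0, 0.0, 30)])
pos = np.tile(cycle, 6)
t = np.arange(pos.size) / fs
fast, cycle_ends = split_phases(t, pos)
```

Each index in `cycle_ends` marks the start of a local foveation window, since foveation follows the fast, corrective phase.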

i. slow oscillation. . Evans [24] reported that some of the analyzed patients fail to coordinate target with fovea position (approximately 50% of patients). Currie et al. 1986). pointed us to characterize in more details such foveation variability. Similarly. The high frequency limit result from the lowest frequency commonly associated to nystagmus (accordingly to Bedell and Loshin.16] it is possible to recognize slow oscillations superimposed to the nystagmus. a common least mean square (LMS) fitting technique could be used.12] and its relation with the SDp was estimated [7].e. Their results are that acuity depends on both foveation duration and position variability. was also reported by Gottlob et al.16]. [14] evaluated acuity for optotypes in healthy subjects using moving light sources to simulate retinal image motion that occurs in nystagmus. caused a worsening of visual acuity.g. which didn’t correspond to large extensions n foveation time. Nystagmus and the slow oscillation could modify visual acuity. in eye movement recordings presented by Dell’Osso et al. Moreover.1–1. using dynamical systems analysis to quantify the dynamics of the nystagmus in the region of foveation. [27]. Akman et al. In addition. tracking moving targets. superimposed to nystagmus. [15.11. obtained with botulinum toxin treatment.Eye Movement Analysis in Congenital Nystagmus 117 Slow Eye Movements in Congenital Nystagmus The role of the standard deviation of eye position (SDp) during foveations with respect to visual acuity has been discussed in the past ten years [10. although the presence of other sensory defects (e. or steady state. 1991. A slow sinusoidal-like oscillation of the baseline (baseline oscillation or BLO) was found superimposed to nystagmus waveforms [9.5 Hz can be considered as an estimator of the BLO frequency. is not unique. 
Physiologically this means that the control system does not appear to maintain a unique gaze position at the end of each fast phase.22 Hz) waves to the light stimuli. [5]. Presence of similar slow pendular waveforms. In order to estimate the slow sinusoidal oscillations. and Abadi and Dickinson. found that the state-space fixed point. they found that an addition of low-frequency (1. astigmatism) must be taken into account. Fostered also by a remarkable increase in some CN patients’ visual acuity. Kommerell [35] noticed that in CN patients. while the low frequency limit depends on the signal length corresponding to each gaze position (in our tests approximately 10 s). For each signal block the highest peak of the power spectrum of the eye movement signal in the range 0. the eye recording presented a slow eye movement superimposed to the stimulus trajectory in addition to nystagmic cycles.
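The two-step BLO estimate described above (largest spectral peak in the 0.1–1.5 Hz band, then an LMS sine fit at that frequency) can be sketched as below. `estimate_blo` is a hypothetical helper and the 10 s toy recording is an assumption; it is not the authors' implementation.

```python
import numpy as np

def estimate_blo(t, pos, f_lo=0.1, f_hi=1.5):
    """Two-step BLO estimate: (1) pick the largest power-spectrum peak
    in the f_lo..f_hi band as the BLO frequency; (2) least-squares fit
    a sine/cosine pair at that frequency to get the BLO amplitude."""
    dt = t[1] - t[0]
    x = pos - pos.mean()
    freqs = np.fft.rfftfreq(x.size, dt)
    spec = np.abs(np.fft.rfft(x))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f_blo = freqs[band][spec[band].argmax()]
    basis = np.column_stack([np.sin(2 * np.pi * f_blo * t),
                             np.cos(2 * np.pi * f_blo * t)])
    (a, b), *_ = np.linalg.lstsq(basis, x, rcond=None)
    return f_blo, np.hypot(a, b)

# toy 10 s recording: 4 Hz nystagmus (2 deg) plus a 0.3 Hz, 1 deg BLO
t = np.arange(0.0, 10.0, 0.002)
pos = 2.0 * np.sin(2 * np.pi * 4.0 * t) + 1.0 * np.sin(2 * np.pi * 0.3 * t)
f_blo, blo_amp = estimate_blo(t, pos)
```

The 10 s block length matches the low-frequency limit quoted in the text: with shorter blocks the 0.1 Hz bin would not be resolvable.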

Figure 5a and 5b. Examples of acquired signals showing the presence of the slow eye movement added up to the nystagmus oscillations.

Conclusion

Eye movement recording methodology is most commonly used as a research tool by neurologists, neurophysiologists, ophthalmologists, psychologists/psychiatrists, psychophysicists, and optometrists [18,21,25]. Eye movement recording and the estimation of concise parameters, such as waveform shape, direction of beating, nystagmus amplitude, frequency, foveation periods and eye position variability, are a strong support for an accurate diagnosis, for patient follow-up and for therapy evaluation [8].

Regarding the last parameter, the slow eye movement described as baseline oscillation explains most of the eye position variability during foveations (SDp) [7], which in turn was found exponentially well related to visual acuity [10]. The sine function is a rather good estimator of this slow periodic component added to nystagmus: the basic shape of the baseline is indeed a sinusoid, sometimes and randomly disrupted by phase inversions, interruptions (as short as hundreds of milliseconds, lasting to even 1 second) and other non-linear components. According to the procedure described above, baseline oscillation parameters can be estimated for any CN eye movement recording; the presence of the BLO is particularly evident in the signal tracts away from the null zone (i.e., not in the position in which nystagmus amplitude is lesser).

In a case study by Pasquariello et al., carried out on 96 recordings, almost 70% of the recordings had a BLO amplitude greater than 1° (approximately the fovea angular size); in the remaining 30%, the amplitude of the BLO was smaller and didn't affect visual acuity significantly. In that study, a high correlation coefficient (R2 = 0.78) was also found in the linear regression analysis of BLO and nystagmus amplitude, suggesting a strong level of interdependence between the two. The regression line slope coefficient was about 0.5, which implies that BLO amplitude on average is one half of the correspondent nystagmus amplitude.

The origin of such baseline oscillation is unknown. Some authors assert that slow movement can be recorded only in subjects with severely reduced visual experience from birth (like CN patients) [27]. However, the high value of the correlation coefficient between BLO and nystagmus amplitude found in this study suggests that the two phenomena are somewhat linked together, since BLO amplitude resulted directly related to nystagmus amplitude; therefore, the origin of the BLO could be searched for within the same ocular motor subsystems considered for nystagmus.
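The slope and correlation figures quoted above come from a linear regression of BLO amplitude on nystagmus amplitude. The snippet below reproduces that computation on synthetic data generated to mimic the reported slope of about 0.5; it does not use the study's 96 recordings.

```python
import numpy as np

# synthetic per-recording amplitudes (deg), generated to mimic the
# reported slope of ~0.5 -- NOT the study's actual data
rng = np.random.default_rng(0)
nys_amp = rng.uniform(1.0, 6.0, 96)               # nystagmus amplitude
blo_amp = 0.5 * nys_amp + rng.normal(0.0, 0.2, 96)  # BLO amplitude

slope, intercept = np.polyfit(nys_amp, blo_amp, 1)
r2 = np.corrcoef(nys_amp, blo_amp)[0, 1] ** 2
```

A scatter of `blo_amp` against `nys_amp` with the fitted line corresponds to the relationship shown in Figure 6.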

To the periodic component represented by BLO, small additional random movements should be added, in order to assess the whole variability of eye position during fixation [7].

Figure 6. The relationship between Baseline Oscillation and Nystagmus amplitude.

References

[1] Abadi RV, Bjerre A. Motor and sensory characteristics of infantile nystagmus. Br. J. Ophthalmol. 2002; 86: 1152-1160.
[2] Abadi RV, Dickinson CM. Waveform characteristics in congenital nystagmus. Doc. Ophthalmol. 1986; 64: 153-167.
[3] Abel LA, Wang ZI, Dell'Osso LF. Wavelet analysis in infantile nystagmus syndrome: limitations and abilities. Invest. Ophthalmol. Vis. Sci. 2008 Aug; 49(8): 3413-3423.
[4] Abel LA. Ocular oscillations. Congenital and acquired. Bull. Soc. Belge Ophthalmol. 1989; 237: 163-189.

[5] Akman OE, Broomhead DS, Clement RA, Abadi RV. Nonlinear time series analysis of jerk congenital nystagmus. J. Comput. Neurosci. 2006; 21(2): 153-170.
[6] Bedell HE, Loshin DS. Interrelations between measures of visual acuity and parameters of eye movement in congenital nystagmus. Invest. Ophthalmol. Vis. Sci. 1991; 32: 416-421.
[7] Bifulco P, Cesarelli M, Loffredo L, Sansone M, Bracale M. Eye movement baseline oscillation and variability of eye position during foveation in congenital nystagmus. Doc. Ophthalmol. 2003; 107: 131-136.
[8] Cesarelli M, Bifulco P, Loffredo L, Bracale M. Eye movement baseline oscillation in congenital nystagmus. Proceedings of the 4th European Conference on Engineering and Medicine, Warsaw, 1997; 301-302.
[9] Cesarelli M, Bifulco P, Loffredo L, Bracale M. Relationship between visual acuity and oculogram baseline oscillations in congenital nystagmus. VIII Mediterranean Conference on Medical and Biological Engineering and Computing - MEDICON '98, Lemesos, Cyprus, June 14-17, 1998 (CD-ROM).
[10] Cesarelli M, Bifulco P, Loffredo L, Bracale M. Relationship between visual acuity and eye position variability during foveation in congenital nystagmus. Doc. Ophthalmol. 2000; 101: 59-72.
[11] Cesarelli M, Bifulco P, Sansone M, Loffredo L, Bracale M. EOG baseline oscillation in congenital nystagmus. Proceedings of the WC2003; 2003: 426-429.
[12] Cesarelli M, Bifulco P, Loffredo L, Magli A. Analysis of foveation duration and repeatability at different gaze positions in patients affected by congenital nystagmus. IFMBE Proceedings 2007; 16(12).
[13] Clement RA, Whittle JP, Muldoon MR, Abadi RV, Broomhead DS, Akman O. Characterisation of congenital nystagmus waveforms in terms of periodic orbits. Vision Research 2002; 42: 2123-2130.
[14] Currie DC, Bedell HE, Song S. Visual acuity for optotypes with image motions simulating congenital nystagmus. Clin. Vis. Sci. 1993; 8: 73-84.
[15] Dell'Osso LF, Jacobs JB. An expanded nystagmus acuity function: intra- and intersubject prediction of best-corrected visual acuity. Doc. Ophthalmol. 2002; 104: 249-276.
[16] Dell'Osso LF, Van der Steen J, Steinman RM, Collewijn H. Foveation dynamics in congenital nystagmus. I: Fixation. Doc. Ophthalmol. 1992; 79: 1-23.

[17] Dell'Osso LF, Daroff RB. Congenital nystagmus waveforms and foveation strategy. Doc. Ophthalmol. 1975; 39: 155-182.
[18] Dell'Osso LF, Flynn JT. Congenital nystagmus surgery. A quantitative evaluation of the effects. Arch. Ophthalmol. 1979; 97(3): 462-469.
[19] Dell'Osso LF, Gauthier G, Liberman G, Stark L. Eye movement recordings as a diagnostic tool in a case of congenital nystagmus. Am. J. Optom. Arch. Am. Acad. Optom. 1972; 49(1): 3-13.
[20] Dell'Osso LF. Congenital, latent and manifest latent nystagmus - similarities, differences and relation to strabismus. Jpn. J. Ophthalmol. 1985; 29(4): 351-368.
[21] Dell'Osso LF, Schmidt D, Daroff RB. Latent, manifest latent, and congenital nystagmus. Arch. Ophthalmol. 1979; 97(10): 1877-1885.
[22] Dickinson CM, Abadi RV. The influence of the nystagmoid oscillation on contrast sensitivity in normal observers. Vision Res. 1985; 25: 1089-1096.
[23] Duke-Elder S. Systems of ophthalmology. Vol. III. London: Henry Kimpton; 1973.
[24] Evans N. The significance of the nystagmus. Eye 1989; 3: 816-832.
[25] Flynn JT, Dell'Osso LF. The effects of congenital nystagmus surgery. Ophthalmology 1979; 86(8): 1414-1427.
[26] Forssman B, Ringer B. Prevalence and inheritance of congenital nystagmus in a Swedish population. Ann. Hum. Genet. 1971; 35: 139-147.
[27] Gottlob I, Wizov SS, Reinecke RD. Head and eye movements in children with low vision. Graefes Arch. Clin. Exp. Ophthalmol. 1996; 234: 369-377.
[28] Harris C, Berry D. A developmental model of infantile nystagmus. Semin. Ophthalmol. 2006 Apr-Jun; 21(2): 63-69.
[29] Hertle RW, Dell'Osso LF. Clinical and ocular motor analysis of congenital nystagmus in infancy (see also comments). J. Am. Assoc. Pediatr. Ophthalmol. Strabismus 1999; 3(2): 70-79.
[30] Hertle RW, Zhu X. Oculographic and clinical characterization of thirty-seven children with anomalous head postures, nystagmus, and strabismus: the basis of a clinical algorithm. J. Am. Assoc. Pediatr. Ophthalmol. Strabismus 2000; 4(1): 25-32.
[31] Hertle RW. Nystagmus and ocular oscillations in childhood and infancy. In: Wright KW, Spiegel PH, Thompson LH, editors. Handbook of pediatric neuro-ophthalmology. New York: Springer; 2006; 289-323.
[32] Hosokawa M, Hasebe S, Ohtsuki H, Tsuchida Y. Time-frequency analysis of electro-nystagmogram signals in patients with congenital nystagmus. Jpn. J. Ophthalmol. 2004; 48(3): 262-267.

[33] Houben MMJ, Goumans J, van der Steen J. Recording three-dimensional eye movements: scleral search coils versus video oculography. Invest. Ophthalmol. Vis. Sci. 2006; 47(1): 179-187.
[34] Hu DN. Prevalence and mode of inheritance of major genetic eye disease in China. J. Med. Genet. 1987; 24: 584-588.
[35] Kommerell G. Congenital nystagmus: control of slow tracking movements by target offset from the fovea. Graefes Arch. Clin. Exp. Ophthalmol. 1986; 224(3): 295-298.
[36] Leigh RJ. Clinical features and pathogenesis of acquired forms of nystagmus. Baillieres Clin. Neurol. 1992; 1(2): 393-416.
[37] Miura K, Hertle RW, FitzGibbon EJ, Optican LM. Effects of tenotomy surgery on congenital nystagmus waveforms in adult patients. Part I. Wavelet spectral analysis. Vision Research 2003; 43: 2345-2356.
[38] Norn MS. Congenital idiopathic nystagmus. Incidence and occupational prognosis. Acta Ophthalmol. 1964; 42: 889-896.
[39] Pasquariello G, Cesarelli M, Bifulco P, Fratini A, La Gatta A, Romano M. Characterisation of baseline oscillation in congenital nystagmus eye movement recordings. Biomedical Signal Processing and Control 2009 Apr; 4(2): 102-107.
[40] Reccia R, Roberti G, Russo P. Spectral analysis of pendular waveforms in congenital nystagmus. Ophthalmic Research 1989; 21: 83-92.
[41] Reccia R, Roberti G, Russo P. Computer analysis of ENG spectral features from patients with congenital nystagmus. Journal of Biomedical Engineering 1990; 12: 39-45.
[42] Reinecke RD. Idiopathic infantile nystagmus: diagnosis and treatment. J. Am. Assoc. Pediatr. Ophthalmol. Strabismus 1997 June; 1(2): 67-82.
[43] van der Geest JN, Frens MA. Recording eye movements with video-oculography and scleral search coils: a direct comparison of two methods. J. Neurosci. Methods 2002; 114: 185-195.


In: Binocular Vision ISBN: 978-1-60876-547-8
Editors: J. McCoun et al., pp. 125-137 © 2010 Nova Science Publishers, Inc.

Chapter 5

EVOLUTION OF COMPUTER VISION SYSTEMS

Vladimir Grishin*
Space Research Institute (IKI) of the Russian Academy of Sciences,
117997, 84/32 Profsoyuznaya Str., Moscow, Russia

Abstract

Applications of computer vision systems (CVS) in the flight control of unmanned aerial vehicles (UAV) are considered. In many projects, CVS are used for precision navigation, landing (in particular shipboard landing), angular and linear UAV motion measurement, homing guidance and others. All these tasks have been successfully solved separately in various projects; all necessary technologies exist and their degree of maturity is high. Therefore, all UAV flight control tasks can be performed in automatic mode on the base of information that is delivered by CVS. The development of perspective CVS can be divided into two stages. The first stage of perspective CVS development is the realization of all the above tasks in a single full-scale universal CVS with acceptable size, weight and power consumption. The second stage of CVS development is the integration of CVS and control systems with artificial intelligence (AI). This integration will bring two great benefits. Firstly, it will allow considerable improvement of CVS performance and reliability due to accumulation of additional information about the environment. Secondly, the AI control system will obtain a high degree of awareness about the state of the environment. This allows the realization of a high degree of control effectiveness of the autonomous AI system in a fast changing and hostile environment.

* E-mail address: vgrishin@iki.rssi.ru

Introduction

The computer vision systems (CVS) revealed a great evolution during the last decades. This chapter attempts to estimate the nearest perspective for their development. Further analysis will be dedicated to the usage of CVS in mobile robot control systems, mainly in unmanned aerial vehicles (UAV). However, the main principles of this analysis are applicable to most of the different kinds of mobile robots.

Present-Day Level of CVS Development

A large number of publications is devoted to the application of CVS to different tasks of UAV flight control. This problem is one of the most challenging tasks of control theory and practice. The enumeration of these publications may take many pages, so here we refer to a few arbitrarily chosen papers. Let's list the key tasks of such CVS.

• High precision navigation [1–6]. This task can be solved by means of recognition (matching) of beforehand specified objects (landmarks) whose coordinates are known [1]. Since these landmarks are selected in advance and their reference patterns can be carefully prepared, the process of recognition (matching) can be reliably performed. Reference patterns are prepared with the account of different distances, perspective aspect angles of observation and observation conditions. Another demand which is imposed on these landmarks is the reliability of the detection and recognition process: reliability had to be guaranteed in conditions of possible imitation and masking. Further increasing of navigation reliability can be achieved by selection of a redundant number of landmarks in the whole area of observation. Reliability of landmark recognition can be subsequently increased by joint usage of landmark images and their 3D profiles. The technologies for 3D profile reconstruction are well known; 3D profiles are widely used for navigation of missiles of different kinds (Tomahawk cruise missiles and others). For instance, the complex of the Israeli firm VisionMap [2] can be referred to: the complex allows reconstructing a 3D profile with precision of about 0.2–0.3 m from the altitude of 3250 m. This complex is heavy enough and has considerable size; some weakening of the precision requirement will allow a significant decrease in weight and size. Information from CVS is usually integrated with information from the inertial navigation system.
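Landmark recognition against a prepared reference pattern is commonly implemented as normalized cross-correlation. The brute-force NumPy sketch below illustrates the principle only; operational systems use multi-scale, perspective-corrected reference patterns as described above, and far faster search schemes.

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalised cross-correlation landmark search.
    Returns ((row, col), score) for the best-matching window."""
    th, tw = template.shape
    tz = (template - template.mean()) / template.std()
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            s = patch.std()
            if s == 0:          # flat patch: correlation undefined, skip
                continue
            score = np.mean((patch - patch.mean()) / s * tz)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score

# toy 'reference pattern': a patch cut from the image itself at (12, 25)
rng = np.random.default_rng(1)
img = rng.normal(size=(40, 40))
tmpl = img[12:20, 25:33].copy()
loc, score = ncc_match(img, tmpl)
```

The NCC score is invariant to brightness and contrast changes, which is one reason this family of matchers tolerates varying observation conditions.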

This task is solved by the measurement of the angular and linear UAV motion relative to the observed surface or objects. A set of features (points in the frame with good localization) is selected and traced in subsequent frames. The observed shifts of the selected set of points are used for calculation of the relative angular and linear motion. The distance and 3D profile of observed objects can be calculated by stereo pairs. In the presence of an on-board high precision inertial navigation system, it is possible to make a 3D reconstruction in the monocular mode (with a longitudinal stereo base) [6]. In this case it is possible to make a 3D reconstruction of very distant objects and surfaces. The 3D profile of the observed surface can be calculated simultaneously [4, 5, 7]. The pose estimation and 3D reconstruction are frequently realized in a single algorithm, and this process is called SLAM (simultaneous localization and mapping) [4, 5]. Moreover, the CVS is often integrated with information from the inertial navigation system. Such integration allows a serious decrease of the accumulated errors of the inertial navigation system [3].

Evolution of Computer Vision Systems 127

• Flight stabilization and control of angular orientation [7–12]. For flight control, it is important to estimate the UAV orientation with regard to the local vertical (pitch and roll angles). These estimations are used for attitude stabilization and control. CVS sensors of the local vertical use algorithms of horizon line detection, recognition and tracing [11–12]. Joint usage of CVS and inertial navigation systems allows significant improvement of the precision and reliability of angular orientation [7, 8]. In this aspect, the star trackers should be mentioned. These devices are specialized CVS which are used in automatic spacecraft for measurement of angular position with high precision and reliability [9, 10].

• Near-obstacle flight [13–17]. The optical flow is used for estimation of flight altitude. Moreover, the optical flow calculation allows us to evaluate the risk of collision and to correct the direction of flight to avoid collision (obstacle avoidance). A good example of such control is the flight of an airplane or helicopter between buildings in an urban environment at an altitude of about 10–15 m [15]. Another example is the ground hugging flight or on-the-deck flight. It is a very complicated flight control task: the control system has to guarantee high precision control with a short time delay. The CVS is able to provide all necessary information for solving such control tasks.

• Landing [18–26]. Landing is the most dangerous stage of flight. During this stage the UAV can crash, some persons can be injured or property can be damaged.
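The stereo range calculation mentioned above can be illustrated with the standard rectified pinhole relation Z = fB/d. The sketch below is illustrative only (it is not taken from any of the cited systems), and all numbers are hypothetical; a longitudinal (forward-motion) monocular base obeys the same relation, with the baseline equal to the distance travelled between exposures.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Range to a point seen by a rectified stereo pair.

    Standard pinhole relation: Z = f * B / d, where f is the focal
    length in pixels, B the stereo base in metres, and d the horizontal
    disparity in pixels. Larger disparity means a nearer object.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

# Hypothetical camera: 700 px focal length, 0.3 m base, 7 px disparity.
print(depth_from_disparity(700.0, 0.3, 7.0))  # 30.0 (metres)
```

Note how range precision degrades at long distances: a fixed one-pixel disparity error corresponds to an ever larger depth error as disparity shrinks, which is why a long (e.g. longitudinal) base helps for distant surfaces.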

128 Vladimir Grishin

This task includes the recognition of the landing stripe or landing site and flight motion control relative to the landing stripe, see for example [19, 21 and 25]. CVS supports landing on unprepared (non-cooperative) sites. CVS is used also for landing site selection in conditions of forced landing: 3D reconstruction of the observed surface allows selection of the most proper landing area. CVS are used for autonomous landing on the surface of Solar system planets and small bodies [22, 24]. The most complicated task is the shipboard landing [26]. There are two main difficulties. The first: the landing stripe is small. The second: the ship deck is moving. In such a complicated condition, CVS can provide all necessary information for the control system which should realize this maneuver.

• Detection and tracking of selected moving targets [27–30]. A target can be a pedestrian, a car, a ship or another moving object. The CVS should guarantee reliable tracing of the specified target. The selected target can make evolutions, attempts to hide from the observation and so on. During this process the observed size of the tracked object changes very significantly. In the case of automatic tracking collapse, the CVS should use effective search algorithms for target tracking restoration.

• Homing guidance to selected objects [31]. It includes search, detection, recognition and tracking of the aim object. The homing task is similar to the tasks of navigation and tracking of selected objects. A significant problem is the multiple changing of distance during the homing process.

During the last decades, great attention had been paid to the pattern recognition problem. From the flight control CVS view, the pattern recognition problem can be divided into two problems.

• Recognition of preliminarily specified objects. In particular, these technologies are used in smart weapons (precision-attack bombs). In such cases the correlation matching technologies are used. Another example is the Tomahawk cruise missile, which is equipped with the so-called DSMAC (Digital Scene Matching Area Correlator) system. Other scene matching terminal guidance systems exist. This problem in fact is being solved in the high precision navigation task.

All these tasks have been successfully solved separately in different projects. The larger part of these projects belongs to the on-board real-time systems. The remaining part will be realized in the form of on-board real-time systems in the near future.
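As a toy illustration of the correlation matching idea behind DSMAC-style scene matching, the sketch below slides a small template along a one-dimensional brightness profile and returns the offset with the highest normalized cross-correlation. This is a minimal sketch of the general technique, not the actual DSMAC algorithm; the data are invented.

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation of two equal-length grey-level patches."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0  # constant patches correlate to 0

def best_match(row, template):
    """Slide the template along a scan line; return the offset of the NCC peak."""
    scores = [ncc(row[i:i + len(template)], template)
              for i in range(len(row) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

scene = [10, 12, 11, 40, 80, 40, 12, 11, 10, 9]   # invented brightness profile
tmpl  = [40, 80, 40]                              # invented reference template
print(best_match(scene, tmpl))  # 3
```

Because the correlation is normalized by mean and variance, the match survives global brightness and contrast changes between the reference image and the live sensor image, which is the property that makes scene matching usable for terminal guidance.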

• Recognition as a classification of observed objects [32-36]. This problem is being successfully solved during the development of automatic systems intended for the processing of visual information, such as space images of high resolution. The huge volume of such information makes manual processing impossible. Such tasks are realized on workstations, as they require a great deal of computational resources. But subsequent development of such systems, and some simplification of the task, will allow realizing them in the form of on-board real-time systems. The possibility to recognize a wide set of objects creates the necessary prerequisites for the development of artificial intelligence (AI) control systems.

Mention should be made that all these technologies have many common methods and algorithms.

Full-Scale Universal CVS

The development of perspective CVS can be divided into two stages. The first stage is the realization of all the listed tasks in a single full-scale system; in other words, the task is to combine (to join) these technologies into a whole interrelated system. Some attempts to move in the direction of such multifunctional CVS elaboration are currently appearing [37-42]. The accessibility of this task by means of present-day technologies is undoubted: all necessary technologies exist and their degree of maturity is high. Many algorithms have been suggested for the solution of any of the listed tasks, and one of the main aims is the selection of the most effective and reliable algorithms. Then it is necessary to develop the appropriate hardware and software. Serious attention should be drawn to the development of CVS architecture [40] and of special processors for the most calculation-consuming algorithms. It is highly probable that the development and production of specialized chipsets for image processing in CVS will be required. The cooperation of many groups of researchers, engineers and industry will be needed for the realization of this complicated CVS in acceptable size, weight and power consumption. These parameters should be acceptable for a wide area of practical applications (robots of different kinds). Realization of the first stage will allow complete automatic vision-based control of UAVs and other mobile robots. Small and cheap CVS will have a very wide area of application, comparable with the area of application of GPS navigation receivers. Another area of application is safety systems, which should be designed to prevent pilot errors in flight control or driver errors in car control.

Integration of CVS and AI Control System

One can say with confidence that the second stage of CVS development is the integration of CVS with AI control systems, which realize the functions of task analysis, operation planning, estimation of the current situation, realization, and correction of plans in real time. This allows realizing a high degree of control effectiveness in uncertain, fast changing and hostile environments. However, the realization of a full-scale AI control system is still too complicated a task; the second stage is characterized by a considerable degree of uncertainty, and the effective methods for developing such combined CVS-AI systems are debatable and rather complicated. But during the last 20-25 years quite acceptable approaches to the development of AI control systems and their integration with CVS have been formulated.

On the one hand, such integration allows an essential improvement of CVS performance and reliability due to the accumulation of additional information about objects of the environment, the possibility to undertake special actions for investigation of the environment, and the aggregation with the huge volume of non-visual information of the AI control systems. On the other hand, the AI control system obtains a high degree of awareness about the current state of the environment. In other words, a synergetic effect should take place. In this way, close interaction between the AI control system and the CVS should be established: the requirements of the AI system in the solution of a task will stimulate the build-up of CVS functions and opportunities. At a certain stage of AI system development these processes can occur autonomously. The possibility of gaining and accumulating the AI system's individual experience allows improving CVS reliability and effectiveness autonomously during the whole time of CVS and AI system operation. It stands to reason that the CVS and AI control system have to be tightly integrated with the conventional flight control system. Some activities in the direction of integrating some elements of AI and CVS are described in [43-49].

It should be emphasized once more that CVS is the base for AI control systems development due to its high informativity and high awareness. There are other highly informative sensors, such as radar sets and laser scanners. The modern radar set with an electronically scanned array can provide a huge information flow, but its size, weight and power consumption are much larger than the similar parameters of a TV camera. A laser scanner is capable of providing millions of measurements per second, but the TV camera cost is much smaller than the cost of a radar set or laser scanner. Moreover, a radar set and a laser scanner produce electromagnetic wave emission, which in some circumstances is highly undesirable. Thus the application area of CVS will be much wider than the application area of radar sets and laser scanners.

The most popular introduction to the principles of artificial and natural intelligence structure and function is presented in [50]; this book is dedicated mainly to brain functioning. A more complicated and advanced conception is presented in [51]; this book is dedicated to the development of autonomous AI systems and their functioning. Mention should be made that these authors traditionally paid great attention to artificial and natural neural network functioning. We describe here an over-simplified constructive conception which is suitable for embedded autonomous AI system design.

One can state that precedent thinking is the basis of natural and artificial intelligence. Principles of precedent accumulation and processing are used by all living creatures which have a nervous system of any kind. Memory capacity and the effectiveness of precedent processing determine the stage of the creature's evolution. For instance, the vision systems of such species as the frog or the crocodile are very primitive, as is their intelligence. However, starting from the survival task and the admissible size, weight and power consumption, such vision and intelligence systems are quite effective and useful. This fact is confirmed by the wide spreading and quantity of frog and crocodile species.

The development of effective methods of permanent precedent accumulation and processing is required. These methods should include precedent data base development, associative search, and methods of information and knowledge extraction from precedent data bases. Very significant precedent-based methods of situation evolution forecast should be developed. These methods should function in real time. It seems that more effective special processor architectures should be used in developing AI processors. Blind imitation of biological structures on a completely different technological base is ineffective. For instance, present-day aircraft wings permit flying at speeds which are absolutely inaccessible for birds with flapping wings or their mechanical imitations, and wheels allow a speed of motion which is absolutely inaccessible for legs or their mechanical imitations.

These AI conceptions are suitable for the development of control systems for mobile objects which are equipped with CVS. CVS can be considered as a basis for the gradual wide practical implementation of AI. The second stage will require much more time, money and other resources than the first stage. It will require the wide integration of different commands (groups) of researchers, engineers and industry. But even during the development process it is possible to obtain practically useful and commercially applicable results. So even not very advanced and perfect CVS and AI systems can find their useful applications and be commercially successful.
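A minimal sketch of the associative search over a precedent data base mentioned above: each precedent stores a feature vector describing a past situation together with the action taken, and the current situation retrieves the nearest stored precedent. This is an illustrative toy under invented assumptions (the records, feature vectors and actions are all hypothetical), not a design from the chapter.

```python
def nearest_precedent(db, situation):
    """Associative search: return the stored precedent whose feature
    vector is closest (squared Euclidean distance) to the situation."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(db, key=lambda rec: d2(rec["features"], situation))

# Hypothetical precedent records: (obstacle density, surface flatness).
db = [
    {"features": (0.9, 0.1), "action": "avoid"},
    {"features": (0.1, 0.8), "action": "land"},
]
print(nearest_precedent(db, (0.8, 0.2))["action"])  # avoid
```

A real-time system would replace the linear scan with an index (hash- or tree-based) so that retrieval cost stays bounded as the precedent data base grows, which is exactly the "associative search" requirement stated in the text.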

132 Vladimir Grishin

Conclusion

One of the most difficult and attractive aims is the development of fully autonomous unmanned aerial vehicles and other robots. These vehicles should be capable of effectively solving the required tasks, which should be carried out in complicated, unpredictable, varying and hostile environments without remote control in any form.

Returning back to UAV control, we see that the overwhelming majority of UAVs operate under remote control or under a preliminarily prepared mission plan which is based on waypoint navigation. The possibilities of remote control are very limited. Remote control requires a highly skilled operator; nevertheless, the crash ratio of remote-controlled UAVs is relatively high. Remote control also requires high reliability of information exchange with the UAV, and such information exchange is possible only in a limited range from the control station. Another drawback of remote control is the vulnerability to countermeasures. Mention should be made as well about the high degree of vulnerability of the popular GPS navigation to countermeasures. In the case of a complicated environment, the preparation of a flight plan for an automatic UAV is a complicated and time consuming task. The initial information which is used for flight plan preparation can grow old rather fast, and any unmapped obstacle can cause the UAV to crash. Flight in a physically cluttered environment (such as the streets of a city) on a preliminarily prepared mission plan is impossible. High awareness of computer vision systems is the necessary condition for the development of such advanced control systems.

References

[1] Xie, S.-R., Luo, J., Rao, J.-J., Gong, Z.-B. Computer Vision-based Navigation and Predefined Track Following Control of a Small Robotic Airship. Acta Automatica Sinica, vol. 33, issue 3, 2007, pp. 286-291.
[2] Pechatnikov, M., Shor, E., Raizman, Y. Visionmap A3 - Super Wide Angle Mapping System Basic Principles and Workflow. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVII, part B4, 2008.
[3] Veth, M., Raquet, J. Two-Dimensional Stochastic Projections for Tight Integration of Optical and Inertial Sensors for Navigation. Proceedings of the 2006 National Technical Meeting of the Institute of Navigation.

E. G. issue 4-5. Barrows. A. 2003. Karlsson. E.sodern. Proceedings of the 12th Australian International Aerospace Congress. 2007. Proceedings of the ASME International Mechanical Engineering Congress and Exposition (IMECE 04). pp. Waszak. C. Hygounenc. I. Estimation of Limitations in Precision for 3-D Surface Reconstruction. pp. P. vol. M. Proceedings of the 14th Saint Petersburg International Conference on Integrated Navigation Systems.pdf Dusha. P. Neural Nets and Optic Flow for Autonomous Micro-Air-Vehicle Navigation. Augmenting Inertial Navigation with Image-Based Motion Estimation. B. 17.. P. Gooding. Baker.. Ziman. W. 1-10. pp. Fixed-Wing Attitude Estimation Using Computer Vision Based Horizon Detection. C. 2004.. W. 4292. N. F. Vision-Guided Flight Stability and Control for Micro Air Vehicles. A. S. pp. Schon. 219-224... W. 4326–4333. R. Conte. 2008. 1998. 1-19. Soueres.. vol. Walker.. vol. Roumeliotis. A. Y. V. pp. Autonomous Vehicle Video Aided Navigation – Coupling INS and Video Approaches.. S. Lacroix. 4. Proceedings of 2008 IEEE Aerospace Conference.. ..-K. 5. Advanced Robotics. A. Debrunner.. D. M. S. 587–596. pp. I. vol..Evolution of Computer Vision Systems 133 [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] the 2006 National Technical Meeting of the Institute of Navigation. Severson.. 2002. M. 2004. Nechyba. 2007. Bessonov. D. Oh. pp. 534-543. M.. Hoff W. Grishin. Green.fr/site/docs_wsw/ fichiers_sodern/SPACE%20EQUIPMENT/FICHES%20DOCUMENTS/SE D26. issue 2. R. Tornqvist. I. A.. Ya. Forsh. pp. http://www. pp. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ‘02). pp.. vol. Borodovskiy. SED 26 Star Tracker. Ifju. Jung. Montgomery. The Autonomous Blimp Project of LAAS-CNRS: Achievements in Flight Control and Terrain Mapping. BOKZ-M star tracker and its evolution. R... Man and Cybernetics.. G.. Utilizing Model Structure for Efficient Simultaneous Localization and Mapping for a UAV Application. 2006. Ettinger. 2. L. Kudelin.. 
V. V. 2007.. The International Journal of Robotics Research. Gustafsson F. vol. J. Avanesov. Johnson. 1-7. issue 7. T. (2006). Transactions on IEEE International Conference on Systems. G. G. Boles. 473-511.. S. E.. 23. 4387-4391.. Advances in visual computing: Proceedings of the Second International Symposium on Visual Computing (ISVC 2006). 617-640. C.

A Cramer–Rao Bound for the Measurement Accuracy of Motion Parameters and the Accuracy of Reconstruction of a Surface Profile Observed by a Binocular Vision System. J. Robotics and Autonomous Systems. 507–513. G. On-Board RealTime Image Processing to Support Landing on Phobos. S. Proceedings of the IEEE International Conference on Robotics and Automation... Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA'2005).. O. Proceedings of the 7th International Symposium on Reducing Costs of Spacecraft Ground Systems and Operations (RCSGSO). M. issue 3. Grishin. 2005.. A. Hubbard. R.... F.. V. J. 19. Ansar.-A. [16] Ortiz... pp. Foch.. J. 2594-2599. Hartley. Navigation. W. pp. Sukhatme. 2003. D.. 2005. Doncieux. [22] Zhukov. [20] Sharp. Mourikis. Campbell. N.. A Vision Based Emergency Forced Landing System for an Autonomous UAV. 195-209. Object Detection and Avoidance Using Optical Techniques in Uninhabited Aerial Vehicles. Floreano. Whalley.. S.-C. vol. Proceedings of the American Helicopter Society 62nd Annual Forum. Proceedings of Australian International Aerospace Congress Conference. H... Rowley... Matthies. [24] Trawny. [19] Fitzgerald. A.. Matthies. Roumeliotis. issue 3... Visually-Guided Landing of an Unmanned Aerial Vehicle. D.. 2007. S. J. Ansar. E. Walker. L. 371-380. D. vol. A. R. Flight Trials of a Rotorcraft Unmanned Aerial Vehicle Landing Autonomously at Unprepared Sites. J. [23] Grishin. A. Ramamurti. H. McFarlane. Montgomery. [17] Zufferey.... and Control Conference and Exhibit. Gordon. S. 2005. [15] Muratet. Spears. Shakernia. [18] Saripalli. [21] Theodore. B.. A. L. C. Toward 30-gram Autonomous Indoor Aircraft: Vision-based Obstacle Avoidance and Altitude Control. J. p.. 2. Y.. Pattern Recognition and Image Analysis.. L. vol.134 Vladimir Grishin [14] Kellogg. 2001. R. 1720-1727. 423-428.. N. Avanesov. A. I. A Contribution to Vision-Based Autonomous Helicopter Flight in Urban Environments. F. Meyer.... 
Proceedings of the Bristol RPV/AUV Systems Conference.. D. D. vol. pp. 1250-1264. pp. C. (2001 ICRA). S. Krasnopevtseva.. B.. I. C. Gardner. Sciambi. Briere. J. 2006. D.. S. Proceedings of the AIAA Guidance. 18. A. Montgomery. pp. 2001. The NRL MITE Air Vehicle.. 50. Dahlburg.. V. D. Bovais. E. S. E. Neogi. Sullivan.. S. A Vision System for Landing an Unmanned Aerial Vehicle. IEEE Transactions on Robotics and Automation. Sastry. Pipitone. F. Goldberg. Coupled Vision and Inertial . G. Srull. R. 2008. N. issue 4. Johnson. pp. 2007. C. Kamgar-Parsi.

R. F. T.2/ADA316752 Rekik. Hinz. Remote Sensing.. 661-666. 2001. Lenhart. pp. Proceedings of the 2005 International Conference on Intelligent Sensors. B. L. and Spatial Information Sciences. 2005. issue 9. M. vol. Saripalli. International Journal of Signal Processing. Walker. Lari.402.. International Archives of Photogrammetry. D. D. S. Zhao.. M.. OATS: Oxford Aerial Tracking System. Gruber.. 2007. Automatic Change Detection of Geospatial Databases Based on a Decision-Level Fusion . G. 2003. issue 1. Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001). 24. T. R. H. Proceeding of the Conference on Information Extraction from SAR and Optical data. B. Kampouraki. (1996)..mil/100. Z. S. T. Samadzadegan. Campbell.. vol. Sensor Networks and Information Processing (ISSNIP). 1. A.. T. A. Helble. Min-Shou T. Nguyen. paper B2P2. pp.. Robotics and Autonomous Systems.Evolution of Computer Vision Systems 135 [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] Navigation for Pin-Point Landing. pp. 710-717. The Suitability of ObjectBased Image Segmentation to Replace Manual Aerial Photo Interpretation for Mapping Impermeable Land Cover. Automatic Vehicle Tracking in Low Frame Rate Aerial Image Sequences. http://handle. 2006. A Vision Based Forced Landing Site Selection System for an Autonomous UAV. pp. pp. 5. vol. 2007. Sukhatme. G. Proceedings of the 2007 Annual Conference of the Remote Sensing & Photogrammetry Society (RSPSoc2007). vol. The Application of Correlation Matching Technique in Image Guidance. Cameron. Hamida. vol.. 38-45. Wood... 397. 2009. Proceedings of the NASA Science and Technology Conference (NSTC’07). Nevatia. Grabner. pp. Benjelloun. Brewer. A. Automatic Extraction of Building Features from High Resolution Satellite Images Using Artificial Neural Networks. H.. Proceedings of the International Conference on Field and Service Robotics. A. 55. 203208. Abbaspour. 87-95. 
Proceedings of the IEEE International Conference on Research Information and Vision for the Future (RIVF’07). H. pp. On-line Boosting for Car Detection from Aerial Images. Zribi.dtic. 2007. Ebadi. 277-286. H. R. Bischof.. A. R. Car Detection in Low Resolution Aerial Image. Fitzgerald. An Optimal Unsupervised Satellite image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization. Landing on a Moving Target using an Autonomous Helicopter. S. M.. 36-3. with Emphasis on Developing Countries.. 2007. Hahn. M. S. 2007. D.

Forssen. pp. pp. Vision-Only Control of a Flapping MAV on Mars. 2004. Skarman.. Atmosukarto. issue 12. Wagter.. WITAS: An Intelligent Autonomous Aircraft Using Active Vision.. Moe. 489-491.. 1635– 1642.-E. Nechyba.. P. Roumeliotis.. F. 2004.. Proceedings of the 43rd IEEE Conference on Decision and Control. Prazenica. Proctor.. Wagter. [36] [37] [38] [39] [40] [41] [42] [43] [44] [45] . P. I. A flight control system for aerial robots: algorithms and experiments. Object and Event Recognition for Aerial Surveillance. Shim.. Vision for a UAV helicopter.. 2003. 11.. Shapiro. Artificially Intelligent Autonomous Aircraft Navigation System Using a Distance Transform on an FPGA. Doherty. Navigation and Control Conference and Exhibit. V. 2003.pdf Granlund. Montgomery. 23. 139-149.136 Vladimir Grishin Technique. H. W. A. Journal of Field Robotics. A. E. vol. Granlund.altera. A. 2006. Kurdila. A. (2006). Li.. G. B. pp. vol. 2008. 245-267. N. vol. G. 1071-1086. Wiklund. G. Nordberg.. 2002.. K. H. Vision-Only Aircraft Flight Control. Binev.. vol. S. C. pp. (in Russian). A. E. Proceedings of the XXth International Society for Photogrammetry and Remote Sensing Congress (ISPRS 2004). J. D. L. J. R. M. L. Smith. www. J. A. C..... Aust. pp. 5781.. Architecture of Computer Vision Systems Intended for Control of Aircraft.. H. E. pp. D. Johnson. J. 8. Johnson. issue 3-4. I. D. Nordberg. Control engineering practice.. pp.. R. 29-34.com/literature/dc/2006/a1. pp. Farneback. M. Kim. 2. Mulder J. Wiklund. A. S. Proceedings of the UAV 2000 International Technical Conference and Exhibition. Proceedings of AIAA Guidance. DeVore.. 1-7. Dahmen.B. Yuen. Grishin.. Doherty. 1389-1400. G.. Workshop WS6 on aerial robotics. Proceedings of the Fourth International Conference Parallel Computations and Control Problems (PACO ‘2008). 2005. A.. Proceedings of the 22nd AIAA Digital Avionics Systems Conference (DASC '03). Vision-Based Control of Micro–Air–Vehicles: Progress and Problems in Estimation. 
Sandewall E. N. Matthies. R. The Jet Propulsion Laboratory Autonomous Helicopter Testbed: A Platform for Planetary Exploration Technology Research and Development.2-81-11. Sharpley. 2007.. K. vol. P. 2. Y. 2000. Kobashi. W. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'02). P. Bijnens. pp.. Proceedings of the SPIE Conference on Optics and Photonics in Global Homeland Security. J.

Managing Dynamic Object Structures using Hypothesis Generation and Validation. 2... Noble. J. 2004. 2004. vol. Laboratory of Knowledge: Moscow.. vol. 352. Biological Sciences.. pp. [47] Heintz. S. Henry Holt and Company: New York. J. F. and Communication. Knowledge-based vision and simple visual machines. 1997.Evolution of Computer Vision Systems 137 [46] Dickmanns. 1577-1592. Development and Test of Highly Autonomous Unmanned Aerial Vehicles. N. A. NY. Vehicles Capable of Dynamic Vision. [51] Zhdanov. E. vol. R. pp. Adaptive and Intellectual systems.. 2008. issue 1358. P. 54-62. . 1997. A. pp. Russia. Proceedings of 15th International Joint Conference on Artificial Intelligence (IJCAI-97). Philosophical Transactions of the Royal Society. Ha. Autonomous artificial intelligence. 1165-1175. Tannenbaum. issue 12. J. Proctor. 485-501. E. Proceedings of the AAAI Workshop on Anchoring Symbols to Sensor Data. Binom.. On Intelligence. AIAA Journal of Aerospace Computing. Times Books. A. D. pp. A. Blakeslee. D. A. 1. (in Russian). [48] Cliff. 2004. Information. Doherty. [49] Johnson. [50] Hawkins.


In: Binocular Vision
Editors: J. McCoun et al., pp. 139-153
ISBN: 978-1-60876-547-8
© 2010 Nova Science Publishers, Inc.

Chapter 6

BINOCULAR VISION AND DEPTH PERCEPTION: DEVELOPMENT AND DISORDERS

Ken Asakawa* and Hitoshi Ishikawa
Department of Ophthalmology and Visual Science, Doctors Program of Medical Science, Kitasato University Graduate School, 1-15-1 Kitasato, Sagamihara, Kanagawa 228-8555, Japan

* E-mail address: dm07002u@st.kitasato-u.ac.jp. Correspondence to Ken Asakawa, CO (Orthoptist), Department of Ophthalmology and Visual Science, Doctors Program of Medical Science, Kitasato University Graduate School.

Introduction

"Binocular vision" literally means vision with two eyes, and refers to the special attributes of vision with both eyes open, rather than one eye only. Our perception under binocular conditions represents a highly complex coordination of motor and sensory processes and is markedly different from, and more sophisticated than, vision with one eye alone. What, then, is the reason for, and the advantage of, having two eyes? From our visual information input, we can perceive the world in three dimensions even though the images falling on our two retinas are only two-dimensional. How is this accomplished? However, the use of a pair of eyes can be disrupted by a variety of visual disorders; for example, incorrect coordination between the two eyes can produce strabismus with its associated sensory problems, e.g., amblyopia, suppression and diplopia. This article is a review of our ability to use both eyes, while also providing basic information on the development of binocular vision and on the clinical disorders, such as strabismus and amblyopia, that interfere with our depth perception.

1. Advantages of Binocular Vision

"Two eyes are better than one," it is said, and, indeed, two eyes do offer a number of advantages over just one. It is reported that some 80% of the neurons in the visual cortex receive input from both eyes, which offers anatomical support for the view that binocular vision is an attribute of considerable value and importance. Clearly, binocular vision has a number of functional advantages, the main ones being:

1) Binocular summation, in which many visual thresholds are lower than with monocular vision[16]. Binocular visual acuity is typically better than monocular visual acuity, and two eyes offer better contrast detection thresholds than one does.

2) The binocular field of view is larger than either monocular field alone. We have a horizontal field of approximately 200 degrees, in which the two visual fields overlap by about 120 degrees when both eyes are used together[29].

3) If one looks at a fingertip in front of the eyes, noticing what can be seen behind it, and one first closes one eye and then the other, the objects behind the fingertip should appear to move. This positional difference results from the fact that the two eyes are arranged laterally and are a certain distance, the interocular distance (60 to 65 mm), apart. They therefore see the world from two slightly different points. The subtle differences between the images entering each eye make possible the binocular form of depth perception, known as "stereopsis", which is the true advantage of binocular vision. We can see objects whose images are formed on both foveas as if their images fell on a single point midway between the two eyes, like an imaginary single eye in the middle of our forehead, named a "cyclopean eye" [7, 50] (figure 1). Binocular vision and stereopsis have been investigated in the large, detailed studies of Howard and Rogers[32], Saladin[51] and Watt[63].

2. Foundations of Binocular Vision

Images of a single object that do not stimulate corresponding retinal points in both eyes are said to be disparate[22, 37]; binocular disparity is defined as the difference in position of corresponding points between the images in the two eyes [48, 49, 45, 55, 56]. Binocular disparity can be classified as crossed or uncrossed in relation to the point at which the two eyes converge (the fixation point)[44].

Binocular retinal correspondence is defined by the set of retinal image locations that produces identical visual directions when viewing with both eyes at the same time. The surface of points in space imaged onto corresponding retinal points, called the horopter, stimulates the perception of identical visual directions for the two eyes[57, 58], and can be imagined as a cylinder with an infinite radius of curvature. The Vieth-Müller circle, a theoretical prediction of the objects in space that stimulate corresponding points in the two eyes, intersects the fixation point and the entrance pupils of each eye. Points perceived to be nearer than the fixation point (within the Vieth-Müller circle) generally have lines of sight that cross in front of the fixation point; these points are said to have crossed disparity. Points farther away than the fixation point have lines of sight that meet behind the fixation point; this is called uncrossed disparity.

Small differences in the perception of the two eyes give rise to stereopsis, three-dimensional depth perception, whereas diplopia is the result of a large binocular disparity: the double image arises from non-corresponding retinal areas under binocular vision. Because single binocular vision only requires the retinal image to fall within Panum's area, the visual system is able to combine the two images into a single percept with smaller disparities; Panum's area determines the upper limit of disparities that can produce single vision[41, 54]. When a distant object is fixated bifoveally, nearer objects in front of it will be imaged on the temporal retina of each eye on noncorresponding points, resulting in a double image; this is called crossed diplopia. In contrast, when a near object is fixated and a distant object is seen double, each image is formed on the nasal retina of the eye; this is called uncrossed diplopia. These phenomena are called physiological diplopia.

However, the horopter will not precisely intersect only the fixation target: a small residual misalignment of the visual axes (vergence error) may occur, causing a constant retinal disparity of a fixated object without diplopia. This fixation disparity is used by the vergence eye movement system to maintain its innervational level and compensate for a heterophoria. In the United States, testing has primarily followed a motor approach, whereas a strong sensory-based analysis has been used in Germany[39]; in fact, the relationship between motor and sensory fusion is more complex[25].
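The crossed/uncrossed classification can be expressed numerically: the sign of the angular disparity of a midline object follows from the difference between the vergence angle subtended at the object and that subtended at the fixation point. The sketch below is a geometric illustration only; it assumes a 62 mm interocular distance (the text gives 60 to 65 mm) and symmetric midline fixation.

```python
import math

def disparity_sign_deg(fix_dist_m: float, obj_dist_m: float,
                       iod_m: float = 0.062) -> float:
    """Angular disparity (degrees) of a midline object relative to the
    fixation point: vergence angle at the object minus vergence angle
    at fixation. Positive -> crossed disparity (object nearer than
    fixation); negative -> uncrossed disparity (object farther)."""
    def vergence(d):
        return 2.0 * math.degrees(math.atan(iod_m / (2.0 * d)))
    return vergence(obj_dist_m) - vergence(fix_dist_m)

print(disparity_sign_deg(1.0, 0.5) > 0)   # True  (crossed)
print(disparity_sign_deg(1.0, 2.0) > 0)   # False (uncrossed)
```

Whether such a disparity yields single vision, stereopsis, or diplopia then depends on its magnitude relative to the limits of Panum's area discussed above.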

Crossed and uncrossed disparities result when objects produce images that are formed on closely separated retinal points. Any point within Panum's area yields a percept of a single image, while points outside Panum's area produce diplopia.

Figure 1.

3. Stereopsis as the Highest Level of Binocular Vision

The first detailed descriptions of binocular vision were given by Worth (1921), who classified binocular vision into three grades. The first degree consists of the simultaneous perception of each eye's image at once. The second degree consists of the combination of the two images into a single percept, i.e., fusion.

Fusion includes both motor and sensory fusion. The third degree and highest level of binocular visual function is stereopsis—binocular, three-dimensional depth perception resulting from the neural processing of horizontal binocular disparities (figure 2). The classical model of binocular visual function is thus composed of three hierarchical degrees. However, stereopsis is not the only way to obtain depth information; even after closing one eye, we can still determine the relative positions of objects around us and estimate our spatial relationships with them. The clues that permit the interpretation of depth with one eye alone are called monocular clues. They include pictorial clues, such as the size of the retinal image, linear perspective, aerial perspective, texture gradients, and shading, as well as non-stereoscopic clues, such as accommodation, motion parallax, and structure from motion[62].

Figure 2.

4. Binocular Viewing Conditions on Pupil Near Responses

Here, the effect of binocular clues on the near pupil response is introduced as our preliminary research. When changing visual fixation from a distant to a close object, accommodation of the crystalline lens, convergence, and pupil constriction occur—the three feedback responses that constitute the near reflex[42]. We investigated the amplitudes of vergence eye movements associated with pupil near responses for subjects with prepresbyopia and presbyopia under binocular and monocular viewing conditions, using the dynamics of a step change in real target position from far to near (figure 3).

We measured and recorded the dynamics of pupil and convergence simultaneously with the step stimuli of a real target in real space. When both eyes are oriented toward a target, a fused perception of the target is formed, and, through the processing of retinal disparity, depth perception can be achieved. As object distances from the plane of fixation increase, retinal image disparities become large and an object appears to be in two separate directions, i.e., diplopia.

The findings of these experiments were that the convergence response with pupil miosis was induced in all cases under binocular viewing conditions (figure 4A,C), whereas only presbyopic subjects showed version eye movement without pupil constriction under monocular conditions (figure 4D). Since the change in real target position was performed in real space under binocular viewing conditions, proximal and disparity clues were all available and were in conjunction with each other[47]; viewing a nearby target binocularly yields proximal and disparity clues[20,31]. In young subjects, accommodation is active, and the pupil near response with blur-driven convergence is well induced despite the monocular viewing condition. On the other hand, the results of presbyopic subjects under binocular conditions suggested that binocular visual function, such as fusion of the real target, is a most important factor in the induction of the pupil near response. Our findings imply that accommodation, which is high in younger subjects but becomes progressively restricted with age, and proximity induce pupil constriction; in presbyopia, proximity induces pupil constriction, resulting from the inability to accommodate[27,43].

Figure 3. Infrared CCD camera; target (near).
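The far-to-near step stimuli used in such recordings impose a quantifiable convergence demand. As a rough, standard clinical approximation (the interpupillary distance and target distances below are assumed, not the study's):

```python
def vergence_demand_pd(ipd_cm, distance_m):
    # Total convergence demand in prism diopters: IPD (cm) / distance (m),
    # a small-angle approximation used clinically.
    return ipd_cm / distance_m

# Assumed: 6.0 cm IPD; step from a 3 m far target to a 0.33 m near target.
far_demand = vergence_demand_pd(6.0, 3.0)
near_demand = vergence_demand_pd(6.0, 0.33)
step_demand = near_demand - far_demand   # convergence the step stimulus calls for
```

Under these assumptions the far-to-near step asks for roughly 16 prism diopters of additional convergence, which is the kind of response amplitude such pupil/vergence recordings track.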

Figure 4. Measured data under binocular viewing conditions: the upper trace is from a young subject (A), and the lower from a subject with presbyopia (B). The young subject's typical results under monocular (non-dominant eye occluded) visual conditions (C). A typical trace of a subject with presbyopia showed conjugate eye movement without pupil constriction (D).

Figure 5. Suppression and retinal correspondence in strabismus with esodeviation. (A) Normal subject. (B) A strabismic patient with normal retinal correspondence and without suppression would have diplopia (B-1) and visual confusion (B-2), a common visual direction for two separate objects. (C) Elimination of diplopia and confusion by suppression of the retinal image (C-1) and anomalous retinal correspondence (C-2): adaptation of the visual directions of the deviating eye. SA = subjective angle; OA = objective angle; AA = angle of anomaly (OA - SA).

5. Development of Binocular Vision

Since Wheatstone (1838), stereopsis has been one of the most popular fields of vision research. A major question is whether binocularity and stereopsis are present at birth or whether infants must learn to see binocularly and three-dimensionally. The visual system takes approximately 6 weeks to become sensitive to visual stimulus deprivation, and binocular vision first appears at about 3 months of age. Visual experience has its greatest effects at about 6 months of age, with effects diminishing rapidly until about 6 years of age[6,62,19], although it never tapers off completely[52]. During the critical period of rapid visual change between 6 weeks and 3 months after birth, infants are at a greater risk of developing visual abnormalities than at any other life stage. Therefore, infants are extremely susceptible to severe visual disorders arising from inadequate visual experience during the critical period. Disorders affecting stereopsis include blur, strabismus, and amblyopia, and the clinical measurement of stereopsis is of value as a means of indirect screening; it is routinely measured in clinical practice[40].

Infantile esotropia is the most common form of strabismus: a stable, cross-fixational, large-angle esotropia with onset before 6 months of age, with no refractive and accommodative component responsible for the deviation; cycloplegic refraction reveals less than 3D of hyperopia. Accommodative esotropia, in contrast, usually occurs between 6 months and 7 years of age, with an average age of onset of 3 years[17]. The amount of hyperopic refractive error in accommodative esotropia averages +4D, and the esodeviation is restored to orthophoria by optical correction of the underlying hyperopia[1]. In the non-refractive form, hyperopia averages +2D, and the esodeviation (not related to uncorrected refractive error) is caused by a high AC/A ratio.

The normal sensory organization of binocular vision can be altered in infantile strabismus by suppression or anomalous retinal correspondence (figure 5); therefore, most strabismic patients do not experience diplopia and visual confusion[2,24]. Single vision is achieved by suppression, which causes elimination of the perception of objects normally visible to the deviating eye during simultaneous binocular viewing[28,34,46]. Anomalous retinal correspondence is an adapted shift in the visual directions of the deviated eye relative to the normal visual directions of the fixating eye[4,35,65]. The net result is that the deviating eye acquires a common visual direction to that of the fovea of the fixating eye during binocular viewing of a peripheral retinal area[5,40]. The type and extent of sensory adaptation are important factors in the re-establishment of functional binocular vision for disorders such as strabismus and amblyopia in children[8,21].

Abnormal development of spatial vision causes amblyopia. Clinically, amblyopia is defined as a reduction in visual function caused by abnormal visual experience during development[30,59], such as strabismus, uncorrected refractive error, and visual stimulus deprivation. Strabismic amblyopia refers to amblyopia that is associated with the presence of strabismus, typically either esotropia or exotropia, with decreased visual acuity that cannot be attributed to the suppression scotoma. The strabismic eye also shows a pronounced suppression of the central and peripheral visual field[26,14]. In addition, there is a contrast-dependent deficit that is strongly dependent on spatial frequency and a contrast-independent deficit for the position of targets[18,53]. Anisometropic amblyopia is caused by significant, unequal refractive errors between the eyes, exceeding +2D. Ametropic amblyopia may have equal refractive errors that are either extremely myopic (more than -6D) or hyperopic (more than +4D). Yet another kind of amblyopia, meridional amblyopia, is caused by astigmatic refractive errors present for long periods (more than 2 years)[3]. Moreover, form vision deprivation amblyopia occurs in patients with a constant obstruction in the image formation mechanism of the eye, such as congenital ptosis, or congenital or traumatic cataracts and corneal opacities that remain untreated for some time[9,23].

The accepted strabismus treatment is wearing appropriate glasses and eye muscle surgery. For infantile esotropia with significant fixation preference, occlusion therapy and surgery are associated with normal acuity development and a potential for at least gross stereopsis[10,61]. However, several factors, including patient age at surgical alignment and duration of misalignment, influence the outcome of treatment. According to the recent study, early abnormal binocular visual input contributes to poor outcomes in both infantile and accommodative esotropia[33]. Traditional amblyopia treatment consists of full-time occlusion of the sound eye using an adhesive patch; recent trends include prescribing fewer hours and using atropine as an alternative or adjunct to patching, or even as a first-line treatment[36]. Pediatric cataract treatment is now undergoing rapid development, and the visual prognosis for children with cataracts is improving due to earlier surgery, increased frequency of intraocular lens (IOL) implantation, and improved amblyopia therapy. These treatments may prevent the development of sensory and motor dysfunctions[60]. Future studies should establish critical factors for achieving stable binocular vision.

Conclusion

Binocular vision requires a high level of coordination between motor and sensory processes; binocular vision and stereopsis will be compromised if any component in this system fails. Visual inputs from both eyes are combined in the primary visual cortex (V1), where cells are tuned for binocular vision. The observation of the tuning of cells in V1, together with psychophysical evidence that stereopsis occurs early in visual processing, suggested that V1 was the neural correlate of stereoscopic depth perception; more recent work, however, has indicated that this occurs in higher visual areas (in particular, the MT area). The present review provides the basic information on normal and abnormal binocular vision that forms the foundation for the clinical disorders of binocular vision. In the future, we would like to review the neural integration of depth perception and binocular vision. We look forward to new ideas and research on binocular vision[12].

References

[1] Asakawa K, Ishikawa H, Shoji N. New methods for the assessment of accommodative convergence. J. Pediatr. Ophthalmol. Strabismus. 2009;46:273-277.
[2] Asher H. Suppression theory of binocular vision. Br. J. Ophthalmol. 1953;37:37-49.
[3] Atkinson J. Infant vision screening: prediction and prevention of strabismus and amblyopia from refractive screening in the Cambridge photorefraction program. Oxford University Press; 1993.
[4] Awaya S, von Noorden GK, Romano PE. Anomalous retinal correspondence in different positions of gaze. Symposium: Sensory Adaptations in Strabismus. Am. Orthopt. J. 1970;20:28-35.
[5] Bagolini B. Anomalous correspondence: definition and diagnostic methods. Doc. Ophthalmol. 1967;23:346-398.
[6] Banks MS, Aslin RN, Letson RD. Sensitive period for the development of human binocular vision. Science. 1975;190:675-677.
[7] Barbeito R. Sighting from the cyclopean eye: the cyclops effect in preschool children. Percept. Psychophys. 1983;33:561-564.
[8] Birch EE, Gwiazda J, Held R. Stereoacuity development for crossed and uncrossed disparities in human infants. Vision Res. 1982;22:507-513.
[9] Birch EE, Stager DR. Prevalence of good visual acuity following surgery for congenital unilateral cataract. Arch. Ophthalmol. 1988;106:40-43.

[10] Birch EE, Stager DR, Berry P, Everett ME. Prospective assessment of acuity and stereopsis in amblyopic infantile esotropes following early surgery. Invest. Ophthalmol. Vis. Sci. 1990;31:758-765.
[11] Birch EE. Stereopsis in infants and its developmental relation to visual acuity. Oxford University Press; 1993.
[12] Brodsky MC. Visuo-vestibular eye movements: infantile strabismus in 3 dimensions. Arch. Ophthalmol. 2005;123:837-842.
[13] Brown AM, Lindsey DT, Satgunam P, Miracle JA. Critical immaturities limiting infant binocular stereopsis. Invest. Ophthalmol. Vis. Sci. 2007;48:1424-1434.
[14] Burian HM. Anomalous retinal correspondence. Its essence and its significance in diagnosis and treatment. Am. J. Ophthalmol. 1951;34:237-253.
[15] Burian HM. Doc. Ophthalmol. 1951;5-6:169-183.
[16] Campbell FW, Green DG. Monocular versus binocular visual acuity. Nature. 1965;208:191-192.
[17] Coats DL, Avilla CW, Paysse EA, Sprunger D, Steinkuller PG, Somaiya M. Early onset refractive accommodative esotropia. J. AAPOS. 1998;2:275-278.
[18] Demanins R, Hess RF. The neural deficit in strabismic amblyopia: sampling considerations. Vision Res. 1999;39:3575-3585.
[19] Dobson V, Teller DY. Visual acuity in human infants: a review and comparison of behavioral and electrophysiological studies. Vision Res. 1978;18:1469-1483.
[20] Erkelens CJ, Regan D. Human ocular vergence movements induced by changing size and disparity. J. Physiol. 1986;379:145-169.
[21] Flom MC. Corresponding and disparate retinal points in normal and anomalous correspondence. Am. J. Optom. Physiol. Opt. 1980;57:656-665.
[22] Foley JM, Applebaum TH, Richards WA. Stereopsis with large disparities: discrimination and depth magnitude. Vision Res. 1975;15:417-421.
[23] Forbes BJ, Guo S. Update on the surgical management of pediatric cataracts. J. Pediatr. Ophthalmol. Strabismus. 2006;43:143-151.
[24] Fox R, Aslin RN, Shea SL, Dumais ST. Stereopsis in human infants. Science. 1980;207:323-324.
[25] Fredenburg P, Harwerth RS. The relative sensitivities of sensory and motor fusion to small binocular disparities. Vision Res. 2001;41:1969-1979.
[26] Gunton KB, Nelson BA. Evidence-based medicine in congenital esotropia. J. Pediatr. Ophthalmol. Strabismus. 2003;40:70-73.

[27] Gwiazda J, Thorn F, Bauer J, Held R. Myopic children show insufficient accommodative response to blur. Invest. Ophthalmol. Vis. Sci. 1993;34:690-694.
[28] Harrad R. Psychophysics of suppression. Eye. 1996;10:270-273.
[29] Harrington DO. The visual fields. St Louis: Mosby; 1964.
[30] Harwerth RS, Smith EL 3rd, Duncan GC, Crawford ML, von Noorden GK. Multiple sensitive periods in the development of the primate visual system. Science. 1986;232:235-238.
[31] Hokoda SC, Ciuffreda KJ, Rosenfield M. Proximal vergence and age. Optom. Vis. Sci. 1991;68:168-172.
[32] Howard IP, Rogers BJ. Binocular vision and stereopsis. New York: Oxford University Press; 1995.
[33] Hutcheson KA. Childhood esotropia. Curr. Opin. Ophthalmol. 2004;15:444-448.
[34] Jampolsky A. Characteristics of suppression in strabismus. AMA Arch. Ophthalmol. 1955;54:683-696.
[35] Kerr KE. Anomalous correspondence--the cause or consequence of strabismus? Optom. Vis. Sci. 1998;75:17-22.
[36] Khazaeni L, Quinn GE, Davidson SL, Forbes BJ. Amblyopia treatment: 1998 versus 2004. J. Pediatr. Ophthalmol. Strabismus. 2009;46:19-22.
[37] Lappin JS, Craft WD. Definition and detection of binocular disparity. Vision Res. 1997;37:2953-2974.
[38] Levi DM, Harwerth RS, Smith EL. Binocular interactions in normal and anomalous binocular vision. Doc. Ophthalmol. 1980;49:303-324.
[39] London R, Crelier RS. Fixation disparity analysis: sensory and motor approaches. Optometry. 2006;77:590-608.
[40] Maeda M, Sato M, Ohmura T, Miyazaki Y, Wang AH, Awaya S. Binocular depth-from-motion in infantile and late-onset esotropia patients with poor stereopsis. Invest. Ophthalmol. Vis. Sci. 1999;40:3031-3036.
[41] Mitchell DE. A review of the concept of "Panum's fusional areas". Am. J. Optom. Arch. Am. Acad. Optom. 1966;43:387-401.
[42] Myers GA, Stark L. Topology of the near response triad. Ophthalmic Physiol. Opt. 1990;10:175-181.
[43] North RV, Henson DB, Smith TJ. Influence of proximal, accommodative and disparity stimuli upon the vergence system. Ophthalmic Physiol. Opt. 1993;13:239-243.
[44] Ogle KN. Disparity limits of stereopsis. AMA Arch. Ophthalmol. 1952;48:50-60.

[45] Ono H, Barbeito R. The cyclopean eye vs. the sighting-dominant eye as the center of visual direction. Percept. Psychophys. 1982;32:201-210.
[46] Pasino L, Maraini G. Area of binocular vision in anomalous retinal correspondence. Br. J. Ophthalmol. 1966;50:646-650.
[47] Phillips NJ, Winn B, Gilmartin B. Absence of pupil response to blur-driven accommodation. Vision Res. 1992;32:1775-1779.
[48] Regan D. Binocular correlates of the direction of motion in depth. Vision Res. 1993;33:2359-2360.
[49] Richards W. Anomalous stereoscopic depth perception. J. Opt. Soc. Am. 1971;61:410-414.
[50] Richards W. Stereopsis and stereoblindness. Exp. Brain Res. 1970;10:380-388.
[51] Saladin JJ. Stereopsis from a performance perspective. Optom. Vis. Sci. 2005;82:186-205.
[52] Scheiman MM, Hertle RW, Kraker RT, Beck RW, Birch EE, Felius J, Holmes JM, Kundart J, Morrison DG, Repka MX, Tamkins SM, et al. Patching vs atropine to treat amblyopia in children aged 7 to 12 years: a randomized trial. Arch. Ophthalmol. 2008;126:1634-1642.
[53] Schor CM, Tyler CW. Spatio-temporal properties of Panum's fusional area. Vision Res. 1981;21:683-692.
[54] Schor CM. Visual stimuli for strabismic suppression. Perception. 1977;6:583-593.
[55] Sekular R, Blake R. Perception. New York: Knopf; 1985.
[56] Sheedy JE, Fry GA. The perceived direction of the binocular image. Vision Res. 1979;19:201-211.
[57] Shipley T, Rawlings SC. The nonius horopter. I. History and theory. Vision Res. 1970;10:1225-1262.
[58] Shipley T, Rawlings SC. The nonius horopter. II. An experimental report. Vision Res. 1970;10:1263-1299.
[59] Tychsen L, Burkhalter A. Neuroanatomic abnormalities of primary visual cortex in macaque monkeys with infantile esotropia: preliminary results. J. Pediatr. Ophthalmol. Strabismus. 1995;32:323-328.
[60] Uretmen O, Civan BB, Kose S, Yuce B, Egrilmez S. Accommodative esotropia following surgical treatment of infantile esotropia: frequency and risk factors. Acta Ophthalmol. 2008;86:279-283.
[61] von Noorden GK. Amblyopia: a multidisciplinary approach. Proctor lecture. Invest. Ophthalmol. Vis. Sci. 1985;26:1704-1716.
[62] von Noorden GK. Binocular vision and ocular motility. St Louis: Mosby; 1990.

[63] Watt SJ, Akeley K, Ernst MO, Banks MS. Focus cues affect perceived depth. J. Vis. 2005;5:834-862.
[64] Wong AM, Burkhalter A, Tychsen L. Suppression of metabolic activity caused by infantile strabismus and strabismic amblyopia in striate visual cortex of macaque monkeys. J. AAPOS. 2005;9:37-47.
[65] Wong AM, Lueder GT, Burkhalter A, Tychsen L. Anomalous retinal correspondence: neuroanatomic mechanism in strabismic monkeys and clinical findings in strabismic children. J. AAPOS. 2000;4:168-174.


In: Binocular Vision
Editors: J. McCoun et al., pp. 155-160
ISBN: 978-1-60876-547-8 © 2010 Nova Science Publishers, Inc.

Chapter 7

REPEATABILITY OF PRISM DISSOCIATION AND TANGENT SCALE NEAR HETEROPHORIA MEASUREMENTS IN STRAIGHTFORWARD GAZE AND IN DOWNGAZE

David A. Goss1, Douglas K. Penisten2, Kirby K. Pitts2 and Denise A. Burns2
1 School of Optometry, Indiana University, Bloomington, IN 47405
2 College of Optometry, Northeastern State University, Tahlequah, OK 74464

Abstract

The evaluation of heterophoria is an important element of the assessment of binocular vision disorders. This study examined the interexaminer repeatability of two heterophoria measurement methods in a gaze position with no vertical deviation from straightforward position and in 20 degrees downgaze. Serving as subjects were 47 young adults, 22 to 35 years of age. The two procedures were the von Graefe prism dissociation method (VG) and the tangent scale method commonly known as the modified Thorington test (MT). Testing distance was 40 cm. A coefficient of repeatability was calculated by multiplying the standard deviation of the difference between the results from two examiners by 1.96. Coefficients of repeatability in prism diopter units were: VG, straightforward, 6.6; VG, downgaze, 6.8; MT, straightforward, 2.8; MT, downgaze, 3.8. The results show a better repeatability for the tangent scale procedure than for the von Graefe prism dissociation method.

Introduction

Vergence disorders can be a source of eyestrain and uncomfortable vision. Measurement of heterophoria is an important component of the clinical examination for vergence disorders. Two common clinical methods of heterophoria measurement are the von Graefe prism dissociation method (VG) and the tangent scale method that is commonly known as the modified Thorington test (MT).

The VG test uses prism dissociation to prevent binocular fusion. Alignment of the diplopic images by a rotary prism or a prism bar provides measurement of the heterophoria. The MT test employs a Maddox rod to prevent binocular fusion. A penlight is pointed toward the patient through a hole in the center of the test card. A line is seen by the eye covered by the Maddox rod and a tangent scale is seen by the other eye. The position of the line on the tangent scale is reported by the patient.

One of the factors that can be considered in the evaluation of a clinical test is the repeatability of the results obtained with it. Previous studies have reported better repeatability for MT testing than for VG testing. Morris [1] tested adult subjects, ages 22 to 31 years, on separate days. VG testing was done using phoropter rotary prisms, while the MT testing was performed without a phoropter. Coefficients of repeatability for the VG test were 3.3 prism diopters (Δ) for 20 trained observers and 2.9Δ for 20 untrained observers. On the MT, coefficients of repeatability were 2.0Δ for the trained observers and 1.6Δ for the untrained observers.

Rainey et al. [2] reported on repeatability of test results between two examiners for 72 subjects between the ages of 22 and 40 years. Near point VG and MT tests were performed on separate days by one examiner with subjects viewing through a phoropter at a test target. Test distance for both the VG test and the MT test was 40 cm. Coefficients of repeatability for the VG test were 6.7Δ using a flash presentation of the target, in which subjects viewed the target intermittently between adjustments of the prism, and 8.2Δ using continuous viewing of the target.
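Both tests report findings in prism diopters. As a reminder of the unit (one prism diopter deviates a ray by 1 cm at 1 m), a tangent-scale card calibrated for 40 cm needs its 1Δ marks only 0.4 cm apart; the sketch below illustrates this (standard optics, not taken from the paper):

```python
def offset_to_prism_diopters(offset_cm, distance_cm):
    # One prism diopter (PD) = 1 cm of deviation at 1 m (100 cm),
    # so for small angles: PD = 100 * offset / distance.
    return 100.0 * offset_cm / distance_cm

# On a card held at 40 cm, successive 1-PD marks are 0.4 cm apart:
one_pd_spacing = offset_to_prism_diopters(0.4, 40.0)
# A line seen 1.2 cm from the penlight at 40 cm corresponds to a 3-PD phoria:
example_phoria = offset_to_prism_diopters(1.2, 40.0)
```

This is why the tangent-scale card must be used at the distance for which it is calibrated: the same linear spacing represents different prismatic values at other distances.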
Slightly lower coefficients of repeatability were obtained when the subjects had kinesthetic input from holding some part of the test target or the target support. Measurement of heterophoria requires a method for the elimination of binocular fusion and a method for determining the angle between the lines of sight of the eyes and the position the lines of sight would assume if they intersected at the object of regard. A metric often used in the evaluation of the repeatability of clinical tests is found by multiplying the standard deviation of the differences between pairs of measurements on a series of subjects by 1.96 to get 95% limits of agreement between the repeat measurements. This value is sometimes referred to as a coefficient of repeatability.
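The coefficient of repeatability described above is straightforward to compute; a minimal sketch with hypothetical paired measurements (not the study's data):

```python
import statistics

def coefficient_of_repeatability(first, second):
    # 1.96 x the standard deviation of the paired differences,
    # i.e. 95% limits of agreement between the two sets of measurements.
    diffs = [a - b for a, b in zip(first, second)]
    return 1.96 * statistics.stdev(diffs)

# Hypothetical near phorias (prism diopters) from two examiners:
examiner1 = [-2.0, -1.0, 0.5, -3.0, -1.5]
examiner2 = [-1.5, -2.0, 0.0, -2.0, -2.5]
cor = coefficient_of_repeatability(examiner1, examiner2)
```

A smaller coefficient means that a repeat measurement is expected to fall within a narrower band around the first, which is the sense in which the MT is said to be more repeatable than the VG.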

Wong et al. [3] presented repeatability data based on the agreement of results from two examiners. Seventy-two students, ranging in age from 18 to 35 years, served as subjects. For the VG test, a loose prism was used for prism dissociation and a prism bar was used for the alignment measurement. Viewing distance was 40 cm. The coefficient of repeatability for the VG test, using continuous presentation, was 3.8Δ. The coefficient of repeatability for the MT was 2.8Δ.

Escalante and Rosenfield [4] examined the repeatability of gradient (lens change) accommodative convergence to accommodation (AC/A) ratios. Repeat measurements, at least 24 hours apart, were performed by one examiner on 60 subjects ranging in age from 20 to 25 years. Testing was done with subjects viewing through a phoropter, which was used for the gradient AC/A ratio lens changes. Testing was done at 40 cm. Coefficients of repeatability of the AC/A ratios, using various lens combinations, ranged from 2.2 to 3.5 prism diopters per diopter (Δ/D) for the VG test and from 1.0 to 2.0 Δ/D for the MT test.

A consistent finding of the previous studies was better repeatability on the MT than on the VG. Only one of the studies involved doing both tests outside the phoropter. While not explicitly stated in each of the papers, it may be presumed that test targets were placed in a position without any vertical deviation from straightforward position. Thus the purpose of the present study was to test for confirmation of better repeatability on the MT test than on the VG test and to examine their repeatabilities for a position of downgaze. The present study reports results for straightforward position and for 20 degrees of downgaze, with both tests done without a phoropter.

Methods

Forty-seven subjects ranging in age from 22 to 35 years served as subjects. Inclusion criteria were no ocular disease, no strabismus, no amblyopia, no history of ocular surgery, and best corrected distance visual acuity of at least 20/25 in each eye. Subjects wore habitual contact lens or spectacle prescriptions during testing. Testing protocols were approved by the human subjects committee at Northeastern State University, Tahlequah, Oklahoma. All testing was done without a phoropter. Test distance for both tests was 40 cm. VG and MT tests were performed with no vertical deviation from straightforward gaze and with 20 degrees of downgaze. For 20 degrees of downgaze, subjects were instructed to turn their eyes down rather than their heads.
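The gradient AC/A ratio examined by Escalante and Rosenfield is the change in phoria per diopter of lens-induced change in accommodative stimulus; a sketch with hypothetical values (sign conventions as used in this chapter: exo negative, eso positive):

```python
def gradient_aca(phoria_no_lens_pd, phoria_with_lens_pd, lens_power_d):
    # Gradient AC/A (prism diopters per diopter): the phoria change divided
    # by the lens power that produced it.
    return (phoria_no_lens_pd - phoria_with_lens_pd) / lens_power_d

# Hypothetical: -1 PD exophoria at 40 cm, shifting to -4 PD through +1.00 D lenses.
ratio = gradient_aca(-1.0, -4.0, 1.0)   # 3 PD of convergence per diopter
```

Because the ratio is a difference of two phoria measurements, its repeatability inherits the measurement noise of whichever phoria test is used, which is why the repeatability of the underlying test matters for AC/A estimation.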

For downgaze measurements, test cards were moved down from their position for straightforward gaze by about 13.7 cm, and they were tilted back about 20 degrees so that the subjects' lines of sight would be approximately perpendicular to the cards. For the VG test in downgaze, the prisms were tilted forward by approximately 20 degrees.

A Bernell Muscle Imbalance Measure (MIM) near test card was used for the MT test. On the MIM card, there is a horizontal row of numbered dots separated by 1Δ when the card is at 40 cm, with a hole in the center of the row; it is calibrated for a 40 cm test distance. A red Maddox rod was placed over the subject's right eye, oriented so that the subject saw a vertical line. Light from a penlight was directed through the hole in the center of the card toward the subject. For lateral phoria testing, subjects reported the number through which the vertical red line passed and whether the line was to the left or right of the white light. The number indicated the phoria magnitude. The position of the red line relative to the white light indicated the direction of the phoria: to the left for exo (recorded as a negative number) and to the right for eso (recorded as a positive number).

For the VG test, dissociation of the eyes was induced with an 8Δ base-down loose prism held over the subject's right eye. A rotary prism was placed over the subject's left eye in a clamp mounted on a table. Subjects viewed a vertical column of letters on a test card at 40 cm from them. They were instructed to keep the letters clear to control accommodation and to report when the columns of letters were aligned so that the upper column of letters was directly above the lower one. Subjects were then instructed to close their eyes; as soon as they opened their eyes, the rotary prism was adjusted, the reading on the rotary prism was noted when the subjects reported alignment, and the result was recorded. Two values, one with the prism starting from the base-in side and one starting from the base-out side, were averaged. Exo findings (base-in for alignment) were recorded as negative values, and eso findings (base-out for alignment) were treated as positive numbers.

All tests were done by two examiners. Four recording forms, each with a different test sequence, were used to counter-balance test order; the recording form used was changed with each consecutive subject number.

Results

The mean findings on both tests were a small amount of exophoria (Table 1).
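The roughly 13.7 cm figure for the downgaze card position is consistent with swinging a 40 cm viewing distance down through 20 degrees; a quick check (the chord-versus-tangent reading of the geometry is my inference, not stated in the paper):

```python
import math

distance_cm = 40.0
angle = math.radians(20.0)

# Card kept 40 cm from the eyes along the depressed line of sight
# (swung down along an arc): vertical drop = 40 * sin(20 deg) ~ 13.7 cm.
arc_drop = distance_cm * math.sin(angle)
# For comparison, a card kept in its original vertical plane would have
# to move down 40 * tan(20 deg), about 14.6 cm.
plane_drop = distance_cm * math.tan(angle)
```

Tilting the card back by the same 20 degrees then keeps it approximately perpendicular to the depressed line of sight, as the Methods describe.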
A coefficient of repeatability was determined by finding the mean and standard deviation of the differences between the two examiners and then multiplying the standard deviation of the differences by 1.96.

As in the previous studies, standard deviations for the VG were higher than for the MT. The coefficients of repeatability for the VG test were 6.6Δ for no elevation or depression of gaze and 6.8Δ for downgaze. The respective coefficients of repeatability for the MT test were 2.8Δ and 3.8Δ (Table 1).

Table 1. Means (and standard deviations in parentheses) for both examiners and the coefficients of repeatability for each of the tests. Units are prism diopters. VG, von Graefe test; MT, modified Thorington test; S, straightforward position; D, downgaze.

Discussion

The results of the present study agree with previous studies [1-5] in finding better repeatability with the MT test than the VG test. This better repeatability is present under a variety of test conditions, including with or without the phoropter and in downgaze as well as with no vertical gaze deviation from straightforward position. Repeatability of other less commonly used phoria tests has also been studied.[2,5-7] One investigation reported the repeatability of the VG test to be better than that for the Maddox rod test.[1] Another study found repeatability for the near Howell card phoria to be better than that for the VG test and nearly as good as that for the MT test.[3] The means found in the present study are similar to those of previous studies comparing VG and MT tests, in which means for the VG ranged from -2.2 to -5.0Δ and means for the MT ranged from -2.1 to -3.4Δ.

The VG test is the most commonly used subjective dissociated phoria test in optometric practice. Some clinicians have observed that the MT test is generally simpler than the VG test in terms of instrumentation and patient instructions.[2,6] The fact that the MT also offers better repeatability than the VG suggests that its more widespread adoption as a routine phoria test may be advisable. It has been reported that for midrange phorias there is quite good agreement of VG and MT findings, but for higher magnitude phorias, either exo or eso, VG tends to yield higher values.[7]

Conclusion

Based on the results of the present study as well as those of previous studies, one can conclude that the MT tangent scale test has better repeatability than the VG prism dissociation test under a variety of test conditions.

References

[1] Morris, FM. The influence of kinesthesis upon near heterophoria measurements. Am. J. Optom. Arch. Am. Acad. Optom. 1960;37:327-351.
[2] Rainey, BB; Schroeder, TL; Goss, DA; Grosvenor, TP. Inter-examiner repeatability of heterophoria tests. Optom. Vis. Sci. 1998;75:719-726.
[3] Wong, EPF; Fricke, TR; Dinardo, C. Interexaminer repeatability of a new, modified Prentice card compared with established phoria tests. Optom. Vis. Sci. 2002;79:370-375.
[4] Escalante, JB; Rosenfield, M. Effect of heterophoria measurement technique on the clinical accommodative convergence to accommodation ratio. Optometry. 2006;77:229-234.
[5] Hirsch, MJ; Bing, LB. The effect of testing method on values obtained for phoria at forty centimeters. Am. J. Optom. Arch. Am. Acad. Optom. 1948;25:407-416.
[6] Hirsch, MJ. Clinical investigation of a method of testing phoria at forty centimeters. Am. J. Optom. Arch. Am. Acad. Optom. 1948;25:145-149.
[7] Goss, DA; Moyer, BJ; Teske, MC. A comparison of dissociated phoria test findings with von Graefe phorometry and modified Thorington testing. Optom. – J. Am. Optom. Assoc. 2008;79:492-495.

Reviewed by Douglas G. Horner, Ph.D., School of Optometry, Indiana University, Bloomington, IN 47405.

In: Binocular Vision: Development, Depth Perception and Disorders
Editors: J. McCoun et al., pp. 161-188
ISBN: 978-1-60876-547-8 © 2010 Nova Science Publishers, Inc.

Chapter 8

TEMPORARILY BLIND IN ONE EYE: EMOTIONAL PICTURES PREDOMINATE IN BINOCULAR RIVALRY

Georg W. Alpers1 and Antje B. Gerdes
Department of Psychology (Biological Psychology, Clinical Psychology, and Psychotherapy), University of Würzburg, Germany
University of Würzburg and University of Bielefeld, Germany

Abstract

Preferential perception of emotional cues may help an individual to respond quickly and effectively to relevant events. Existing data support this hypothesis by demonstrating that emotional cues are detected more quickly among neutral distractors. However, little data is available to demonstrate that emotional stimuli are also preferentially processed during prolonged viewing. The preferential perception of visual emotional cues is apparent under conditions where different cues compete for perceptual dominance. When two incompatible pictures are presented to one eye each, this results in a perceptual alternation between the pictures, such that only one picture is visible while the other is suppressed. This so-called binocular rivalry involves different stages of early visual processing and is thought to be relatively independent of intentional control. Several studies from our laboratory showed that emotional stimuli predominate over neutral stimuli in binocular rivalry. These findings can be interpreted as evidence for preferential processing of emotional cues within the visual system, which extends beyond initial attentional capture.

1 E-mail address: alpers@psychologie.uni-wuerzburg.de. Address for correspondence: PD Dr. Georg W. Alpers, Department of Psychology, Clinical Psychology, and Psychotherapy, University of Würzburg, Marcusstraße 9-11, D-97070 Würzburg, Germany. Tel.: 0049-931-31-2840; Fax: 0049-931-31-2733.

Keywords: binocular rivalry, visual perception, emotional pictures, preferential processing

1. Preferential Processing of Emotional Stimuli

We are constantly exposed to a multitude of visual stimuli. Evaluating this information and responding with adequate behavior where necessary helps us to survive. Identifying threatening stimuli seems to be especially important, as we need to repel danger or protect ourselves from it as quickly as possible. Evidence for a privileged role of aversive stimuli in perception and attention can be found in a number of convincing research paradigms. For example, visual search tasks show that angry faces can be detected very quickly and pop out among a variety of neutral faces (Hansen and Hansen, 1988; Öhman, Lundqvist, and Esteves, 2001). Similarly, fear-relevant stimuli like spiders and snakes surrounded by neutral objects can be found more quickly (Öhman, Flykt, and Esteves, 2001). In the so-called dot-probe paradigm, participants respond faster to test probes (letters, for example) that appear on the spot where a fear-relevant, as opposed to a neutral, stimulus was presented beforehand (Bradley, Mogg, Millar, Bonham-Carter, Fergusson, Jenkins et al., 1997; Mogg and Bradley, 2002). Many scientists assume that this allocation of attention is based on automatic processes which operate independently from conscious processing. Convincing evidence for this assumption comes from experiments which reveal psychophysiological reactions to emotional stimuli even when they were not consciously perceived, for example when they were presented subliminally and masked by another picture (Dimberg, Thunberg, and Elmehed, 2000; Öhman and Soares, 1994). Taken together, data from these paradigms demonstrate that emotional pictures are perceived more intensively.

1.1. Two Pathways for the Processing of Emotional Stimuli

Fast and automatic perception of emotionally relevant visual stimuli calls for rapid neuronal processing. The amygdala is central for the processing of emotional cues, and especially for fear-relevant information (LeDoux, 1996; Morris, Friston, Buchel, Frith, Young, Calder et al., 1998). Based on a number of animal studies, LeDoux (1996) demonstrated that sensory information about external stimuli can reach the amygdala via two relatively independent pathways.

On the one hand, information from the retina projects, via the sensory thalamus, to the sensory cortex for a higher-level analysis. Cortical areas process this input before it reaches the amygdala, from where emotional reactions can be elicited and modulated. This pathway is rather slow but encompasses detailed information, and LeDoux therefore calls it the "high road" of emotional processing. Along this pathway, a more time-consuming and more elaborate processing of emotional stimuli can be guaranteed (Pessoa, Kastner, and Ungerleider, 2002). In addition to this cortical pathway, a subcortical pathway for immediate processing of emotional information exists, where sensory information reaches the amygdala via direct thalamic projections. This so-called "low road" represents a shortcut which bypasses cortical areas and is thought to be independent from the cortical analysis mentioned above. The direct projection from the thalamus to the amygdala only provides for a crude analysis, but it is much faster than the route from the thalamus to the amygdala via the sensory cortex. Thus, information about emotional relevance can reach the amygdala much more rapidly (see Figure 1), which allows for a direct and fast response to potentially dangerous stimuli before the cortical analysis is complete. Even more complex reciprocal influences of emotion and attention can further modulate the processing of visual perception within a network of frontal and parietal brain regions (Windmann, Wehrmann, Calabrese, and Güntürkün, 2006).

Figure 1. Two pathways of processing of visual emotional stimuli (after LeDoux, 1996).

Thus, emotional reactions that are initiated by the amygdala enable effective fight or flight responses (e.g., defensive reflexes or an increased respiration rate and heart rate; see Alpers, Mühlberger, and Pauli, 2005). Support for the independence of the "low road" from cortical processing comes from animal studies and from studies with cortically blind patients. For example, cortically blind patients show physiological responses to emotional stimuli even if they are not able to consciously perceive them (Anders, Birbaumer, Sadowski, Erb, Mader, Grodd et al., 2004; Hamm, Weike, Schupp, Treig, Dressel, and Kessler, 2003). Such cases demonstrate that processing of visual emotional information is indeed possible without involvement of intact cortical circuitry. The "low road" has an additional function: it can modulate cortical processing on the "high road". Once again, it was shown in animal studies that there are direct neuronal projections from the amygdala to cortical visual areas (V1 or V2, for example) (Amaral, Behniea, and Kelly, 2003; Amaral and Price, 1984) (see Figure 2). Evidence that such neuronal circuits also exist in humans has since been provided (Catani, Jones, Donato, and Ffytche, 2003).

Figure 2. Amygdala projections to the visual cortex of the macaque brain (Amaral, Behniea, and Kelly, 2003). L – lateral nucleus; Bi – intermediate division of the basal nucleus; Bmc – magnocellular division of the basal nucleus; TE and TEO – areas of the inferior temporal cortex.

These projections make it possible that "quick and dirty" subcortical processing can influence more elaborate processing in cortical areas, which would allow for enhanced conscious perception and processing (Pessoa, Kastner, and Ungerleider, 2002). It is very plausible that this may be partly initiated by input from the "low road" in addition to top-down input from higher cortical areas, such as directed attention.

1.2. Intensive Processing of Negative Valence or of Arousal?

In addition to multiple findings documenting a preferential processing of negative stimuli in the amygdala, there is growing evidence for an equally intensive processing of arousing positive stimuli. As functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies show, higher amygdala activation in response to positive and negative as opposed to neutral pictures has been observed (Garavan, Pendergrass, Ross, Stein, and Risinger, 2001; Hamann, Ely, Hoffman, and Kilts, 2002). Also, the processing of positive as well as negative words is associated with higher amygdala activation (Hamann and Mao, 2002). These findings suggest that the amygdala may be involved in the processing of positive as well as negative emotional stimuli (Davis and Whalen, 2001). Furthermore, processing stimuli which are associated with reward involves similar brain circuits as the processing of cues for danger (Berridge and Winkielman, 2003). According to the Emotionality Hypothesis, all emotional stimuli are selectively processed, independent of their specific valence; the intensity of emotional arousal seems to be more crucial than the specific affective valence of a stimulus. Electroencephalography (EEG) findings also support the notion that strongly activating affective pictures are processed faster and more intensely in the visual cortex (Cuthbert, Schupp, Bradley, Birbaumer, and Lang, 2000; Stolarova, Keil, and Moratti, 2006). Electrophysiological (Schupp, Junghöfer, Weike, and Hamm, 2003) as well as hemodynamic evidence (Herrmann, Huter, Plichta, Ehlis, Alpers, Mühlberger et al., 2008; Lang, Bradley, Fitzsimmons, Cuthbert, Scott, Moulder et al., 1998) shows that emotionally relevant stimuli are accompanied by increased activation of visual areas in the occipital lobe. In conclusion, it could be expected that positive as well as negative pictures boost visual perception.

2. "Blind" in One Eye: Binocular Rivalry

Information from the environment reaches the two eyes independently. Under normal circumstances this information is combined into a meaningful spatial representation. In general, conscious perception does not inevitably represent the physical environment; instead, perception is the end product of several steps of selective processing. While many stimulus properties are still processed by the eyes' sensory cells, only a small fraction of them reaches conscious awareness. This is especially obvious when ambivalent information is presented to the two eyes and the information cannot be combined into a meaningful impression at later stages of processing in the brain. When competing pictures are presented to the two eyes and a distinct impression cannot be evoked, a fascinating perceptual phenomenon called binocular rivalry results. For a given period of time, one of the pictures is perceived dominantly while the other is suppressed: input from one eye is completely removed from conscious awareness. An unpredictable alternation between the two impressions ensues, and during extended periods of time perceptual changes occur while the visual input remains constant. At times rivalry can also result in percepts which consist of mixtures of both pictures, combined of parts from both. Importantly, conscious control over what is perceived during binocular rivalry is very limited (Meng and Tong, 2004). Thus, binocular rivalry enables the researcher to investigate features of conscious perception, and the processes underlying perception, both from the psychological and the neurological point of view.

2.1. What Helmholtz Knew Already

Recently, binocular rivalry has received growing attention in research focused on consciousness (Tong, 2001). Binocular rivalry is a remarkable phenomenon and offers the possibility to further investigate visual perception, but it is not a newly discovered phenomenon; it has been well known for a very long time (see Humphrey and Blake, 2001). In 1593 already, Porta reported a perceptual phenomenon which occurred when he held the two pages of a book right in front of his two eyes (cited by Wade and Ono, 1985). He reported being able to temporarily read one of the pages, while the other was invisible. Early systematic investigations of binocular rivalry trace back to Wheatstone (1838).

For his experimental investigations he developed the mirror stereoscope – an optic apparatus which makes it possible to present different pictures to the two eyes (see a modern version in Figure 3) – and he called the phenomenon "retinal rivalry".

Figure 3. The mirror stereoscope used in our laboratory (sample pictures on the computer screen).

Ever since binocular rivalry has been studied scientifically, the underlying neuronal processes have been discussed controversially, and in spite of the multitude of interesting findings the neuronal mechanisms have not yet been unambiguously clarified. The main disagreement that prevails is whether the competition for conscious perception takes place at very early or at later stages of visual processing. One of the earliest theories was introduced by Helmholtz (1924), and many other studies are based on it. He assumed that the eyes' visual fields are completely independent of each other and that under normal circumstances a consistent perceptual impression does not occur until higher mental processes have taken place. According to this theory, the decision about dominance or suppression would only occur after both pictures have been processed independently, and higher mental processes such as attention would select among the two.

Hering (1886), on the other hand, favoured another model. He assumed that early inhibitory interactions in visual processing account for the occurrence of rivalry.

2.2. Competition between Input from the Eyes or between the Percepts?

This 19th-century controversy continues to the present day. According to the "low-level" theory (advocated for example by Blake, 1989), rivalry is decided during primary processing, before information from the monocular channels is integrated in binocular channels. Rivalry thus occurs by means of inhibitory connections between the monocular processing channels, which is why this account is also called the "eye-rivalry" theory. Accordingly, the decision about dominance or suppression takes place before the two pictures are completely processed; suppression of one channel would thus mean that input from one eye is not thoroughly processed before being suppressed. Evidence for this theory comes from imaging studies showing that rivalry alters activity early in the visual stream, i.e., in V1 (Polonsky, Blake, Braun, and Heeger, 2000; Tong and Engel, 2001) and in the lateral geniculate nucleus (Haynes, Deichmann, and Rees, 2005). Thus, binocular rivalry exhibits typical properties of early visual processes.

In contrast, the "high-level" theory postulates that binocular rivalry arises from a competition between stimulus representations rather than between information from the two eyes; this perspective is therefore also called the "stimulus-rivalry" perspective. According to this theory, rivalry takes place between integrated stimulus representations, and the crucial processes are thought to be independent from whether the input originated from one eye or the other (Logothetis, Leopold, and Sheinberg, 1996). Support for this perspective comes from single-cell recordings in animals, which show little evidence for rivalry-correlated activation in V1 and inconsistent effects for visual areas V4 and MT, but clear effects in the inferior temporal cortex (Sheinberg and Logothetis, 1997). Processing in these higher circuits is generally thought to be mostly independent from input from the two eyes. These neurophysiological findings favour the theory that rivalry is the result of a competition between incompatible stimulus representations in higher areas of visual processing, after information from the two eyes has been integrated in V1.

Some findings from human studies also indicate that a certain amount of processing has taken place before the competition is decided. Rivalry considerably suppresses activation in the primary processing channels, but enough information from the suppressed picture can proceed to brain circuits which process integrated information from both monocular channels. For example, suppressed stimuli can produce after-images (e.g., O'Shea and Crassini, 1981). Interestingly, the strongest argument for the "high-level" theory is the finding that fragments of a picture presented to one eye each can be reassembled into a consistent percept in conscious perception (Kovacs, Papathomas, Yang, and Feher, 1996). Rivalry then occurs between combined percepts and not between input from one eye; such rivalry between percepts even occurs when parts of the pictures are intermittently projected to one or to the other eye (Logothetis, Leopold, and Sheinberg, 1996). Because convincing support has been found for both theories, many authors conclude that binocular rivalry may involve several stages of visual processing (Nguyen, Freeman, and Alais, 2003; Wilson, 2003). From this point of view, the theories are not mutually exclusive, but rather characterize two extremes on a continuum.

2.3. Possible Influences from Non-visual Neuronal Circuits

Taking into account the projections from the amygdala to primary sensory areas of the visual cortex (V1, V2) which we mentioned above (Amaral, Behniea, and Kelly, 2003), it becomes apparent that this may be an avenue for a picture's emotional significance to influence processing within visual circuitry. If the relevance of a stimulus has been detected within subcortical circuits (the "low road"), this may lead to more intense visual processing at several later stages of visual perception. Indeed, it has been shown that the amygdala is more strongly activated by fearful faces even when they are suppressed in binocular rivalry (Williams, Morris, McGlone, Abbott, and Mattingley, 2004). In another study, emotional circuitry was activated although the emotional material was not available to conscious perception, as confirmed by an absence of activation in specialized face-sensitive regions of the ventral temporal cortex (Pasley, Mayes, and Schiltz, 2004). Thus, we proposed that such preferential processing of emotional pictures may result in their predominance over neutral pictures in binocular rivalry. Because conscious control over binocular rivalry is probably not possible (Meng and Tong, 2004), we argued that demonstrating such predominance would provide particularly convincing evidence for preferential processing during prolonged perception.

3. Previous Investigations of Emotional Pictures in Binocular Rivalry

3.1. Significance and Predominance

Until recently, very few studies had examined the influence of a picture's significance on predominance and suppression in binocular rivalry. An early study by Engel (1956) demonstrated that upright pictures of faces predominate over inverted faces, indicating that the greater salience of the familiar upright faces boosts their competitive strength in binocular rivalry. Bagby (1957) investigated the possible influence of personal relevance by showing pairs of pictures containing culturally relevant material to participants with different cultural backgrounds. In North American participants, the typically American scene (a baseball player) predominated over less relevant material (a matador), while the latter predominated in participants of Mexican descent. Additional studies further support the idea that perception in binocular rivalry is modulated by the personal significance of the stimuli. Kohn (1960) as well as Shelley and Toch (1962) showed that certain personality traits (e.g., aggressiveness) can influence the predominance of related stimuli (violent scenes). It was also reported that depressed participants predominantly perceive pictures with sad content compared to healthy control participants (Gilson, Brown, and Daves, 1982). Personal relevance and familiarity thus seem to have an influence on perception in binocular rivalry. However, it has to be acknowledged that response biases (for example, whether a picture is easier to specify) were not taken into account in these studies.

Interest in this research paradigm waned for many years after a comprehensive review by Walker (1978) highlighted the methodological problems of the early studies. Walker's main critique was that studies investigating the influence of picture content did not apply stringent definitions of predominance; he concluded that response biases might have been the main cause of the observed results. Enthusiasm for the paradigm was further hampered by other disappointing results. Blake (1988) did not find predominance of meaningful texts over meaningless strings of letters. However, it is important to note that the text elements were not shown all at once but as a stream of letters; rivalry would have had to occur between non-salient letters of salient words, and holistic processing was thus not possible.

Today, interest in this paradigm has been renewed, and a number of well-controlled experiments have been conducted since. Results convincingly demonstrate that pictures with a good Gestalt predominate over meaningless pictures (de Weert, Snoeren, and Koning, 2005), and that a meaningful context promotes the predominance of pictures (Andrews and Lotto, 2004). Taken together, these investigations support, more clearly than the former studies, the hypothesis that the semantic content of pictures can affect binocular rivalry.

3.2. Emotional Discrepancy and Binocular Rivalry

Although an influence of the semantic content of pictures on binocular rivalry thus seems very likely, there is little evidence that emotional salience can also promote dominance in rivalry, and there are few studies in which different emotional pictures were presented stereoscopically. One study investigated the perception of different facial expressions in binocular rivalry, but its aim was to document the two-dimensional structure of emotions (on scales of valence and arousal), and predominance of emotional pictures was not the center of its design (Ogawa, Monchi, Fukui, Takehara, and Suzuki, 1999). Pairs of pictures with different emotional facial expressions were presented, and the perceived impression was rated for valence and arousal. A second and very similar study by the same group measured specific perceptual impressions during the presentation in addition to the dimensional structure of the evaluations (Ogawa and Suzuki, 2000). The authors conclude that there is more rivalry between pictures which are evaluated similarly on valence and arousal, while pictures with stronger valence and higher arousal are more unambiguously perceived as dominant. Overall, valence seemed to have the strongest impact, while arousal mainly influenced dominance when the rivalling pictures had the same valence.

Another study investigated the effect of different emotional facial expressions on their relative dominance (Coren and Russell, 1992). Here, pairs of pictures showing different facial expressions were presented to one eye each, and after a presentation time of 350 msec participants were asked what their predominant percept had been. Facial expressions with strong positive or strong negative valence and high subjective arousal were more frequently perceived as predominant than faces with less extreme ratings. It is problematic, however, that only emotional faces were presented together and that the presentation time was very short, because binocular rivalry takes time to build up. Moreover, asking the participants about their percept after the presentation introduced sources of error such as memory effects and response biases.

There is a general lack of recent and methodologically convincing studies. Taken together, there is some evidence that emotional meaning influences binocular rivalry, but our review of the literature demonstrated that this question has not been thoroughly investigated; until now, only few studies have examined this influence systematically.

4. Binocular Rivalry Experiments at Our Lab

4.1. Predominance of Emotional Scenes

We argue that binocular rivalry provides for very interesting experiments in emotion research: if pictures with emotionally activating content predominate over neutral pictures in binocular rivalry, this would suggest that emotional input is preferentially processed at an early stage of visual processing. The first study on binocular rivalry from our lab investigated whether complex emotional pictures predominate over neutral pictures (Alpers and Pauli, 2006). Sixty-four healthy participants were stereoscopically shown ten pairs of pictures from the International Affective Picture System (IAPS; Lang, Bradley, and Cuthbert, 2005). These pictures depict different emotional scenes and are frequently used as stimuli in emotion research; using this widespread picture material allows for a direct comparison of the results with data from other established paradigms. Because positive as well as negative pictures elicit similar patterns of activation in areas responsible for emotional processing (see above), we chose pictures of negative, neutral and positive valence for this study. Pairs were composed of one emotional (positive or negative) and one neutral picture each, so that only one picture was projected to each eye. These pairs were presented for 30 sec each in a randomized order. Participants looked through a mirror stereoscope that was mounted about 30 cm in front of the computer monitor. In contrast to earlier studies, participants were not explicitly instructed to categorize emotional and neutral percepts but were simply asked to report the content they saw; they had to continuously verbalize what their perceptual impression was throughout each trial, and a trained research assistant coded the participants' comments as emotional or neutral with button presses. A ratio of the cumulative time for emotional versus neutral percepts was used as the dominance index. We also assessed the initial perceptual impression in each trial as another dependent variable, because it is thought to be less strongly affected by habituation and less error-prone with regard to verbalization.
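The dependent measures described here (cumulative dominance durations, the initial unambiguous percept, and a dominance ratio) can be derived from the coder's time-stamped button events. The sketch below is illustrative only; the event format and the exact definition of the index are assumptions for the example, not the authors' actual analysis code:

```python
def score_trial(events, trial_end):
    """Score one rivalry trial from time-stamped coder button events.
    events: list of (onset_seconds, label), sorted by onset, with
    label in {'emotional', 'neutral', 'mixed'} (hypothetical format)."""
    durations = {'emotional': 0.0, 'neutral': 0.0, 'mixed': 0.0}
    # Each reported percept lasts until the next button press (or trial end).
    for (onset, label), (next_onset, _) in zip(events,
                                               events[1:] + [(trial_end, None)]):
        durations[label] += next_onset - onset
    # Initial percept: the first unambiguous (non-mixed) report in the trial.
    first = next((lab for _, lab in events if lab != 'mixed'), None)
    # Dominance index: cumulative emotional over cumulative neutral time.
    ratio = (durations['emotional'] / durations['neutral']
             if durations['neutral'] else float('inf'))
    return durations, first, ratio

# One hypothetical 30-second trial.
events = [(0.0, 'mixed'), (1.2, 'emotional'), (9.0, 'neutral'),
          (13.5, 'emotional'), (24.0, 'mixed')]
durations, first, ratio = score_trial(events, trial_end=30.0)
```

A ratio above 1 indicates that the emotional percept dominated for more of the trial than the neutral one.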

First of all, our results confirmed that emotional pictures are perceived as dominant for considerably longer than neutral pictures. Also, the initial perceptual impression was significantly more often that of the emotional as opposed to the neutral picture. Differences in predominance were not found between positive and negative pictures, either in the duration of the percepts or in the frequency of the initial percept. This experiment clearly shows that pictures of emotional scenes presented to one eye predominate over neutral pictures presented to the other eye, and it confirms our hypothesis that emotional content can boost visual perception. In conclusion, these results support the hypothesis that binocular rivalry is influenced by the emotional content of two competing pictures.

4.1.1. Possible Confounds

As clear-cut as the results of this experiment may be, a number of serious limitations have to be considered. Among the undesirable confounding factors which can potentially influence perceptual dominance, physical differences in the pictures' complexity and color are certainly the most important. Binocular rivalry is strongly influenced by certain physical characteristics: pictures with high contrast are perceived as dominant longer than low-contrast pictures (Blake, 1989), brighter pictures dominate more frequently than darker pictures (Kaplan and Metlay, 1964), and moving pictures dominate over stable pictures (Blake and Logothetis, 2002). To a large extent, such differences are closely associated with the emotional content of the pictures (Lang et al., 1998). Furthermore, relatively large pictures were used in this study, and larger pictures tend to fuse with each other more strongly than smaller pictures (i.e., the rate at which rivalry occurs is diminished) (O'Shea, Sims, and Govan, 1997); this often leads to the perception of mixed pictures, so-called piecemeal rivalry.

Moreover, verbal coding is certainly prone to response biases. The tendency to mention emotional picture contents more often could have had an influence on the results. Verbalizing unpleasant issues (repellent and disgusting details) could also pose a problem, as could a tendency to avoid verbalizing specific picture content, because some contents, for example erotic pictures, may have been embarrassing for the participants. Although it was also possible to report mixed pictures, it remains unclear whether participants applied different decision criteria for reporting mixed or unambiguous percepts. The studies summarized below were aimed at controlling for physical characteristics and for possible problems with self-report.
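A standard way to reduce such low-level confounds is to equate the competing pictures' mean luminance and RMS contrast before the experiment. The sketch below is only illustrative (it is not the procedure used in these studies) and treats a grayscale image as a flat list of pixel values in the 0-1 range:

```python
import statistics

def match_luminance_contrast(pixels, target_mean, target_rms):
    """Linearly rescale grayscale pixel values (floats in 0-1) so the
    image has a given mean luminance and RMS contrast (pixel std)."""
    mean = statistics.fmean(pixels)
    rms = statistics.pstdev(pixels)
    scale = target_rms / rms if rms > 0 else 1.0
    # Center, rescale contrast, shift to the target mean, clip to valid range.
    return [min(1.0, max(0.0, (p - mean) * scale + target_mean))
            for p in pixels]

# Two hypothetical images; match both to the pair's average statistics.
img_a = [0.2, 0.4, 0.6, 0.8]
img_b = [0.1, 0.3, 0.5]
target_mean = (statistics.fmean(img_a) + statistics.fmean(img_b)) / 2
target_rms = (statistics.pstdev(img_a) + statistics.pstdev(img_b)) / 2
matched_a = match_luminance_contrast(img_a, target_mean, target_rms)
matched_b = match_luminance_contrast(img_b, target_mean, target_rms)
```

If the linear rescaling pushes values outside the displayable range, the clipping makes the match approximate; in practice the targets are chosen so that little or no clipping occurs.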

4.2. Dominance of Emotional Facial Expressions

In order to further pursue the question of preferential perception of emotional stimuli with the aid of binocular rivalry, we conducted an experiment with pictures of different facial expressions (Alpers and Gerdes, 2007). In this study we were able to take into account potential limitations of the first study. Presenting emotional faces is especially useful in emotion research because these stimuli are evolutionarily relevant and largely independent of culture (Ekman, Sorenson, and Friesen, 1969), and because they provoke emotional reactions in everyday life (Dimberg, 1982). Emotional facial expressions are processed very rapidly (Jeffreys, 1996) and holistically (Farah, Wilson, Drain, and Tanaka, 1998). Emotional faces attract attention, as can be seen in visual search and dot-probe paradigms (Mogg and Bradley, 1998). They elicit subcortical activation as well as peripheral physiological and behavioral reactions (Whalen, Rauch, Etcoff, McInerney, Lee, and Jenike, 1998) via a specialized subcortical route (Johnson, 2005), even when they are masked and thus not consciously perceived (Whalen, Kagan, Cook, Davis, Kim, Polis et al., 2004).

For our study on binocular rivalry we chose pictures of eight actresses from a standardized set (Karolinska Directed Emotional Faces, KDEF; Lundqvist, Flykt, and Öhman, 1998), each of them showing angry, frightened, happy, surprised and neutral facial expressions. These pictures are very well standardized regarding physical characteristics such as background and brightness. Pairwise presentation of one neutral and one emotional facial expression of the same actress made it possible to adequately control for inter-individual differences between the actresses. During the stereoscopic presentation, the 30 participants continuously coded their perception of emotional, neutral or mixed pictures by button presses.

Again, there was clear evidence for predominance of emotional stimuli compared to neutral stimuli, both in the cumulative duration with which each percept was seen and in the initial percept seen during each trial (see Figure 4). Emotional faces were consciously perceived significantly longer throughout the trials, and they were significantly more often perceived as the first clear percept of a trial. Different from our expectations, there were no differences between positive and negative facial expressions. These findings support our earlier finding of a predominance of emotional pictures over neutral pictures in binocular rivalry.

Temporarily Blind in One Eye 175

Predominance of emotional faces is equally strong as that of emotional IAPS pictures, although the latter normally induce a higher arousal than emotional faces (see Adolph, Alpers, and Pauli, 2008).

Figure 4. Results from the experiment with emotional faces (Alpers and Gerdes, 2007, with permission from APA). Left panel: Cumulative duration of dominant perception (in seconds) and standard errors of the mean, for emotional, neutral and mixed pictures. Right panel: Average frequency of initial perception and standard errors of emotional, neutral and mixed pictures.

As participants did not have to verbalize what they perceived and the categorical classification of perceived facial expressions was easy (emotional vs. neutral), the likelihood of response biases was clearly reduced in this study. Nonetheless, coding of participants' perception was still based on self-report. Thus, biases cannot be completely ruled out.

4.3. Inter-Individual Differences: Phobic Stimuli

After having demonstrated that emotional pictures are in fact dominant over neutral pictures, we addressed another early hypothesis of the binocular rivalry literature: Are inter-individual differences reflected in what people perceive in binocular rivalry? With the help of a further improved experimental design, we investigated whether fear-relevant stimuli are perceived as more dominant by individuals who

are characterized by high levels of fear. We recruited patients with a specific phobia of spiders because they are especially well suited for this investigation. The presentation of phobia-related stimuli elicits strong emotional reactions (Globisch, Hamm, Esteves, and Öhman, 1999). That phobic material activates subcortical networks, which may in turn prime or boost the activity of the visual cortex, has been documented in a stronger amygdala activation in response to phobic cues (e.g., Alpers, Gerdes, Lagarie, Tabbert, Vaitl, and Stark, submitted). Different degrees of dominance between patient and control participants would also support the theory that phobia-related cues are processed more intensely in phobic participants. Remarkably, possible influences of mere physical differences between pictures (emotional versus neutral) are not problematic in this endeavor because they should affect phobic and non-phobic participants in equal measure.

A group of 23 patients who met diagnostic criteria for spider phobia (DSM-IV; American Psychiatric Association, 1994) and 20 non-phobic control participants were recruited for this investigation (Alpers and Gerdes, 2006). Twenty different pictures of spiders and flowers were presented stereoscopically. Different from previous studies, all of these pictures were paired with an abstract pattern; these picture-pattern pairs were presented for 8 sec each (see Figure 5). The advantages of this approach were that the specific content of a picture did not have to be identified and that no decision between two semantic pictures was needed. We hoped that this would minimize the problems related to response biases and to inter-individual differences in decision criteria. Participants were asked to continuously code their perceptual impression by pressing one of three different buttons: one for the dominant perceptual impression of a picture (spider or flower), one for the abstract pattern, and one for mixed percepts.

Figure 5. Stimulus material for the experiment with phobic patients (Alpers and Gerdes, 2006): examples of a spider picture, the abstract pattern, and a flower.

Figure 6. Results from the experiment with phobic patients. Left panel: mean duration of dominant perception (with standard errors) of pictures of spiders, patterns and mixed pictures (crossed bars: phobic, empty bars: control participants). Right panel: mean duration of dominant perception (with standard errors) of pictures of flowers, patterns and mixed pictures (crossed bars: phobic, empty bars: control participants).

Spider phobic participants perceived pictures of spiders as dominant for longer periods of time during the trials and they also reported that they perceived phobic pictures as the first clear percept of a trial more often than control participants. At the same time, there are no group differences for pictures of flowers versus the pattern: groups did not differ in the duration with which they perceived pictures of flowers as dominant or in how often they reported seeing flowers as the first percept in a trial (see Figure 6).

This study replicates our previous findings in showing that emotional pictures modulate competition in binocular rivalry. In addition, we were able to support the theoretically founded assumption that phobia-related cues are preferentially processed by individuals with spider phobia. With respect to inter-individual differences, we were able to demonstrate that personal relevance is reflected in dominance of specific phobia-related cues. When we compared negatively valenced and positively valenced pictures in earlier studies, no differences were apparent. We have no data concerning personal relevance of positive pictures at this point.

4.4. Controlling for Physical Properties of Stimuli

Although the findings we reported above clearly support the hypothesis that emotional picture content results in more predominant perception, it cannot be completely ruled out that physical properties of the pictures (emotional versus neutral) could have influenced the results in these studies. To account for this and to validate our previous findings, we conducted four more experiments with the objective to largely eliminate the influence of physical properties of stimuli and to minimize response biases in the self-report of perception.

In the first experiment, we presented two physically identical geometric gratings which only differed in spatial orientation. In order to introduce differences in emotional valence we used a differential fear conditioning paradigm. As a result, the two stimuli which had neutral valence at the outset were given different emotional valence by our experimental manipulation (Alpers, Ruhleder, Walz, Mühlberger, and Pauli, 2005). Because we were able to interleave conditioning and binocular rivalry trials, this helped us to document that more aversive experience with a grating with a given orientation changed its influence on predominance across trials. The aversively conditioned pattern was perceived as the dominant percept for a longer period of time during the trials when compared to the perception before the fear conditioning, and it was reported more and more frequently as the first percept of a trial. Interestingly, these effects were rather small compared with the effects in studies using emotional IAPS pictures and emotional facial expressions. This can probably be best explained by the fact that geometric patterns are not evolutionarily relevant, even if they acquire emotional relevance after fear conditioning. Thus, the underlying neuronal processes of emotional processing are probably less pronounced here than in experiments with stimuli that are evolutionarily prepared or naturally relevant.

In a second study we presented schematic emotional faces, which are more biologically relevant but differ in physical characteristics. We designed neutral, positive, and negative facial expressions by arranging nearly identical picture elements (also see Lundqvist, Esteves, and Öhman, 2004). Although those faces are rather simple, several studies have demonstrated that schematic faces can elicit similar emotional reactions as photographs of faces (Bentin and Golland, 2002; Eastwood, Smilek, and Merikle, 2003; Lundqvist and Öhman, 2005). In this experiment schematic emotional faces clearly predominated compared with neutral faces (Experiment 2, Alpers and Gerdes, 2007). The pattern of results was very similar to the pattern reported above for photographic emotional faces. Taken together, both control experiments demonstrate that

dominance of emotional stimuli cannot be exclusively attributed to covarying physical differences between neutral and emotional pictures.

4.5. Validation of Self-Report

Two further studies were aimed at verifying the participants' self-report of perception during binocular rivalry. Similar to the conditioning experiment introduced above, two geometric patterns were shown stereoscopically. In order to obtain an objective measure of the participants' perception, we coded each stimulus with one of two flicker frequencies; each frequency resulted in a distinguishable EEG signal (steady-state visually evoked potentials) (Brown and Norcia, 1997). With this approach, we were able to demonstrate that the participants' self-report of what they saw corresponded with the respective objective EEG signal of the dominant pattern over occipital regions (Alpers, et al., 2005).

Another experiment was based on the finding that changes in the suppressed picture are harder to detect than changes in the dominant picture (Fox and Check, 1968; Nguyen, Freeman, and Wenderoth, 2001). In this experiment, we again presented emotional and neutral faces from the KDEF picture set (Alpers and Gerdes, 2007). In the course of a trial, we occasionally presented a small dot in either the emotional or the neutral picture, and participants were asked to press a button when they detected a probe. More dots were identified in the emotional pictures, which were perceived dominantly more often, and reaction times were shorter when dots were identified in emotional pictures compared to neutral ones. Taken together, these two experiments provide support for the validity of our participants' self-report of their perception in binocular rivalry.

4.6. Summary

The series of studies presented here documents that emotional visual stimuli clearly predominate in binocular rivalry. The findings are consistent across a variety of stimuli such as emotional scenes, emotional facial expressions or aversively conditioned stimuli. In addition, differences in perception between differentially affected groups of people were documented using phobia-related pictures. Moreover, we can largely rule out effects of physical differences or response biases on our results.
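The frequency-tagging logic behind the EEG validation can be sketched in a few lines: each stimulus flickers at its own frequency, and the currently dominant stimulus is inferred from which tag frequency carries more power in the occipital signal. The parameters and synthetic signal below are illustrative assumptions, not the study's actual frequencies or analysis pipeline:

```python
# Sketch of SSVEP frequency tagging: compare spectral power at the two
# tag frequencies to decide which stimulus currently dominates perception.
import numpy as np

def dominant_tag(eeg, fs, f_a, f_b):
    """Return 'A' or 'B' depending on which tag frequency has more power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    return 'A' if power(f_a) > power(f_b) else 'B'

fs = 250.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)  # one 2-second epoch
rng = np.random.default_rng(0)
# Stimulus A (tagged at 7.5 Hz) is dominant: its SSVEP amplitude exceeds
# that of the suppressed stimulus B (tagged at 12 Hz).
eeg = 2.0 * np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.sin(2 * np.pi * 12.0 * t)
eeg += 0.3 * rng.standard_normal(t.size)  # measurement noise
print(dominant_tag(eeg, fs, 7.5, 12.0))  # A
```

In the actual study, this objective spectral readout was compared against the simultaneous button-press report to validate the self-report measure.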

predominance of emotional stimuli in binocular rivalry is another piece of evidence for preferential processing in the visual stream.

These results are consistent with other findings showing that meaningful pictures dominate over meaningless ones in binocular rivalry (Yu and Blake, 1992). The preferential perception of emotional pictures in binocular rivalry is also clearly consistent with results from other experimental paradigms, such as the faster detection of emotional stimuli in search tasks (Hansen and Hansen, 1988; Öhman, Lundqvist, et al., 2001). An evolutionary advantage of faster detection of potentially meaningful stimuli accounts for an easier processing of emotional material (Öhman and Mineka, 2001). Furthermore, the results are consistent with findings from psychophysiological studies which show stronger activation of the visual cortex when looking at emotional pictures (e.g., Herrmann, et al., 2008; Schupp, et al., 2003). While some experiments with different paradigms found emotion-specific effects which suggest that preferential processing of negative pictures is specific (Hansen and Hansen, 1988; Öhman, et al., 2001), our experiments with binocular rivalry point to a dominance of both positive and negative pictures.

However, as to the mechanisms, we have not yet shown that dominance was mediated by activation of emotional neuronal circuits (the amygdala, for example); with the paradigm we described here we were not able to verify whether activation of emotional neuronal circuits is in fact responsible for the competitive strength of emotional pictures in binocular rivalry. As we explained in the introduction, emotional material activates the amygdala in binocular rivalry even when it is suppressed (Pasley, et al., 2004; Williams, et al., 2004). With regard to the neuronal substrates in which binocular rivalry is processed, it is apparent that influences from emotional processing centers could take effect. Because different stages of processing in primary and extrastriatal areas are involved in binocular rivalry (Kovacs, et al., 1996; Logothetis, et al., 1996; Sheinberg and Logothetis, 1997), it can be hypothesized that feedback from subcortical circuitry such as the amygdala (Amaral, Price, Pitkanen, and Carmichael, 1992) and the anterior cingulate cortex (Posner and Raichle, 1995) may be involved in processes leading to conscious perception. Whether this is based on automatic processes (Öhman, et al., 2001), higher order (cortical) attentional processes (Pessoa, et al., 2003) or an interaction of both is a challenging problem for future research.

5. Conclusion: "Blind" in One Eye - But Not When It Comes to Emotion

Although it seems to be particularly reasonable, from an evolutionary perspective, to preferentially process negative stimuli, there are several findings which show that positive stimuli are also processed preferentially (Garavan, et al., 2001; Hamann and Mao, 2002). Furthermore, automatic allocation of attention was found for both positive and negative pictures (Alpers, 2008). Effects of arousal irrespective of valence are also evident in event-related potentials of the EEG (Cuthbert, et al., 2000). It might be left to the higher cortical circuits to precisely analyze the specific valence and to control appropriate approach or avoidance responses.

In conclusion, positive and negative picture content seems to influence perception in binocular rivalry, largely independent of physical differences. This series of experiments provides some of the first evidence that emotional stimuli are also preferentially processed during prolonged viewing. This yields more evidence for a preferential processing of emotional stimuli in the visual system. The use of the binocular rivalry paradigm could render an essential contribution to further investigations of emotional influences on visual perception.

References

Adolph, D., Alpers, G. W., and Pauli, P. (2008). Physiological reactions to emotional stimuli: A comparison between scenes and faces.

Alpers, G. W. (2008). Eye-catching: Right hemisphere attentional bias for emotional pictures. Laterality: Asymmetries of Body, Brain and Cognition, 13, 158-178.

Alpers, G. W., and Gerdes, A. B. M. (2007). Here's looking at you: Emotional faces predominate in binocular rivalry. Emotion, 7, 495-506.

Alpers, G. W., and Gerdes, A. B. M. (2006). Im Auge des Betrachters: Bei Spinnenphobikern dominieren Spinnen die Wahrnehmung [In the eye of the beholder: In spider phobics, spiders dominate perception]. In G. W. Alpers, H. Krebs, A. Mühlberger, P. Weyers and P. Pauli (Eds.), Wissenschaftliche Beiträge zum 24. Symposium der Fachgruppe Klinische Psychologie und Psychotherapie (p. 235). Lengerich: Pabst.

Alpers, G. W., and Gerdes, A. B. M. (2006). In H. Hecht, S. Berti, G. Meinhardt and M. Gamer (Eds.), Beiträge zur 48. Tagung experimentell arbeitender Psychologen (p. 87). Lengerich: Pabst.

Alpers, G. W., Gerdes, A. B. M., Lagarie, B., Tabbert, K., Vaitl, D., and Stark, R. (submitted). Attention modulates amygdala activity: When spider phobic patients do not attend to spiders.

19. P. Anatomical organization of the primate amygdaloid complex.. J. N. 14. R. W. Alpers. Pitkanen. and Lotto. 25-32... Neuroscience. Journal of Abnormal and Social Psychology.. Alpers. Angst . 86.M. What is an unconscious emotion? (The case for unconscious "liking"). P. K. (2006). L. Ruhleder. N... Cognition and Emotion. C. R. 133-141. Erb. T. (2002). M. Alpers and Antje B. R. J.. G. R. and Fallgatter. (1992).. Gerdes Alpers. W. J.. G. (2003). Perception and Psychophysics..182 Georg W. 1099-1120. Emotional pictures predominate in binocular rivalry. (2003).. and Pauli. (2005). Birbaumer. 145-167. D. Y. Cognition. (1988). P. G. and Kelly... 418-423. Amaral. 1-66). Mühlberger. W.. K. . Psychological Review. Herrmann. Alpers. Amaral. International Journal of Psychophysiology. et al. S. Behniea. (2004). Current Biology. 181-211. L. 17. 339-340. Nature Neuroscience. and Pauli. and Pauli. G. Visual competition. Berridge. 13-21. A. (2005). 96. Blake. Dichoptic reading: The role of meaning in binocular rivalry. G. Mader. In J. 596-607. (1957). S. The Amygdala (pp. B. Parietal somatosensory association cortex mediates affective blindsight. Mühlberger. Pauli. B1-B14... A.. I. Aggleton (Ed. M. Cognition and Emotion. Blake. 3. (1989). N. J. Neurobiologie psychischer Störungen [Neurobiology of mental disorders] (pp. F M. and Winkielman. P. (2005). Bagby. Grodd. 54. Roth (Eds. In H. Hautzinger and G. W. P. and Gollanda. Heidelberg: Springer. A. Nature Reviews Neuroscience. A. Andrews.. Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. S. Emotional arousal and activation of the visual cortex: A Near Infrared Spectroscopy Analysis [abstract]. W. 7.. A neural theory of binocular rivalry. 20. 331-334. Binocular rivalry between emotional and neutral stimuli: a validation using fear conditioning and EEG. Journal of Psychophysiology. 57. M. 
Meaningful processing of meaningless stimuli: The influence of perceptual experience on early visual processing of faces. B. Price.). (2002). Fusion and rivalry are dependent on the perceptual meaning of visual stimuli. P. 106. Anders.. Walz. 118. J. A cross-cultural study of perceptual predominance in binocular rivalry. New York: Wiley-Liss.. D..523-544).. and Logothetis. J.Neuropsychologie [Anxiety -Neuropsychology]. W... H. Bentin. G. and Carmichael. Blake. Sadowski. 44.). (2004).

C. and Jolly. (1998). Mogg. Cognition and Emotion. (1994). S. Bonham-Carter. K. and Elmehed. Visual loss during interocular suppression in normal and strabismic subjects. and Whalen. Cognition and Emotion. R. N. U. D. (2000). 95-111... (1968). 65.. Donato. Brown. J. Biological Psychology. and Lang. Unconscious facial reactions to emotional facial expressions. D.. M. 24012408. Sorenson. Behaviour Research and Therapy. (1982).. and Merikle... K. Schupp. Detection of motion during binocular rivalry suppression. 40. (1956). P. Millar. A. What is "special" about face perception? Psychological Review. de Weert. W. Psychological Science. M. D.. U. and Friesen. Drain.. R. P. M. and Ffytche. Vision Research. P. V. 86-89. H.. The amygdala: vigilance and emotion. American Journal of Psychology. P. E. N.. R. 6.. (1997). and Russell. W. 388 .. Vision Research. Jones. Bradley. Ekman. Brain potentials in affective picture processing: covariation with autonomic arousal and affective report. 2093-2107. 6. E. 643-647. A. 482-498. 164.. Patients with generalized social phobia direct their attention away from faces. (1969). Smilek. (1997). Catani. (1992).. Thunberg.. M... J. 45. 87-91. D. Ehlers. Wilson. Birbaumer. Negative facial expression captures attention and disrupts performance. T. and Mansell. A method for investigating binocular rivalry in real-time with the steady-state VEP. J. Farah. The role of content in binocular resolution. (2000). M. M. Fergussoon. D. W. B. (2001). 126. H. Freeman. N. (2005). 78. Clark. 11. R. R.. M. . et al. and Norcia. Dimberg. (2003). Snoeren. Coren. J. Occipitotemporal connections in the human brain. 69. J. 2043-2050.. and Koning. Vision Research. 105. E. Dimberg. Perception and Psychophysics.. and Tanaka. A. journal of Experimental Psychology. 34...395.. Eastwood. 52. Psychophysiology. Pan-cultural elements in facial displays of emotion. 11. D. M. Davis. 25-42. B. P. and Check.. Engel. 
The relative dominance of different facial expressions of emotion under conditions of perceptual ambiguity.. 677-687. N. Molecular Psychiatry. P. M. Science. K. Jenkins... Facial reactions to facial expressions. Y. J.Temporarily Blind in One Eye 183 Bradley. 13-34. J. A.. M. R. K. J.. C. Brain. 352-358. Interactions between binocular rivalry and Gestalt formation.. 37. (2002). A. N. 19. Cuthbert. 2571-2579. Chen. (2003). Attentional biases for emotional faces.. 86-88. P. Fox. 339-356. M.

Hamm.: Harvard University Press. and Mao.184 Georg W.. Enhancement of neural activity of the primary visual cortex for emotional stimuli measured with event-related functional near infrared spectroscopy (NIRS). H. 13. (2001). Helmholtz's treatise on physiological optics.. C. F. Psychophysiology. R. 917-924. M. Finding the face in the crowd: an anger superiority effect. T. Kaplan. (2002). journal of Experimental Psychology. (1996). 494-500. E. T. A.).. W. Ross. C. Plichta.. A. A.. A.Y. H. Weike. Nature. Positive and negative emotional verbal stimuli elicit activity in the left amygdala. C. R. D.. C.. T. T. I. 8. Alpers. Fear appears fast: Temporal course of startle reflex potentiation in animal fearful subjects. and Kessler. Haynes.. Introduction [Special issue on binocular rivalry]. J. Treig. J. M. (2008). Hamann. C. Journal of Personality and Social Psychology. 29. 36. 267-275. (2005). 28-35. W. and Blake. Ely. Humphrey. J. Herrmann. B. Helmholtz. D. et al. 15-19. (Ed. Jeffreys. N. 1-38. Mühlberger.. and Rees...... Schupp.. 2279-2783. Brain. Hamann. A. Southall (Ed. and Kilts. 1-4... Hoffman. Subcortical face processing. Mass. 54. M. Cambridge. Esteves. Ecstasy and agony: activation of the human amygdala in positive and negative emotion. v. 126. (1988). R. J. G. and Daves. S. (1964). Johnson. Gerdes Garavan. (2001). (2005). E. 66-75. . (1982). Brown. and Metlay.. O. C. In J. Neuroreport... Pendergrass. F. (Translated from the 3rd German edition. 3. 6. Eye-specific effects of binocular rivalry in the human lateral geniculate nucleus. Globisch. A.. M. O. Evoked potential studies of face and object processing. Dressel. H. Ehlis. S. D. 496499. 766-774. 12. G. Human Brain Mapping.. Stein. Personality and Social Psychology Bulletin. Light intensity and binocular rivalry. Outlines of a theory of the light sense (originally published in 1886). Hamm. 13. Neuroreport. J. H. M. (2002). Rochester.-C. K. A. T.. Nature Reviews Neuroscience. 
Sexual orientation as measured by perceptual dominance in binocular rivalry. D.M. Affective blindsight: intact fear conditioning to a visual cue in a cortically blind patient. and Hansen. C. Deichmann. T. 135-141. Gilson. Huter. Psychological Science. (1964). 438. A. (2003). W. (1924).. P.: The Optical Society of America. Amygdala response to both positively and negatively valenced stimuli.. Alpers and Antje B. and Risinger.. M.. 67. D. I. Hansen. J. Visual Cognition. 1909) Hering. H. W. (1999). 2. Brain and Mind. J.).. H. R.. G. and Öhman. A.

D. . Stockholm: Karolinska Institute. D. 12. 35. The Psychological Record. Calder. J. (1960). D. Esteves. 18. L. Yang. Biological Psychiatry. Proceedings of the National Academy of Sciences of the United States of America. Cuthbert. M. Fear is fast in phobic individuals: Amygdala activation in response to fear-relevant stimuli. Brain. Moulder. (1996). and Öhman. S. Friston. Can attention selectively bias bistable perception? Differences between binocular rivalry and ambiguous figures. 410417. A. C. Behaviour Research and Therapy.. Young. (1998). P. B.. D. A. Scott. I.. (1998).. and Feher. J. University of Florida. What is rivalling during binocular rivalry? Nature. M. Lang. Lundqvist. and Bradley. Cognition and Emotion. Lang.. Flykt... K.. 40. D. Journal of Vision. A. 60. T.. 9-13.. Lundqvist. (2004). (1996). facial emotion. M. Larson.. Siegle. S. (2006). P. 47-57. Jackson. and Tong.. D. Technical Report A-6. LeDoux. Psychophysiology. M. and visual attention. J. L.Temporarily Blind in One Eye 185 Kohn. B. J.. Bradley. and Davidson. F. H. 161-182. C. G.. A. When the brain changes its mind: Interocular grouping during binocular rivalry. M.. J. C. Logothetis. FL.. A. and Öhman. and Sheinberg. J. D. 4. Kovacs. J. J. International affective picture system (IAPS) :Affective ratings of pictures and instruction manual.. et al. B. (1996). The face of wrath: The role of features and configurations in conveying social threat. 121. P. N. H. Emotion regulates attention: The relation between facial configurations. Emotional arousal and activation of the visual cortex: an fMRI analysis. 539-551. V. 15508-15511. N. The emotional brain: The mysterious underpinnings of emotional life. Lundqvist. 93. Gainesville.. and Cuthbert.. F. B. Mogg.. Frith. Selective orienting of attention to masked threat faces in social anxiety. A. M. and Öhman. 1403– 1414.. C. Some personality variables associated with binocular rivalry. (1998). Papathomas. Morris.. 621-624. Leopold. 10.... 380. Buchel. A. 
Bradley.. 199-210.. J. Meng. New York: Simon and Schuster. (2004).. R. Anderle. A. A. (2005). (2002).. 51-84. Schaefer. J. W. et al. J. The Karolinska Directed Emotional Faces (KDEF). B. (2005). N. Visual Cognition. Fitzsimmons. R. A neuromodulatory role for the human amygdala in processing emotional facial expressions.. M. K.. K.

15. N.. (2001).. Subcortical discrimination of unperceived objects during binocular rivalry.. Öhman. Alpers and Antje B. Ogawa. Monchi. Interocular transfer of the motion aftereffect is not reduced by binocular rivalry. L. Psychoneuroendocrinology. Freeman.. A. 231-240. S. Neuroimaging studies of attention: from modulation of sensory processing to top-down control. and Ungerleider. 466-478. J. J. W. A. phobias. 80.. 30. Vision Research. V. Attentional control of the processing of neutral and emotional stimuli. L. H. Journal of Neuroscience. (1994). Braun. Neuron. R. Emotion space under conditions of perceptual ambiguity. Nature Neuroscience. Blake. and Wenderoth. T. Emotion drives attention: detecting the snake in the grass. and Govan. F.. Pasley. A. D. R.. F. Sims. 23. and Alais.186 Georg W. Cognitive Brain Research. G. Öhman. Perceptual and Motor Skills. B. Neuronal activity in human primary visual cortex correlates with perception during binocular rivalry. A. N. A. 43. Pessoa. 63. and Esteves. G. Öhman.. 953-958. 381-396. F. (1997). Vision Research. L.. and Suzuki. A. P. S. R. Perception and Psychophysics. Ogawa. N. 1153-1159.. 483-522. P. (2001). A. and Suzuki. (2005). 103. V. O'Shea. 108. (2001). 3990-3998. 37. Nguyen. C. 348-360.. Flykt. Psychological Review. 163-172. A. L. R. (2000). (2004). Öhman. and preparedness: Toward an evolved of fear and fear learning. and Esteves. . 291-298. Mayes. Gerdes Nguyen. A. W. The face in the crowd revisited: A threat advantage with schematic stimuli. P. T. 2003-2008. D. T. D. 1379-1383. Fears. (2001). J. Polonsky.. A. Y.... Kastner. and Schiltz. Perceptual and Motor Skills..... J.. G. Journal of Experimental Psychology: General. Emotion space as a predictor of binocular rivalry. 90. Pessoa. The effect of spatial frequency and field size on the spread of exclusive visibility in binocular rivalry. and Crassini. 42. (2003). (2000). A. (1981). 31-45. D. and Mineka. O'Shea. T. Journal of Abnormal Psychology. Lundqvist. 21. 
and Soares. "Unconscious anxiety": Phobic responses to masked stimuli. and Ungerleider.. S. Freeman. (2002). Kastner. 88.. R. 130. B. 3. Vision Research. Increasing depth of binocular rivalry suppression along two visual pathways.. The role of the amygdala in human fear: Automatic detection of threat.... Takehara. A. 801-804. L. 175-183..M. The depth and selectivity of suppression in binocular rivalry. (2003). (1999). and Heeger. Fukui. Journal of Personality and Social Psychology. J. Öhman.

N. N. M. Journal of Neuroscience. A. S. and Toch. 125-133. Computational evidence for a rivalry hierarchy in vision. Lee.. Proceedings of the National Academy of Sciences of the United States of America. 55-83. Emotional facilitation of sensory processing in the visual cortex. phenomena of binocular vision. Wheatstone. F. (2004). Psychological Science. and Mattingley. J. A. (2004).. A. Walker. 876-887. 18. (2001). Binocular rivalry: Central or peripheral selective processes? Psychological Bulletin. Precis of Images of Mind. and Hamm. V. S.. and Moratti. Kagan. P. R. J. D. Psychological Bulletin. Tong. H. (2001). Brain and Mind. and Logothetis. Weike. 85. S. 371-394. 2. 2061. P. Journal of Criminal Law. The stereoscopic views of Wheatstone and Brewster. Whalen. F. Tong. Cerebral Cortex.. The Journal of Neuroscience.. 327-383. L. 47. (1998). (1962). Keil. McInerney. M. A. R. Philosophical Transactions of the Royal Society of London. B.. G. A.. Williams. F. D.. L. E. C. 14. 3408–3413. J. and Jenike.. M. Interocular rivalry revealed in the human cortical blind-spot representation.. Cook. H. Whalen. The role of temporal cortical areas in perceptual organization. H. 195-199. 94. I.Temporarily Blind in One Eye 187 Posner.. (1985). Stolarova. S. Polis.. Nature. Abbott. 53. Human amygdala responsivity to masked fearful eye whites. E. (2003). L. J. M.. (2003). . 28982904. (1997). 411-418. Morris. H. C. N. I. Amygdala response to fearful and happy facial expressions under conditions of binocular suppression. L. P.. On some remarkable. Competing theories of binocular rivalry: A possible resolution. Wade. 18. and Ono. Criminology and Police Science. (1995). C. et al.. 306. K. Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. A. F.. The perception of violence as an indicator of adjustment in institutionalized offenders. 
Modulation of the C1 visual event-related component by conditioned stimuli: evidence for sensory plasticity in early affective perception.. and Raichle. 7-13. 128. Shelley.. Proceedings of the National Academy of Sciences of the United States of America. 376-389. J. P. Rauch. B. Etcoff. and hitherto unobserved. M. M. 14499-14503. Kim.. 24. 411. M. and Engel. F. Junghöfer. McGlone. T.. 16.. Behavioral and Brain Sciences. Wilson.. (2006).. Schupp. A. Science. Davis. H. O.. 463-469. S. (1978). 100. (1838). H.. Sheinberg.

18. Yu.. O..188 Georg W. K. .. Calabrese. (1992).M.. Role of the prefrontal cortex in attentional control over bistable vision. Wehrmann. R. Do recognizable figures enjoy an advantage in binocular rivalry? Journal of Experimental Psychology: Human Perception and Performance. and Blake. (2006). and Güntürkün. Journal of Cognitive Neuroscience. P. 456-471. Alpers and Antje B. M. 18. 1158-1173. Gerdes Windmann. S.

In: Binocular Vision
Editors: J. McCoun et al., pp. 189-208
ISBN 978-1-60876-547-8
© 2010 Nova Science Publishers, Inc.

Chapter 9

STEREO-BASED CANDIDATE GENERATION FOR PEDESTRIAN PROTECTION SYSTEMS

David Gerónimo 1,*, Angel D. Sappa 1 and Antonio M. López 1,2
1 Computer Vision Center and 2 Computer Science Department,
Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain

Abstract

This chapter describes a stereo-based algorithm that provides candidate image windows to a latter 2D classification stage in an on-board pedestrian detection system. The proposed algorithm, which consists of three stages, is based on the use of both stereo imaging and scene prior knowledge (i.e., pedestrians are on the ground) to reduce the candidate searching space. First, a successful road surface fitting algorithm provides estimates on the relative ground-camera pose. This stage directs the search toward the road area, thus avoiding irrelevant regions like the sky. Then, three different schemes are used to scan the estimated road surface with pedestrian-sized windows: (a) uniformly distributed through the road surface (3D); (b) uniformly distributed through the image (2D); (c) not uniformly distributed but according to a quadratic function (combined 2D-3D). Finally, the set of candidate windows is reduced by analyzing their 3D content. Experimental results of the proposed algorithm, together with statistics of searching space reduction, are provided.

* E-mail address: dgeronimo@cvc.uab.es
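The window-scanning idea described in the abstract can be illustrated with a short pinhole-camera sketch: pedestrian-sized boxes are placed on the estimated ground plane and projected into the image, so that candidate windows shrink with distance. All parameters below (focal length, principal point, camera height, scanning grid) are illustrative assumptions, not the chapter's calibration values or its actual scanning schemes:

```python
# Sketch: project pedestrian-sized windows on a flat road into the image
# with a pinhole camera model (camera looking along +Z, Y pointing down,
# road plane at Y = cam_height below the camera).
import numpy as np

def project_candidates(focal, u0, v0, cam_height, depths, lateral_offsets,
                       ped_h=1.70, ped_w=0.60):
    """Return image windows (u, v_top, w_px, h_px) for ground positions."""
    windows = []
    for z in depths:                 # distance ahead of the car (m)
        for x in lateral_offsets:    # lateral position on the road (m)
            h_px = focal * ped_h / z              # projected pedestrian height
            w_px = focal * ped_w / z              # projected pedestrian width
            u = u0 + focal * x / z                # horizontal image position
            v_foot = v0 + focal * cam_height / z  # ground-contact image row
            windows.append((u, v_foot - h_px, w_px, h_px))
    return windows

wins = project_candidates(focal=800.0, u0=320.0, v0=240.0, cam_height=1.2,
                          depths=[10.0, 20.0], lateral_offsets=[-2.0, 0.0, 2.0])
# Nearer windows are larger: a 1.70 m pedestrian spans 136 px at 10 m
# but only 68 px at 20 m with these assumed intrinsics.
print(wins[0])
```

Restricting candidates to such ground-anchored windows is what allows the method to discard image regions (like the sky) where pedestrians cannot appear.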

1. Introduction

According to the World Health Organization, every year almost 1.2 million people are killed and 50 million are injured in traffic accidents worldwide [11]. Attending to the number of people involved in vehicle-to-pedestrian accidents, there are 150 000 injured and 7 000 killed people each year in the European Union [6]. These dramatic statistics highlight the importance of the research in traffic safety, which involves not only motor companies but also governments and universities.

Since the early days of the automobile, and along with its popularization, different mechanisms were successfully incorporated to the vehicle with the aim of improving its safety. Some examples are turn signals, seat-belts and airbags. These mechanisms, which rely on physical devices, were focused on improving safety specifically when accidents were happening. In the 1980s a sophisticated new line of research began to pursue safety in a preventive way: the so-called advanced driver assistance systems (ADAS). These systems provide information to the driver and perform active actions (e.g., automatic braking) by the use of different sensors and intelligent computation. Some ADAS examples are adaptive cruise control (ACC), which automatically maintains constant distance to a front-vehicle in the same lane and in which active sensors like radar or lidar are employed, and lane departure warning (LDW), which warns when the car is driven out of the lane inadvertently.

One of the more complex ADAS are pedestrian protection systems (PPSs), which aim at improving the safety of these vulnerable road users. PPSs detect the presence of people in a specific area of interest around the host vehicle in order to warn the driver, perform braking actions and deploy external airbags in the case of an unavoidable collision. Hence, it is clear that any improvement in these systems can potentially save many human lives. The most used sensor to detect pedestrians are cameras, which is not strange given that vision is the most used human sense when driving, so Computer Vision (CV) techniques play a key role in this research area. People detection has been an important topic of research since the beginning of CV, and it has been mainly focused on applications like surveillance, image retrieval and human-machine interfaces. However, the problem faced by PPSs differs from these applications and is far from being solved, contrary to other ADAS such as ACC. The main challenges of PPSs are summarized in the following points:

viewpoints (e. it can be improved by making use of some prior knowledge from the application. The simplest candidate generation approach is the exhaustive scan. This two-step candidate generation and classification scheme has been used in a countless number of detection systems: from faces [14]. the detection takes place in outdoor dynamic urban roads with cluttered background and illumination changes. Some cues used for generating candidates are vertical symmetry [1]. in the case of 2D analysis. the algorithm assumes a constant road slope. [10] proposed to extract candidate windows by exhaustively scanning the input image and classify them with support vector machines based on Haar Wavelet features. sizes (not only adults and children are different. back or side viewed). vehicles or generic object detection to human surveillance and image retrieval [3].Stereo-Based Candidate Generation. during the last decade researchers have tried to exploit the specific aspects of PPSs to avoid this generation technique. front. However.. the aforementioned techniques also hold problems. discarded pedestrians) can not be guaranteed to be low enough: symmetry relies on vertical edges. Although this candidate generation method is generic and easy to implement. culture. Papageorgiou et al.. infrared hot spots [4] and 3D points [7]. so the problems appear when the road orientation is not constant which is common in urban scenarios.. • The variability of the scenarios is also considerable. also called sliding window: it consists in scanning the input image with pedestrian-sized windows (i. the proposed techniques that exploit them pose several problems that make the systems not reliable in real-world scenarios. • The requirements in terms of misdetections and computational cost are hard: these systems must perform real-time actions at very low miss rates. For example. On the other hand... with a typical aspect ratio around 1/2) at all the possible scales and positions. i.g.e. . 
the number of false negatives (i. Hot spot analysis in infrared images holds a similar problem because of the environmental conditions [2].. 191 • Pedestrians have a high variability in pose (human body can be viewed as a highly deformable target). but also there are many different human constitutions).e. and people).e. but in many cases the illumination conditions or background clutter make them disappear. clothes (which change with the weather. although stereo stands as a more reliable cue. The first research works in PPSs were presented in the late 1990s. In the case of [7]. distance (typically from 5 to at least 25m). Accordingly.
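As a concrete illustration, the exhaustive sliding-window scan discussed above can be sketched as follows. The window sizes, scale step and stride used here are illustrative assumptions, not values from any particular system:

```python
def sliding_window_candidates(img_w, img_h, min_w=12, max_w=140,
                              scale_step=1.2, stride=4, aspect=0.5):
    """Enumerate pedestrian-sized windows (aspect ratio around 1/2)
    at all positions and scales of the input image."""
    candidates = []
    w = min_w
    while w <= max_w:
        h = int(w / aspect)                       # height = 2 x width
        for y in range(0, img_h - h + 1, stride):
            for x in range(0, img_w - w + 1, stride):
                candidates.append((x, y, w, h))
        w = int(round(w * scale_step))
    return candidates
```

Even for a 640 × 480 image this enumeration quickly produces on the order of 10^5 windows, which is why reducing the candidate searching space matters.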

This chapter presents a candidate generation algorithm that reduces the number of windows to be classified while minimizing the number of wrongly discarded targets. The remainder of the manuscript is organized as follows. First, we introduce the proposed candidate generation algorithm with a brief description of its components and their objective. Then, the three stages in which the algorithm is divided are presented: Sect. 3 describes the road surface estimation algorithm, Sect. 4 presents the road scanning and Sect. 5 addresses the candidate filtering. Sect. 6 provides experimental results of the algorithm output. Finally, conclusions and future work are presented in Sect. 7.

2. Algorithm Overview

A recent survey on PPSs by Gerónimo et al. [8] proposes a general architecture that consists of six modules, in which most of the existing systems can be fit. The modules (enumerated in the order of the pipeline process) are: 1) preprocessing, 2) foreground segmentation, 3) object classification, 4) verification and refinement, 5) tracking and 6) application. As can be seen, modules 2) and 3) correspond to the steps presented in the introduction.

The algorithm presented in this chapter consists in a candidate generation algorithm to be used in the foreground segmentation module, which gets an input image and generates a list of candidates where a pedestrian is likely to appear. There are two main objectives to be carried out in this module. The first is to reduce the number of candidates to be sent to the next module, the classifier, which directly affects the performance of the system both in terms of speed (the fewer the candidates sent to the classifier, the less the computation time is) and detection rates (negatives can be pre-filtered by this module). The second is not to discard any pedestrian; otherwise the later modules will not be able to correct the wrong filtering. This is achieved by combining a prior-knowledge criterion (pedestrians are on the ground) and using 3D data to filter the candidates. This procedure can be seen as a conservative but reliable approach, which in our opinion is the most convenient option for this early step of the system. The proposed algorithm is divided into three stages, as illustrated in Fig. 1.

Road surface estimation computes the relative position and orientation between the camera and the scene (Sect. 3). Road scanning places 3D windows over the estimated road surface using a given scanning method (Sect. 4). Candidate filtering filters out windows that do not contain enough stereo evidence of containing vertical objects (Sect. 5). Next sections describe each stage in detail.

Figure 1. Stages of the proposed algorithm: Road Surface Estimation (estimated road position) → Road Scanning (pre-selected windows) → Candidate Filtering by vertical objects (discarded and selected windows).

3. Road Surface Estimation

The first stage is focused on adjusting the candidate searching space to the region where the probability of finding a pedestrian is higher. In the context of PPSs, the searching space is the road; hence irrelevant regions like the sky can be directly omitted from the processing. The main targets of road surface estimation are two-fold: first, to fit a surface (a plane in the current implementation) to the road; second, to compute the relative position and orientation (pose) of the camera (also referred to as the camera extrinsic parameters) with respect to such a plane.

A world coordinate system (XW, YW, ZW) is defined for every acquired stereo image, just under the camera coordinate system (XC, YC, ZC), in such a way that: the XW ZW plane is contained in the current road fitted plane; the YW axis contains the origin of the camera coordinate system; the XW YW plane contains the XC axis and the ZW YW plane contains the ZC axis. Due to that, the six extrinsic parameters (three for the position and three orientation angles) that refer the camera coordinate system to the world coordinate system reduce

to just three, denoted in the following as (Π, Φ, Θ) (i.e., camera height, roll and pitch). Moreover, in most situations the value of Φ (roll) is very close to zero. This condition is fulfilled as a result of a specific camera mounting procedure that fixes Φ at rest, and because in normal urban driving situations this value scarcely varies [9]. Figure 2 illustrates the world and camera coordinate systems.

Figure 2. Camera coordinate system (XC, YC, ZC) and world coordinate system (XW, YW, ZW), with camera height, roll and pitch.

To speed up the whole algorithm, most of the processing at this stage is performed over a 2D space. The proposed approach consists of two substages, detailed below (more information in [13]): i) 3D data point projection and cell selection, and ii) road plane fitting and ROIs setting.

3.1. 3D Data Point Projection and Cell Selection

Let D(r, c) be a depth map provided by the stereo pair with R rows and C columns, in which each array element (r, c) is a scalar that represents a scene point of coordinates (xC, yC, zC), referred to the camera coordinate system (Fig. 2). The aim at this first stage is to find a compact subset of points, ζ, containing most of the road points.

Initially, 3D data points are mapped onto cells in the (YC ZC) plane, resulting in a 2D discrete representation ψ(o, q), where o = ⌊DY(r, c) · ς⌋ and q = ⌊DZ(r, c) · ς⌋, with ς representing a scale factor that controls the size of the bins according to the current depth map (Fig. 3). The scaling factor is aimed at reducing the projection dimensions with respect to the whole 3D data in order to both speed up the plane fitting algorithm and be robust to noise. It is defined as ς = ((R + C)/2) / ((∆X + ∆Y + ∆Z)/3), where (∆X, ∆Y, ∆Z) is the working range in 3D space. Every cell of ψ(o, q) keeps a reference to the original 3D data points projected onto that position, as well as a counter with the number of mapped points.

From that 2D representation, one cell per column (i.e., in the Y-axis) is selected, relying on the assumption that the road surface is the predominant geometry in the given scene; hence, the cell with the largest number of points in each column of the 2D projection is picked. Finally, every selected cell is represented by the 2D barycenter ((Σ(i=0..n) yCi)/n, (Σ(i=0..n) zCi)/n) of its n mapped points. The set of these barycenters defines a compact representation of the selected subset of points. Using both one single point per selected cell and a 2D representation, a considerable reduction in the CPU time is reached during the road plane fitting stage.

Figure 3. YZ projection and road plane estimation: the road plane is fitted to the projected points, with an inliers band at ±10 cm of the plane hypothesis (camera working range roughly 4 m to 50 m).
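The projection-and-selection step can be sketched as follows. This is a pure-Python, dictionary-based sketch; the data structures of the original implementation are not specified in the text:

```python
def select_road_cells(points, R, C, ranges):
    """Map 3D points onto (YC, ZC) cells, keep the most populated cell per
    depth column, and represent it by the barycenter of its mapped points."""
    dX, dY, dZ = ranges
    scale = ((R + C) / 2) / ((dX + dY + dZ) / 3)   # scale factor (the text's ς)
    cells = {}                                      # (o, q) -> [(y, z), ...]
    for (x, y, z) in points:
        o, q = int(y * scale), int(z * scale)
        cells.setdefault((o, q), []).append((y, z))
    best = {}                                       # q -> key of densest cell
    for key in cells:
        q = key[1]
        if q not in best or len(cells[key]) > len(cells[best[q]]):
            best[q] = key
    barycenters = []
    for key in best.values():
        pts = cells[key]
        barycenters.append((sum(y for y, _ in pts) / len(pts),
                            sum(z for _, z in pts) / len(pts),
                            len(pts)))              # count kept for the pdf
    return barycenters
```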

3.2. Road Plane Fitting

The outcome of the previous substage is a compact subset of points, ζ, where most of them belong to the road. The plane fitting stage consists of two steps. The first one is a 2D straight line parametrisation, which selects the dominant line corresponding to the road. It uses a RANSAC based [5] fitting applied over the 2D barycenters, intended for removing outlier cells. The second step computes the plane parameters by means of a least squares fitting over all 3D data points contained in the inlier cells.

3.2.1. Dominant 2D Straight Line Parametrisation

At the first step, a RANSAC based approach is applied to find the largest set of cells that fit a straight line. As stated in the previous subsection, Φ (roll) is assumed to be zero, hence the projection is expected to contain a dominant 2D line corresponding to the road, together with noise coming from the objects in the scene.

Initially, every selected cell is associated with a value that takes into account the amount of points mapped onto that position. This value will be considered as a probability density function. The normalized probability density function is defined as pdfi = ni/N, where ni represents the number of points mapped onto cell i and N represents the total amount of points contained in the selected cells. Next, a cumulative distribution function, Fj, is defined as Fj = Σ(i=0..j) pdfi. If the values of F are randomly sampled at n points, the application of the inverse function F−1 to those points leads to a set of n points that are adaptively distributed according to pdfi.

An automatic threshold could be computed for inliers/outliers detection, following robust estimation of the standard deviation of residual errors [12], so that inliers fall within a user defined band. However, it would increase CPU time, since robust estimation of the standard deviation involves computationally expensive algorithms (e.g., sorting functions). In order to speed up the process, a predefined threshold value for inliers/outliers detection has been defined (a band of ±10 cm was enough for taking into account both data point accuracy and road planarity).
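The density-guided sampling described above (build pdfi = ni/N, accumulate Fj, then invert F) can be sketched like this; the cell counts in the usage example are toy values:

```python
import bisect
import random

def sample_by_density(counts, n, rng=random.Random(0)):
    """Draw n cell indices with probability pdf_i = n_i / N via the
    inverse of the cumulative distribution function F."""
    total = float(sum(counts))
    cdf, acc = [], 0.0
    for c in counts:
        acc += c / total            # F_j = sum_{i<=j} pdf_i
        cdf.append(acc)
    # u ~ U(0,1); F^-1(u) is the first index whose F reaches u
    return [bisect.bisect_left(cdf, rng.random()) for _ in range(n)]
```

Densely populated cells are then drawn proportionally more often, so the RANSAC subsamples concentrate on the road-dominated part of the projection.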

1. Repeat L times:
   (a) Draw a random subsample of 2 different barycenter points (P1, P2) according to the probability density function pdfi using the above process.
   (b) For this subsample, indexed by l (l = 1, ..., L), compute the straight line parameters (α, β)l.
   (c) For this solution, compute the number of inliers among the entire set of barycenter points contained in ζ, as mentioned above using a ±10 cm margin.

3.2.2. Road Plane Parametrisation

   (a) From the previous 2D straight line parametrisation, choose the solution that has the highest number of inliers.
   (b) Compute the (a, b, c) plane parameters by using the whole set of 3D points contained in the cells considered as inliers, instead of the corresponding barycenters. To this end, the least squares fitting approach [15], which minimizes the square residual error (1 − axC − byC − czC)², is used.
   (c) In case the number of inliers is smaller than 40% of the total amount of points contained in ζ (e.g., severe occlusion of the road by other vehicles), those plane parameters are discarded and the ones corresponding to the previous frame are used as the correct ones.

4. Road Scanning

Once the road is estimated, candidates are placed on the 3D surface and then projected to the image plane to perform the 2D classification. Each sampling point on the road is used to define a set of scanning windows, to cover the different sizes of pedestrians, as will be described later. The most intuitive scanning scheme is to distribute windows all over the estimated plane in a uniform way, i.e., in an nx × nz grid, with nx sampling points in the road's X axis and nz in the Z axis.
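A minimal version of this RANSAC loop, with the density-weighted pair drawing folded in, could look as follows (the helper names are ours, not from the chapter):

```python
import random

def ransac_road_line(barycenters, weights, L=100, band=0.10,
                     rng=random.Random(0)):
    """Fit the dominant 2D line y = alpha*z + beta to (y, z) barycenters:
    L hypotheses from density-weighted pairs, inliers within a +/-10 cm band."""
    best_params, best_inliers = None, -1
    for _ in range(L):
        (y1, z1), (y2, z2) = rng.choices(barycenters, weights=weights, k=2)
        if z1 == z2:
            continue                       # degenerate subsample, skip it
        alpha = (y1 - y2) / (z1 - z2)
        beta = y1 - alpha * z1
        inliers = sum(1 for (y, z) in barycenters
                      if abs(y - (alpha * z + beta)) <= band)
        if inliers > best_inliers:
            best_params, best_inliers = (alpha, beta), inliers
    return best_params, best_inliers
```

The winning line's inlier cells are the ones whose full 3D point sets feed the least squares plane fit of step 2(b).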

Let us define ZCmin = 5 m as the minimum ground point seen from the camera (with a camera of 6 mm focal length, oriented to the road avoiding to capture the hood, the first road point seen is around 4 to 5 meters from the camera), ZCmax = 50 m as the furthest point, and τ = 100 as the number of available sampling positions along the ZC axis of the road plane (a, b, c).

Given that the points are evenly placed over the 3D plane, the corresponding image rows can be computed by using the plane and projection equations. According to this, the sampled rows in the image are:

    y = y0 + f/(bz) − fc/b ,                                        (1)

where z = ZCmin + iδZ ∀i ∈ {0, ..., nz − 1}, δZ = (ZCmax − ZCmin)/nz is the 3D sampling stride, (a, b, c) are the plane parameters, f is the camera focal length, and y0 is the y coordinate of the center point of the camera in the image. The same procedure is applied to the X axis, from XCmin to XCmax, with the nx sampling points. We refer to this scheme as Uniform World Scanning. As can be appreciated in Fig. 4(a), this scheme has two main drawbacks: it oversamples far positions (i.e., Z close to ZCmax) and undersamples near positions (i.e., the sampling is too sparse when Z is close to the camera).

In order to amend these problems, it is clear that the sampling cannot rely only on the world but must be focused on the image. In this case, the sampling is aimed at extracting candidates in the 2D image: we compute the minimum and maximum image rows corresponding to the Z range:

    yZCmax = y0 + f/(bZCmax) − fc/b ,                               (2)
    yZCmin = y0 + f/(bZCmin) − fc/b ,                               (3)

and evenly place the sampling points between these two image rows using:

    y = yZCmin + iδim ∀i ∈ {0, ..., nz − 1} ,                        (4)

where δim = (yZCmax − yZCmin)/nz. Given a sampled image row y, the corresponding z in the plane (later needed to compute the window size) is:

    z = f/(fc + b(y − y0)) .                                         (5)
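Under the plane model by + cz = 1 used above, Eqs. (1)-(5) translate directly into code; the numeric camera values in the usage example (focal in pixels, camera height) are illustrative assumptions:

```python
def uniform_world_rows(b, c, f, y0, z_min, z_max, nz):
    """Eq. (1): sample depths evenly on the road plane and project
    each one to an image row."""
    dz = (z_max - z_min) / nz                     # 3D sampling stride
    return [y0 + f / (b * (z_min + i * dz)) - f * c / b for i in range(nz)]

def uniform_image_rows(b, c, f, y0, z_min, z_max, nz):
    """Eqs. (2)-(4): project only the extreme depths and distribute
    the sampling points evenly between the resulting image rows."""
    y_far = y0 + f / (b * z_max) - f * c / b      # Eq. (2)
    y_near = y0 + f / (b * z_min) - f * c / b     # Eq. (3)
    d_im = (y_far - y_near) / nz
    return [y_near + i * d_im for i in range(nz)] # Eq. (4)

def row_to_depth(y, b, c, f, y0):
    """Eq. (5): recover the plane depth z that projects to image row y."""
    return f / (f * c + b * (y - y0))
```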

This scheme is called Uniform Image Scanning. However, it is seen in Fig. 4(b) that although the density of sampling points for the closer ZC is appropriate, the far ZC are undersampled, i.e., the space between sampling points is too big (see the histogram of the same figure).

Attending to the problems of these two approaches, we finally propose the use of a non-uniform scheme that provides a more sensible sampling, neither over- nor under-sampling the image or the world. Figure 5 displays the sampling functions with respect to the ZC scanning positions and the image Y axis. The Uniform to Road scheme, in dashed-red, takes the form of a hyperbola as a result of the perspective projection, mostly linear in the bottom region of the image (close Z) and logarithmic-like for further regions (far Z). The aforementioned over- and under-sampling in the top and bottom regions of this curve can also be seen in this figure. On the contrary, the Uniform to Image scheme, in dotted-dashed-blue, draws a linear function since the windows are evenly distributed over the available rows.

The idea is to sample the image with a curve in between the two previous schemes, i.e., to adjust the row-sampling according to our needs. We use a quadratic function of the form y = ax² + bx + c, constrained to pass through the intersection points between the linear and hyperbolic curves and by a user defined point (iuser, yuser) between the two original functions, for example the non-uniform curve in Fig. 5 (solid-black line). The curve parameters can be found by solving the following system of equations:

    | imax²   imax   1 | | a |   | yZCmax |
    | imin²   imin   1 | | b | = | yZCmin |                          (6)
    | iuser²  iuser  1 | | c |   | yuser  |

where imin = 0, imax = nz − 1, yuser = imin + (imax − imin)×κ and iuser = imin + (imax − imin)×λ; in our case κ = 0.6 and λ = 0.25. The resulting scanning, called non-uniform scanning, can be seen in Fig. 4(c).

Once we have the set of 3D windows on the road, they are used to compute the corresponding 2D windows to be classified, covering the different sizes of pedestrians. We assume a pedestrian to be h = 1.70 m high, with a standard deviation σ = 0.2 m. In the case of body width, the variability is much bigger than for height, so a width margin is used to adjust most of the human proportions and also leave some space for the extremities. In our case, the width is defined as a ratio of the height, specifically 1/2. For the XC axis we follow the same procedure as with the other schemes.
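The quadratic curve of Eq. (6) can be obtained by solving the 3×3 linear system. In this sketch, yuser is interpreted as a κ-fraction between the two endpoint rows, which is our reading of the text rather than a stated definition:

```python
def quadratic_row_curve(y_far, y_near, nz, kappa=0.6, lam=0.25):
    """Solve Eq. (6): quadratic y = a*i^2 + b*i + c through the endpoint
    rows and a user-chosen interior point (i_user, y_user)."""
    i_min, i_max = 0, nz - 1
    y_user = y_near + (y_far - y_near) * kappa   # assumption: fraction of
    i_user = i_min + (i_max - i_min) * lam       # the row/index ranges
    rows = [(i_max ** 2, i_max, 1, y_far),
            (i_min ** 2, i_min, 1, y_near),
            (i_user ** 2, i_user, 1, y_user)]
    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3([r[:3] for r in rows])
    coeffs = []
    for k in range(3):                           # Cramer's rule, column k
        Ak = [list(r[:3]) for r in rows]
        for j in range(3):
            Ak[j][k] = rows[j][3]
        coeffs.append(det3(Ak) / d)
    a, b, c = coeffs
    return [a * i * i + b * i + c for i in range(nz)]
```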

Figure 4. The three different scanning schemes: (a) Uniform Road Scanning, (b) Uniform Image Scanning, (c) Non-Uniform Scanning. The histograms of sampled image rows are shown on the left column, in which the under- and over-sampling problems can be seen. The right column shows the scanning rows using the different schemes and also a representation of the scan over the plane. In order to enhance the figure visualization, just 50% of the lines are shown.

For example, the mean pedestrian window size is 1.70 × 0.85 m, independently of the extra-margin taken by the classifier (Dalal et al. [3] demonstrate that adding some margin to the window, 33% in their case, results in a performance improvement in their classifier).

Figure 5. Scanning functions. A non-uniform road scanning with parameters κ = 0.6 and λ = 0.25 is between the uniform to road and to image curves, hence achieving a more sensible scan.

5. Candidate Filtering

The final stage of the algorithm is aimed at discarding candidate windows by making use of the stereo data (Fig. 6). The method starts by aligning the camera coordinate system with the world coordinate system (see Fig. 2), as described in the aforementioned section, with the aim of compensating the pitch angle Θ computed in Sect. 3. Assuming that roll is set to zero, the coordinates of a given point p(x, y, z), referred to the new coordinate system, are computed as follows:

    pxR = px
    pyR = cos(Θ)py − sin(Θ)pz                                        (7)
    pzR = sin(Θ)py + cos(Θ)pz .

Then, the rotated points located over the road (the set of points placed in a band from 0 to 2 m over the road plane, assuming that this is the maximum height of a pedestrian) are projected onto a uniform grid GP in the fitted plane (Sect. 3), where each cell has a size of σ × σ. A given point p(xR, yR, zR) votes into the cell (i, j), where i = ⌊xR/σ⌋ and j = ⌊zR/σ⌋.

Figure 6. Schematic illustration of the candidate filtering stage: ROIs over cells with few accumulated points are discarded; the remaining ones are preserved.

The resulting map GP is shown in Fig. 7(b). As can be seen, cells far away from the sensor tend to have few projected points. This is caused by two factors. First, the number of projected points decreases directly with the distance. Second, as a result of perspective projection, the uncertainty of stereo reconstruction also increases with distance, thus the points of an ideal vertical and planar object would spread wider into GP as the distance of these points increases. In order to amend this problem, the number of points projected onto each cell in GP is reweighted and redistributed. The reweighting function is:

    GRW(i, j) = jσ GP(i, j) ,                                        (8)

where jσ corresponds to the real depth of the cell. The redistribution function
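A sketch of this alignment-and-voting step (Eq. 7 plus the grid GP) is shown below; the sign convention for "height over the road" is an assumption on our side:

```python
import math

def vertical_projection_map(points, theta, sigma=0.2, max_h=2.0):
    """Rotate camera points by the pitch angle (Eq. 7), keep the ones lying
    in a 0..max_h band over the road, and vote them into a sigma x sigma
    grid GP on the fitted plane."""
    gp = {}
    for (x, y, z) in points:
        yr = math.cos(theta) * y - math.sin(theta) * z   # Eq. (7)
        zr = math.sin(theta) * y + math.cos(theta) * z
        if 0.0 <= yr <= max_h:      # assumed: yr is the height over the plane
            cell = (math.floor(x / sigma), math.floor(zr / sigma))
            gp[cell] = gp.get(cell, 0) + 1
    return gp
```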

consists in propagating the value of GRW to its neighbours as follows:

    G(i, j) = Σ(s=i−η/2..i+η/2) Σ(t=j−η/2..j+η/2) GRW(s, t) ,         (9)

where η is the stereo uncertainty at a given depth (in cells): η = uncertainty/σ. Uncertainty is computed as a function of disparity values:

    uncertainty = f · baseline · µ / disparity² ,                    (10)

where baseline is the baseline of the stereo pair in meters, f is the focal length in pixels and µ is the correlation accuracy of the stereo. The resulting map G, after the reweighting and redistribution processes, is illustrated in Fig. 7(c).

Figure 7. Probability map of vertical objects on the road plane. (a) Original frame. (b) Raw projection GP. (c) Reweighted and redistributed vertical projection map of the frame 3D points.

The filtering consists in discarding the candidate windows that are over cells with less than χ points, where χ is set experimentally. In our implementation, this parameter is low in order to fulfill the conservative criterion mentioned in the introduction, i.e., in this early system module false positives are preferred over false negatives.

6. Experimental Results

The evaluation of the algorithm has been made using data taken from an on-board stereo rig (Bumblebee from Point Grey, http://www.ptgrey.com, Fig. 8). The stereo pair has a baseline of 0.12 m and each camera has a focal length of
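The reweighting and redistribution of Eqs. (8)-(10) can be sketched as follows. Here each cell spreads its reweighted value over an η-cell neighbourhood, which accumulates the same sums as Eq. (9); the depth-to-disparity step is our addition so the example is self-contained:

```python
def refine_projection_map(gp, sigma, f_px, baseline, mu=0.1):
    """Reweight each GP cell by its real depth (Eq. 8) and spread the value
    over a neighbourhood sized by the stereo uncertainty (Eqs. 9-10).
    Assumes j >= 1, i.e., cells in front of the camera."""
    g = {}
    for (i, j), n in gp.items():
        depth = j * sigma                          # real depth of the cell
        grw = depth * n                            # Eq. (8): GRW = j*sigma*GP
        disparity = f_px * baseline / depth        # depth -> disparity
        uncertainty = f_px * baseline * mu / disparity ** 2   # Eq. (10)
        eta = int(uncertainty / sigma)             # uncertainty in cells
        for s in range(i - eta // 2, i + eta // 2 + 1):
            for t in range(j - eta // 2, j + eta // 2 + 1):
                g[(s, t)] = g.get((s, t), 0.0) + grw
    return g
```

Near cells (small uncertainty) keep their value concentrated, while far cells smear it over a wider neighbourhood, matching the behaviour described in the text.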

However. which measure around 12 × 24 pixels (of course the size will slightly differ depending on the focal and the size of the sensor pixels). which is our motivation. which allows to detect pedestrians at a minimum of 5m. which is a reduc- . Hence. 3.2 and the position stride is 4 pixels. Let us say that we must detect pedestrians up to 50m. Angel D. The parameters for the road surface estimation are L = 100 and ς = 0. Although this method does not perform an explicit foreground segmentation. smaller windows need a smaller stride between them.5m and the biggest 0. Figure 8. a regular exhaustive scan algorithm must place windows of the scales between these two distances at all the possible positions. Stereo pair used in our acquisition system.8m). so the number can range between from 200 000 to 400 000. the nearest pedestrian fully seen. We have selected 50 frames taken from urban scenarios with the aforementioned stereo camera and applied the proposed algorithm. κ = 0. On the other hand. 10 different sizes are tested (the smallest 0.. at 5m. L´ pez o 6mm and provides a resolution of 640 × 480 pixel (the figures in the paper show the right sensor image). and the camera reconstruction software provides 3D information until 50m.. it is useful as a reference to evaluate the benefits of our proposal.68. If a scale variation is assumed to be 1.5 and λ = 0.204 David Geronimo..75 × 1. The HFOV is 43◦ and the VFOV is 33◦ .075m. is about 140 × 280 pixels. the number of windows is over 100 000. As introduced in Sect.25. we have used the non-uniform scheme with τ = 90 sampling points. 1. The scanning in the XC axis is made in XC = {−10. In the case of the scanning. Sappa and Antonio M. The algorithm selects about 50 000 windows. . For each selected window. one of the most used candidate generation methods is sliding window..95 × 1. 10}m with a stride of 0. which coincides with the parameters described in Sect.

Figure 9. Experimental results. The left column shows the original real urban frames in which the proposed algorithm is applied. The middle column corresponds to the final windows after the filtering step. The right column shows the number of windows generated after the scanning (Gen) and after the filtering (Final). In order to enhance the visualization, the different scales tested for each sampling point are not shown, so just one candidate per point was drawn.

tion of about 75 − 90% with respect to the sliding window, depending on the stride of this latter. Then, we apply the filtering stage with a cell size of σ = 0.2 and χ = 2000, reducing again a 90% the number of candidates. This represents a reduction of 97 − 99% compared to the sliding window. Figure 9 illustrates the results in six of the frames used to test the algorithm. As can be seen, the pedestrians in the scenario are correctly selected as candidates, while other freespace areas are discarded to be classified. In addition, attending to the results, the number of false negatives is marginal.

7. Conclusions

We have presented a three-stage candidate generation algorithm to be used in the foreground segmentation module of a PPS. The stages consist of road surface estimation, road scanning and candidate filtering. Experimental results demonstrate that the number of candidates to be sent to the classifier can be reduced by 97 − 99% compared to the typical sliding window approach, which is a key factor for the whole system performance, while minimizing the number of false negatives to around 0%. Future work will be focused on the research of algorithms to fuse the cues used to select the candidates, which can potentially improve the proposed pipeline process.

Acknowledgements

The authors would like to thank Mohammad Rouhani for his ideas on the road scanning section. This work was supported by the Spanish Ministry of Education and Science under project TRA2007-62526/AUT and research programme Consolider Ingenio 2010: MIPRCV (CSD2007-00018), and by the Catalan Government under project CTP 2008 ITT 00001. David Gerónimo was supported by Spanish Ministry of Education and Science and European Social Fund grant BES-2005-8864.

References

[1] M. Bertozzi, A. Broggi, R. Chapuis, F. Chausse, A. Fascioli, and A. Tibaldi. Shape-based pedestrian detection and localization. In Proc. of the IEEE International Conference on Intelligent Transportation Systems, pages 328–333, Shangai, China, 2003.

[2] C.-Y. Chan and F. Bu. Literature review of pedestrian detection technologies and sensor survey. Technical report, Institute of Transportation Studies, Univ. of California at Berkeley, 2005.

[3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 886–893, San Diego, CA, USA, 2005.

[4] Y. Fang, K. Yamada, Y. Ninomiya, B. Horn, and I. Masaki. A shape-independent method for pedestrian detection with far-infrared images. IEEE Trans. on Vehicular Technology, 53(6):1679–1697, 2004.

[5] M. Fischler and R. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, June 1981.

[6] United Nations Economic Commission for Europe. Statistics of road traffic accidents in Europe and North America. Geneva, Switzerland, 2005.

[7] D. Gavrila, J. Giebel, and S. Munder. Vision-based pedestrian detection: The PROTECTOR system. In Proc. of the IEEE Intelligent Vehicles Symposium, pages 13–18, Parma, Italy, 2004.

[8] D. Gerónimo, A. M. López, A. D. Sappa, and T. Graf. Survey of pedestrian detection for advanced driver assistance systems. In IEEE Transactions on Pattern Analysis and Machine Intelligence (in press), 2009.

[9] R. Labayrade and D. Aubert. A single framework for vehicle roll, pitch, yaw estimation and obstacles detection by stereovision. In Proc. of the IEEE Intelligent Vehicles Symposium, pages 31–36, Columbus, OH, USA, June 2003.

[10] C. Papageorgiou and T. Poggio. A trainable system for object detection. International Journal on Computer Vision, 38(1):15–33, 2000.

[11] M. Peden, R. Scurfield, D. Sleet, D. Mohan, A. Hyder, E. Jarawan, and C. Mathers. World report on road traffic injury prevention. World Health Organization, Geneva, Switzerland, 2004.

[12] P. Rousseeuw and A. Leroy. Robust Regression and Outlier Detection. John Wiley & Sons, New York, 1987.

H. [14] P. Tanahashi. Jones. Rapid object detection using a boosted cascade of simple features. Wang. Niwa. Viola and M. 2008. Hirayu. and K. 9(3):476–490. F. An o o efficient approach to onboard stereo vision system pose estimation. Kauai Marriot. USA. Ger´ nimo. D. USA. Angel D. on Intelligent Transportation Systems. In Proc. Y. and A. pages 663–669. HI. [15] C. December 2001. L´ pez o [13] A. of the IEEE Conference on Computer Vision and Pattern Recognition. Sappa. Comparison of local plane fitting methods for range data. HW. L´ pez. . IEEE Trans. Yamamoto. In Proc. Ponsa.D. D. pages 511–518. of the IEEE Conference on Computer Vision and Pattern Recognition. 2001. Kauai. Dornaika.208 David Geronimo. H. Sappa and Antonio M.

In: Binocular Vision, Editors: J. McCoun et al., pp. 209-246
ISBN 978-1-60876-547-8, c 2010 Nova Science Publishers, Inc.

Chapter 10

DEVELOPMENT OF SACCADE CONTROL

Burkhart Fischer
Optomotor Lab, Univ. of Freiburg, Freiburg, Germany

Abstract

This chapter describes the development of eye movement control. We will consider, however, only those aspects of eye movements that are important for reading: the stability of fixation and the control of saccades (fast eye movements from one object of interest to another). The saccadic reflex and the control of saccades by voluntary conscious decision, as well as their role in the optomotor cycle, will be explained on the basis of reaction times and neurophysiological evidence. The age curves of the different variables show that the development of the voluntary component of saccade control lasts until adulthood. The diagnostic methods used in the next part of the book will be explained in this chapter.

1. Introduction

Saccades are fast eye movements. We make them all the time, 3 to 5 in a second. Due to these saccadic eye movements the brain receives 3 to 5 new pictures from the retina each second. Without these ongoing sequences of saccades we would not see very much, because of the functional and anatomical structure of the retina: it contains in the middle a small area, called the fovea, where the receptor cells and the other cells of the retinal layers are densely packed. It is only this small part of the retina that allows us to see sharp images. What we want to see in detail and what we want to identify as an object or any other small

visual pattern, e.g. a letter, must be inspected by foveal vision. The solution for this biological demand is a sequence of rapid, relatively small eye movements (saccades). The time periods between saccades are 150 to 300 ms long; they are often called "fixations". Usually, in everyday life, these saccades are made automatically: we do not have to "think" about them and we do not have to generate each of them by a conscious decision. We are not aware of these saccades; they remain unconscious and - most importantly - we do not see the jumps of the retinal image. Somehow, the visual system cooperates with the saccade control centres in a perfect way to differentiate between self-induced movements of the retinal image and externally generated movements. We can also actively and voluntarily move our eyes from one place of interest to another. But, by our own decision, we can also stop the sequence and fixate a certain small object for longer periods of time.

Only under certain somewhat artificial conditions can we see our saccades. Fig. 1 shows an example: a modification of the well known Hermann grid. As long as we look around across this figure, we see black dots appearing spontaneously at the crossing lines between the black squares. The picture looks like scintillations. We notice, however, that there are no such black dots. To avoid these illusory black blinks, we just have to decide to stop making saccades: pick one of the white dots and maintain fixation at it. As long as one can prevent saccades, the black dots remain absent. Each saccade occurring spontaneously will create the illusion of dots again.

An example of a geometric illusion also allows one to become aware of one's own saccadic eye movements. Fig. 3 shows the Z-illusion. One can see that the prolongations of the short ends of the line (upper left and lower right) will not meet the corners at the lower left and upper right. The reader may convince her/himself by using a ruler.
If one succeeds in preventing all saccades for some seconds (up to 10, which is a long time in this context), one will see that the lines meet the corners as they do in reality. One can also see illusory movements; an example is shown by Fig. 2. The movements disappear when we stop making eye movements; the reader may try this by her/himself. The situation is quite similar to our breathing: it works by itself, but we can also control it voluntarily.

Figure 1. Scintillations of the Hermann grid due to eye movements. As long as we look around across this pattern, we see black dots jumping around. Whenever we try to look at one of them, there is no black spot. When fixating the eyes at one white spot for a few seconds, black spots are no longer seen. With still longer fixation most of the white dots disappear also.

It is interesting to note that most visual psychophysical experiments require the subject to fixate one spot of light while being examined on their visual experience with another stimulus a distance away. The reason for this methodological detail is the sharply decreasing visual acuity from the centre to the periphery of the visual field. Unfortunately, when it came to visual illusions, fixation was no longer required when testing the stability of the illusory impression. The result was that the instability of geometrical illusions, shown in the examples above, remained undiscovered until recently [Fischer et al., 2003]. As long as we do not take into account the fact that vision needs saccades, we will not understand the visual system. Any theory of vision which makes correct predictions but does not include the fast movements of the eyes can hardly be considered a valid theory. In particular, when we talk about reading, eye movements are one key for the

understanding of the reading process. We therefore have to consider the saccade system before we may consider its role in reading. The significance of saccadic eye movements in reading was emphasized also by a reader model resting on the basis of experiments in which eye movements were measured while the subjects were reading text that was manipulated in several physical and linguistic ways [Reichle et al., 2003].

Figure 2. The horizontal movements that one sees when looking at this figure are obviously illusory, because nothing moves in this figure. The illusion disappears when we stop our eye movements by fixating the centre of the figure.

In the following sections we will consider those parts of eye movement control that play an important role in reading. In particular, we will consider the instability of fixation due to unwanted saccades or to unwanted movements of the two eyes in different directions or with different velocities. The other types of eye movements (vestibular ocular compensation, optokinetic nystagmus) will be neglected altogether. In principle, binocular vision is not needed at all for reading. However, imperfections of binocular vision, which remain undetected, may

disturb vision, and consequently reading may be difficult.

Figure 3. Z-Illusion. The figure shows the capital letter Z. The prolongations of the short lines do not seem to hit the corners. The real geometric relations can be seen by using a ruler to draw the prolongations, or they can be seen by fixating the eyes in the middle.

There are hundreds of papers on eye movements in the literature. Saccades have always received special interest [Fischer, 1987]. Here we are interested in fixation and in saccades. Fig. 4 shows the most important anatomical connections that begin in the retina and end up at the eye muscles.

Today it is easy to measure eye movements in human observers. A sufficiently precise method uses infrared light reflection from the eyes. The method has been described in detail elsewhere [Hartnegg and Fischer, 2002]. For the purpose of clinical application a special instrument was developed. It is called ExpressEye and provides the infrared light source, the light sensitive elements, the amplifiers, and the visual stimuli needed for the basic tasks (with minilasers, see below). The front view of the system is shown in Fig. 5. The instrument can deliver the raw eye position data trial by trial during the experiment, detect saccades, and provide a statistical analysis of saccades. The raw data can be stored on the hard disc of a computer for further analysis to obtain the different variables that characterize the performance of the tasks. The data presented here were all collected by using this method. These methods have been described in detail [Fischer et al., 1997].

Figure 4. The figure shows a schematic diagram of the neural system of the control of visually guided saccades and their connections. Assoc = Association Cortex, BS = Brain Stem, FEF = Frontal Eye Field, ITC = Infero-Temporal Cortex, LGN = Lateral Geniculate Nucleus, MST = Medio-Superior-Temporal Cortex, MT = Medio-Temporal Cortex, NC = Nucleus Caudatus, PFC = Prefrontal Cortex, SN = Substantia Nigra, Tectal = Tectum = Superior Colliculus.

2. Fixation and Fixation Stability

It may come as a surprise that a section on eye movements starts by dealing with fixation, i.e. with periods of no eye movements. It has been a problem over many years of eye movement research that fixation was not considered at all as an important active function. The interest was in the moving and not in the resting (fixating) eye. Only direct neurophysiological experiments [Munoz and Wurtz, 1992] and a thorough investigation of the reaction times of saccades [Mayfrank et al., 1986] provided the evidence that fixation and saccade generation are controlled in a mutually antagonistic way, similar to the control of other body muscles. We will see that we can observe movements of the eyes during periods where they were not supposed to move at all. It seems that there was little doubt that almost any subject can follow the instruction "fixate" or

"do not move the eyes". But this is not the case: stability of fixation cannot always be guaranteed by all subjects. This section deals with the results of the corresponding analysis of eye movements.

Figure 5. The front view of the Express Eye designed to measure eye movements. The infrared light emitting diode and the two photocells are located behind and directed to the centre of the eye ball. One sees the screws for the mechanical adjustment in all 3 dimensions in front of each eye.

2.1. Monocular Instability

As pointed out earlier, we have to consider two different aspects of disturbances of fixation. The first aspect are unwanted (or intrusive) saccades, which occur when the subject is instructed to fixate a small fixation point. These are mostly small conjugate saccades that take the fovea from the fixation point and back. This kind of disturbance is called a monocular instability, because when it occurs one sees it in both eyes at exactly the same time and by the same amount of saccade size. The disturbance remains if one closes one eye: it disturbs monocular vision, and it does not disturb binocular vision. This is the reason why it is called a monocular instability. Below we will also explain the binocular instability.

To measure the stability or instability of fixation due to unwanted saccades, we simply count these saccades during a short time period in which the subject is instructed to fixate a small fixation point. Such a period repeatedly occurs in both diagnostic tasks that are used for saccade analysis and that are described in

section 3.2 on page 226.


The number of unwanted saccades counted during this task is used as a measure of monocular fixation instability. For each trial this number is recorded and attributed to this trial. The mean value calculated over all trials will serve as a measure. The ideal value is zero for each individual trial, and therefore the ideally fixating subject will also receive zero as a mean value. Fig. 6 shows the mean values of the number of intrusive (unwanted) saccades per trial as a function of age. While children at the age of 7 produce one intrusive saccade every 2 or 3 trials, adults around 25 years of age produce one intrusive saccade every 10 trials. At higher ages the number of intrusive saccades increases again. Of course, not every intrusive saccade leads to an interruption of vision, and therefore one can live with a number of them without problems. But if the number of intrusive saccades is too high, visual problems may occur.
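The bookkeeping just described is simple enough to sketch. The following Python fragment is an illustrative reconstruction (not the ExpressEye analysis software): it assumes a saccade detector has already produced, for each fixation trial, the number of intrusive saccades.

```python
def monocular_instability(saccades_per_trial):
    """Mean number of intrusive saccades per fixation trial.

    saccades_per_trial holds, for each trial, the number of unwanted
    saccades detected while the subject was instructed to fixate.
    The ideal value is 0.0 for a perfectly fixating subject.
    """
    if not saccades_per_trial:
        raise ValueError("no trials recorded")
    return sum(saccades_per_trial) / len(saccades_per_trial)

# A 7-year-old producing one intrusive saccade every 2 or 3 trials:
child = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
# An adult producing one intrusive saccade every 10 trials:
adult = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
print(monocular_instability(child))  # 0.4
print(monocular_instability(adult))  # 0.1
```

Averaged over the 200 trials of a full session, this single number is the quantity plotted against age in Fig. 6.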

Figure 6. The curve shows the age development of the number of unwanted (intrusive) saccades per trial. The ideal value would be zero.

When we measure the monocular instability by detecting unwanted saccades, we should not forget that there may also be another aspect of the strength or weakness of fixation, which cannot be detected by looking at the movements of the eyes during periods of fixation, but rather by looking at the reaction times of saccades that are required when the subject has to disengage from a visible fixation point.




2.2. Binocular Instability

To understand binocular stability we have to remember that the two eyes must be in register in order for the brain to "see" only one image, even though each eye delivers its own image. This kind of convergence of the lines of sight of the two eyes at one object is achieved by the oculomotor system. We call it motor fusion. However, even with ideal motor fusion, the two images of the two eyes will be different, because they look at a single three-dimensional object from slightly different angles. The process of perceiving only one object in its three dimensions (stereo vision) is called perceptual fusion, or stereopsis. When we talk about stereo vision (stereopsis) we mean fine stereopsis, i.e. single three-dimensional vision of objects. It is clear that we need both eyes for this kind of stereopsis. However, we also have three-dimensional vision with one eye only. The famous Necker cube shown in Fig. 7 is one of the best known examples. From the simple line drawing our brain constructs a three-dimensional object. Close one eye, and the percept of the cube does not change at all. This type of three-dimensional spatial vision does not need both eyes: the brain constructs a three-dimensional space within which we see objects. In order to guarantee stable stereopsis, the two eyes must be brought into register and they have to stay in register for some time. This means that the eyes are not supposed to move independently from each other during a period of fixation of a small light spot. By recording the movements of both eyes simultaneously one has a chance to test the quality of the stability of the motor aspect of binocular vision. Fig. 8 illustrates the methods for determining an index of binocular stability. Two trials from the same child are depicted. In the upper trial the left eye shows stable fixation before and after the saccade. The right eye, however, converges after the saccade, producing a period of non-zero relative velocity.
In the lower case, both eyes produce instability after the saccades. The example shows that the instability is sometimes produced by one eye only, or by both eyes simultaneously. Often it is caused in some trials by one eye, and in other trials by the other eye. Extreme dominance of one eye in producing the instability was rarely seen (see below). In the example of Fig. 8 the index of binocular instability was 22%. This means that the two eyes were moving at different velocities during 22% of the analysed time frame. To characterize a subject's binocular stability as a whole,



Figure 7. The figure shows the famous Necker cube. Note that one sees a three-dimensional object even though the lines are not correctly connected. The percept of a three-dimensional cube remains even when we close one eye. Note that the lines do not really meet at the corners of the cube; yet our perception is stable against such disturbances and the impression of a cube is maintained.

the percentage of trials in which this index exceeded 15% was used. The ideal observer would be assigned zero; the worst case would be assigned a value of 100%. Fig. 9 shows the data of binocular instability of a single subject. The upper left panel depicts the frequency of occurrence of the percentages of time during which the eyes were not in register. The upper right panel depicts the distribution of the relative velocity of the two eyes. The scatter plot in the lower left panel displays the correlation between these variables; ideally, all data points should fall in the neighbourhood of zero. The lower right panel depicts the time course of the variable (percent time outside the limits), showing the single values as they were obtained trial by trial, from trial 1 to trial 200. This panel allows one to see whether or not fatigue has an influence on the binocular stability. When the values of the binocular stability were compared among each other, several aspects became evident: (i) Within a single subject the values assigned to the trials can be very different. Almost perfect trials may be followed by trials with long periods of instability. This means that the subject was not completely



Figure 8. The figure illustrates the methods for determining an index of binocular stability by analysing the relative velocity of the two eyes. Time runs horizontally. At the time of stimulus onset the subject was required to make a saccade. Up means right, down means left. Two trials from the same child are depicted. In the upper case the left eye shows stable fixation before and after the saccade. The right eye, however, converges after the saccade producing a period of non-zero relative velocity. In the lower case, both eyes produce instability after the saccades. For details see text.

unable to maintain the line of gaze for both eyes, but from time to time the eyes drifted against each other. (ii) There was a large interindividual scatter of the mean values, even within a single age group. (iii) Even among the adult subjects, large amounts of instability were observed. (iv) The test-retest reliability was reduced by effects of fatigue or of the general awareness of the subjects.

Fig. 10 shows the age development of the binocular instability, using data from the prosaccade task with overlap conditions. At the beginning of school, large values but also small values were obtained. There was a clear tendency towards smaller values until adult age.

Figure 9. The figure shows the data of binocular instability of a single subject. For details see text.

However, the ideal value of zero is not reached at any age. This means that somehow small, slow movements of the two eyes in different directions during short periods of time are well tolerated by the visual system. In other words: there are subjects with considerably unstable binocular fusion who do not complain about visual problems. Maybe these subjects suppress the "picture" of one eye all the time to avoid double vision, at the price of a loss of fine stereo vision; their vision is monocular. Because this does not create too much of a problem in everyday life, these subjects do not show up in the eye doctor's practice, and their binocular system is never checked. This situation could be regarded as similar to the case of red-green colour blindness, which may remain undetected throughout life, because the subject has no reason to take tests of colour vision.
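The index of binocular instability described above can be sketched in a few lines. This is a simplified reconstruction, not the original analysis software; the velocity threshold and the sample data are illustrative assumptions, while the 15% limit is the one used in the text.

```python
def binocular_instability_index(left_vel, right_vel, vel_threshold=1.0):
    """Percent of the analysed time frame during which the two eyes
    move at different velocities (relative velocity above threshold).

    left_vel and right_vel are eye velocity samples (e.g. in deg/s)
    taken at the same time points. A return value of 22.0 means the
    eyes were out of register for 22% of the analysed samples.
    """
    assert len(left_vel) == len(right_vel)
    drifting = sum(1 for vl, vr in zip(left_vel, right_vel)
                   if abs(vl - vr) > vel_threshold)
    return 100.0 * drifting / len(left_vel)

def percent_unstable_trials(per_trial_indices, limit=15.0):
    """Percent of trials whose index exceeds the 15% limit: 0 for the
    ideal observer, 100 in the worst case."""
    bad = sum(1 for idx in per_trial_indices if idx > limit)
    return 100.0 * bad / len(per_trial_indices)

# One trial: the right eye drifts during the last half of the samples.
trial_index = binocular_instability_index([0.0, 0.0, 1.5, 2.0],
                                          [0.0, 0.0, 0.0, 0.0])
print(trial_index)  # 50.0
```

Applied to all 200 trials of a session, the second function yields the per-subject value plotted against age in Fig. 10.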

Figure 10. The percentage of trials in which the two eyes were moving relative to each other (in more than 15% of the analysis time) is shown as a function of age.

2.3. Eye Dominance in Binocular Instability

Since the two eyes send two different pictures to the brain, we do not perceive both of them. It is often forgotten that the images of both eyes are present in our visual system; usually, however, the image of one of the two eyes must be prevented from automatically reaching consciousness. Only those parts that fall on corresponding points of the two retinae form one single picture in the brain. We can easily see that this part covers only those objects that are at about the same distance from the eyes as the object that we are just fixating with both eyes. This is true for most of the visual scene we see. Fixating a near point and attending to an object further away leads to double vision of the object. The reader may try this by her/himself, using the thumb of one hand as a near point and the thumb of the other hand as a far point. If, however, the image of one eye is permanently suppressed, fine stereopsis is not possible. Because of the necessity to suppress the information from one eye most of the time, it has been speculated that each subject selects one eye as the dominant eye (similar to the selection of one hand as the dominant hand). Fig. 11 shows the distribution of the differences between the right eye values and the left eye values of the index of binocular instability. The mean value is not significantly different from zero. But one sees that in a few cases

clear dominances are obtained (8 subjects scored values far to the left, 3 far to the right).

Figure 11. The distribution of the differences between the right eye and the left eye values of binocular instability.

2.4. Independence of Mono- and Bino-Fixation Instability

In principle, the two types of instability may have the same reason: a weak fixation system allows all kinds of unwanted eye movements, including unwanted saccades and unwanted drifts of one or both eyes. In this case one should see high correlations between the two variables describing these types of instability. Fig. 12 shows the scatter plots of the binocular versus the monocular instability for two age groups. Because both variables depend on age, we analyse the data within restricted age groups. The correlation coefficients were only 0.22 for the younger subjects (left side) and 0.21 for the older group (right side). Both correlations failed to reach a significance level of 1%. This means that the properties assessed by the two measures of fixation instability are independent from each other and different in nature. When we look at the data of dyslexic children, we will have many more data that will support the independence of these two aspects of fixation instability. Also, we will see later that a monocular training improves the binocular instability but not the monocular instability.
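The correlation check behind these numbers can be sketched in pure Python. The helper below is a standard Pearson coefficient; the subject values are made up for illustration, whereas the coefficients of 0.22 and 0.21 quoted in the text came from the real age groups.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monocular vs. binocular instability values for a few
# subjects of one age group; a coefficient near zero, as in the text,
# indicates that the two measures are largely independent.
mono = [0.1, 0.4, 0.2, 0.5, 0.3]
bino = [20.0, 5.0, 30.0, 12.0, 25.0]
print(pearson_r(mono, bino))
```

With real data one would, in addition, test the coefficient against the 1% significance level, as was done in the text.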

Figure 12. Scatter plot of the binocular versus the monocular instability obtained from overlap prosaccade trials. The left panel depicts the data from subjects between 7 and 17 years of age (N=129); the right panel shows the data from older subjects, 22 to 45 years of age (N=97). No correlations can be seen.

3. Development of Saccade Control

In the last section we have considered the stability of fixation as an important condition for perfect vision. Earlier, we have mentioned that saccades are necessary for vision. This might sound like a contradiction. The real requirement is that one should be able to generate sequences of saccades and fixations without an intrusion of unwanted saccades and without losing the convergence of the two eyes when they are in register for a given distance. Therefore, both components of gaze control should function normally.

3.1. The Optomotor Cycle and the Components of Saccade Control

The control of saccades has been investigated for about 40 years, and still we do not understand the system completely. This section will show that saccade control has to be subdivided into subfunctions described by different variables. We have to find out first what these subfunctions are and how they can be assessed. The specialization of visual scientists and oculomotorists has long prevented these two closely related fields from being investigated by corresponding combined research projects. The oculomotor research groups were interested in the eye movement as a move-

ment: their interest begins when the eyes begin to move, and it stops when the eyes stop to move. The visual groups, on the other hand, were interested in time periods when the eyes do not move. They required their subjects to fixate a small spot while being tested for their visual functions. Only when the interest was concentrated on the time just before the movements was there a chance to learn more about the coordination of saccades and visual processes.

The time before a saccade is easily defined by the reaction time: one asks a subject to maintain fixation at one spot of light straight ahead and to make a fast eye movement to another spot of light as soon as it appears. Under these conditions the reaction time is in the order of 200 ms. This is a value which one can find in a student handbook. However, there were several problems. The first was: why is this time so long? While this question was there from the beginning, there was no answer until 1983/84, when the express saccade was discovered in monkeys [Fischer and Boch, 1983] and in human observers [Fischer and Ramsperger, 1984]. The express saccade is the reflex movement to a suddenly presented light stimulus after an extremely short reaction time (70-80 ms in monkeys and 100-120 ms in human observers). The reflex needs an intact superior colliculus [Schiller et al., 1987].

It became clear that the shortest reaction time was 100 ms (not 200 ms), and this was much easier to explain by the time the nerve impulses need to be generated in the retina (20 ms), to travel to the cortex (10 ms), to the centres in the brain stem (10 ms), and finally to the eye muscles (15 ms). Another 5 ms elapse before the eye begins to move. A much shorter time remains that can be attributed to a central computation time to find the correct size of the saccade to be programmed. It was evident from these observations that there was not only one reaction time with a (large and unexplained) scatter. Rather, the reaction time spectrum indicated that there must be at least 3 different presaccadic processes that determine the beginning of a saccade, each taking its own time in a serial way. Fig. 17 shows in its lower part a distribution of saccadic reaction times. It exhibits 3 modes: one at about 100 ms, the next at about 150 ms, and the third at about 200 ms, each with a certain amount of scatter [Fischer et al., 1995]. Depending on how many of the presaccadic processes are already completed before the occurrence of the target stimulus, the reaction time can take one out of three values. One has to know at this point that saccades are pre-programmed movements: during the last 80 ms before a saccade actually starts, one cannot

1985] From all these consideration became clear. when saccades were made [Munoz and Wurtz.Development of Saccade Control 225 change anything anymore. What could have been found much earlier. Most important for the understanding of the relation between saccades and cognitive processes is the finding that there is a component in saccade control that relies on an intact frontal lobe [Guitton et al. The next problem was: what is it that keeps the eyes from producing saccades all the time? Or. Most of them have been summarized and discussed earlier [Fischer and Weber. that we experience as one unique action. became clear only after the neuroscientists began to think in very small steps: each process. The figure shows the functional principle of the cooperation of the 3 components of eye movement control. 1993]. that sequences of fixations and reflexes form the basis of natural vision. There quite a number of single papers contributing to the solution of the related problems. 1993]. It became clear that the break of fixation and/or allocated attention was a necessary step before a saccade can be generated. must be eventually subdivided into a number of sub-processes. Cognitive Processes Attention Decision Reflex Fixation Stop Go Figure 13. the other way around: what is it that enables us to fixate an object on purpose? The answer came from observations of cells that were active during time periods of no eye movements and that were inhibited. .

Fig. 13 shows a scheme which summarizes and takes into account the different findings: the stop-function by fixation alternates with the reflex, the go-function. These two together build up a stop-and-go traffic of fixations and saccades. What remained open was the question of how it is possible to interrupt this automatic cycling. The answer came from observations of the frontal lobe functions: patients who had lost parts of their frontal lobe at one side were unable to suppress the reflex in a simple task, called the antisaccade task [Guitton et al., 1985]. This task requires the subject to make a saccade to one side when the stimulus is presented at the opposite side. The task became very popular during recent years, but it was used already many years ago [Hallet, 1978].

3.2. Methods and Definition of Variables

The fundamental aspects of saccade control as described by the optomotor cycle have been discovered by using two fundamental tasks, which are surprisingly similar but give insight into different aspects of the optomotor cycle. They have been used to quantitatively measure the state of the system of saccade control. The two tasks are called the overlap prosaccade task and the gap antisaccade task. The words pro and anti in their names refer to the instructions that the subject is given. The words overlap and gap describe the timing of the presentation of the fixation point. We describe these methods and define the variables first. Then we will see some of the results obtained by these methods.

Fig. 14 shows the sequence of frames for gap and for overlap conditions. In both tasks a small light stimulus is shown, which the subject is asked to fixate. This stimulus is called the fixation point. In overlap trials a new stimulus, the target stimulus, is added left or right of the fixation point. The subject is asked to make a saccade to this new stimulus as soon as it appears. Both the fixation point and the target remain visible throughout the rest of the trial: they overlap in time. This overlap condition and the task to look towards ('pro') the stimulus explain the complete name of the task: overlap prosaccade task.

The gap condition differs from the overlap condition in only one aspect: the fixation point is extinguished 200 ms before the target stimulus is presented. The time span from extinguishing the fixation point to the onset of the new target stimulus is called the gap. In addition to this physical difference, the instruction for the subject is also changed: the subject is required to make a saccade in the

direction opposite ('anti') to the stimulus: when the stimulus appears left, the subject shall look to the right, and vice versa. Therefore the complete name of this task is: gap antisaccade task.

Figure 14. The figure shows the sequence of frames for overlap and for gap conditions. Time runs from left to right. The fixation point is shown by the thin black line; because its presentation is identical in both conditions, it is drawn only once in the middle. The stimulus is indicated by the thick black line. In the case of an overlap trial the fixation point remains visible; in the case of a gap trial the fixation point is extinguished 200 ms before. The horizontal arrows indicate in which direction the saccade should be made: to the stimulus in the prosaccade task, in the direction opposite to the stimulus in the antisaccade task.

Now we can define variables to describe the state of the saccade control system. The presence of a fixation point should prevent the occurrence of too many reflexes; the appearance of a new stimulus should allow a timely generation of a saccadic eye movement. The prosaccade task with overlap condition allows one to find too slow or too fast reaction times and to measure their scatter. The antisaccade task with gap condition challenges the fixation system to maintain fixation and the ability to generate a saccade against the direction of the reflex. First of all, one has to keep in mind that these variables may be different for left versus right stimulation. Because left/right directed saccades are generated by the right/left hemisphere, side differences should not be much of a surprise. However, for the general definition of the variables to be used in the diagnosis, the side differences do not need to be considered at this point. Fig. 15 illustrates the definition of the variables described below. In addition, the figure shows schematically traces

Figure 15. Stimulus and eye movement events. The schematic drawing of eye movement traces illustrates the definition of the different variables describing the performance of the prosaccade task with overlap conditions (PRO - OVERLAP: % express, SRT) and the antisaccade task with gap conditions (ANTI - GAP: gap, Pro-SRT, % errors, CRT, % corrections, Anti-SRT). The upper trace shows a trace from an overlap trial: usually one saccade is made, and it contributes its reaction time, SRT. Below one sees two examples of traces from gap antisaccade trials. One shows a correct antisaccade, which contributes its reaction time, Anti-SRT. The other trace depicts a trial with a direction error that was corrected a little later: it contributes the reaction time of the error, Pro-SRT, and the correction time, CRT (in case the error was corrected).

While these variables can be taken from every single trial, some other variables are determined by the analysis of the complete set of 200 trials: the percentage of express saccades from all overlap trials, the percentage of errors from all gap trials, and the percentage of corrections among the errors.

List of variables. From the overlap prosaccade task the following mean values and their scatter are used:

• SRT: saccadic reaction time in ms from the onset of the target to the beginning of the saccade
• % expr: the percentage of express saccades, i.e. reaction times between 80 and 130 ms

From the gap antisaccade task:

• A-SRT: the reaction time of the correct antisaccades
• Pro-SRT: the reaction time of the errors
• CRT: the correction time
• %err: the percentage of errors
• %corr: the percentage of corrections among the errors

Note that the percentage of trials in which the subject missed to reach the opposite side within the time limit of the trial (700 ms from stimulus presentation) can be calculated as

pmis = perr · (100 − pcorr)/100

The latter variable combines the error rate and the correction rate.

3.3. Prosaccades and Reflexes

The optomotor system has a number of reflexes for automatic reactions to different physical stimulations. The best known reflex is the vestibular-ocular reflex, which compensates head or body movements to stabilize the direction of gaze on a fixated object: the eyes move smoothly in the direction opposite to the head movement in order to keep the currently fixated object in the fovea. Similarly, it is possible to stabilize the image of a moving object by the optokinetic reflex. Both reflexes have little or nothing to do with reading.

The saccadic reflex is a reaction of the eyes to a suddenly appearing light stimulus. It was discovered only in 1983/84 by analysing the reaction times of saccades in a situation where the fixation point was extinguished shortly (200 ms gap) before a new target stimulus was presented. It was known at that time that under these gap conditions the reaction times were considerably shorter as compared to those obtained under overlap conditions [Saslow, 1967]. When the gap experiment of Saslow was repeated years later, it became evident that among the well known reactions around 150 ms after target onset there was a separate group of extremely short reactions at about 100 ms: the express saccades [Fischer and Ramsperger, 1984].

Fig. 16 shows the distribution of reaction times from a single subject. One clearly sees two peaks: the first consists of express saccades, the second represents the fast regular saccades. The separate peaks in the distributions indicate that saccades can be generated at distinctly different reaction times, depending on the preparatory processes between target onset and the beginning of the saccade. The consistent shortening of reaction time by introducing a temporal gap between fixation point offset and target onset was surprising, because the role of fixation and of the fixation point as a visual stimulus was unknown at the time.

Figure 16. The figure shows the distributions of reaction times from a single subject. One clearly sees two peaks: the first represents the express saccades, the second the fast regular saccades.

Fig. 17 shows the difference in the distributions of reaction times when gap and overlap trials were used. In the gap condition there is time during the gap to complete one or even two pre-saccadic processes; therefore the chances of generating express saccades are high. In the overlap condition it is the target stimulus which triggers the preparatory processes, and therefore the chances of express saccades are low. If one leaves the fixation point visible throughout the trial (overlap condition), the reaction times are considerably longer, even longer as compared with the gap = 0 condition (not shown here). The effect is called the gap effect and has been investigated in numerous studies of different research groups all around the world since 1984.

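The multimodal structure described here can be made concrete with a minimal sketch: bin the reaction times into 10 ms classes and keep the bins that are local maxima of the histogram. All reaction times below are invented for illustration; the analyses in this book use the dedicated statistical methods of Gezeck et al. (1997).

```python
from collections import Counter

# Invented reaction times (ms) mimicking a bimodal gap-condition distribution:
# an express peak near 100 ms and a fast-regular peak near 150 ms.
srts = [95, 100, 105, 98, 110, 102, 145, 150, 155, 151, 160, 152, 147]

# Bin into 10 ms classes and keep bins that are local maxima of the histogram.
bins = Counter(10 * (t // 10) for t in srts)
modes = sorted(b for b in bins
               if all(bins[b] >= bins.get(b + d, 0) for d in (-10, 10)))
print(modes)  # one mode per peak of the distribution
```

With these invented data the sketch reports one mode in the express range and one in the fast-regular range; real distributions require proper tests for multimodality rather than raw histogram peaks.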
Figure 17. The figure shows the difference in the distributions of reaction times when gap and overlap trials were used. Note the separate peaks in the distributions.

An overview and a list of publications can be found in an overview article [Fischer and Weber, 1993]. The effect of the gap on the reaction time is strongest if the gap lasts approximately 200 milliseconds. Today it is clear that the main reason for the increase in reaction time under overlap conditions is an inhibitory action of a separate subsystem in the control of eye movements: the fixation system. It is activated by a foveal stimulus, which is being used as a fixation point, and it inhibits the subsystem which generates saccades. If this stimulus is removed early enough, the inhibition is removed by the time the target occurs and a saccade can be generated immediately, i.e. after the shortest possible reaction time. Note that the effect of the gap is not a general reduction of reaction times; rather, the first peak is larger and the third peak is smaller or almost absent, so that the mean value of the total distribution is reduced.

It is also important to remember that there are subjects who produce express saccades under overlap conditions [Fischer et al., 1993]. We will encounter these so called express saccade makers [Biscaldi et al., 1996] again when we consider the eye movements of dyslexic subjects.

At this point we do not have to go through the discussion of whether or not directed visual attention also inhibits or disinhibits the saccade system, depending on whether attention is engaged or disengaged. This issue has been discussed extensively, and still today the different arguments are not finally settled. We only have to keep in mind that the gap condition enables reflex movements to a suddenly presented visual stimulus.

3.4. Antisaccades: Voluntary Saccade Control

It is an everyday experience that we can stop our saccades and that we can direct our centre of gaze to a selected object or location in space on our own decision. These saccades are called voluntary saccades for obvious reasons. From the beginning it will not be a big surprise to learn that there are also different neural subsystems that generate the automatic saccades and the voluntary saccades.

The investigation of voluntary saccades was introduced many years ago [Hallett, 1978]. Hallett instructed his subjects to make saccades to the side opposite to a suddenly presented stimulus. These saccades were called antisaccades, but the oculomotor research community did not pay much attention to them. An early observation of neurologists did not receive much attention either, but turned out to be very important: it was reported that patients who lost a considerable part of their frontal lobe on one side only were unable to generate antisaccades to the side of the lesion, while the generation of antisaccades to the opposite side remained intact [Guitton et al., 1985]. Meanwhile the antisaccade task has become an almost popular "instrument" for diagnosis in neurology and neuropsychology. Reviews have been published and can be consulted by the interested reader [Everling and Fischer, 1998; Munoz and Everling, 2004].

The effect of changing the instruction from "look to the stimulus, when it appears" to "look away from the stimulus" (the anti-instruction) can be seen in Fig. 18.

Figure 18. The figure shows the distributions of saccadic reaction times under overlap (left) and under gap conditions (right). The lower panels show the data from those trials in which the subjects made errors by looking first to the stimulus. Note that with overlap conditions there are virtually no such errors, while with gap conditions a considerable number of errors were made.

The introduction of the gap thus leads to quite a number of errors. The processes preparing the saccades and their execution remain mostly unconscious. Interestingly, subjects often failed to judge their performance: some claimed that they made many errors, but did not make many; others claimed that they made few errors, but made quite many. This indicates that we have little conscious knowledge of what we do with our eyes.

When the variables obtained from an overlap prosaccade task and from a gap antisaccade task were analysed by a factor analysis [Gezeck et al., 1997], it turned out that there were only 2 factors. The first factor contained the variables that describe prosaccades, irrespective of whether they were generated in the prosaccade task or as errors in the antisaccade task. The second factor contained the variables that described the performance of the antisaccade task. But there was one exception: the error rate loaded on both factors. The explanation of this result becomes evident when we remember that the correct performance of the antisaccade task requires 2 steps: suppression of the prosaccade and generation of the antisaccade. The error rate may therefore be high for 2 reasons: (i) the suppression is not strong enough; (ii) the subject has difficulties in looking to the side where there is no target.

The details of these observations finally resulted in the decision to use the 2 tasks described above in order to characterize the functional state of the system of saccade control. The tasks are illustrated in Fig. 14, the definition of the variables by Fig. 15. The procedure of the corresponding analysis of the raw eye movement data has been described in great detail [Fischer et al., 1997]. Today there is already a special instrument and analysis system which allows one to measure the eye movements and to assess the variables.

Earlier studies of antisaccade performance, as summarized recently [Munoz and Everling, 2004], analyse the reaction times in the antisaccade task and the percentage of errors, their mean values and their scatter. Most studies, however, failed to analyse the reaction time of the errors. They also neglected the percentage of corrective saccades and the correction time. We will see below that these variables provide important information about the reasons why errors were made [Fischer and Weber, 1992].

The data obtained from these two tasks are shown in Fig. 19, separately for left and right stimulation. The data were combined from 8 subjects in the age range of 14 to 17 years. The distributions show most of the important aspects of the data, which are not as clear in the data of single subjects. The 3 peaks are seen in both the left and right distributions obtained from the prosaccade task with overlap conditions (upper panels). But they are not quite identical: more express saccades are made to the right stimulus than to the left stimulus. The antisaccades (lower panels) have longer reaction times, and a structure with different modes is missing. We therefore also show the distributions of the reaction times of the errors and the distributions of the correction times of the same subjects in Fig. 20. Test-retest reliability of saccade measures, especially also for measures of antisaccade task performance, is available [Klein and Fischer, 2005].

Figure 19. The figure shows the distributions of reaction times of 8 subjects performing the prosaccade task with overlap conditions (upper panels) and the antisaccade task with gap conditions (lower panels). Panels at the left and right show the data for left and right stimulation, respectively.

We can see that the subjects as a group made quite a number of express saccades in the overlap prosaccade task (upper panels of Fig. 19), indicating that their ability to suppress saccades is limited. The error rate in the gap antisaccade task is 35% at the left and 42% at the right side. The errors (upper panels of Fig. 20) contained also more than 50% express saccades; they were mostly due to a weakness of the fixation system. Of these errors, 87% and 92% were corrected after very short correction times of 131 ms and 129 ms, respectively (lower panels of Fig. 20). This indicates that the subjects have no problem of looking to the opposite side: they do reach the destination, but they get there with a detour, because they could not suppress the saccade to the target.

This reminds us that we have already seen 2 independent factors of instability of fixation: the intrusive saccades, and the binocular instability of slow movements of the two eyes in different directions or with different velocities. Now a third aspect is added by the occurrence of express saccades, in particular when they occur as errors in the antisaccade task. Fixation may also be weak when it does not allow the subject to suppress the errors in the antisaccade task.

Figure 20. The figure shows the reaction times of the errors (upper panels) and the distributions of the correction times (lower panels) of the same subjects as in Fig. 19.

3.5. The Age Curves of Saccade Control

After these considerations and definitions we can look at the age development of the different variables. The data presented here contain many more subjects than an earlier study, which has already shown the development of saccade control with age increasing from 7 to 70 years [Fischer et al., 1997].

Fig. 21 begins with the age curves of the performance of prosaccades with overlap conditions. The left side depicts the age dependence of the reaction times. The reaction times start with about 240 ms at the age of 7 to 8 years. During the next 10 years the reaction times become shorter by about 50 or 60 ms. From the age of 40 years one sees a gradual increase of the reaction times; at about 60 years they reach the level of the 7-year-old children.

Figure 21. The diagrams show the age curves of the performance of prosaccades with overlap conditions (N=425). The left side depicts the age dependence of the reaction times, the right side the age dependence of the percentage of express saccades in the distributions.

One might expect that the occurrence of reflex-like movements (express saccades) is also a function of age, because the reflexes receive more cortical control with increasing age. However, this general aspect of the development may be seen much earlier in life, i.e. during the first year of life. Yet, there is a strong tendency of a reduction of the number of express saccades with increasing age, from a mean value just below 15% to a mean value of about 5%. There are, however, extreme cases of subjects producing quite many express saccades; the large scatter in the data is due to these subjects. The corresponding subjects are called express saccade makers [Biscaldi et al., 1996]. It has been stated that percentages of express saccades above a limit of 30% must be regarded as an exceptional weakness of the fixation system. An extreme case of an express saccade maker is shown in Fig. 22: in this subject the express saccades occur only to the right side. Later in the book we will look at the percentage of express saccades among the prosaccades generated under overlap conditions, because we want to be prepared for the diagnosis of saccade control in the following parts of the book, when large amounts of express saccades are made by single subjects of certain ages.

Figure 22. The figure shows the distributions of saccadic reaction times from a single subject who performed the prosaccade task with overlap condition. Saccades to the left side are depicted by the left panel, those to the right side by the right panel. Note the large peak of express saccades to the right as compared with no express saccades to the left.

Fig. 23 shows the age curves for the variables that describe the performance of the antisaccade task. The reaction times of the correct antisaccades are depicted by the upper left panel. The mean value of the youngest group, at about 340 ms, is 100 ms slower than that of their prosaccades. As in the case of the prosaccades, a reduction of the reaction times is obtained within the next 10 years. However, when compared with the prosaccades this reduction is two times as big: the reaction times are reduced by about 100 ms.

The percentage of errors (middle left panel) reaches almost 80% for the youngest group. The error rate decreases down to about 20%, stays at this level and increases after the age of about 40 years. The bottom left panel depicts the correction rate. Out of the 80% errors, the youngest group was able to correct the primary error in only 40% of cases. This means that they are almost completely unable to do the task in one step. The correction rate increases until the age of 20 to above 80%, stays at this level and decreases again after the age of 50 years. Combining the two measures of error production and correction results in the age curve of the percentage of uncorrected errors (misses), shown by the lower right panel of Fig. 23.

Figure 23. The figure shows the age development of the performance of the antisaccade task with gap conditions (N=328).

The children of the youngest group reached the opposite side in only half of the trials. The period of the "best" values is between 20 and 40 years of age.

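The miss rate discussed here combines the error rate and the correction rate through the formula pmis = perr · (100 − pcorr)/100 given in Section 3.2. A toy calculation with invented trial outcomes (the tuple layout is illustrative, not the book's data format):

```python
# Invented gap-antisaccade trial outcomes: (error_made, error_corrected).
trials = [(False, None), (True, True), (True, False), (True, True),
          (False, None), (True, True), (False, None), (True, False)]

n_err = sum(err for err, _ in trials)                        # number of errors
p_err = 100 * n_err / len(trials)                            # % errors
p_corr = 100 * sum(bool(c) for e, c in trials if e) / n_err  # % corrected errors
p_mis = p_err * (100 - p_corr) / 100                         # % misses (uncorrected)
print(p_err, p_corr, p_mis)
```

With these 8 invented trials, 2 of 8 errors remain uncorrected, and the formula indeed yields a miss rate of 25%, matching a direct count of uncorrected errors.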
During the following 10 years the rate of misses drops down to almost zero: the adult subjects in the age range between 20 and 40 years produce 20% errors, but they correct almost all of them. During the years after the age of 60 the subjects begin to have more difficulties in correcting their increasing rate of errors.

Finally, we look at the reaction times of the errors, shown by the upper right panel. The tendency is that all error reaction times were shorter by about the same amount of 50 ms over the complete range of ages covered. The age curve reflects the curve for the reaction time of the prosaccades generated in the overlap condition.

3.6. Left – Right Asymmetries

The question of hemispheric specialisation is asked for almost any aspect of brain functions. In the case of saccade control it might be argued that, depending on the culture, writing goes from left to right, right to left, or from top to bottom. Therefore we look at the possible asymmetries of the different variables describing saccade control. The differences between left and right variables did not show any systematic age dependence, presumably because the age dependence of the right and the left variables has the same development. Therefore we look at the total distribution of the difference values for all ages. Fig. 24 shows these distributions of differences for 6 variables. It shows that asymmetries occur about as often in favour of the right side as they occur in favour of the left side. However, this does not indicate that there are no asymmetries.

The upper left panel depicts the differences in the reaction time of the prosaccades with overlap conditions. The tendency is that reaction times are somewhat shorter for right directed saccades as compared with left directed saccades. This small difference may be related to the fact that the German language is written from left to right (all data in this book come from native German speakers). The distribution looks rather symmetrical, and in fact the deviation of the mean value is only 6 ms and not significantly different from zero. The standard deviation of 30 ms to either side indicates that in 32% of the cases the reaction times differ by 30 ms or more.

The upper right panel depicts the differences between the percentages of express saccades made to the right and to the left. The mean value is -1.1% and not significantly different from zero, but there is a tendency to more express saccades to the right than to the left. The standard deviation of 12% indicates that in 32% of the subjects the difference in the percentage of express saccades between the two sides is larger than 12%.

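The repeated "32%" readings above follow from the normal distribution: roughly 68% of values lie within one standard deviation of the mean, so about 32% lie outside it. A quick check, using the left-right SRT difference values quoted in the text (mean about 6 ms, SD about 30 ms):

```python
from statistics import NormalDist

# Fraction of a normal distribution lying outside one SD of the mean,
# here parameterised with the left-right SRT difference from the text.
d = NormalDist(mu=6, sigma=30)
inside = d.cdf(6 + 30) - d.cdf(6 - 30)   # fraction within one SD of the mean
outside = 1 - inside                     # fraction differing by 30 ms or more
print(round(100 * outside))              # about 32
```

The same reasoning applies unchanged to the 12% express-saccade SD and the other 32% figures below, since only the scale of the distribution differs.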
Figure 24. The figure shows the distributions of the left minus right differences of 6 variables describing saccade control.

Extreme cases can be seen within this relatively large group of normal subjects. An example can be seen in Fig. 22, where the distributions of saccadic reaction times obtained with overlap conditions are shown for left and right directed prosaccades.

The figure demonstrates an extreme case of asymmetry of prosaccades: the plot shows in detail that most saccades to the right occur between 85 ms and 140 ms (these are express saccades), while almost all saccades to the left occur between 130 ms and 170 ms (these are fast regular saccades).

The reaction times of the correct antisaccades in the gap condition show a similar result: one encounters quite a number of subjects with heavy asymmetries (32% with differences of more than 45 ms), but the mean value of 5 ms is statistically not significantly different from zero. The correction times exhibit even stronger asymmetries: in 32% of the subjects the differences are larger than 55 ms. The percentage of errors in the antisaccade task exhibits differences of more than 15% in 32% of the cases, and the differences of the percentage of corrective saccades are larger than 28% in 32% of the subjects.

From the consideration of the asymmetries in saccade control we can conclude that large asymmetries occur in quite many cases. Because the asymmetries in favour of the right and of the left side are about the same in number as well as in size, the mean value of the distribution does not deviate significantly from zero.

3.7. Correlations and Independence

Large numbers of errors in the antisaccade task are often interpreted as a consequence of a weak fixation system. This would imply that many intrusive saccades should be observed in the overlap prosaccade task (poor mono fixation stability) along with many errors in the gap antisaccade task. We can look at the possible correlation between these two measures. Fig. 25 shows the scatter plot of the data obtained from control subjects in the age range of 7 to 13 years. While the correlation coefficient indicates a positive significant correlation, the plot shows in detail that the relation works only in one direction: high values of intrusive saccades occur along with high values of errors, but not vice versa; high values of errors may occur along with low or with high numbers of intrusive saccades.

This means that the reason for many errors in the antisaccade task may be a weak fixation system, but other reasons also exist, such that high error rates may be produced even though the mono fixation stability was high. In other words: even if a subject is able to suppress intrusive saccades while fixating a small spot, he/she may not be able to suppress reflexive saccades to a suddenly presented stimulus. But a subject who is able to suppress the reflexive saccades is also able to suppress intrusive saccades. The analysis of the relationship between errors, error correction, correction time and express saccades can also be used to learn more about fixation and its role in saccade control.

Figure 25. Scatterplot of error rate in the gap antisaccade task and mono fixation instability. High values of intrusive saccades occur along with high values of errors, but not vice versa: high values of errors may occur along with low or with high numbers of intrusive saccades.

Those who produce many errors and many express saccades correct their errors more often and after shorter correction times, in comparison to subjects who also produce many errors but relatively few express saccades. The details are described in the literature [Mokler and Fischer, 1999].

In conclusion from this section we can state that saccade control has indeed 3 main components: fixation (being weak or strong, as indicated by express saccades), reflexive control, and voluntary control. These 3 components work together in the functional form of the optomotor cycle. The functioning of the cycle improves over the years from the age of 7 to adult age and has a strong tendency to deteriorate after the age of 40 years [Fischer et al., 1997].

References

Biscaldi, M., Fischer, B., Stuhr, V. (1996). Human express-saccade makers are impaired at suppressing visually-evoked saccades. J Neurophysiol 76: 199-214

Everling, S., Fischer, B. (1998). The antisaccade: a review of basic research and clinical studies. Neuropsychologia 36: 885-899

Fischer, B. (1987). The preparation of visually guided saccades. Rev Physiol Biochem Pharmacol 106: 1-35

Fischer, B., Biscaldi, M., Gezeck, S. (1997). On the development of voluntary and reflexive components in human saccade generation. Brain Res 754: 285-297

Fischer, B., Boch, R. (1983). Saccadic eye movements after extremely short reaction times in the monkey. Brain Res 260: 21-26

Fischer, B., Breitmeyer, B. (1987). Mechanisms of visual attention revealed by saccadic eye movements. Neuropsychologia 25: 73-83

Fischer, B., da Pos, O., Stürzel, F. (2003). Illusory illusions: The significance of fixation on the perception of geometrical illusions. Perception 32: 1001-1008

Fischer, B., Gezeck, S., Huber, W. (1995). The three-loop-model: A neural network for the generation of saccadic reaction times. Biol Cybern 72: 185-196

Fischer, B., Hartnegg, K., Mokler, A. (2000). Dynamic visual perception of dyslexic children. Perception 29: 523-530

Fischer, B., Ramsperger, E. (1984). Human express saccades: extremely short reaction times of goal directed eye movements. Exp Brain Res 57: 191-195

Fischer, B., Weber, H. (1992). Characteristics of "anti" saccades in man. Exp Brain Res 89: 415-424

Fischer, B., Weber, H. (1993). Express saccades and visual attention. Behavioral and Brain Sciences 16(3): 553-567

Fischer, B., Weber, H., Biscaldi, M., Aiple, F., Otto, P., Stuhr, V. (1993). Separate populations of visually guided saccades in humans: reaction times and amplitudes. Exp Brain Res 92: 528-541

Gezeck, S., Fischer, B., Timmer, J. (1997). Saccadic reaction times: a statistical analysis of multimodal distributions. Vision Res 37: 2119-2131

Guitton, D., Buchtel, H.A., Douglas, R.M. (1985). Frontal lobe lesions in man cause difficulties in suppressing reflexive glances and in generating goal-directed saccades. Exp Brain Res 58: 455-472

Hallett, P.E. (1978). Primary and secondary saccades to goals defined by instructions. Vision Res 18: 1279-1296

Hartnegg, K., Fischer, B. (2002). A turn-key transportable eye-tracking instrument for clinical assessment. Behavior Research Methods, Instruments, & Computers 34: 625-629

Klein, C., Fischer, B. (2005). Instrumental and test-retest reliability of saccadic measures. Biological Psychology 68: 201-213

Mayfrank, L., Mobashery, M., Kimmig, H., Fischer, B. (1986). The role of fixation and visual attention in the occurrence of express saccades in man. Eur Arch Psychiatry Neurol Sci 235: 269-275

Mokler, A., Fischer, B. (1999). The recognition and correction of involuntary saccades in an antisaccade task. Exp Brain Res 125: 511-516

Munoz, D.P., Everling, S. (2004). Look away: the anti-saccade task and the voluntary control of eye movement. Nature Reviews Neuroscience 5: 218-228

Munoz, D.P., Wurtz, R.H. (1992). Role of the rostral superior colliculus in active visual fixation and execution of express saccades. J Neurophysiol 67: 1000-1002

Munoz, D.P., Wurtz, R.H. (1993). Fixation cells in monkey superior colliculus. I. Characteristics of cell discharge. J Neurophysiol 70: 559-575

Reichle, E.D., Rayner, K., Pollatsek, A. (2003). The E-Z-Reader model of eye-movement control in reading: comparison to other models. Behavioral and Brain Sciences 26: 445-526

Saslow, M.G. (1967). Latency for saccadic eye movement. J Opt Soc Am 57: 1030-1033

Schiller, P.H., Sandell, J.H., Maunsell, J.H. (1987). The effect of frontal eye field and superior colliculus lesions on saccadic latencies in the rhesus monkey. J Neurophysiol 57: 1033-1049

In: Binocular Vision
Editors: J. McCoun et al., pp. 247-248
ISBN: 978-1-60876-547-8
© 2010 Nova Science Publishers, Inc.

Short Commentary

OCULAR DOMINANCE

Jonathan S. Pointer
Optometric Research, 4A Market Square, Higham Ferrers, Northamptonshire NN10 8BP, UK

The time has come to consider seriously what has hitherto possibly been regarded as a somewhat narrow proposition, namely, that ocular dominance is best understood by the behaviourally descriptive term 'sighting preference'. By this is meant: the eye that is consciously or unconsciously selected for monocular tasks, under circumstances when only one of a pair can be selected for use. This definition aligns with the first description of the phenomenon recorded four hundred years ago.

Sighting dominance can be regarded as the ocular laterality most analogous to decisions regarding limb choice (e.g. writing hand, ball-kicking foot). An individual's sighting choice appears to be substantially consistent within and between applicable tests, which latter are usually based on simple motor tasks such as pointing, aiming or alignment. Such techniques have proved to be reliable indicators of dominance across populations and within either gender. The ocular laterality thus identified is apparently stable with advancing chronological age, and appears not to show familial traits.

In contrast, indications of ocular dominance from sensory tests, including the binocular viewing of rivalrous stimuli or the recording of functional oculo-visual asymmetries (often of visual acuity), have a tendency to show intra- and inter-test disagreement. Sighting dominance is not reliably associated with limb preference or, for reasons of ocular neuro-anatomy, predicted by cortical laterality: a generalised (uniform) laterality preference of sensory organs and motor limbs in the individual is unlikely.

Theories of ocular dominance have developed and evolved through the twentieth century to acknowledge and accommodate these discrepancies, and to account for the not infrequent disagreement found in the individual between the laterality result obtained using a sighting compared to a sensory test format. Many of these explanations have been at best parsimonious and sometimes in contradiction of the prevailing knowledge of binocular vision, including that of visual suppression. Despite a burgeoning research output over recent decades, the identification of a functional basis for ocular dominance continues to prove elusive: the phenomenon remains a 'demonstrable habit', adopted when a single eye must be chosen for a particular viewing task.

74. 195. xi. 212 competition. 123. 240. 84. 9. 58. 5. 128. 166. 222. 35. 127. 111 coding. 131. 166. 119. 75. 130. 109 Congress. 163. 109. 18. 169. 224 branching. 181. 186. 202. 244 China. 239. 174. 113 candidates. 243. 6. 182. 131 continuity. 191. 166. 88 birds. ix. 103. 237.

240. 177 251 E Education. 113. 184. 223 convex. 155. 185. 61 critical period. 168. 85. 103. 121 Index depression. 83 demand. 175. 174. 163. 147 crocodile. 180. 164 emotional reactions. 178 D damping. xi. 126. 172. 17 correlations. 147 Cyprus. 185 emotional. 196 Cybernetics. 190. 77. 5. 232. 157. 195. 248 DSM-IV. 147. 183. 71. 155. 26. 180. 149. 196. 207. 110 danger. 63. 88. 87. 161. 222. 226 cycloplegic refraction. 156. 33. 181. 35. 150. 224 corneal opacities. 164 countermeasures. 32. 109 emission. 77. 130 electromagnetic wave. 104. 172. 171. 206. 163. 233. x. 67 CPU. 217. 156. 177. 178. 108. 150 definition. 139. 242 correlation function. 148 depth perception. 189. 186. 6. 29. 30. 80. 130 embryogenesis. 113. 140. 78. 165. viii. 222. 3. 160 distribution. 113 emotion. 158. 117 deficiency. 40. 149 displacement. 141. 196 CRC. 104 dissociation. 148 correlation. 222. 169. 73. 164. 224. 94. 74. 160. 196. 224 cortical processing. 79. 131 cross-cultural. 162. 133 cycles. x. 144. 181. ix. 173. 66. 64. 69. 191. 24. 217. 179. 176 duration. 228. 151. 34. 103. 240 cumulative distribution function. 176. 140. 171. 157. 157. 247 decoupling. xi. 36. 169. 174. 186. 180. 121. 88. 182 cross-talk. 129 electrodes. 114. xi. 142. 146. 119. 113 electromagnetic. 153. 167. 222. 168. vii. 16. 165. 218. 165 data set. 152. 185. 247 deformation. 117. 16. 241. 37 coordination. 229 cruise missiles. 109 CRT. 163. 191. 182 elaboration. 68. 240 diodes. 143. 199 dependent variable. 70. 172. 177. 141.convergence. 184. 186 diseases. 228. 179. 96. 107 disorder. 75. 164 doctors. 128. 143 cues. 139. xi. 116. 81. 210 density. 220 dominance. 191. 162. 149. 161. 3 defects. 182. 159. 74 discrimination. 176. viii. ix. 140. vii. 126. 174. 22. 182. xi. 147. 112. 149. 203. 196. 206 culture. 117 cycling. 126 crystalline. 176. 61. 108. 161. 165. 76. 164. 128. 90. 187. 173. 85. 174. 247. 184. 206 EEG. x. 187 emotional information. 234. 174. 111. 59. 104 diplopia. 132 covering. xi. 172 depressed. 84. 144. 183. 221. 
173. 208 deviation. 177. xii. 112. 110. 165. 180. 148. 242 division. 130 emitters. 176. xi. 184. 72. 75. 107. 127. 227. 82. 175. 69. 67. 170 . 179. 72. 218. 183. 31. 242 correlation coefficient. 82. 21. 159 deprivation. 6 deficit. 148. 182. 2. 99. 119. 162. 180. 176. 163. 163. 158. 17. 2 decisions. 147 discomfort. ix. 152 detection. 192. 223 cortex. 149. 230.

115. 148. 149. 232. 68 execution. 119. 140. 186 gratings. 123. 208 Germany. 48. x. x. 187 factor analysis. 60. 218. 192. 4. 116. 148. 231. 132. ix. 121 epipolar geometry. 174. 64. 192. 212. 227. 126. 16. 125. 71. 229. 182. 158. 150. 223. 128. 215. 230. 212. 65. 73. 113. 206. 143. 114. 220 G Gaussian. 127. 66. 5. 187 extraction. 191 EOG. 50. 125. 107. 161. 75. 145. 237. 226. 112. 178. 111 fusion. 233. vii. 178 grouping. 132. 214. 216. 164. 185 groups. 69. 179. 65. 131 frontal lobe. ix. 174. 127. 186 emotional valence. 109. 58. 215. 144. 122. 211. 213. 190 evolution. 216. 207 Ger. 218. 126. 166 environmental conditions. 209. 148 goals. 120. 168. 73 false negative. 110. 210. 147. 206 false positive. 72. 107. viii. 3. 13. 214. 144. 141. 245 eyes. 111. 224. 206. 224. 130. 112. 123. 144. 185 fundus. 219 . 141. xii. 156. 232. 3. 73. 219. 109. 242. 151. 178 emotions. 207. 111. 72. 59. 227. 156. 176. 224. 222. 222. 21. 225. 17 esotropia. 229 Fox. 72. 209. 79. 206 European Union. 179. 108. 223. 148. 111 F facial expression. 244. 68. 227. 85 gender. 179. 219. 131. 210. 203. ix. 247 generation. 6 epipolar line. 117. 175. 63. 142. 209. 74. 225. 67. 214. 184. 119 Euro.252 emotional stimuli. 183 glasses. 171 encoding. 180. 204. 220. 76. 245 exotropia. 65 feedback. 185. 247 family. 132 grades. 226. 78. 110. 1. 203 familial. 191. 223. 234. 181. 211. 109. 186 February. 245 flight. 113. 163 flow. 150. 4. 141. 175. 109. 131. 161. 129. 171. 224. 117. 150. 117. 139. 66. 40. 179. 210. 68. 171. xi. 29. 163. 143. 211. 3. 40 estimator. 129. 244. 180 females. 108 environment. 215. 3. 127. vii. 225. 69. 144. 38. 230. 143. 245 government. 107. 143. 78. 225. 223. 234. 114 fovea. 217. 213. 232 functional magnetic resonance imaging. 221. 209 Gestalt. 191. 217. 214. 111. 110. 243. 79 fatigue. 215. 15. 66. 152 estimating. 141. 175 explicit knowledge. 113. 108. 148. 67 filters. 213. 167. 166. 165. 244 genetic factors. 148 experimental design. 131 eye movement. 178. 235. 183 FPGA. xii. 
59 estimation process. 228. 222. 184. 222. 193 fixation. ix. 231 growth. 75 Fourier. 141. 162. 66 Geneva. 130 fluid. 229. 165. 37. 190 GPS. 62 Europe. 217. 136 frog. 231. 70. 150. 183. 108. 236 Index fear. 236. 207 European Social Fund. 162. 10. 177. 212. 142 grass. x. 119. 129. 109. 133 examinations. 233 failure. 233. 136. vii. 64. 130. 112.

130. 132 hot spots. 203 inattention. 68. 40. 215. 191. x. 199. 213. 113. 111. 171. 110 independence. 129. 217. 164 heart rate. 119 interference. 110 incidence.guidance. vii. 78. 110. 122 infants. 213 infrared light. 164 height. 129 high-frequency. 191. 226 inspection. 156. 133 insight. 236. 128 Index H habituation. 191 illusion. 122. 184 heart. 209. 178. 104. 71. 79 INS. 189 imitation. 213 infrared spectroscopy. 3. x. viii. 66 human subjects. 40. 76. 51. 151. 165 Hessian matrix. 160 indication. 113. 20 high resolution. 131 intensity. 190. 79. 143 interval. 103 instabilities. 79. 36. 164. 199. 82. 248 idiopathic. 44. 60. 10. 30. 141 information exchange. xi. 112. 123 inhibition. 108. x. 115. 244 human brain. 99. 114 Harvard. 212 illusions. 185. 85. 130. 211. 2. 83. 79 harmonics. 180 interactions. 244 hyperbolic. 113. 64. 235. 113. 194. 155. 149. 169. 5. 14. 4. 131. 231 injury. 82. 199 hyperopia. 84. 207. xi. 125. 60. 109. 73. 140. 42. 111. 64. 224. ix. 187. 104 intelligence. 2. 103. 111. 61. 99. 130. 183 human development. 147. 74. 170 Homeland Security. 13. 111. 9. 72 interpretation. ix. 191 human. 112. 161. 184 interaction. 109 hypothesis. 184. 214. 69. 96. 207 images. 220. 169 high-speed. 104. 157. 87. 43. 223. 150. 64. 110. 190 hostile environment. 125. 183. 141. 131. viii. 69 indicators. 168. 127. 113 instability. 184. 84. 186. 185. 244 image analysis. 41. 199. 217. 139. 165. 16. 111 253 . 231 inhibitory. 123 Illinois. 219. 113 high-level. 126. 61. 111. 27. 207 innervation. 195 I identification. 113. 147 hypoplasia. 129. 193. 149 integrity. 116. 181 hemodynamic. 72. 200 holistic. 136 horizon. 127 horse. 247 induction. 81. 78. 142. 184 inheritance. 221 imaging. 243 instruction. 164. 149. 5. 191. 109. 72. x. 90. 218. 127 infancy. 39 host. 112. 131 implementation. 74. 103. 232 integration. 76 illumination. 168 interdependence. 222 Indiana. 211. 125. 168. 6. 165. 126. 84. 222. 83. 202 hemisphere. 173. 132 infrared. 175. 58. 35. 157 humans. 
131 inertial navigation system. 221. 103 industry. 150 infinite. 29. 113. 78. 88. 172 handedness. 168. 1. 207. 109. 112 histogram. 216. 226. 85. 212. 99. 108. 66. 73. 81. 75. 210. 82. 144 industrial.

212 localization. 108. 113. 94. 246 light emitting diode. 89. 87. 77. 66. 113. 99. 113 inversions. 74. 83. 5. 94. 168 J Japan. 139 Japanese. 126 matrix. 66. x. 167. 69. 79 Jet Propulsion Laboratory. 247. 3. 79. 112. 119. 22. 120. 232 London. 186 left hemisphere. 127 Mars. 90. 80. 130 late-onset. 199 linear function. 199 linear regression. 206 location. 78. 3. 218 long-distance. 72. 129 K kinesthesis. 80 low-level. 4. 206 miosis. 22. 166. 58 Italy. 136 judge. 78. 78. 148. 160 kinesthetic. 104 longevity. 13. 153 metric. 165 males. 70. 127. 215 likelihood. 137. 245 medicine. 240 laser. 148 intrinsic. 248 law. 119 Investigations. 76. 150 manipulation. ix. 65. 136 masking. 104. 169 learning. 2. 148. 89. 156 Mexican. 175 . 76. 174 linear. 74 Ministry of Education. 58 ITT. viii. 69. 214 iteration. 126. 150 Mediterranean. 170 Mexico. 191 magnetic. 90 measurement. 155. 71. viii. 245. 125. 75. 129 mobile robots. 84. 165 magnetic resonance imaging. 165 magnetic resonance. 67. 8 intrusions. 95. 78. 121 memory. 63. 127. 171 mental disorder. 151. 96. 78. 103. 156. 170 IOL. 160 measures. 99. 112. 81. 143. 40 ITRC. 83. 71 M machines. 80. 94. 147. 172 missiles. 96. 222. xi. 167 metabolic.254 intraocular. ix. 67 management. 74. 233 Jung. 143. 48. 104. 84. 112 invasive. 151 laterality. 178 mapping. 28. 85. 81. 64. 182 mental processes. 20. 148 Iran. 156 L language. 9. 6. 242. 207 ITC. 77. 119 linguistic. 122. 24 limitations. 107. 81 microscopy. 206 Index limitation. 82. 104. 173. x. 27. 234. 96. 157. 187 long period. 227 lens. 68. 126 mobile robot. 67. 125. 157 lenses. 19. 75. 238. 74 lesions. 22. 64 middle-aged. 66. 127. 79. 144 mirror. 79. 79 lead. 148 intraocular lens (IOL). 133 justification. 71 longitudinal study. 4.

91. 67. 79 motor system. 223. 115. 90. ix. 128. 82. 107. 186 motion control. 245 multidisciplinary. 111. 110. 24. 5. 127. viii. 72 nonlinear. 103. 68. 71 organism. x. 185. 98. 127. 165 occipital regions. 178. 148. 117. 39. 122. 51. 187 orientation. 118. 28. 13. 225 navigation system. 226. 110. 119 optic nerve. 143. 100. 1. 29. 137 motion. 85. 113. 125. 146. 147. 148. 109. 74. 187 natural. 184. 186 modules. 1. 71 neuronal circuits. 170. 13. 12. 3. 7. 131. 225. 60 non-uniform. 192. 60. 47. 108 model fitting. 241 North America. 105 operator. 71. 81 neuroanatomy. 108. 108. 113. 108. 110. 99. 81 neural networks. 4. 171 neonatal. 229. 84. 72. 74 MOG. 224 monocular clues. 26. 232 neuroscientists. 108. 82. 8. 137. 94. 48. 112 motor task. 82. vii. 3. 194. 187 Oklahoma. 153. 76. 58. 122. 19. 167. 100. 123. 109 optical. 87. 149. 93. 13. 97. 9. 217. 16. 225 New York. 244 neural network. 122. 52. 147. 152. 83. 64. 150. 19. 152 muscle. 119. 79. 22. 213. 127 near infrared spectroscopy. 40. 197 occupational. 72. 3. 196 N NASA. 224. 56. 50. 51. 207 nucleus. 207 modeling. 83. 184 optomotor system. 166. 148 occipital lobe. 2. 91. 54. 193 oscillation. 199. 23. 45. 120. 214. 11. 60. 245 modulation. 114. 116. 16. 123 oscillations. 117. 207 noise. 22. 16. 185. 196 non-human. 61 models. 118. 141. 120 outliers. 109. 53. 84. 41. 116. 111. 229 organ. 8. 49. 69. 119. 84. 179 occluding. 232 offenders. 118. 182. 96. 184 neck. 112. 109. 140 neuropsychology. 72 organization. 22. 80 myopic.mobility. 135 National Academy of Sciences. 123. 16. 21. viii. 77. 122. 77. ix. 131 monkeys. 163. 125. 183. 94. 27. 103. 117. 3. 93. vii. 83. 6. vii. 117. 57. ix. 7. 224 myopia. 23. 95. 2. 40. 40. 3. 128 motivation. 99. 247 movement. 46. 104 occlusion. 4. 111. 3. 195. 212 255 O observations. 184 nystagmus. 108. 164. 55. 104. 191. 132 ophthalmologists. 234 obstruction. 75 modalities. 94. 96. 110. 67. 164. 224 nervous system. x. 94. ix. 81. 41 negative valence. 148 Index neurons. 9. 112. 119. 25. 148 muscles. 9. 
21. 204 motor control. 82 optics. 180 . 2. 122. 192 modus operandi. 117. 121. 151. 131. 5. 88. 109 nerve. viii. 93. 88. 2. 81. 101. 152. 204 normal. 182. 143 Moscow. ix. 17. 127. 107. x. 183. 4. 41. 123 oculomotor. ix. 201. 152. 131 network. 147 optical systems. 18. 234. 168. 157 operations research. 112 money. 14.

142. xi. 127 proposition. 99. 16. 130. 174. 207 pedestrians. 83. 76 prevention. 212 pond. 214 Paris. 169. 68. 201. 41. ix. 151. 144. 192 presbyopia. 75. 107. viii. 190 primary visual cortex. 72. 201. 148. 113. 228. 177. 184 picture processing. 242 population. 233. 128 pedestrian. 177 Phobos. 204. 165. 109 pathways. 203 Parietal. 117. 83 promote. 113. 152 pupils. 117. 93. 237. 194. 143. 186 primate. 150 perception. 125. 247. 72. 213. 116. 178. 108 preference. 75. 206. 116. 67. 183. 123. 163. 58. 64. 110. 143. 122. 27. 218. 207 preventive. 67. 151 psychiatrists. ix. 79. 207 planar. ix. 40 perturbations. 177 personal relevance. vii. 248 prefrontal cortex. 192. 178 physicians. 190. 170. 21. 186 preprocessing. 204. 119 ptosis. 121. 234. 107. 71. 128 planning. 201. 129. 183. 114. 127. viii. 70. 173. 214 phobia. 152. 202. 157 proximal. 177. 170 perturbation. 176. 189. 148 program. 239 periodic. 131. 112. 238 prognosis. 81. 179. 141 predictors. 170. 31. 75 pathophysiological. 97 pitch. 74. xi. 110 power. 119. 164. 196. 140. 149 preschool children. 76.256 Index P paper. 119. x. 131 PPS. 185. 147. 170. 73. 144. 148. 3. 162. 113. 215 photographs. 165 PFC. 143. 111. 148. 122 posture. 76. 193. 125. 176. 145 preschool. 152. 185 personality traits. 197 probability density function. 187 play. 6. 193. 73. 183 pinhole. 162. 41 PET. 161. 109 pathophysiological mechanisms. ix. 107. 141. 123. 178 physical environment. 191. 129. 175. 63. 177 personality. 247 protection. 191 PRISM. 75. 72. 84. 130 plasticity. 2. 155 probability. 151. 144. 77. 74. 83. 186. 123 pathology. 76. 188 preparedness. 130. 232 pattern recognition. 80. 182. xi. 206. 149. 244 performance. 186 patients. 149. 220 prediction. 84. 144. 148 pupil. 199. 166 physical properties. 75 parameter. 192. x. 202 planets. 181. 206 pediatric. 141 . 134 photocells. 187. 197. 171 propagation. viii. 226. 179 production. 204 paradoxical. 190. 189. 189. 58. 182 primates. 109. 141. 164. 111 physiological. x. 128. 73. 196. 4. 190. 
121 permit. 180. 74. 80. 78 pathogenesis. 64. 21. 174. 119. x. 145. 135. 197 probe. 206 praxis. 72 prior knowledge. 72. 167. 170. 139. 90 poor. 190 protocols. 147. 4 property. 143 personal. 166. 171. 149 projector. 108. 151. 149. 2. 238. xi. 37. 181. ix.

83. 170. 194 scaling. 111. 210. viii. 167. 228. 237. 99. 240. 104. 245 safety. 199. 244 reading. 226. 213. 185 reaction time. 2 ratings. 210 reasoning. 209. 155. 234. 174. 129. 193 . 178 reliability. 110. x. 112. 39. 159. 141 random. 109. 126. 40 ROI. 228. 107. 223. 237 regression. 218. 126. viii. 128. 76. 48. 130. 242. 196. 127. 180. 163. 202. 61. 148 search. xi. 113 scotoma. 216. 219. 82. 131. 119. 125. 231. 125. 229. 244. 133. 137 Russian Academy of Sciences. 164. 143 relevance. viii. 189 searching. 172. 113. 212. 68. 81. 241. 219. 199. 5. 105. 70 recognition. 115. 227. 2. 214. 204. 245 renormalization. 232. 105. 63. 2. 213 reflexes. 70. 223. xii. 129. 130. 187. 219 scientists. 66. 205 SAR. 170. 3 questioning. 3. 217. 209. 237. 114. 195. ix. 130. 60 repeatability. 148. 60. 73 reconstruction. 191 residual error. 230. 132. 235. 215. 238. 58. 29. 125. 20. 234. 71 reflection. 87. 104. 229. 19. 128. 136. 204. 51. 214. 104. 213. 198. 228. xii. 210. 191. 158. 114. 245 saccadic eye movement. 238. 131 respiration. 204 resources. 69 R radar. 61 school. 225. 132. 231. 223 sclera. xii. 166. 128. 156. 162. 203 reduction. 113. 2. 119 regression analysis. 116. 93. 241. vii. 234. 222. 227. 19. 242 scatter plot. 189. 224 risk. 202. 135 scalar. 169. 198. 137 Russian. xi. 162. 212. 26. 1. 202 Royal Society. 204 recovery. 180. 231. 121. 236. 224. 242 relationship. 152 risk factors. 208. 19. 216. 238 reference frame. 69. 162. 141. 242 ras. 108. 243. 61. 189. 113. 135. 243. 224. 230. 108. 113. 230. 113 radius. 213. 126. xi. 58. 150. 244. 134. 129. 219. 127. ix. 2. 227. 204. 117. x. 123. 147. 171. 38. 235. 136 researchers. 149. 152 Robotics. 222 Schmid. 82. 190 sample. 2. 6. 32. 119 regular. 237. 197 range. 164 retina. 209. 103. xii. 233. 211. 245 real time. 131 reality. 229. 197. 197 resolution. 209. 227. 242 relationships. xii. 158. 231. vii. 182. 127. 29. 236. 190 radio. 245 reconcile. 248 Research and Development. 125 257 S saccades. 206. 179. 64. 177. 120. 137 Russia. 157. 108. 
190. 206. 224. 20. 16. 141. 71. 80. viii. 209. 232. 3 redistribution. 160 research. 233. 163. 240. 234. 75 quantization. 104. 240. 147. 143. 229. 237. 234. 195 scatter. 174.Index Q quality of life. 120. 207 sampling. 33. 214. 127. 218. 222. 136 robustness. 227. 195. 225. 119 regression line. 183.

204 solutions. 192. 74. 40. 148. 81. viii. 183.258 segmentation. 191 survival. 104. 223. 224 speculation. 131. 117. 157. 213 statistics. 118. 16 simulation. 88. 168 stimulus pairs. 111. 219. 117. 179. 131 switching. 194. 168. 110 smoothing. 137. 60 strength. 180 suffering. 193 task performance. 113. 227. 207 short period. 120 syntactic. 211. 113. 122. 112. 110. 169. 146. 186 self-report. 182 sorting. 77. 181 stimulus. xii. 146. 127. 190 steady state. 234. 240 species. 211. 152 surprise. 22. 3. 148. 133 sine. 38 Simultaneous Localization and Mapping. 32. 3 somatosensory. 129. 233. 71. 179 semantic. 75. 207 T targets. 71 speed. 195. 66 specialisation. 2 Spain. 192. 194. 180. 2. 190. 171 substrates. 214. 126. 159. 175. 16. 34. 162. 73 syndrome. 132 strabismus. 128. 171 semicircular canals. 156. 64. 75. 108. 226. 102. 234. 115. 107. 230. 165. 166. 104 sites. 147. 93. 157 subjective. 148. 109. 28. 187. 157 surgical. 139. 21. 178. 167. 117. 159. 217. 148. 152. 175. 29. 181 shape. 108. 8. 13. 170. 74. 31. 196 sports. 131 spectral analysis. vii. 110. 189. 190 separation. 68 . 148. 229. 240 standard error. ix. 128. 103. 231. 119 Singapore. 147. 129. 178. 166. 227. xi. 82. 84. 170. 207 symmetry. 119. 122 sensors. 125. 248 surgery. x. 192. 74. 117 Stimuli. 33. 157 streams. 177 statistical analysis. 222 similarity. 60. 180. 26. 115 series. 162. 191. 192. 151. 209. 26. 199. 115 superiority. 162 social anxiety. 36. 37. 114. 108. xii. 143. xi. 149. 1 students. 219. 104. 125. 234 taxonomy. 196. 217 spatial frequency. 101. 71. 169. 69. 108. 69 Stochastic. 9. 189. 216 STRUCTURE. 123. 140. xi. 196 space-time. 161. 190. 204. 100. 167. 185 social phobia. 178. 151. 109. 153. 242 stimulus information. 148. 148 spatial information. 148. 128 sleep. 99. 3. 229 Index stages. 18. 189 spatial. 242 stabilization. 206 standard deviation. 117. 25. x. 168. 110. 175. 66. 190 significance level. 130. 220 signals. 40 snakes. 149. 184 suppression. 78. 122. 131. 129. 70. ix. 117. 13. 224. x. 104. x. 
218. 127 stabilize. 150. 150. 232. 127. 110. 179. 215. 186. 171. 132. 232 surveillance. 228 stability. 130. 8. 115 synthesis. 68. 192. 90. 123 spectrum. 176 semantic content. 150. 186. 75 Switzerland. 64 SRT. 147. 206 selectivity. 121. 59 systems. 103. 166. 108 sensitivity. 179. 32. 191 symptoms. 27. 72. 183 software. 139. 72. 76. 68. 83.

131. 190 Unmanned Aerial Vehicles. 22. 188. 242. 221. 108. 163. 187. 139. 168. 81. 107. 170 visible. 233. 103. 224. 155. 108. 141. x. 141. 132. 69. 201. 176. 131 threat. 191 vehicles. 121. 22. 223. 115. 113. 113. 179 values. 240. x. 90. 137. 140. 110. 127. 111. 63. 221. 112 telencephalon. 190. 67. 182. 177. 196. 5. 239. 169. 75. 5. 234. 234. 21. 230 vision. 66. 111. 160. 19. 87. 20. 226 training. 132. 125. 59. 70 variation. 219. 9. 128. 148 thinking. 75. 40. 213. 24. 29 test-retest reliability. 192. 60 Index transverse section. 107. 66. 27. 207. 245 thalamus. viii. 114. 196. 172. 234. 115. 22. 27. 2. 58. 197. 218. vii. 220. 64. 195. 23. xi. 190. 99 travel. 223. 244 top-down. 89. 112 violence. 48. 40. 72. 82 topology. 172. 171. 6. 121. 87. 61. 230 triangulation. 4 three-dimensional space. 4. 185. 60 translational. 224. 137 V valence. 164 temporal. 226. 204 vector. 218. 115. 219. 132. 202. 231. 76. 108. 217. 163 theory. 4. 203. 108. 226. viii three-dimensional reconstruction. viii. 147. 125. 215. 103. 116. 39. 142. 228. 178. 88. 123. 156. 114. 2. 4. 209. 119. 22. 163 time frame. 110. 37. 211 therapy. 152. 210. 117 trans. 109. 218. 171 259 U uncertainty. 4. 222 traits. 66. 117. 3. 165. 82. 185. 78. viii. 219. x. 95. 114. 110. 217. 226.technology. 148. 227. 121 timing. 192 traffic. 83. 108. 76. 230. 187 violent. 172. 5. 151. 217. 167. 178. ix. 57. 130. 182 validity. 222. 222. 186 topographic. 216. 81. 28. 73. 225. 103. x. 40. 64. 248 United Nations. 113 time series. 108 vestibular system. 117. 83. 80. 218. 219 vertebrates. 11. 203 uniform. 230 two-dimensional. 19. 58. 217 time periods. 40. 77 translation. 117. 216. 199 variable. 218. 152. 196 thresholds. 148. ix. 6 toxin. 74. 120. ix. 228. 82. 186 threatening. 152. 77 transfer. 122. ix. 15. 240. 27. 126. 132. 178. xii. 3. 172. 230 test procedure. 226 Timmer. 7. x. 171. 26. 19. 17. 169. 207 United States. 166. 222. 228. . 228. 210. 161. 227. 241 variance. 243 variability. 115. 5. 131. 229. 216. 111. 16. 12. 115. 13. 32. 
217. 162 three-dimensional. 96. 225 time resolution. 224 trend. 126. 202. 139. 113. 174. 2. 104. 247 trajectory. 38. x. 184. 187. 168. 110. 213. 227. 111. 27. 236. 243. 191. 229. 186 transformation. 217 threshold. 197 velocity. 99. 238. 143. 15 triggers. ix. 31. 247 time consuming. 77 trial. 132. 119. 116. 179. 185. 126. 58. 209. 187 universities. 15. 140 time. 42. 191. 226. xi. 130. 141. 149. 109. 147. 126. 117 tracking. 165. 229 variables. 218. ix. 112. 74. 113. 82. 71. 109. 67. 158. viii. 3. 112. 108. 111. 111. 218 three-dimensional model. 8. 141. 85. 149. 177. 181 validation. 84.

247 X X-linked. 189. 204. 109 Y Y-axis. 75. 169. 167. 149. 162. 182 visual stimuli. 72. 197. 213 visual stimulus. 201. 77.208. 221. 205 World Health Organization. 147. 165. 160 young adults. 162. 132 260 Index wavelet. 157. 115 wavelet analysis. 223. 108. 221 visualization. 141. 77. 114. 151. 68. 248 visual area. 217. 211. 79. 71. 215. xi. 161. 210. 200. 140. 235. xii. 205 vocational. 199. 166. 216. 70. 185. 212. 115. 211 visual perception. 68. 182. 74 weapons. 151. 69. xi. 232 visual system. 149. 167. 128 windows. 220. xi. 69. 248 visual acuity. 111. 244 visual processing. 114. 225. 195 yield. 181. 181. 220. xi. 61. 163. 161. 112. 245 visual field. 168 visual attention. 190 writing. 244. 203. 192. 211. 148. 140. 240. 114. 115 weakness. 172. 155 W Warsaw. 148. 119. 164. 78. 193. 116. 216. 169. 213. 150. 165. 230. 110. 149. 121 . 168. 111. x. 191. 75. 210. 237 wealth. 179. 211. 65. 232. 117. 74 vulnerability. 147. 116. ix. 78. 173.
