How do we get visual information from the world and use it to control behavior? What neural processes underlie visually guided behavior?

Traditional sub-areas:
- visual sensitivity
- color vision
- spatial vision
- temporal vision
- binocular vision / depth perception
- texture perception
- motion perception
- surfaces, segmentation
- object perception
- attention
Sources:
Kandel, Schwartz & Jessell, Principles of Neural Science, 4th ed., McGraw-Hill
Gazzaniga, Ivry & Mangun, Cognitive Neuroscience, 3rd ed., Norton
Squire, Berg, Bloom, du Lac, Ghosh & Spitzer, Fundamental Neuroscience, 3rd ed., Academic Press
Rosenbaum, Human Motor Control, 2nd ed., Academic Press
Eye movements
Visual Projections
Signals from the two eyes are adjacent in the LGN but remain segregated in different layers. Convergence occurs in V1.
M=magno=big P=parvo=small
Magno and parvo cells have different spatial and temporal sensitivities.
Visual cortex is a layered structure (6 layers). LGN inputs arrive in layer 4. Layers 2 and 3 project to higher visual areas; layers 5 and 6 project to sub-cortical areas (e.g., superior colliculus). There is a massive feedback projection from layer 6 back to the LGN (the 800-lb gorilla).
Cells in V1 respond to moving or flashing oriented bars. Little response to diffuse flashes or steady lights.
LGN cells have circular receptive fields, like the retina. It is not clear what the role of the LGN is. Oriented cells emerge in V1, probably composed of appropriately aligned LGN cells, as shown.
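The alignment idea can be sketched numerically: sum a few circular difference-of-Gaussians (center-surround) subunits placed along a line, and the combined unit responds more strongly to a bar at the aligned orientation than to an orthogonal one. A toy Python model, with all sizes and parameters assumed for illustration (not taken from the lecture):

```python
import math

SIZE = 15
SIGMA_C, SIGMA_S = 1.0, 2.0   # assumed center/surround widths

def dog(dx, dy):
    """Balanced difference-of-Gaussians (center-surround) weight."""
    r2 = dx*dx + dy*dy
    center = math.exp(-r2 / (2*SIGMA_C**2))
    surround = (SIGMA_C/SIGMA_S)**2 * math.exp(-r2 / (2*SIGMA_S**2))
    return center - surround

# Three circular "LGN" receptive fields aligned vertically at x = 7.
centers = [(7, 3), (7, 7), (7, 11)]

def response(image):
    """Response of the summed, aligned subunits to an image."""
    return sum(dog(x-cx, y-cy) * image[y][x]
               for (cx, cy) in centers
               for y in range(SIZE) for x in range(SIZE))

def bar(vertical):
    """A one-pixel-wide bright bar through the middle of the image."""
    img = [[0.0]*SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        if vertical:
            img[i][7] = 1.0
        else:
            img[7][i] = 1.0
    return img

rv, rh = response(bar(True)), response(bar(False))
print("vertical bar: %.2f, horizontal bar: %.2f" % (rv, rh))
```

The vertical bar drives all three aligned centers at once, while the horizontal bar hits only one center plus two inhibitory surrounds, so the summed unit is orientation-selective even though each subunit is circular.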
Cells in V1 are organized into columns. Orientation preference gradually changes as one progresses across the cortex. Cells at different depths have the same orientation preference.
Binocular convergence: cells respond more or less strongly to right- and left-eye inputs. Ocular dominance varies smoothly across the cortical surface, orthogonal to the orientation variation.
Regular large scale organization of orientation preference across cortical surface. Does this simplify signal processing?
What is V1 doing?
Early idea: edge detectors as the basis for more complex patterns. Later (1970s-80s): spatial frequency channels; any spatial pattern can be composed of a sum of sinusoids. Late 1990s to now: the main idea about V1 is that it represents an efficient recoding of the information in the visual image.
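The sum-of-sinusoids claim is Fourier analysis. A minimal Python sketch (the luminance values are made up for illustration) decomposes a 1-D pattern into its discrete Fourier components and rebuilds it from those sinusoids:

```python
import math, cmath

# An arbitrary 1-D luminance profile (made-up values).
pattern = [0.1, 0.9, 0.4, 0.4, 0.8, 0.2, 0.6, 0.3]
N = len(pattern)

# Analysis: the amplitude and phase of each component sinusoid.
coef = [sum(pattern[n] * cmath.exp(-2j*math.pi*k*n/N) for n in range(N)) / N
        for k in range(N)]

# Synthesis: sum the sinusoids back together.
rebuilt = [sum(coef[k] * cmath.exp(2j*math.pi*k*n/N) for k in range(N)).real
           for n in range(N)]

print("max reconstruction error:",
      max(abs(a - b) for a, b in zip(pattern, rebuilt)))
```

The reconstruction is exact (up to floating-point error), which is the sense in which a bank of spatial-frequency channels loses no information about the pattern.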
Images are not random. Random images would require point-by-point representation like a camera.
Images have clusters of similar pixels and cells designed to pick this up. Cells extract information about spatial variation at different scales (clusters of different sizes). Can think of receptive fields as basis functions (an alphabet of elemental image components that capture clustering in local image regions)
A 64-pixel image patch can be coded with only twelve V1 cells, where each cell has 64 synapses.
V1 striate cortex
Because there are more cells than needed (overcomplete: 192 cells vs. 64 pixels), the number of cells that need to send spikes at any moment is small (sparse: 12 vs. 64).
More complex analysis of image properties occurs in higher visual areas (extra-striate). Defining visual areas:
- retinotopic responses
- anatomical projections
Note: the old simplistic view (one area, one attribute) is not true. Areas are selective in complex and poorly understood ways. Note the case of Mike May.
Mike May - world speed record for downhill skiing by a blind person.
Lost vision at age 3 (scarred corneas); sight restored surgically in adulthood. Optically 20/20, functionally 20/500 (cf. amblyopia).
- Answer to Molyneux's question: Mike May couldn't tell the difference between a sphere and a cube. He improved, but does it logically rather than perceptually (cf. other cases).
- Color: an orange thing on a basketball court must be a ball.
- Motion: can detect moving objects and distinguish different speeds (structure from motion).
- fMRI shows no activity in infero-temporal cortex (corresponding to pattern recognition), but there is activity in MT and MST (motion areas) and in V4 (color). Other parts of the brain take over when a cortical area is inactive.
- Cannot recognize faces (eyes and movement of the mouth are distracting).
- Can't perceive distance very well; can't recognize perspective.
- No size constancy or lightness constancy; segmentation of the scene into objects and shadows is difficult.
- Vision most useful for catching balls (inconsistent with Held & Hein??) and finding things if he drops them.
Hippocampus
MT
MST
Output of cells goes to brainstem regions controlling pursuit eye movements.
A. When the eyes are held still, the image of a moving object traverses the retina. Information about movement depends upon sequential firing of receptors in the retina. B. When the eyes follow an object, the image of the moving object falls on one place on the retina and the information is conveyed by movement of the eyes or the head.
Motion of the body in the world introduces characteristic motion patterns in the image. MST is sensitive to these patterns.
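A minimal sketch of such a pattern, assuming a pinhole camera with unit focal length and a fronto-parallel surface at constant depth (all numbers illustrative): pure forward translation produces a radial expansion of image motion about the heading point, the kind of flow field MST cells are tuned to.

```python
# Image motion for pure forward translation through a static scene
# (pinhole camera, focal length 1): a point imaged at (x, y) at
# depth Z moves at (x, y) * Tz / Z -- a radial expansion centered
# on the direction of heading.
Tz = 1.0   # forward speed (arbitrary units)
Z = 5.0    # assumed constant depth of a fronto-parallel wall

flow = {}
for x in [-0.2, -0.1, 0.1, 0.2]:
    for y in [-0.2, -0.1, 0.1, 0.2]:
        flow[(x, y)] = (x * Tz / Z, y * Tz / Z)

# Every flow vector points away from the focus of expansion at (0, 0).
expanding = all(x*u + y*v > 0 for (x, y), (u, v) in flow.items())
print("expansion pattern:", expanding)
```

Backward motion would flip the signs into a contraction pattern, and lateral motion would produce a translational flow field; MST cells are reported to be selective among such patterns.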
dorsal
ventral
Right occipito-parietal lesions in humans lead to similar deficits in pursuit eye movements.
MST has input from the vestibular system. Thus the cells have information about self motion from sources other than the flow field.
Many cortical areas have inputs from eye-movement signals as well, even as early as V1. Presumably this is responsible for the ability of the visual system to process image information independent of image motion on the retina.
Perception of Depth
Monocular cues:
- familiar size
- occlusion
- geometric perspective
- shading
- motion parallax
Stereopsis
Stereo sensitivity is one of the hyperacuities. Motion parallax is a little less sensitive, but probably important because it is ubiquitous.
Neural computation of disparity is complex and not well understood. Disparity signals are found in V1, V2, and MT.
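The geometry the disparity signal encodes can be illustrated with the parallel-camera (small-angle) approximation, in which depth = baseline × focal length / disparity. All numbers below are assumptions for illustration, not values from the lecture:

```python
# Parallel-camera (small-angle) approximation to stereo geometry:
# disparity = baseline * focal_length / depth, so
# depth = baseline * focal_length / disparity.
BASELINE = 0.065   # assumed interocular distance, meters (~6.5 cm)
FOCAL = 0.017      # assumed focal length of the eye, meters (~17 mm)

def depth_from_disparity(disparity):
    """Depth (m) of a point whose retinal disparity is `disparity` (m)."""
    return BASELINE * FOCAL / disparity

# Nearer objects produce larger disparities:
near, far = depth_from_disparity(2e-4), depth_from_disparity(1e-5)
print("disparity 0.2 mm -> %.2f m; 0.01 mm -> %.1f m" % (near, far))
```

The inverse relation is why stereo is most useful at near distances: disparity falls off rapidly with depth, and at large distances it drops below even hyperacuity thresholds.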
Such stimuli (e.g., random-dot stereograms) contain no monocular information and so are experimentally useful for isolating stereo processes, but they have the disadvantage of being harder to see than ordinary stimuli.
Perception of Surfaces
Subjective Contours
Infero-temporal cortex
Cortical specialization