Magazine
from the environment to blend in and match other objects, as exemplified by decorator crabs and caddis fly larvae.

How is camouflage connected to animal cognition?
Cognitive processes also influence what makes an effective camouflage, beyond sensory processing (such as visual detection). As the brain may interpret stimuli differently, it may affect predator behavior and thus have consequences on camouflage efficacy. Predators have been shown to be worse at finding camouflaged prey when prey populations are polymorphic in appearance. This is because under some conditions predators concentrate on prey types that they have recent experience with, forming ‘search images’ for these and thus overlooking the rare morphs. As a result, negative frequency-dependent selection can maintain polymorphic prey and fluctuations in morph frequency. Learning and cognitive processes may also have a major effect on the value of different camouflage strategies. Predators learn some types of camouflage more quickly than others, especially those involving high contrast patterns. The value of a given type of camouflage thus depends not just on initial detection, but also on predator experience and cognition.

Where can I find out more?
Bond, A.B., and Kamil, A.C. (2002). Visual predators select for crypticity and polymorphism in virtual prey. Nature 415, 609–613.
Diamond, J., and Bond, A.B. (2013). Concealing Coloration in Animals. Harvard University Press, Massachusetts.
Hanlon, R.T. (2007). Cephalopod dynamic camouflage. Curr. Biol. 17, 400–404.
Lovell, P.G., Ruxton, G.D., Langridge, K.V., and Spencer, K.A. (2013). Egg-laying substrate selection for optimal camouflage by quail. Curr. Biol. 23, 260–264.
Skelhorn, J., and Rowe, C. (2016). Cognition and the evolution of camouflage. Proc. R. Soc. B 283, 20152890.
Skelhorn, J., Rowland, H.M., Speed, M.P., and Ruxton, G.D. (2010). Masquerade: camouflage without crypsis. Science 327, 51.
Stevens, M. (2016). Cheats and Deceits: How Animals and Plants Exploit and Mislead. Oxford University Press, Oxford.
Stevens, M., and Merilaita, S. (2011). Animal Camouflage: Mechanisms and Function. Cambridge University Press, Cambridge.
Stuart-Fox, D., and Moussalli, A. (2009). Camouflage, communication and thermoregulation: lessons from colour changing organisms. Phil. Trans. R. Soc. B 364, 463–470.
Troscianko, J., Wilson-Aggrawal, J., and Stevens, M. (2016). Camouflage predicts survival in ground-nesting birds. Sci. Rep. 6, 19966.

Centre for Ecology & Conservation, University of Exeter, Penryn Campus, Penryn, TR10 9FE, UK.
*E-mail: martin.stevens@exeter.ac.uk

R656 Current Biology 26, R641–R666, July 25, 2016 © 2016 Elsevier Ltd.

Primer

Dimensionality reduction in neuroscience

Rich Pang1, Benjamin J. Lansdell2, and Adrienne L. Fairhall3,4,5,*

The nervous system extracts information from its environment and distributes and processes that information to inform and drive behaviour. In this task, the nervous system faces a type of data analysis problem, for, while a visual scene may be overflowing with information, reaching for the television remote before us requires extraction of only a relatively small fraction of that information. We could care about an almost infinite number of visual stimulus patterns, but we don’t: we distinguish two actors’ faces with ease but two different images of television static with significant difficulty. Equally, we could respond with an almost infinite number of movements, but we don’t: the motions executed to pick up the remote are highly stereotyped and related to many other grasping motions. If we were to look at what was going on inside the brain during this task, we would find populations of neurons whose electrical activity was highly structured and correlated with the images on the screen and the action of localizing and picking up the remote.

Describing a complex signal, such as a visual scene or a pattern of neural activity, in terms of just a few summarizing features is called dimensionality reduction. The core notion of dimensionality reduction is long established in neuroscience. For example, in characterizing the response of a neuron in primary visual cortex (V1), Hubel and Wiesel observed that an object’s motion orientation modulated the firing rate of the cell. This allowed them to describe the firing rate as a function of this one variable, rather than of the intensities of all of the pixels in the visual scene. Conversely, dimensionality reduction can be applied to patterns of multi-neuronal activity. We do just that when we map a visual stimulus to just one neuron’s firing rate rather than a possibly complex multi-neuron response; or when, for example, we evaluate the effects of attention in terms of changes in the power in a certain frequency band in the local field potential.

As this illustrates, one of the goals of neuroscience is to find interpretable descriptions of what the brain represents and computes. Choosing to describe a V1 neuron’s response in terms of the orientation of a moving bar is somewhat arbitrary, however, as the firing rates of V1 cells can be modulated by many other visual features. Further, thinking of the brain’s output in terms of the firing rate of individual neurons or the power of the summed electrical signal in a certain frequency band is also an arbitrary choice of representation of neural activity that may not reflect the brain’s natural computational ‘units’. In general, we ought to seek representations, both of the stimulus and of brain activity, that are concise, complete, and informative about the workings of the nervous system, and yet which are not biased by an experimenter’s arbitrary choice. Considering this task from the perspective of dimensionality reduction provides an entry point into principled mathematical techniques that let us discover these representations directly from experimental data, a key step to developing rich yet comprehensible models for brain function.

Single neuron coding
A tenet of sensory neuroscience is that, within a rich and varying world, neurons have evolved to respond to a small set of behaviourally meaningful inputs and to represent them efficiently. And indeed, it is often observed that many sensory neurons’ responses can be characterized as depending only on a small set of features of an external stimulus. An example of such dimensionality reduction is color vision. Light hitting the eye has intensity in a wide range of frequencies. While a spectrophotometer would provide a complete description of the light beam in terms of its power spectrum across all frequencies, our retina has only
three kinds of color sensor, the L, M and S cone types (corresponding to long, medium, and short wavelengths). All we can know about the incoming light is given to us by the activation of those three sensors: because of the unique frequency absorption properties of each cone type, the activation of a given cone type is a function of a weighted sum of the light’s intensities at different frequencies. Thus, our color perception is a three-dimensional representation of the original, infinite-dimensional spectrogram that specifies the light’s intensity at every frequency.

Further into the visual system, a neuron’s response is often a function of only a small set of visual inputs, or features. These features are identified in a given stimulus through a remarkably simple procedure known as linear filtering, which consists of simply weighting and summing the components of a signal according to a given set of weights known as the filter. This is a general procedure that can be applied to any stimulus representation, be it color-spectral components, light intensities in an image, time-varying intensities in a movie, and so forth. Linear filtering produces a single value that expresses the similarity of the image to the filter — the extent to which the stimulus feature is present in the image. A geometric illustration makes clear how filtering accomplishes this task (Figure 1).

[Figure 1 schematic: a ‘Filter’ trace and ‘Stimulus’ traces, with filtered values of opposite sign (+/−) or approximately 0.]
Figure 1. Linear filters detect the presence of specific features.
Linearly filtering a stimulus yields a single number that quantifies how similar the stimulus is to the filter. If the filter shape is a positive deflection of a dot’s luminance over time, for example (upper red trace), then stimuli that resemble positive deflections (upper blue trace) will get filtered to positive values, whereas stimuli that resemble negative deflections (middle blue trace) will get filtered to negative values. The opposite is true for a negative deflection filter. If the stimulus has approximately equal positive and negative deflections (lower blue trace), then filtering it with either a positive or negative deflection filter will yield a value of approximately 0.

For example, some retinal ganglion cells (RGC) are excited by, or positively weight, the image intensities at the ‘center’ of a visual stimulus and are suppressed by, or negatively weight, intensities in the surrounding region. Together, these weights define the filter, or a stimulus feature that drives the neuron. The RGC’s firing can then be predicted by taking an input image, weighting the value of each pixel in the image by the filter values and summing the result. Thus, just as the cone activations reduce the full spectrum to three components, here the RGC’s activation is reduced from being a function of the full image (specified by its intensity at each pixel) to being a function of a single number that represents a measure of the image’s similarity to that neuron’s selected feature.

More generally, one might consider a neuron that is selective for a sequence of images, or a short movie. For example, an ‘ON’ RGC which responds to a particular spot becoming brighter over time can be understood with an appropriate spatio-temporal filter. That is, the neuron would weight the intensities of all the pixels at all recent time points — for example, there would be 2000 weights for 20 frames of a 10 x 10 grayscale image — and the sum of the weighted intensities over both space and time would determine the probability of the neuron’s emitting a spike.

It is possible that the neuron’s response is sensitive to more than one feature of the stimulus. For example, the RGC might be sensitive not only to the brightening of the spot but also to the speed of the change. In this case, the response could depend on the outputs of multiple filters, and the neuron’s response would depend on the similarities to these multiple features. As long as there are many fewer features than there are, in this case, pixels in the movie, this feature representation — the set of similarity values — is a much more compact way to describe the input, and ideally captures everything about the input that is relevant to the response of the neuron.

Generally, the feature or features that a neuron is selective for are not known a priori. Dimensionality reduction methods identify relevant features directly from experimental data. The key idea is simple: one presents the system with many random examples of complex stimuli (images, movie segments, and so on) and notes which stimuli make the neuron spike and which do not. One can then use these samples to characterize what is particular to the cases that caused the neuron to respond.

The simplest statistic to look at is the average of the spike-triggering stimulus examples (called the spike-triggered average). In many cases this can lead to accurate spike prediction, for example in an ON retinal ganglion cell that responds primarily to upward deflections in light level. If there are multiple relevant features, they can be found using a variety of techniques. One straightforward approach is to analyze the covariance of the spike-triggering stimuli (Figure 2A,B) in order to find additional relevant features. This is especially useful in cases when the spike-triggered average alone is not very informative; for example for the ON–OFF retinal ganglion cells, which are triggered either by an upward or a downward change in light level. The spike-triggering stimuli average to almost zero, but computing the covariance of these stimuli allows one to find a set of stimulus features that capture both the upward and downward variations, even, for example, if they have different rates of change. V1 responses have also been found to be best fit by models that include a number of features, where the additional features allow one to account for properties like phase invariance in complex cells, and components that lead to suppression. Multiple features can also be found using methods that use alternate statistical properties like entropy and
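The linear filtering operation described above is just a weighted sum — a dot product between filter and stimulus. A minimal NumPy sketch, mirroring the Figure 1 scenario with made-up traces (the specific filter shape and stimuli are illustrative, not data or code from this Primer):

```python
import numpy as np

# Linear filtering as described in the text: a filter is a set of weights,
# and filtering a stimulus means weighting and summing its components,
# i.e. a dot product that measures filter-stimulus similarity.

t = np.linspace(0, 1, 100)

# A "positive deflection" filter: a brief bump in luminance over time.
filt = np.exp(-((t - 0.5) ** 2) / 0.01)
filt /= np.linalg.norm(filt)  # normalize so filtered values are comparable

stim_up = np.exp(-((t - 0.5) ** 2) / 0.01)   # resembles the filter
stim_down = -stim_up                          # a negative deflection
stim_balanced = np.sin(2 * np.pi * 10 * t)    # equal up and down deflections

for name, stim in [("up", stim_up), ("down", stim_down),
                   ("balanced", stim_balanced)]:
    value = filt @ stim  # weight and sum: the single filtered value
    print(name, value)

# As in Figure 1: the upward stimulus filters to a positive value, the
# downward one to a negative value, and the balanced one to roughly 0.
```

The same dot product applies unchanged to a spatio-temporal filter: flattening 20 frames of a 10 x 10 movie into a 2000-element vector and taking its dot product with a 2000-weight filter yields the single similarity value that drives spiking.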
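The spike-triggered average and covariance analyses described above can be sketched on simulated data. Everything here — the Gaussian white-noise stimulus, the threshold spiking rule, and all variable names — is an illustrative assumption, not the article's model or data:

```python
import numpy as np

# Spike-triggered analysis as described in the text: present many random
# stimuli, note which ones elicit spikes, and characterize what is
# particular to the spike-triggering examples.

rng = np.random.default_rng(0)
n_dim, n_trials = 40, 100_000

# Hypothetical "ON" feature: an upward deflection late in the time window.
true_filter = np.exp(-((np.arange(n_dim) - 30) ** 2) / 20.0)
true_filter /= np.linalg.norm(true_filter)

stimuli = rng.standard_normal((n_trials, n_dim))  # Gaussian white noise

# Toy spiking rule: spike when stimulus-filter similarity crosses threshold.
similarity = stimuli @ true_filter
spikes_on = similarity > 1.0

# Spike-triggered average (STA): mean of the spike-triggering stimuli.
sta = stimuli[spikes_on].mean(axis=0)
sta /= np.linalg.norm(sta)
print("STA alignment with true filter:", sta @ true_filter)  # close to 1

# ON-OFF variant: spikes to both upward and downward deflections, so the
# spike-triggering stimuli average to almost zero...
spikes_onoff = np.abs(similarity) > 1.0
sta_onoff = stimuli[spikes_onoff].mean(axis=0)
print("ON-OFF STA magnitude:", np.linalg.norm(sta_onoff))  # near 0

# ...but the covariance of those stimuli still reveals the feature: the
# direction of excess variance (top eigenvector) aligns with the filter.
cov = np.cov(stimuli[spikes_onoff].T)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top_feature = eigvecs[:, -1]             # eigenvector of largest eigenvalue
print("covariance feature alignment:", abs(top_feature @ true_filter))
```

The covariance step is the essence of the approach sketched for ON–OFF cells: because upward and downward spike-triggering stimuli cancel in the mean, the feature shows up only as an axis along which the spike-triggering stimuli vary more than chance.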