S. Afshar1,2, G. Cohen1, R. Wang1, A. van Schaik1, J. Tapson1, T. Lehmann2, T.J. Hamilton*1,2


The Ripple Pond: Enabling Spiking Networks to See

1 Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith NSW, Australia. 2 School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney NSW, Australia.

Correspondence: Tara Julia Hamilton, The MARCS Institute, Bioelectronics and Neuroscience, University of Western Sydney, Locked Bag 1797, Penrith NSW 2751, Australia. t.hamilton@uws.edu.au

Abstract—In this paper we present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network that, operating together with recently proposed PolyChronous Networks (PCNs), enables rapid, unsupervised, scale and rotation invariant object recognition using efficient spatio-temporal spike coding. The RPN has been developed as a hardware solution linking previously implemented neuromorphic vision and memory structures, capable of delivering end-to-end high-speed, low-power and low-resolution recognition for mobile and autonomous applications where slow, highly sophisticated and power-hungry signal processing solutions are ineffective. Key aspects of the proposed approach include utilising the spatial properties of physically embedded neural networks and the propagating waves of activity therein for information processing, the dimensional collapse of imagery information into amenable temporal patterns, and the use of asynchronous frames for information binding.

Key Words—object recognition, spiking neural network, neuromorphic engineering, image transformation invariance, view invariance, polychronous network

INTRODUCTION
How did the earliest predators achieve the ability to recognize their prey regardless of their relative position and orientation (Figure 1)? What minimal neural networks could possibly achieve the task of real-time view invariant recognition, which is so ubiquitous in animals (Caley, 2003), even those with minuscule nervous systems (Van der Velden, 2008), (Tricarico, 2011), (Neri, 2012), (Avargues-Weber, 2010), (Gherardi, 2012), yet so difficult for artificial systems (Pinto, 2008)? If realised, such a minimal solution would be ideal for today's autonomous imaging sensor networks, wireless phones and other embedded vision systems, which are coming up against the same constraints of limited size, power and real-time operation as those earliest animals of the Cambrian oceans.
In this paper we outline such a simple network, which we call the Ripple Pond Network (RPN). The name Ripple Pond alludes to the network's fluid-like rippling operation. The RPN is a feed-forward, time-delay spiking neural network with static unidirectional connectivity. The network converts centered input images, received from an unspecified upstream salience detector, into spatio-temporal spike patterns that can be learnt and recognized easily by a downstream distributed Polychronous Network (PCN). Working together, the entire system can perform shift, scale and rotation invariant recognition.

Figure 1. Batesian mimicry: the highly poisonous pufferfish Canthigaster valentini (top) and its edible mimic Paraluteres prionurus (bottom). The remarkable degree of precision in the deception reveals the sophisticated recognition capabilities of local predatory fish, whose neural networks, despite being orders of magnitude smaller than those of primates, nonetheless seem capable of matching human vision in performance (note: the dorsal, anal and pectoral fins are virtually invisible in the animals' natural environment).

The View Invariance Problem
In the 2005 book "23 Problems in Systems Neuroscience" (van Hemmen, 2005), the 16th problem, as posed by Laurenz Wiskott, is the view invariance problem. The problem arises from the fact that any real-world 3D object can produce an infinite set of possible projections onto the 2D retina. Leaving aside factors such as partial images, occlusions, shadows and lighting variations, the problem comes down to shift, rotation, scale and skew variations. How then, with no central control, could a network like the brain learn, store, classify and recognize in real time the multitude of relevant objects in its environment?

Most biologically based object recognition solutions have been based on vertebrate vision, in particular mammalian vision, and have used either statistical methods (Sonka, 2007), (Gong, 2012), (Sountsov, 2011), signal processing techniques such as log-polar filters (Cavanagh, 1978), (Reitboeck, 1984), artificial (i.e. non-spiking) neural networks (Norouzi, 2009), (Nakamura, 2002), (Iftekharuddin, 2011) or, more recently, spiking neural networks (SNNs) (Meng, 2011), (Serrano-Gotarredona, 2009), (Rasche, 2007), (Serre, 2005). Since many of these approaches aim at the accuracy and resolution of mammalian vision, they are very complex and can only be fully implemented on computers (Sonka, 2007), (Gong, 2012), (Norouzi, 2009), (Nakamura, 2002), (Iftekharuddin, 2011), (Meng, 2011), (Serre, 2005), (Jhuang, 2007), sometimes with very slow computation times. Other implementations that have been demonstrated in hardware (Serrano-Gotarredona, 2009), (Rasche, 2007), (Folowosele, 2011) have been successful in proving that vision can be achieved for small, low-power robots, UAVs and remote sensing applications; however, the functionality of such neuromorphic systems has been limited.

The few models of invertebrate visual recognition have had an explanatory focus (Huerta, 2009), (Arena, 2012) and, not being developed for the purposes of hardware implementability, assume highly connected networks not suitable for hardware. CMOS implementations of bio-inspired radial image sensors (Traver, 2010), (Pardo, 1998) are most closely related to the RPN; however, these sensors ultimately interface with conventional processors rather than with biologically plausible SNNs, as is the case with the RPN. The RPN trades the high accuracy and resolution favoured by other implementations for speed and view invariance. Figure 2 shows the human fovea (a), a typical log-polar implementation used to model this (b), the visual field (c) and the mapping of this in V1 in a primate (d).

Figure 2. a) Human fovea centralis, responsible for the central 2° of the visual field (two thumb widths at arm's length) (Ahnelt, 1998). b) Standard log-polar model used in artificial vision (Traver, 2010). c) Visual field, and d) visuotopic organization of V1 (i.e. the mapping of the visual field) in the monkey, from (Gattass, 1987). While the mapping may look approximately log-polar in (d), the mapping at the fovea is approximately uniform, which, together with the highly intricate connectivity required, reduces the biological plausibility of such log-polar approaches.

Recent papers (Afshar, 2012) showing the ability and efficiency with which SNNs with competition between neurons overcome problems of noise, incomplete data, power consumption and computation speed suggest that this is a good starting point for building a network capable of solving the view invariance problem. In the following sections we describe the learning network and the RPN that build on these principles.

Polychronous Learning Network
Before discussing the RPN, we will briefly describe the neuromorphic learning network with which it interfaces, namely the PolyChronous Network (PCN) (Izhikevich, 2006). A PCN uses a spatio-temporal pattern of spiking neurons to represent information asynchronously, whereas classic artificial neural networks discard timing information by modelling neuron firing rates sampled synchronously by a central clock. The PCN's additional use of asynchronous temporal information results in greater energy efficiency and speed (Levy, 1996), (Van Rullen, 2005), (Hamilton, 2008). The PCN shares this feature with other recently proposed dynamic neural networks, including liquid and echo state machines (Maass, 2002), (Jaeger, 2001); however, unlike PCNs, liquid state machines do not perform well for spatio-temporal pattern recognition unless the reservoir network has been set up to ensure feedback stability, fading memory and input separability (Manevitz, 2010), (Ranhel, 2011).

A PCN can function as a content-addressable, distributed memory system comprising multiple neurons connected to each other via multiple pathways. It achieves this by continuously adapting its parameters to maximize recognition at its output in response to the statistics of its input. Through dynamic adaptation of synaptic weights (W1, W2, W3 in Figure 3) and dendritic propagation delays (d1, d2, d3 in Figure 3), the PCN makes particular neurons exclusively responsible for particular inter-spike intervals. A longer spatio-temporal pattern can be stored in a network of neurons by learning the delays and weights required between neurons (Paugam-Moisy, 2008). Subsequently, the PCN's "output" can be measured as the relative activation of certain neurons which, individually or in concert, indicate the recognition of a certain pattern (Martinez, 2009).

Figure 3. Biological model (top) and mathematical description (bottom) of a single element in a PCN.

Figure 4 shows an example of a spike pattern entered into a PCN with five neurons (Wang, 2011). Here, in order for neuron 3 to spike, a spike at neuron 1 is delayed by T1+T2 and a spike at neuron 5 is delayed by T2. The spikes from neurons 1 and 5 thus arrive simultaneously at neuron 3, causing it to spike.

Figure 4. Neuron activation in a polychronous network (PCN) (Wang, 2011).

However, PCNs or other distributed memory systems tasked with recognition cannot directly interface with the retina, since they expect their learnt patterns to appear via the same channels every time (see Figure 5). The RPN is in the class of systems that serve as the interface between such a memory system and retinotopically mapped inputs from the visual field. It does so by converting the raw visual stimulus to a Temporal Pattern (TP) suitable for the PCN.
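The delay-coincidence mechanism of Figures 4 and 5 can be illustrated with a minimal sketch. The delay values, the tolerance and the function name below are illustrative assumptions, not values from the text:

```python
# Minimal sketch of a polychronous coincidence detector (illustrative values).
# Neuron 3 fires only if the delayed spikes from neurons 1 and 5 arrive
# together, mirroring the example of Figure 4; flipping the input order
# (as in Figure 5) breaks the coincidence and recognition fails.

def neuron3_fires(t_spike_n1, t_spike_n5, T1=3, T2=2, tol=0):
    """Neuron 1's spike is delayed by T1+T2, neuron 5's by T2 (arbitrary units)."""
    arrival_n1 = t_spike_n1 + T1 + T2
    arrival_n5 = t_spike_n5 + T2
    # Coincidence detection: both delayed spikes must arrive simultaneously.
    return abs(arrival_n1 - arrival_n5) <= tol

# Learnt pattern: neuron 1 fires at t=0, neuron 5 fires T1 later.
print(neuron3_fires(0, 3))   # True  - the delays align the spikes at neuron 3
# Flipped pattern (cf. Figure 5): neuron 5 fires first.
print(neuron3_fires(3, 0))   # False - the spikes no longer coincide
```

This is why such a memory expects its learnt patterns to arrive via the same channels, in the same order, every time.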

Figure 5. A PCN element with the spiking pattern at the input flipped, compared with Figure 3, results in no recognition.

MATERIALS AND METHODS

The RPN Network
A central feature of the RPN is getting the most functionality out of a minimally connected network. Connectivity is a limiting factor which, though largely absent in biology, frequently constrains hardware implementations, and yet it is not often considered in the development of artificial neural networks (Sivilotti, 1991). A minimally connected network such as the RPN may also serve in robotics applications where energy and hardware are limiting factors.

An Upstream Shift Invariant Salience Network
As shown in Figure 6, the proposed RPN system does not require any specific salience model, having a centered input image as its only requirement. The field of computational and biologically based salience detection is extensive, with a wide range of models, techniques and approaches (Dill, 1993), (Itti, 2001), (Park, 2004), (Hall, 2007), (Vogelstein, 2007), (Gao, 2009), (Drazen, 2011), (Furber, 2012). For simplicity, we may assume the upstream salience network to consist of only a motion detector which physically fixes the creature's gaze onto a moving object; in fact, this simplified system is not far off the mark in the case of many organisms (Land, 1999). By using a sliding window of attention and focusing it onto a single salient object at a time, the salience detector effectively allows the overall network to operate in a shift invariant manner.

Input Images Ripple Outwards on the Neuron Disc
After centring by the salience network and high-pass filtering, the incoming "image" stimulates, in a frame-based fashion, the neurons distributed on a disc as shown in Figure 6; the RPN receives the image as the spatio-temporal, high-pass filtered activation pattern of neurons on a conceptual 2D sheet representing the field of attention produced by the upstream salience detection system. In this way the RPN is a transformation of an image from its 2-dimensional geometry into a 1-dimensional temporal sequence; subsequent extensions to the RPN allow extraction of different image features, which are presented to the PCN as additional TPs, thus converting a two-dimensional image into an m-dimensional spatio-temporal sequence. For visual clarity, in all the illustrated figures the disc comprises only ten neurons on each arm (N = 10) with a total of thirty arms (Θ = 30). For n = 1…N and θ = 1…Θ,

a_{n+1,θ}(t + 1) = a_{n,θ}(t)    (1)

where a_{n,θ}(t) is the activity of the nth neuron on arm θ at time t.
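The unit-delay relay dynamics of Eq. (1) on a single arm can be sketched as follows; the arm length and the stimulus position are arbitrary illustrative choices:

```python
# Sketch of Eq. (1): each neuron on an arm relays its upstream neighbour's
# activity with a fixed unit delay, so a stimulus ripples outward as a
# discrete wave toward the edge of the disc.

def ripple_step(arm):
    """One time step: activity shifts one neuron outward (a_{n+1}(t+1) = a_n(t))."""
    return [0] + arm[:-1]

arm = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]   # N = 10 neurons; stimulus at neuron 2
for t in range(3):
    arm = ripple_step(arm)
print(arm.index(1))   # 5 - the pulse has moved three steps toward the edge
```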

Figure 6. The spiral RPN system diagram: input image to Temporal Pattern generation.

In the RPN each neuron a_{n+1,θ} can be activated if its upstream neuron (on the same spiral arm but closer to the disc centre), a_{n,θ}, transmits a pulse to its input. All neurons are also sensitive to any incoming image. Functionally, these neurons represent simple binary relays with fixed unit delays between them. Starting from the disc's centre, the neuronal connections radiate outward to the edge of the disc. Since the neurons act as delayed relays, stimulation of a neuron close to the centre of the disc travels outward along the disc arm as a discrete wave, stimulating succeeding neurons in turn (1). This outward connection creates a rippling effect. More complex neuron models (Indiveri, 2011) could increase the system's resolution but would incur an increased hardware cost. With implementation in mind, and in contrast to log-polar filters, the density of neurons along each arm is increased as we move away from the centre of the disc, permitting an approximately uniform distribution of neurons functioning as radially distributed, outwardly connected pixels on a circular retina.

The Summing Neuron Outputs a Temporal Pattern
The neurons at the edge of the disc all connect to a common summing neuron (red sigma in Figure 6) that outputs a real-valued TP, which can be represented as a 1×N vector (2).

TPsum(t) = Σ_{θ=1}^{Θ} a_{N,θ}(t)    (2)

where TPsum(t) is the temporal pattern output of the summing neuron, N is the total number of neurons per arm, Θ is the total number of arms (so that N·Θ is the total number of neurons on the disc) and a_{n,θ}(t) is the activity of the nth neuron on arm θ at time t. The summing neuron sums the activity of the neurons at the edge of the disc. From the geometry of the neuron disc it is clear that this output TP is rotationally invariant. More subtle, but as important, is the RPN's central feature of image-to-collapsed-temporal-pattern conversion; this conversion greatly simplifies the task of scale invariant recognition for the downstream PCN.

Note that for simplicity we represent the summing neuron as a single neuron with a real-valued spike output; it may also be represented by a network of neurons which, receiving the same input, collectively produce the same output, and biologically plausible systems require such a distributed approach (Wang, 2010), (Brunel, 2000), which we will consider further in the discussion section. The PCN, however, receives spatio-temporal spike patterns as inputs, which are, to a first approximation, binary pulses; thus the real-valued output of the RPN's summing neuron needs to be encoded into binary values via multiple channels into the PCN. This can be achieved in the PCN's input layer through the use of multiple heterogeneous neurons with a range of different internal parameters, such as varied input weights, thresholds and centre frequencies, that relay the value of the RPN output to the PCN's reservoir (McDonnell, 2010). These neurons would effectively act as multiple input channels which, though all receiving the same input, can provide an arbitrary number of channels into the PCN in a simple fan-out fashion.

The Inhibitory Neuron Acts as an Asynchronous Shutter
In addition to being sensitive to the incoming image and synapsing onto the outer neurons along the disc arms, all neurons on the disc also connect to an inhibitory neuron (green sigma in Figure 6), such that the inhibitory neuron carries information about the net activity of all neurons on the disc:

Inh(t) = Σ_{n=1}^{N} Σ_{θ=1}^{Θ} a_{n,θ}(t)    (3)

where Inh(t) is the output of the inhibitory neuron. This output feeds back inhibitively onto the visual pathway carrying the image to the disc, blocking the pathway and preventing further transmission of the image. In this way the inhibitory neuron ensures that as long as any activity remains on the disc (i.e. while the RPN is processing the image) no new image will be projected onto it. In hardware this feedback loop can be realised via an asynchronous sample and hold.
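Equations (1)–(3) can be combined into a toy disc simulation. The sketch below uses the illustrative disc dimensions from the figures (N = 10, Θ = 30) and an arbitrary toy "image"; it demonstrates that rotating the input by a whole number of arms leaves the temporal pattern unchanged:

```python
# Toy RPN disc: THETA arms of N unit-delay relays. An input assigns initial
# activity a_{n,theta}(0); each step the activity shifts one neuron outward
# (Eq. 1). TPsum(t) sums edge-neuron activity (Eq. 2) and Inh(0) sums all
# activity on the disc (Eq. 3).

N, THETA = 10, 30

def temporal_pattern(image):
    """image[theta][n] -> list of TPsum(t) for t = 0..N-1."""
    arms = [list(image[theta]) for theta in range(THETA)]
    tp = []
    for _ in range(N):
        tp.append(sum(arm[-1] for arm in arms))      # Eq. (2): edge activity
        arms = [[0] + arm[:-1] for arm in arms]      # Eq. (1): ripple outward
    return tp

def inhibition(image):
    return sum(sum(arm) for arm in image)            # Eq. (3): net activity

image = [[0] * N for _ in range(THETA)]
image[0][2] = image[7][5] = image[19][5] = 1         # arbitrary toy "image"
rotated = image[-4:] + image[:-4]                    # rotate by 4 arms

print(temporal_pattern(image) == temporal_pattern(rotated))   # True
print(inhibition(image))                                      # 3
```

Because the summing neuron discards arm identity, any rotation that permutes whole arms produces exactly the same TP, which is the rotational invariance argued from the disc's geometry.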

It can be observed that the information carried by the inhibitory neuron can also be used to normalize the system's TP output.

Need for Normalization Depends on PCN Robustness
In this paper we do not make assumptions about the functional complexity of the PCN, and we propose a complete system compatible with the simplest PCN realization; however, we also suggest useful PCN functionalities that would enhance the RPN-PCN system. One example is the ability of the PCN to recognize spatio-temporal patterns received at different speeds, which we here refer to as being robust. For such a PCN no normalization may be required, as its internal parameters could be made to vary automatically in response to incoming signals, ensuring robust recognition of rescaled signals (Kohonen, 1982), (Paugam-Moisy, 2008), (Gutig, 2009), (Carandini, 2012), (Tapson, 2013a). In order to compensate for a PCN lacking this capability, we include the following network augmentation; we include this block for simplicity, in order to interface with non-robust downstream PCNs.

The timing and level of initial activation of the inhibitory neuron can be made to dynamically affect the magnitude and propagation speed of the output signal of the summing neuron, as shown in Figure 7. In biological networks this process, called shunting inhibition (Koch, 1983), (Volman, 2010), is a fundamental element of neuronal computation and results in normalization both of a signal's magnitude and of its temporal scale (i.e. duration). Shunting inhibition involves a control signal (for example the net activity of a group of neurons) affecting the time constants of the dynamic neuronal processes through which a signal travels, effectively speeding up or slowing down the signal while also altering its amplitude (Wills, 2004).

Figure 7. Normalization of amplitude and time of the TP for scaled images for a non-robust PCN (N = 10, Θ = 30). All signals are accurate.

Figure 7 outlines how the normalization process affects the output of the RPN. The output from the summing neuron for both the large (a) and small (b) images, in this case 'R's of different sizes, is given in (i). This output then passes through a low-pass filter (LPF), which converts (i) into the continuous signal given in (ii). Note that this LPF operation is only needed because of the assumption of an ideal system: whether in the context of biology, neurocomputational models or hardware, noise in combination with high neuron numbers, mismatched delays along arms, imperfect neuron placement and device non-idealities (internal parasitic capacitance, etc.) would similarly result in a continuous signal exiting the summing neuron. This utilization of non-idealities in the aid of processing is a common feature of biological systems (McDonnell, 2011). In a biological context the summing neuron would simply be represented by a population of neurons, all with the same input, rather than a single neuron, and the output would take the form of a modulated local field potential, and would thus already be continuous.

Immediately after the projection of the image onto the disc, the total activation of all neurons is summed at the inhibitory neuron (iv). The value of this signal at its initial rise can be used to normalize the magnitude of the output, whereas the interval between this inhibitory pulse (v) and the activation of the summing neuron indicates the required degree of warping in time. The (now continuous) TP is normalized both in time and in magnitude, resulting in the output signal shown in (iii). The functional diagram of the normalization block, which details the operations required to normalize the TP, is shown in Figure 8.

Figure 8. Computational description of the compartmentalized normalization block (N = 10).

The output of the summing neuron (the red TPin in Figure 7) is down-sampled and normalized using L, the time interval between the image initially hitting the disc and the first activity of the summing neuron, such that:

Inh is the output of the Inh neuron. N is the number of neurons per arm. M is the down sampling factor. The PCN uses synaptic weight and delay adjustments to learn and recognize the commonest encountered TPs.!"!"# ! !"#$%&'()*!!"!" !! !! !"! !! ! !!! (4) ! ! !!"# ! !!"! where. To prevent this loss of information instead of using a single disc. its corresponding TP is recognizable by a non-robust PCN. the input image can be projected onto multiple discs operating in parallel. 2005). L is the time interval between the first activity of the inhibitory and summing neurons tTP is the time of initial activation of the summing neuron. normalized using the total activation signal and then input to a non-robust PCN. tinh is the time of activation of the inhibitory neuron. Examples of such heterogeneities include introduction of cross talk or coupling between the discs’ arms. Taking the square root of the inhibitory signal compensates for the collapse of the disc’s area onto a one dimensional TP. (Dahlem. However in the RPN we propose using hardware implemented radial Gabor . multiple feature maps can be created so that the incident image can be processed into an array of multiple. This TP along with the inhibitory total activation signal is then input directly to a robust PCN. Multiple Parallel Heterogeneous Discs Result in Higher Specificity A drawback of the collapse of a feature rich 2D image into a temporal pattern is the significant loss of information. 2006). 2007). densities and pre-filtering. independent TP’s the combination of which are unique for every object. which forms the TP. By varying disc connectivities. (Choi. TPout is the output of the normalization block. use of discs with different neuron densities and use of hardware implemented Gabor filters (analogous to orientation sensitive hypercolumns in the visual cortex (Bressloff. Thus the completed system delivers recognition using a simple evolvable network. (Shi. 
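Returning to the normalization of Eq. (4), it can be sketched as follows. The linear-interpolation resampler, the fixed output length M and the toy TPs are illustrative assumptions; the square-root magnitude scaling follows the text:

```python
# Sketch of Eq. (4): warp the TP to a fixed length M using the measured
# interval L (here simply len(tp)) and divide by sqrt(Inh), so scaled
# versions of the same image map onto approximately the same normalized TP.
import math

def normalize_tp(tp, inh, M=7):
    """Resample tp to M samples (linear interpolation), scale by 1/sqrt(inh)."""
    L = len(tp)
    out = []
    for k in range(M):
        x = k * (L - 1) / (M - 1)          # position in the original TP
        i = min(int(x), L - 2)
        frac = x - i
        v = tp[i] * (1 - frac) + tp[i + 1] * frac
        out.append(v / math.sqrt(inh))
    return out

# Toy TP of a large image, and of the same image at half scale:
# half the duration, half the amplitude, one quarter of the total activation.
big   = [0, 2, 4, 8, 4, 2, 0, 0, 2, 4, 2, 0, 0]
small = [0, 2, 2, 0, 1, 1, 0]
a = normalize_tp(big, inh=16)
b = normalize_tp(small, inh=4)
print(max(abs(x - y) for x, y in zip(a, b)) < 1e-9)   # True
```

After normalization the two TPs coincide, which is what allows a non-robust PCN to recognize rescaled images.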
The result of the overall system is that an image projected onto the disc in the form of spikes radiates outwards from the disc centre. Finally, all the incident spikes reach the edge of the disc, and their sum is translated into the output activation of the summing neuron, which forms the TP. The square root of the total initial activation, obtained via the inhibitory neuron, is used to normalize the TP's magnitude regardless of the size or relative intensity of the incident image. This TP, along with the inhibitory total-activation signal, is then either input directly to a robust PCN, or low-pass filtered, normalized using the total-activation signal and input to a non-robust PCN; as a result, the corresponding TP is recognizable even by a non-robust PCN. Thus the completed system delivers recognition using a simple, evolvable network, and the complete RPN-PCN system delivers scale and rotation invariant recognition.

To function correctly, the RPN and all pre-processing systems working with it must have the rotation invariance property. If standard Gabor filters operating on Cartesian coordinates were used, the resulting feature maps would be sensitive to rotation. A simple solution to this problem is to first transform the image into polar coordinates and then perform Cartesian Gabor filtering.

However, in the RPN we propose instead the use of hardware-implemented radial Gabor filters, which group features based on their orientation relative to the disc centre, as shown in Figure 9. This approach has the benefit of merging the two steps into one, which delivers possible speed advantages, avoids a biologically implausible transform and leaves open the option of extending the RPN in a distributed manner, where information about a dynamic centre of attention can be used locally to generate rotationally invariant feature maps.

Figure 9. Feature extraction via Cartesian Gabor filters, g(θ), in the top two blocks, which extract edges relative to Cartesian coordinates, and via radial Gabor filters, G(θ), in the bottom two blocks, where edges of similar orientation are extracted relative to radial coordinates.

As an illustration of the fan-out feature extraction architecture, Figure 10 shows the separation of an incident image via parallel radial Gabor filters to create a higher-dimensional spatio-temporal pattern for the PCN. As examples, radial Gabor filters with 0°, 45° and 90° orientation relative to the centre are shown. Also illustrated are the outputs of discs with N (full), N/2 and N/4 neurons on each arm, demonstrating the relative temporal order of the multi-resolution spatio-temporal patterns delivered to the PCN.

Another useful multi-disc arrangement is the use of discs with different neuron densities along their arms, which produce higher-speed TPs that reach the PCN more rapidly. These parallel high-speed TPs not only provide early information for signal normalization but can also be used to narrow the range of possible objects the image could represent. Recalling that patterns are represented in the PCN as signal propagation pathways, signals from the sparsely populated discs can readily be used to deactivate the vast majority of PCN pathways not matching the early, small-scale, low-resolution TP, thus saving most of the energy required to check a high-resolution TP against every known pattern. This ensures a highly energy-efficient system which rapidly narrows the number of possible object candidates with successively greater certainty, enabling greater selectivity.
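The grouping performed by the radial filters can be sketched without the full Gabor machinery: classify each edge by its orientation relative to the radius through its location. The edge list, channel set and binning below are illustrative assumptions:

```python
# Sketch of radial orientation grouping: an edge at position (x, y) with local
# orientation phi is assigned to the channel nearest to (phi - radial angle),
# so the channel assignment is unchanged when the scene rotates about the centre.
import math

def radial_channel(x, y, phi_deg, channels=(0, 45, 90, 135)):
    radial = math.degrees(math.atan2(y, x))              # angle of the radius
    rel = (phi_deg - radial) % 180                       # orientation vs. radius
    return min(channels, key=lambda c: min(abs(rel - c), 180 - abs(rel - c)))

# A tangential edge (at 90 deg to its radius) stays in the 90-deg channel
# no matter how the whole image is rotated about the disc centre.
print(radial_channel(1.0, 0.0, 90))    # 90
print(radial_channel(0.0, 1.0, 180))   # 90  (the same edge, rotated by 90 deg)
```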

Figure 10. RPN producing a spatio-temporal pattern using parallel radial Gabor filters and discs of varying neuron density.

After passing through the parallel Gabor filters, each with a different orientation, the incident image in Figure 10 is separated into independent feature sets matching the preferred direction of the corresponding filter. The resultant separated images are then each processed by multiple parallel discs, each with a different neuron density (dj), resulting in a corresponding number of TPs with different resolutions. Thus, for j discs operating on i Gabor filters, i × j TPs are produced. However, as shown in the right half of Figure 10, the low-resolution TPs (those resulting from the discs with N/4 and N/2 neurons per arm) are produced and available to the PCN very rapidly, far sooner than the full-resolution TP; these early TPs could correspond to more general categories of objects. A PCN with complex features such as lateral and recurrent inhibitory connections, having recognized these earlier low-resolution TPs, can use this information to "switch off" the recognition paths obviously not corresponding to the received pattern, rapidly and efficiently narrowing the PCN's propagation pathways to the target object. The simplicity of such a parallelized, multi-speed scheme, the biological evidence for multi-scale visual receptive fields (Itti, 1998), (Riesenhuber, 1999), the presence of multi-speed pathways in the visual cortex (Loxley, 2011) and the potential impact on energy consumption, the primary limiting factor for all biological systems, argue in favour of further investigation of such a multi-scale, multi-speed system.

RESULTS
To better illustrate the pertinent characteristics of the RPN, we focus only on the simple one-disc case without the added multi-disc extensions. Although these extensions deliver significant improvement in the overall system's performance, conceptually they are repetitions of the simple case and merely make the PCN more effective by delivering more information in parallel.

To measure RPN performance as a function of image rotation, scale and shift, a mixed set of 300 different 200-by-200-pixel test images consisting of letters, words, numbers, shapes, faces and fish, in approximately equal numbers, was used. All images were high-pass filtered using a difference-of-Gaussians kernel and processed by an RPN disc with 200 spiral arms, each with 200 neurons. The similarity metrics of Spearman's ρ and cosine similarity were measured for each image across a range of rotation, translation and scale transforms with respect to the original image, with the average values shown in Figure 12; variance as a function of rotation is shown in its left panel.

RPN's Geometry Enables Scale and Rotation Invariance
It may be apparent by inspection that the RPN is invariant to rotation due to the radial symmetry of the disc. The left panel in Figure 11 shows the output TPs resulting from a 200×200 pixel image and its rotated equivalent. Measures of similarity between the TPs are given in the form of the cosine similarity (cos θ) and Spearman's rank correlation coefficient (Spear-ρ); both measures show high degrees of similarity between the images. The TP similarity is high for the rotation transformation, where the interlaced spiral distribution of the 200 disc arms results in a high level of similarity, and the pattern shown from 0 to π/100 radians repeats as expected due to the disc's symmetry.

Figure 11. Temporal patterns (TPs) from the RPN illustrating rotational invariance (left) and scale invariance (right).

A less obvious result is the scale invariance shown in the right panel of Figure 11. Unlike rotational information, scale information is preserved by the RPN in the form of the time interval between the initial projection of the image onto the disc and the activation of the summing neuron. Using this information, the additional TP normalization step discussed above can be performed.
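The two similarity measures used throughout the results can be sketched as follows; the toy vectors are illustrative, whereas in the experiments they are applied to the 200-sample TPs:

```python
# Cosine similarity and Spearman's rank correlation between two TPs.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def spearman(u, v):
    def ranks(w):
        order = sorted(range(len(w)), key=lambda i: w[i])
        r = [0.0] * len(w)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ru, rv = ranks(u), ranks(v)
    n = len(u)
    d2 = sum((a - b) ** 2 for a, b in zip(ru, rv))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))    # valid when there are no ties

tp1 = [0, 1, 3, 7, 4, 2]
tp2 = [0, 2, 6, 14, 8, 4]    # tp1 rescaled in amplitude only
print(round(cosine(tp1, tp2), 6))    # 1.0
print(spearman(tp1, tp2))            # 1.0
```

Both measures ignore overall amplitude, which is why they suit comparisons between TPs of differently sized or differently bright images.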

Figure 12 shows variance as a function of rotation (left panel); the central panel shows variance with respect to scaling, and the right-most panel shows variance as a function of shift, or translational transform. As would be expected for a global polar transform, the RPN is highly sensitive to non-centred images: a 10% shift of a 200×200 image results in a drop of .17 and .25 on the cosine and Spearman similarity metrics respectively. Nonetheless the system is robust to rescaling down to image sizes where the resampling operation makes recognition challenging even for the human eye. Here the resampling operation on the small 200×200 image was the dominant source of variance, a source which would not be present in a real-world context.

Figure 12. Variance as a function of rotation (left), scale (centre) and shift (right).

RPN is Fast

The RPN has been designed for speed. The TP from a disc with N neurons on each arm would be recognized in:

Trecognize = Tproject + N·Tripple + N·TPCN    (5)

where Trecognize is the total time needed for recognition; Tproject is the time needed for the image from the retina/sensor to be projected onto the RPN disc (in the multi-disc case this term would consist of the time required to generate the most time-consuming feature maps, e.g. hardware-implemented Gabor filters, resulting in Tproject ≈ 240 ns (Chicca, 2007)); Tripple is the time needed for the activation pattern to propagate one step out from the original activation point on the disc along its corresponding radial arm (assuming implementation via a digital relay, Tripple is on the order of nanoseconds or even smaller); and TPCN is the time needed by the PCN to process one weighted input spike, which can loosely be interpreted as the temporal resolution of the PCN. Given TPCN and the length of a temporal pattern, the response time of the PCN can be estimated. With TPCN on the order of 10 nanoseconds in current first-generation hardware implementations of the PCN (Wang, 2013), and choosing a high N (say N = 500), we obtain an approximate Trecognize on the order of 5 microseconds. This holds even in the worst case, where the only distinguishing feature of two otherwise identical objects is at the object centre (since the centre of the image takes the longest time to propagate out to the summing neuron), such as the examples in Figure 13, and assuming a linear relationship between the length of the input signal and time to recognition.
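The timing estimates of equations (5) and (6) can be checked with a back-of-envelope calculation. The constants below are the values quoted in the text (Tproject ≈ 240 ns for hardware Gabor filters, TPCN ≈ 10 ns, N = 500); the Tripple value of 1 ns is an assumption consistent with "on the order of nanoseconds" for a digital relay.

```python
T_PROJECT = 240e-9   # s, projection onto the disc (Chicca, 2007)
T_RIPPLE = 1e-9      # s, one propagation step along an arm (assumed value)
T_PCN = 10e-9        # s, PCN processing of one weighted spike (Wang, 2013)
N = 500              # neurons per radial arm

def t_recognize_worst(n=N):
    """Equation (5): the full TP must ripple out before recognition."""
    return T_PROJECT + n * T_RIPPLE + n * T_PCN

def t_recognize_overlap(n=N):
    """Equation (6): the PCN consumes the TP as it is generated,
    eliminating the Tripple term through temporal overlap."""
    return T_PROJECT + n * T_PCN

print(f"worst case : {t_recognize_worst() * 1e6:.2f} us")   # 5.74 us
print(f"overlapped : {t_recognize_overlap() * 1e6:.2f} us") # 5.24 us
```

Both figures come out on the order of 5 microseconds, as stated in the text; the PCN term N·TPCN dominates, so Trecognize scales essentially linearly with the TP length.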

As mentioned previously, the PCN often needs only some initial section of a TP to reach its recognition threshold for that pattern, since in most cases no other learnt pattern apart from the target object matches the initial TP of the object; in the case where multiple, independently structured discs are operating in parallel, producing multiple feature maps to the PCN in the form of spatio-temporal patterns, the fraction is further reduced. Since most recognizable objects do not share the properties of the pathological examples in Figure 13, the system can afford to trade the obtained high confidence level for increased speed, rapidly classifying a TP and moving on to the next object once a threshold of certainty is reached. This results in a Trecognize that is considerably shorter than the worst case. In addition, the ability of the PCN to recognize the TP as it is being generated by the RPN effectively eliminates the Tripple term due to the temporal overlap, which results in:

Trecognize = Tproject + N·TPCN    (6)

Signal processing programs running on sequential von Neumann machines require computation times on the order of several milliseconds just to convert Cartesian images into log-polar images, while consuming significant computational resources (Chessa, 2011). Hardware-implemented log-polar solutions provide significantly higher speed than mapping techniques (Traver, 2010); however, unlike the RPN's rippling operation, which delivers TPs usable by a distributed memory system like the PCN, the log-polar foveated systems operate essentially as simple sensor grids and must interface with conventional sequential processors, which would presumably operate at much slower speeds, introducing bottlenecks that distributed memory systems avoid. Other hardware implementations can partially bypass this bottleneck by using a processor-per-pixel architecture or convolution networks, resulting in very high speeds (Dudek, 2005), (Perez-Carrasco, 2013); however, these implementations lack complete view-invariance.

RPN Connects the Dots

The two dashed squares in Figure 14 (bottom left and right) are pixel-for-pixel equivalent matches to the square, yet the heterogeneous TP (and critically its early section) output by the RPN matches our intuitions. In the case of the RPN, the conversion process results in the outer features being the first to reach the PCN. This allows rapid recognition of the salient exterior of objects, with anomalies and inconsistencies in the internal regions of the objects leaving only an after-taste of something amiss. If the initial section of the TP does not match the pattern expected by the PCN, automatic recognition ceases, and higher, more complex, deliberate mechanisms must be called upon to resolve the anomaly; any unanticipated anomalies at the initial section of the TP could bring the entire process to a halt. Distinguishing between a juvenile Spotted Hampala (Hampala macrolepidota) and an adult plain Hampala would represent a worst case for the RPN.
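The early-recognition behaviour described above can be illustrated with a simple prefix-matching sketch. This is not the PCN itself (which operates on delayed spike coincidences); it only shows how candidate templates can be inhibited as soon as their initial TP section diverges, so that recognition usually fires well before the full TP has arrived. The template values and tolerance are illustrative.

```python
import numpy as np

def early_recognize(tp, templates, tol=0.1):
    """Match an incoming TP sample-by-sample against stored template TPs.
    A template is dropped as soon as its prefix diverges; recognition
    fires when a single candidate remains."""
    candidates = set(templates)
    for i, v in enumerate(tp):
        candidates = {name for name in candidates
                      if abs(templates[name][i] - v) <= tol}
        if len(candidates) == 1:
            return candidates.pop(), i + 1   # label, samples consumed
    return None, len(tp)

templates = {
    "square":  np.array([0.0, 0.2, 0.9, 0.4, 0.1, 0.0]),
    "octagon": np.array([0.0, 0.2, 0.6, 0.6, 0.2, 0.0]),
}
label, used = early_recognize(templates["square"], templates)
print(label, used)   # recognized after 3 of 6 samples
```

With more templates stored, the initial fraction needed grows only where early TP sections overlap; pathological objects whose only distinguishing feature is at the centre force the match to run to the end of the TP.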

DISCUSSION

Hardware Implementability and Biological Plausibility

As the RPN uses a global image transform to achieve view invariance, it is at best an incomplete element in a possible model of biology. In the case of human perception, why exterior features of an object may have greater weight or significance in recognition, and produce similar results to the proposed RPN-PCN in these limited cases, could warrant further investigation. In the context of artificial systems, the use of wave dynamics for computation and recognition has only recently begun (Izhikevich, 2009), (Fernando, 2003), (Maass, 2002), (Adamatzky, 2002), (Adamatzky, 2005), (Huang, 2004), (Wu, 2008), highlighting the utility of approaches such as the use of propagating activation waves, the geometric properties of simply connected networks, and the conversion of imagery information to temporal information in modelling biology. In the context of artificial systems where limited connectivity is a major constraint, the RPN's minimally connected network, together with a high-speed upstream salience detector and downstream PCN, can potentially improve end-to-end processing time significantly.

Figure 13. Pathological worst-case objects for the RPN.

Figure 14. The RPN illustrates the intuitive relationships between object temporal patterns. Here a square with corners intact is recognized as a square (left) while a square with corners removed is recognized as an octagon (right).

Advantages of Spiral Packing in Hardware and Biology

In the RPN model we present here, the connectivity of neurons that radiate from the centre of the disc comes in two flavours: spokes and spirals (see Figure 15). The spoke connectivity is obviously easier to understand; spiralling the arms, however, allows for more uniform neuron placement. In our implementation we used a simple adaptive algorithm that incrementally varied the radial twist of the spiral arms as a function of the disc's local neuron density. While this algorithm is not the optimal solution, it was sufficient for the purposes of the RPN. Such spiral structural symmetry, as well as the spreading of wave-like activation, has been observed in the visual pathways of a range of animals (see Figure 16) (Dahlem, 2008), suggesting possible utility in visual processing.
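The spiral-arm layout can be sketched as follows. Unlike the adaptive algorithm described in the text, which varies the radial twist as a function of local neuron density, this simplified version uses a fixed twist per propagation step; the function names and parameter values are illustrative, not from the original implementation.

```python
import math

def spiral_arm_cells(arms=16, steps=20, twist=0.15, radius=20):
    """Grid cells occupied by `arms` radial arms of `steps` neurons each.
    twist=0 gives straight spokes; twist>0 rotates each arm progressively
    with radius, spreading neurons more uniformly over the sensor area."""
    cells = set()
    for a in range(arms):
        theta0 = 2 * math.pi * a / arms
        for s in range(1, steps + 1):
            r = radius * s / steps
            theta = theta0 + twist * s          # twist grows with radius
            cells.add((round(r * math.cos(theta)),
                       round(r * math.sin(theta))))
    return cells

spokes = spiral_arm_cells(twist=0.0)
spirals = spiral_arm_cells(twist=0.15)
print(len(spokes), len(spirals))
```

Note that both layouts keep the same number of neurons per arm and the same per-step delay, so the equal-delay property discussed below is unaffected by the twist; only the spatial coverage of the disc changes.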

Uniform vs Log-Polar Distribution

Log-polar solutions require precise neuron connectivity and a radially uniform but continuously increasing neuron density from the periphery to the centre of the visual field. This makes these systems biologically implausible and difficult to implement in hardware (Cavanagh, 1978), (Reitboeck, 1984). Additionally, log-polar solutions are particularly sensitive to centring, a problem not evidenced in biology. Instead, the RPN is constructed simply, with an approximately uniform neuron distribution throughout the visual field. Recognition with this uniform distribution allows extension of the RPN to more challenging tasks, and greater biological plausibility not achievable with log-polar designs.

Figure 15. RPN connectivity: spokes (left) and spirals (right).

Simple Neuron Distribution and Network Evolvability

The RPN, with spiral or spoke arms, is a simple time-delay neural network. Spiral packing is superior to spoke packing, firstly because it is a more efficient use of available sensor space and secondly because it permits more uniform neuron placement. In the RPN, the delay along each of the arms (spiral or spoke), as a spike propagates from neuron-to-neuron along the arm, is equal. The simplicity of this network is illustrated in the following example: if the network received the image of a small spot centred on the disc, this would result in a single spike stimulating a neuron at the centre of the disc. The resultant spike from this single neuron would result in a propagating spike wave front, or ripple, reaching the summing neuron along all the spiral arms at the same moment.

Figure 16. a) Spiral and b) semi-circular propagating wave on the chicken retina. Images from (Yu, 2012) and (Dahlem, 2009).
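The centred-spot example can be sketched in a few lines. Each arm is modelled as a chain of neurons with one Tripple of delay per step; with equal per-step delays the centre spike reaches the summing neuron along every arm at the same moment, producing a single sharp impulse, while jittered delays smear the impulse, which is the error signal the delay-adaptation mechanism discussed in the text would minimise. Parameter values here are illustrative.

```python
import random

def arrival_times(n_arms=16, neurons_per_arm=50, jitter=0.0, seed=1):
    """Time for a centre spike to reach the summing neuron via each arm,
    as the sum of per-neuron step delays (nominally 1.0 Tripple each)."""
    rng = random.Random(seed)
    return [sum(1.0 + rng.uniform(-jitter, jitter)
                for _ in range(neurons_per_arm))
            for _ in range(n_arms)]

ideal = arrival_times()             # matched delays along every arm
noisy = arrival_times(jitter=0.2)   # unadapted, jittered delays
print(len(set(ideal)) == 1)         # True: a single sharp impulse
print(max(noisy) - min(noisy) > 0)  # True: the impulse is smeared
```

Whether the arms are spokes or spirals makes no difference here: only the number of neurons per arm and the per-step delay enter the arrival time.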

This property of matching delays between the arms may imply that the connectivity of the disc needs to be precisely ordered. This connectivity, however, could have evolved autonomously from a randomly connected phase to a self-organised phase, based on reinforcing the delay between a spike occurring at the central neuron and a spike generated at the summing neuron. The process might include the spontaneous, genetically programmed activation of the central region, followed by delay adaptation and pruning along and between the arms, such that the signal received by the summing neuron is a single sharp impulse. The initial feedback loop between the disc centre and the summing neuron would create the error signal required to adjust delays and weaken inter-arm connections until the error signal is minimised. The arbitrary selection of a neuronal region as the disc centre and summing node would enable unsupervised wiring and pruning of connections. Thus, unlike previous models that have attempted to show view-invariant recognition, the RPN is evolvable using the mechanisms of competition and reinforcement learning present in biological neurons (Barabási, 2004). An argument against the existence of log-polar filters in the brain is that the connectivity required is highly intricate, and no mechanism for its construction has been proposed (van Hemmen, 2005).

Simple Neuron Distribution and Hardware Implementation

Unlike the log-polar filters, the RPN does not require increased neuron density at the image centre, thus permitting a uniform density of sensors on the disc. This is a highly useful property for implementation in hardware, where rectangular grids and maximum element density are dictated by the various technologies. The structures on the initial sensor disc can be minimally complex: the sensors, being only tasked to produce a pulse upon excitation, have their output routed onto a propagation/buffer/flip-flop layer which itself need only cascade the spikes down its arms at every Tripple. Such simple structures permit approximately radial placement of elements despite the rectangular grid on which most integrated circuits are developed. The number of neurons on each arm can be set to increase with the radius from the centre, increasing the robustness without adding extra delays; in this context, the central regions are less significant and can often be left empty.

RPN is Scalable in Parallel

Multiple discs with different neuron distributions and connectivity schemes can be used in parallel to increase the resolution available to the PCN network. The parallelized structure allows different levels of recognition to occur simultaneously. Discs with sparse neuron populations can very rapidly (via very few propagation steps) provide a low-resolution outline of the TP, permitting the PCN to pre-inhibit obviously incongruent templates long before the first full-resolution TP arrives. In this way, discs with increasingly higher resolution would report their TPs, providing higher and higher confidence of increasingly refined classification, leading to high-speed hierarchical recognition. Furthermore, this increase in resolution does not require any restructuring of the system, allowing a smooth increase in performance as well as graceful degradation. Evidence of such separate parallel visual processing is particularly pronounced in invertebrates, where features such as colour, motion, and stimulus timing are processed through anatomically segregated parts of the brain (Paulk, 2008).

The RPN Approach can be Extended to Three Dimensions

All the features of the RPN work equally well in three dimensions. In this context the disc is replaced by a sphere, with the three-dimensional image being mapped into the sphere. Here the sphere does not necessarily refer to the physical shape of the network but to the conceptual structure of the connectivity, with a highly connected sphere centre and radiating connectivity out to an integrating layer of neurons on the sphere surface. The reconstruction of 3D images in artificial systems is a well-developed field (Faugeras, 1993), and the RPN can just as easily recognize reconstructed three-dimensional "images": given a 3D projection of an object within the sphere, rippling outwards and being integrated at the surface, skew-invariant recognition could also be realised. In contrast, the underlying mechanisms performing this 3D information representation task in humans are still an area of active research (Bülthoff, 1995), where the evidence points to complex interactions between multiple mechanisms.

Increasing evidence from neuroscience points to functional synchronicity being present in the visual cortex in the form of synchronized gamma waves, where one might hypothesise an RPN or other recognition system to exist, an effect especially pronounced with attention (Fries, 2009), (Gregoriou, 2009), (Buschman, 2009). The function of this synchronicity has been attributed to the unification of related elements in the visual field. This convergence from separate fields may be pointing to the usefulness of temporal synchrony for visual inputs in the context of recognition (Seth, 2004). Furthermore, the mechanisms proposed to explain such observed waves correspond to a more distributed analog of the RPN's inhibitory neuron (Martinez, 2009), namely the inhibitory lateral and feedback connections that clump related, but spatially distant, information into compact wavefronts separated by periods of inactivity; the final result presented to the downstream memory system is the convergence of temporally distant pieces of information by the partial slowing or arresting of the leading segments of the incoming signal (Chen, 2012).

RPN Operates on Framed Inputs

The RPN converts two- (or three-) dimensional data into a one-dimensional TP that can theoretically be learnt by a single neuron with dendritic delays corresponding to the pattern. In this way, the RPN is similar to a Liquid State Machine (LSM) in that in both, high-dimensional data can be collapsed into lower-dimensional data, allowing multiple uses by downstream networks (Maass, 2002). However, unlike an LSM, which requires a continuous computational "medium" but can operate on continuous input signals, the RPN operates well with a limited number of neurons and connections, but requires framed inputs. The frame-based operation of the RPN makes it more useful from a hardware implementation context, but appears to detract from its biological plausibility, prompting a search for a frameless solution. Yet despite attempts to eliminate the frame requirement, to date every proposed and implemented recognition system, including those with the express goal of performing frameless event-based visual processing, has had to introduce some variant of a frame-based approach when attempting recognition, although the approach tends to acquire different names along the way (Van Rullen, 2007), (Zelnik-Manor, 2001), (Lazar, 2011), (Perez-Carrasco, 2013). However, this failure may speak more to the inherent nature of the visual recognition problem, which is among the most difficult challenges in image recognition (Pinto, 2008), than to any lack of human ingenuity.

The Shortcomings of the RPN Motivate the Dynamic RPN

A significant drawback of the RPN system is the need for precise centring of a salient object by an unexplained salience detection system. This system not only needs to detect objects of interest based on cues such as colour or movement, but, more challengingly, it must shift the object onto the neuron disc. In the context of centralized processing, the problem of shifting an image by an arbitrary value is trivial; in the context of a self-similar distributed network with no central control, however, the task is challenging. A proposed solution is the use of dynamic routing systems (Olshausen, 1993), where a series of route-controlling units transport the input image to some hypothetical central recognition aperture in the brain. The switching speeds required to operate such control systems, however, are too high to be biologically achievable, yet we know that humans and other animals are capable of recognizing non-centred objects in the visual field. Even more importantly, the extraordinary high-speed performance of biological systems (the primary motivation for our approach) points to a more distributed system than this naïve centralized solution.

CONCLUSION

In this paper we have introduced the RPN-PCN system, a simple yet robust, biologically inspired, view-invariant vision recognition system that is hardware implementable and capable of rapidly performing simple vision recognition tasks. We demonstrated the very particular disabilities of the system, which appear to coincide with those of our own distributed recognition system. We described a few of the ways in which the RPN can be extended, as well as detailing its shortcomings. With these as motivation, we briefly outlined the characteristics of a more powerful general solution that meets these challenges and more, namely a high-speed, fully parallel, multiple-scale, multiple-object recognition system operating on stochastically structured networks, which we call the dynamic Ripple Pond Network (dRPN). In our follow-up paper we will present our solution to the salience detection black box and show that, through restricting the design to distributed, and thus biologically plausible, operations, a more powerful solution presents itself.

REFERENCES

Adamatzky, A., De Lacy Costello, B., & Ratcliffe, N.M. (2002). "Experimental reaction–diffusion preprocessor for shape recognition." Physics Letters A, 297(5–6), 344–352.
Adamatzky, A., De Lacy Costello, B., & Asai, T. (2005). Reaction-Diffusion Computers. Elsevier Science.
Afshar S., Kavehei O., van Schaik A., Tapson J., Skafidas S., Hamilton T.J. (2012). "Emergence of competitive control in a memristor-based neuromorphic circuit." Neural Networks (IJCNN), The 2012 International Joint Conference on, pp. 1-8.
Ahnelt P.K., Kolb H., Pflug R. (1987). "Identification of a subtype of cone photoreceptor, likely to be blue sensitive, in the human retina." J Comp Neurol. 255(1):18-34.
Arena, P., et al. (2012). "Learning expectation in insects: a recurrent spiking neural model for spatio-temporal representation." Neural Netw 32: 35-45.
Arthur, J. and K. Boahen (2004). "Recurrently connected silicon neurons with active dendrites for one-shot learning." Neural Networks, 2004 IEEE International Joint Conference on, Proceedings.
Avargues-Weber, A., et al. (2010). "Configural processing enables discrimination and categorization of face-like stimuli in honeybees." J Exp Biol 213(4): 593-601.
Barabási A.L., Oltvai Z.N. (2004). "Network biology: understanding the cell's functional organization." Nat Rev Genet. 5(2):101-13.
Bressloff P.C., Cowan J.D., Golubitsky M., Thomas P.J., Wiener M.C. (2002). "What geometric visual hallucinations tell us about the visual cortex." Neural Comput. 14(3):473-91.
Bülthoff H.H., Edelman S.Y., Tarr M.J. (1995). "How are three-dimensional objects represented in the brain?" Cereb Cortex. 5(3):247-60.
Buschman, T.J., and E.K. Miller (2009). "Serial, covert shifts of attention during visual search are reflected by the frontal eye fields and correlated with population oscillations." Neuron 63(3): 386-396.
Caley, M.J., & Schluter, D. (2003). "Predators favour mimicry in a tropical reef fish." Proceedings of the Royal Society of London, Series B: Biological Sciences 270(1516): 667–672.
Carandini M. and Heeger D.J. (2012). "Normalization as a canonical neural computation." Nat Rev Neurosci. 13: 51-62.
Cavanagh P. (1978). "Size and position invariance in the visual system." Perception 7(2): 167–177.

Faugeras O... Furber. IEEE Transactions on 52(6): 1049-1060. plasticity and cortical dynamics. (2009). & Sojakka. R. Gong B. & Changeux. 2801. Wu J. Gherardi. 2005 Apr 29. Proceedings. Solari F. Fries P. and Tatti F. Kim (Eds.. 70(2). Dehaene. Desimone R. F.” Proceedings of the Royal Society of London. 51(5) pp 1465-1469. Series B: Biological Sciences .360 (1456):709-31. “Experimental and Theoretical Approaches to Conscious Processing. 32:209-24.” PLoS One. Dill. S. M. R..B. et al. Dahlem MA. “High-frequency. (2012). Dahlem M. (2009). Schiff S. (2009). Caley. 667–672. “Developing large-scale field-programmable analog arrays. 751–753.. “Geodesic Flow Kernel for Unsupervised Domain Adaptation. Fang L. Chessa M.. D.” Neuron. (2012). 270 (1516 ). Sha F. Huang X..2073. 200–227.." Circuits and Systems (ISCAS).J. pp.. Sabatini S. (2004). Hall." IEEE Transactions on Computers 99 (PrePrints). Müller S. "Fast and robust learning by reinforcement signals: explorations in the insect brain. & Schluter. 324(5931):1207-10..” In Proceedings of the 8th international conference on Computer vision systems (ICVS'11). Vogelstein R. Hamilton T.J. “Migraine aura: retracting particle-like waves in weakly susceptible cortex." Anim Cogn 15(5): 745-762. “Neuronal gamma-band synchronization as a fundamental process in cortical computation..2263 Dahlem M. Gregoriou G. Draper. 22(1):45-82.. Vasconcelos N. (2012). (2005). Grossberg S. Springer Berlin Heidelberg. no. and T.” IEEE Transactions on Circuits and Systems I: Regular Papers 52(1) pp13-20. et al.A. Bruce A..” Science. & Sompolinsky. columns. et al. et al.J. M. Jensen A..A. pp.G. “A general-purpose processor-per-pixel analog SIMD vision chip. (1993). Ma H.. long-range coupling between prefrontal and visual cortex during sustained attention. 516—525. Biological Plausibility. 24(44):9897-902. 4(4):e5007. Farabet.. S. 115(2):319-24.. (2003). Gotts S.. and Implications for Neurophysiology and Psychophysics. (2009). P. pg." 
Neural Comput 21(8): 2123-2151. “Pattern Recognition in a Bucket. Nowotny (2009).. Fernando.J. Etienne-Cummings R. Shi Y. (2011). Banzhaf.” Neural Computation 21(1). Christaller. (2004). 18th International.. (2011).” Nature. "Neuromorphic implementation of orientation hypercolumns.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR)." Front Neurosci 6: 32. P. (2009).P..2002. (2005). “A quantitative comparison of speed and reliability for log-polar mapping techniques. “Toward real-time particle tracking using an event-based dynamic vision sensor..Y.” Annu Rev Neurosci. Yang Q.O. Heidelberg.. C. 2009. Crowley. (1993). Lichtsteiner P.. “Spiral waves in disinhibited mammalian neocortex. “Self-induced splitting of spiral-shaped spreading depression waves in chicken retina.” Philos Trans R Soc Lond B Biol Sci... and Monique Thonnat (Eds. Hadjikhani N. S. Dudek P.” IEEE Journal on Emerging and Selected Topics in Circuits and Systems.” Experiments in Fluids.” PLoS Biol. Gattass R. “Predators favour mimicry in a tropical reef fish. Ziegler.” Spat Vis. (2012). Y. 7(7). et al. Folowosele F. (2004).63 (Vol. H. 2011 IEEE International Symposium on . ISCAS 2009. e1000141. Huerta. doi:10. Drazen D. S. “Decision-Theoretic Saliency: Computational Principles. Gütig. W..C.. 74(6):351-61. et al. “A computational perspective on migraine aura.” Prog Neurobiol. “Time-Warp–Invariant Neuronal Processing...” The MIT Press. T. Laing C..” In W..” Cortical visual areas in monkeys: location.Shoushun. "Overview of the SpiNNaker System Architecture. et al.R. 365(6448).. J. 2004. “Visual pattern recognition in Drosophila involves retinotopic matching.P. (2011). (2011). Delbrück T. Gao D. (2005). (2003)..” Exp Brain Res.” Parallel and Distributed Processing Symposium. 41-50.J. IEEE International Symposium on. “Three-Dimensional Computer Vision. Zhou H. “A bio-inspired event-based size and position invariant human posture recognition algorithm.). Springer-Verlag. 
"Revisiting social recognition systems in invertebrates. Wolf. (2011). Tapson J. R. M.-P. T. 1(4). Troy W. Berlin.). “Towards a Cortical Prosthesis: Implementing A Spike-Based HMAX Model of Visual Object Recognition in Silico.. Hudson.. & J. T.1098/rspb. James L. ..865-868." Neural Netw 10(6): 993-1015.. Häfliger P.” Circuits and Systems. (1997). & Heisenberg. "SCAN: A Scalable Model of Attentional Selection." Circuits and Systems I: Regular Papers...” J Neurosci. “From stereogram to surface: how the brain sees the world in depth. Advances in Artificial Life SE . Choi. C. (2009). C.C. 588–597).. (1997). vol. topography.. (2009). Dittrich. T. Grauman K. "Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free SpikingDynamic-Pixel ConvNets for Visual Processing. et al."A neuromorphic cross-correlation chip.. pages 239-271... J.. Page(s): 2066 . connections. Chronicle E.

Orientation and Size Recognition of Normal and Ambiguous Faces by a Rotation and Size Spreading Associative Neural Network. (2011). M. G. Land M. 19(05). Izhikevich E. Takase K. D... Springer Berlin Heidelberg. & A.). et al. B. 507–516). & Torre. Jaeger H. Itti L.. M. M. McDonnell.M. vol. (2010). "Neuromorphic silicon neuron circuits. N...” Proceedings of the National Academy of Sciences . (2011)." Neural Comput 17(12): 2548-2570. Kohonen..J. et al. K.. “Shape. Serrano C. Sidorov. "Transformation Invariant On-Line Target Recognition. “Computation with spikes in a winner-take-all network. "Input-rate modulation of gamma oscillations is sensitive to network topology. “Polychronization: Computation with spikes. (2012).” Journal of Comparative Physiology A.” Neural Computation. Wang M. 20(11) p. A. (2012). (2011). A. “The echo state approach to analysing and training recurrent neural networks” GMD-Report 148. V.. L. B. D. et al." Neural Networks. (2001). (2007). (1993). J.A.21 (Vol.” In G. “DELTRON: Neuromorphic architectures for delay based learning. no. 43:59-69. G.. Arimura K.. Oster M. “Real-time computing without stable states: a new framework for neural computation based on perturbations. Indiveri. Olshausen. D.” In S. (2001). Loring. Ward (2011). T.... Löwe. 2007 (ICCV). (2006). and Liu S. (2007).M. Self-organized formation of topologically correct feature maps. A. Serre T. Poggio. Yin J. Pardo. Maass... A. Lazar. Iftekharuddin K. Hamilton T. (1998).. “A model of saliency-based visual attention for rapid scene analysis. Computation and Logic in the Real World SE . Nakamura K. Izhikevich.." Neural Networks.. 14(11):2531-60. Echauz. Basu A. F. M.” Neural Comput. Koch. “Stacks of Convolutional Restricted Boltzmann Machines for ShiftInvariant Feature Learning... Baxter R. & Kenyon. Hernández Aguirre. 304 – 307.. J.. “Nonlinear interactions in a dendritic tree: localization. Natschläger T. Springer Berlin Heidelberg. 
(2002).” IEEE Transactions on Pattern Analysis and Machine Intelligence... 245— 282.” Neurology . B. (1982).” Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN '02). D.” Neural Computation. Reyes García (Eds. Biological Cybernetics. “Stability and Topology in Reservoir Computing. 59 (6 ). IEEE Transactions on 22(3): 461-473. Linares-Barranco B.. (1999). “Ultra-fast detection of salient contours through horizontal connections in the primary visual cortex..1952-1966. & Hazan..Hussain S. pages 842 – 849... “Liquid Computing. Page(s): 2439-2444. "A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. Niebur. Loxley. 2012 IEEE Asia Pacific Conference on. G. E. (2009). “Computational modelling of visual attention." Animal Behaviour 84(2): 485-493. Ranjbar M. McDonnell. 21(9). 2(3):194-203. 33(6). pp. “Polychronous Wavefront Computations. H. Sorbi (Eds." Brain Res 1434: 162-177. (2002). "Modeling Activity-Dependent Plasticity in BCM Spiking Neural Networks With Application to Human Behavior Recognition. “Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate- . (2013).” EPL (Europhysics Letters). Norouzi M. (1983). J. F.” Neural Comput. M. “Gamma coherence and conscious perception. Koch C. "Oscillatory synchronization requires precise and balanced feedback inhibition in a model of the insect antennal lobe..12.).. Martinez. 8(3):531-43. Maass W..906-918. delays and short-term plasticity. timing. “A biologically inspired system for action recognition. Mori G.. Itti L. IEEE Transactions on . pp. (2012). “Space-variant nonorthogonal structure CMOS image sensor design. pp. P. Ray.. Meador. et al. T. Koch C.. Manevitz. “Motion and vision: why animals move their eyes. G. 847–854." The Journal of Neuroscience 13(11): 4700-4719.22. 1254 – 1259. (2009). Meng Y.. E. 2799–2802. IEEE Journal of. "Feature binding in zebrafish. 
Page(s): 2735-2742.” International Journal of Bifurcation and Chaos. Acha B. Douglas R. Zhao B. W. R. (2005). no.. 93(6). Jhuang H. C.F. Markram H.53 (Vol..6. vol... 245–256). (1996).22. Cooper. 185(4): p. and role in information processing. pp. Chen S. Neri. 64001. Levy W.. P. 18(2). & C. and E. P. Poggio T.. 80 (9 ). 341-352. & Hoppensteadt. Serrano-Gotarredona T.” IEEE 11th International Conference on Computer Vision.” Nat Rev Neurosci.. German National Research Institute for Computer Science. and L. (1998)." Neural Networks. (2009). “Energy efficient neural codes..A. "Video Time Encoding Machines.. "The benefits of noise in neural systems: bridging theory and experiment. IEEE Transactions on . Wolf L. (2002)." Nat Rev Neurosci 12(7): 415-426.B. C. p.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR). W.. T. Jin Y.-C.” Solid-State Circuits. Perez-Carrasco J. Pnevmatikakis (2011).. (2011). 6438. 1733–1739. pg." Frontiers in Neuroscience 5. Bettencourt. Advances in Soft Computing SE . & Vachtsevanos. 4497.” Circuits and Systems (APCCAS). L.

pp.. M. & Sejnowski.520-529. (2013). Patullo B. T.” IEEE Trans Pattern Anal Mach Intell.... L. 5M Synapse. Syst.. Serrano-Gotarredona R. “A review of log-polar imaging for visual perception in robotics. Thorpe S. Edelman... vol. Sountsov.J. (2007).” AI Memo 2005-036..” California Institute of Technology. online. “A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex. Afshar S. Van Rullen..” PLoS ONE 3(2): e1695. L..Coding and Coincidence Processing. (2008) “Why is Real-World Visual Object Recognition Hard?” PLoS Comput Biol 4(1): e27. et al. USA. (2010).” PLoS ONE 6(4): e18710. and Machine Vision." Neural Networks.. Stiefel K. Buskila Y. & Bengio.W. A. and Bernardino A.” Neural Networks. “I Know My Neighbour: Individual Recognition in Octopus vulgaris. [Epub ahead of print]. Auton. Knoblich U. "A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation. "A Multichip Neuromorphic System for Spike-Based Visual Information Processing. Van Rullen R. (2011).. Sivilotti.. Oster M. Analysis. Reitboeck H. “Synthesis of neural networks for spatio-temporal spike pattern recognition and processing. Levine.” Neural Comput. Expandable hardware for computing cortical feature maps. "The blinking spotlight of attention..” Seventh International Conference on Intelligent Sensors. Zheng Y. IEEE Transactions on .L. (2008). Hamilton T.” Robot. Macmillan D. “An FPGA implementation of a polychronous spiking neural network with delay adaptation.” PLoS Comput Biol. (1984). Hamilton T.2. (2008). K. B. (2006). van Schaik A. Vogelstein.1417-1438.J. & Krichmar.. (2013a). 12G Connects/s AER Hardware Sensory–Processing– Learning–Actuating System for High-Speed Visual Object Recognition and Tracking.” Submitted to Frontiers in Neuromorphic Engineering 2013. (2007). 97—102..J. (2013b)... "CAVIAR: A 45k Neuron.18." Neural Comput... Application to Feed Forward ConvNets.. R. et al.. 58. J. J.. 
Proceedings..” Biological Cybernetics 51.. 71(7–9). “Delay learning and polychronization for reservoir computing. ." Neural Networks. Riesenhuber. Lichtsteiner P. “Crayfish Recognize the Faces of Fight Opponents." The Journal of Neuroscience 28(25): 6319-6332. 13(6):1255-83. Motion. Gherardi F. H. “A model for size.. and Rotation. 1143–1158. Tapson J. 4 (April 2010). A. “Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Scale. vol. “Visual Binding Through Reentrant Connectivity and Dynamic Synchronization in a Brain-based Device. 2006. R. Paulk. e1000973. Tapson J. (2011)... Pinto N. ISCAS 2006. (2008). Tricarico E. van Schaik A.M. pp... Cox D. 2013 Apr 10..... van Schaik A.” Frontiers in Neuroscience 7(14). Sonka M. with application to field-programmable networks.. 2006 IEEE International Symposium on. Van der Velden J. no... Paugam-Moisy.J... Seth. S. V. Tapson J. Sejnowski T. “Wiring considerations in analog VLSI systems... Cohen G.. P. Fiorito G. (2007). "Hierarchical models of object recognition in cortex. “23 Problems in Systems Neuroscience. Wang R.. Tapson J. Sensor Networks and Information Processing (ISSNIP).” CL Engineering 3rd ed. et al. J. M. (2011). Altmann J. DiCarlo J. et al. Hamilton T. G. R. (2009). and T. “Processing. and van Schaik A.. Wang R.” Cerebral Cortex ." Nat Neurosci 2(11): 1019-1025.9.. (2010). BoyleImage R. (2012). Volman. Poggio (1999)... "The Processing of Color. et al. Cadieu C. CBCL Memo 259.. 378-398. Borrelli L..D.and rotation-invariant pattern processing in the visual system.” Oxford University Press. E. pg. IEEE Transactions on.. Martinez. (2005). 1185– 1199.L. “An analogue VLSI implementation of polychronous spiking neural networks. 14 (11 ). “Learning the pseudoinverse solution to network weights.J. J. H. Ranhel. (2005)." Front Comput Neurosci 5: 53.. IEEE Transactions on . no. Rasche C. Wang R.. A. Shi. C.." Proceedings of the National Academy of Sciences 104(49): 19204-19209. 
Cohen G. Stiefel K. (2004).20. and Stimulus Timing Are Anatomically Segregated in the Bumblebee Brain. J. "Neuromorphic Excitable Maps for Visual Processing. J. Circuits and Systems. et al. Traver V.J.. Poggio T. M.” Neurocomputing. Kouh M. van Hemmen J.” Neural Networks and Learning Systems. Kreiman G.. McKinstry. “Shunting Inhibition Controls the Gain Modulation Mediated by Asynchronous Neurotransmitter Release in Early Development. (1991).J. (2007).. 19(9): 2281-2300. Hlavac V.. Serre T.. (2001). 6(11). “Neural Assembly Computing.

and M.. . G. “Reentrant spiral waves of spreading depression cause macular degeneration in hypoglycemic chicken retina. “Computation with Spiking Neurons. Ph. Irani (2001). Mattiace L... 2012 IEEE Computer Society Conference on.-J. “Propagating waves of activity in the neocortex: what they are. 14(5):487-502. (2010). Rozental R. what they do. Dissertation. Benabou K." Physiological Reviews 90(3): 1195-1268. (2004). CVPR 2001. Yu Y. Wiesmann. Chuan Zhang (2008). L.. X.Wang. (2012). (2012)... Santos L. Costa M.” Computer Vision and Pattern Recognition Workshops (CVPRW).D. “Event-driven embodied system for feature extraction and object recognition in robotic applications.. Ferreira L.A. 2001. Abrahams J. Wu JY. et al. Zelnik-Manor.H..109(7):2585-9.. “Event-based analysis of video.M.” University of Cambridge. Xiaoying Huang.” Neuroscientist.V. Proceedings of the 2001 IEEE Computer Society Conference on.” Proc Natl Acad Sci U S A.C..” Computer Vision and Pattern Recognition..L. "Neurophysiological and Computational Principles of Cortical Rhythms in Cognition. Bennett M. Wills S. Kim A.
