
PROF. TOBY BERGER'S RESEARCH INTERESTS


Presentation in ECE 696 Seminar Series September 1, 2006

TWO MAJOR INTERESTS:


1. INFORMATION THEORY 2. NEUROINFORMATION THEORY

SOME SECONDARY INTERESTS:


1. COMMUNICATION NETWORKS 2. VIDEO CONFERENCING 3. RADAR AND SURVEILLANCE SYSTEMS

Figure 1: DATA SOURCE → ENCODER → CHANNEL → DECODER → USER

[Figure: C(E), the CHANNEL CAPACITY-EXPENSE FUNCTION]

[Figure: R(D), the RATE-DISTORTION FUNCTION, with its inverse D(C); axis labels R, H, C, Dmax]

MULTIPLE DESCRIPTIONS; SUCCESSIVE REFINEMENT

[Figure: DATA SOURCE → ENCODER 1 and ENCODER 2 → CHANNEL → DECODER → USER]

2-SOURCE, 2-USER CODING

[Figure: SOURCE 1 {Xk} → ENCODER 1 and SOURCE 2 {Yk} → ENCODER 2 → CHANNEL → DECODER → USER 1 {X̂k} and USER 2 {Ŷk}]

2-SOURCE, 2-USER CODING HAS BECOME HIGHLY ACTIVE AGAIN AFTER A RELATIVE HIATUS OF ALMOST THIRTY YEARS!!
A WHOLE SESSION AT ISIT IN SEATTLE LAST MONTH WAS DEVOTED TO IT. IT'S CURRENTLY BEING INVESTIGATED AT LEAST IN JAPAN, EUROPE, THE USA, AND SOUTH AMERICA. GRAD STUDENT JIONG WANG AND I HAVE BEEN WORKING ON IT INTENSELY SINCE MAY.

NEUROINFORMATION THEORY

ARTIST'S CONCEPTION OF A NEURON

Glia (Astrocytes & Oligodendrocytes) Feeding and Caring for a Neuron (Artist's Conception)

Dendritic tree of a retinal ganglion cell in postnatal cat (390 µm); from Maslim et al., JCN 254:382, 1986.

NEURON CARDINALITY
There are approximately 10^11 neurons in the human brain. Most of them are formed between the ages of −1/2 and +1. Each neuron forms synapses with between 10 and 10^5 others, resulting in a total of circa 10^15 synapses. From age −1/2 to age +2, the number of synapses increases at a net rate of a million per second, day and night. (Many are abandoned, too.) It is believed that neuron and synapse formation rates drop rapidly after ages 1 and 2, respectively, but recent results show that they do not drop to zero.

NEURON CONNECTIVITY
Each neuron receives spikes from as many as 10^5 other neurons whose axons form synapses with it, either on its dendrites or on its cell body. The average size of this so-called afferent cohort is 10^4, a number typical of most cortical neurons. Likewise, each neuron generates spikes (action potentials) that propagate along its axon to the neurons in its efferent cohort. Said cohort's cardinality also is circa 10^4 for a primate cortical neuron.

NEURAL SPIKE TRAIN BIT RATE


Neural spike trains are digital signals in the sense that, in each 2.5 ms bin, each neuron either generates a spike or does not.* This seemingly imposes a ceiling of 400 × 0.694 = 277 bps on the rate at which a neuron can send information. (The factor 0.694 ≈ log2 of the golden ratio is the capacity, in bits per bin, of a binary sequence in which refractoriness forbids spikes in adjacent bins.) To achieve this bound, a neuron would need to fire an average of nearly 150 spikes/s. However, the long-term average firing rates of most neurons are ten or more times smaller than this, because most of the time a neuron is more concerned with maximizing bits/joule than bits/s.
*The detailed amplitude variations of individual action potentials are today almost universally considered to be noise that conveys no information.
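The 277 bps ceiling and the nearly-150 spikes/s figure are mutually consistent under the reading that refractoriness forbids two spikes in adjacent 2.5 ms bins; a minimal sketch of that arithmetic (the run-length interpretation is an inference from the numbers, not stated explicitly above):

```python
import math

BIN_MS = 2.5
bins_per_s = 1000.0 / BIN_MS  # 400 bins per second

# Capacity (bits/bin) of the constraint "no two spikes in adjacent bins"
# is log2 of the golden ratio, since the number of admissible length-n
# binary strings grows like the Fibonacci numbers.
phi = (1 + math.sqrt(5)) / 2
cap_per_bin = math.log2(phi)            # ~0.694 bits/bin
ceiling_bps = bins_per_s * cap_per_bin  # ~277.7 bps

# The maxentropic spike probability per bin under this constraint is
# 1/phi^2, so the firing rate needed to achieve the ceiling is:
rate_at_capacity = bins_per_s / phi**2  # ~152.8 spikes/s ("nearly 150")
```

If the neuron instead fires unconstrained at p = 1/2 per bin, the ceiling would be 400 bps at 200 spikes/s; the constrained numbers match the slide's figures much more closely.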

SPIKE PROPAGATION TIMES

In most cases a neuron's spikes travel to the end of its axon in less than 2.5 ms. Accordingly, each spike's leading edge already has been delivered to the furthest of its circa 10^4 recipients before the trailing edge reaches the closest recipient.

MULTICASTING
Viewed as a network, the human brain simultaneously multicasts 10^11 messages that have an average of 10^4 recipients each. In time-discrete models of the brain, each of these 10^11 × 10^4 = 10^15 destinations receives a new binary digit every 2.5 ms.

Moreover, 2.5 ms later another petabit that depends on the outcome of processing the previous one has been multicast. (The Internet pales by comparison!)
The brain does not simply use store-and-forward routing. Rather, it uses an intensive form of network coding, the exciting new information-theoretic discipline recently introduced by Raymond Yeung and Bob Li. (See, e.g., the latest IT Outstanding Paper Award winning article by Yeung, Li, Ahlswede, and Cai.)
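A quick sanity check of the aggregate numbers (the neuron count, fan-out, and 2.5 ms bin are from the slides; the aggregate rate is derived from them):

```python
NEURONS = 10**11    # neurons in the human brain
AVG_FANOUT = 10**4  # average recipients per multicast message
BIN_S = 2.5e-3      # one new binary digit per destination per 2.5 ms bin

destinations = NEURONS * AVG_FANOUT   # 10^15 synaptic destinations
aggregate_bps = destinations / BIN_S  # 4 x 10^17 bits/s delivered network-wide
```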

BUT neurons actually fire asynchronously in continuous time. We shall see that this may enable them to send considerably more bps than their relatively low firing rates suggest.
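A back-of-envelope illustration of why continuous-time firing helps (all numbers below are assumed for illustration and do not appear in the slides): a timing code that resolves interspike intervals to within some precision can carry several bits per spike, rather than one bit per 2.5 ms bin.

```python
import math

# Assumed, illustrative values:
isi_max_ms = 100.0  # longest informative interspike interval
delta_t_ms = 0.1    # timing precision at which an ISI can be resolved
firing_rate = 30.0  # spikes/s, a typical low average firing rate

# A resolvable ISI takes one of roughly isi_max/delta_t distinguishable
# values, so it can carry up to log2 of that many bits per spike.
bits_per_spike = math.log2(isi_max_ms / delta_t_ms)  # ~10 bits/spike
bps = firing_rate * bits_per_spike                   # ~300 bps at only 30 spikes/s
```

Even at a firing rate an order of magnitude below the binned-channel optimum, such a timing code can match or exceed the 277 bps binned ceiling.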

Encoding of Excitation via Dynamic Thresholding


A compelling (?) case for this has been made by Berger and Levy; NEUROSCIENCE 2004, San Diego, CA, October 23-28, 2004.

[Figures: MEAN PSP v. TIME FOR VARIOUS BOMBARDMENT INTENSITIES; Filtered Poisson PSPs v. Time; spiking times of red and blue PSPs for a descending threshold; spiking times of red and blue PSPs for a fixed threshold]

DYNAMICALLY DESCENDING THRESHOLDS ENABLE TIMING CODES

A descending threshold can serve as a simple mechanism by means of which a neuron can accurately convert (i.e., encode) the excitation intensity it has experienced during the ISI between any two of its successive APs into the duration of that ISI. This is true regardless of whether the intensity in question is strong, moderate, or weak. A neuron that possesses a fixed threshold cannot accomplish this.
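A minimal sketch of this mechanism, using the mean PSP of a leaky integrator (the functional forms, weights, and time constants below are illustrative assumptions, not taken from Berger and Levy):

```python
import math

def mean_psp(lam, t, w=0.5, tau_m=10.0):
    """Mean PSP (mV) at time t (ms) of a leaky integrator driven by Poisson
    bombardment of intensity lam; w and tau_m are illustrative constants."""
    return lam * w * tau_m * (1.0 - math.exp(-t / tau_m))

def first_crossing(lam, thresh, dt=0.01, t_max=200.0):
    """Time (ms) at which the mean PSP first reaches thresh(t), else None."""
    t = dt
    while t < t_max:
        if mean_psp(lam, t) >= thresh(t):
            return t
        t += dt
    return None

descending = lambda t: 20.0 * math.exp(-t / 15.0)  # threshold decays with time
fixed = lambda t: 20.0                             # conventional fixed threshold

# With a descending threshold, every intensity eventually produces a spike,
# and the ISI is a strictly decreasing (hence decodable) function of intensity:
t_weak, t_mid, t_strong = (first_crossing(l, descending) for l in (1.0, 2.0, 5.0))

# With a fixed threshold, a weak intensity (asymptotic mean PSP of 5 mV,
# below the 20 mV threshold) never fires, so it cannot be encoded in an ISI:
t_fixed_weak = first_crossing(1.0, fixed)  # None
```

The monotone intensity-to-ISI map is what lets the efferent cohort decode the experienced intensity from spike timing alone.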

It is also known that synapses possess chemical clocks that enable them to remember, even for hundreds of milliseconds, how long ago the most recent and the next-to-most recent spikes arrived. ALL THIS STRONGLY SUGGESTS THAT NEURONS DO INDEED IMPLEMENT ACCURATE, LOW-LATENCY TIMING CODES BY MEANS OF DYNAMIC POST-SYNAPTIC POTENTIAL THRESHOLDS THAT DECAY WITH TIME.

Alternatively, a neuron can achieve much the same result by having a postsynaptic leakage conductance that varies inversely with PSP. (See, e.g., Brette and Gerstner, 2005.) It may well be that neurons employ a combination of threshold decay and variable leakage conductance. However, in what follows we use only threshold-decay terminology.

1. The precise shape of the threshold decay curve is not important; the neurons in the efferent cohort can readily adapt to the shape of T(t).
2. The resulting variance in estimating the intensity λ has the form Var(λ̂ − λ) = c1·λ.
3. If instead you are interested in estimating log λ, then Var(log λ̂ − log λ) = c2/λ.

4. To estimate the accuracy of ISI encoding of bombardment intensity, one must take into account at least the following three sources of imprecision:
i) imprecision in the instant of generation of an AP;
ii) imprecision in the rates of propagation along the axon for two successive action potentials;
iii) imprecision in the estimate of the AP's time of arrival at the synapse.
(See Berger and Suksompong, IEEE ISIT, Seattle, July 9-15, 2006.) Doing so shows that neural encoding bit rates can be meaningfully higher than previously had been thought!
5. If the excitation is a time-varying Poisson process, then its intensity λ(t) is a sufficient statistic for stochastically describing it, so it is the only thing that needs to be communicated.
6. The excitation of a (cortical) neuron is indeed robustly a time-varying Poisson process, despite the individual spike trains of which it is composed not being Poisson and possibly being highly correlated. (This is a consequence of Stein-Chen Poisson approximation theory; cf. C. Stein, IMS Lecture Notes, vol. 78, Lecture VIII, IMS, Hayward, CA, 1986, and subsequent work of Barbour et al., among others.)
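The two variance scalings (c1·λ for estimating λ, c2/λ for estimating log λ) can be checked numerically with a simple stand-in estimator, a Poisson count over a fixed window rather than an actual ISI decoder; all parameters below are illustrative:

```python
import math
import random

random.seed(1)

def poisson(mu):
    # Knuth's method; adequate for the moderate means used here
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_vars(lam, tau=1.0, trials=20000):
    """Monte-Carlo variances of lam_hat = count/tau and of log(lam_hat)."""
    est, log_est = [], []
    for _ in range(trials):
        n = poisson(lam * tau)
        est.append(n / tau)
        log_est.append(math.log(max(n, 1) / tau))  # guard the rare n == 0
    var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
    return var(est), var(log_est)

v10, lv10 = sample_vars(10.0)
v40, lv40 = sample_vars(40.0)
# Var(lam_hat) grows like c1*lam: quadrupling lam roughly quadruples it.
# Var(log lam_hat) shrinks like c2/lam: quadrupling lam roughly quarters it.
```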

A CHALLENGING, IMPORTANT QUESTION ABOUT RNNs


Consider a sparsely connected, feedback-heavy network of hundreds of millions of neurons, most of which have an in-degree and an out-degree of circa 10,000. When galvanized by sensory inputs and exchanging their excitation histories in the manner described above, what kinds of decisions, computations, and responses can such a network generate? (N.B. The excitation history that a neuron communicates does not directly propagate beyond its first-tier neighbors.)
