Statistical Signal Processing for Neuroscience and Neurotechnology

Introduction

Karim G. Oweiss

1.1 Background

Have you ever wondered what makes the human brain so unique compared to nonhuman brains with similar building blocks when it comes to the many complex functions it can undertake, such as instantaneously recognizing faces, reading a piece of text, or playing a piano piece with seemingly very little effort? Although this long-standing question governs the founding principles of many areas of neuroscience, the last two decades have witnessed a paradigm shift in the way we seek to answer it, along with many others.

For over a hundred years, it was believed that a profound understanding of the neurophysiological mechanisms underlying behavior required monitoring the activity of the brain's basic computational unit, the neuron. Since the late 1950s, techniques for intra- and extracellular recording of single-unit activity have dominated the analysis of brain function because of their precision in isolating individual neurons and characterizing their physiological and anatomical properties (Gesteland et al., 1959; Giacobini et al., 1963; Evarts, 1968; Fetz and Finocchio, 1971). Many remarkable findings rest on the success of these techniques, which have contributed substantially to the foundation of fields such as computational and systems neuroscience.

Monitoring the activity of a single unit while subjects perform certain tasks, however, hardly permits gaining insight into the dynamics of the underlying neural system. It is widely accepted that such insight is contingent upon the scrutiny of the collective and coordinated activity of many neural elements, ranging from single units to small voxels of neural tissue, many of which may not be locally observed. Not surprisingly, there has been a persistent need to simultaneously monitor the coordinated activity of these elements—within and across multiple areas—to gain a better understanding of the mechanisms underlying complex functions such as perception, learning, and motor processing.

Fulfilling such a need has turned out to be an intricate task, given the space-time trade-off. As Figure 1.1 illustrates, the activity of neural elements can be measured at a variety of temporal and spatial scales. Action potentials (APs) elicited by individual neurons—or spikes—occupy the finest scale, while local field potentials (LFPs) reflect activity in the immediate vicinity of the recording electrode (Katzner et al., 2009), typical of somato-dendritic currents (Csicsvari et al., 2003). Electrocorticograms (ECoGs)—also known as intracranial EEG (iEEG)—occupy an intermediate scale; they are recorded with subdural electrode grids implanted through skull penetration and are considered semi-invasive because they do not penetrate the blood-brain barrier. First described in the 1950s (Penfield and Jasper, 1954), ECoG signals are believed to reflect synchronized post-synaptic potentials aggregated over a few tens of millimeters. Noninvasive techniques include functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG). EEG signals are recorded with surface electrodes patched to the scalp and are thought to represent localized activity in a few cubic centimeters of brain tissue, but they are typically smeared out by the skull's low conductivity (Mitzdorf, 1985).

Figure 1.1 Spatial and temporal characteristics of neural signals recorded from the brain. Please see this figure in color at the companion web site: www.elsevierdirect.com/companions/9780123750273/

Despite the immense number of computing elements and the inherent difficulty of acquiring reliable recordings sustained over prolonged periods (Edell et al., 1992; Turner et al., 1999; Kralik et al., 2001), information processing at the individual and population levels remains vital to understanding the complex adaptation mechanisms inherent in many brain functions that can only be studied at the microscale, particularly those related to the synaptic plasticity associated with learning and memory formation (Ahissar et al., 1992). Although microwire bundles have existed since the 1980s, the 1990s witnessed striking advances in solid-state technology that streamlined the microfabrication of high-density microelectrode arrays (HDMEAs) on single substrates (Drake et al., 1998; Normann et al., 1999). These HDMEAs have significantly increased experimental efficiency and neural yield (see Nicolelis, 1999, for a review of this technology). As a result, the last 15 years have witnessed a paradigm shift in neural recording techniques that has paved the way for this technology to become a building block in a number of emerging clinical applications.

As depicted in Figure 1.2, substantial progress in invasive brain surgery, in parallel with the revolutionary progress in engineering the devices just discussed, has fueled a brisk evolution of brain-machine interfaces (BMIs). Broadly defined, a BMI provides a direct communication pathway between the brain and a man-made device, which can be a simple electrode, an active circuit on a silicon chip, or even a network of computers. The overarching theme of BMIs is the restoration or repair of damaged sensory, cognitive, and motor functions such as hearing, sight, memory, and movement via direct interaction between an artificial device and the nervous system. BMIs may even go beyond simple restoration to conceivably augment these functions, a possibility previously imagined only in the realm of science fiction.

Figure 1.2 Timeline of neural recording and stimulation during invasive human brain surgery.

Source: Modified from Abbott (2009)

1.2 Motivation

The remarkable advances in neurophysiology techniques, engineering devices, and impending clinical applications have outstripped the progress in statistical signal processing theory and algorithms specifically tailored to: (1) performing large-scale analysis of the immense volumes of collected neural data; (2) explaining many aspects of natural signal processing that characterize the complex interplay between the central and peripheral nervous systems; and (3) designing software and hardware architectures for practical implementation in clinically viable neuroprosthetic and BMI systems.

Despite a growing body of recent literature on these topics, no comprehensive reference exists that provides a unifying theme among them. This book is intended to fill exactly this need and is therefore exclusively focused on the most fundamental statistical signal processing issues encountered in the analysis of neural data for basic and translational neuroscience research. It was written with the following objectives in mind:

To apply classical and modern statistical signal processing theory and techniques to fundamental problems in neural data analysis.

To present the latest methods that have been developed to improve our understanding of natural signal processing in the central and peripheral nervous systems.

To demonstrate how the combined knowledge from the first two objectives can help in practical applications of neurotechnology.

1.3 Overview and Roadmap

A genuine attempt was made to make this book comprehensive, with special emphasis on signal processing and machine learning techniques applied to the analysis of neural data, and less emphasis on modeling complex brain functions. (Readers interested in the latter topic should refer to the many excellent texts on it.¹) The sequence of chapters was structured to mimic the process that researchers typically follow in the course of an experiment. First comes data acquisition and preconditioning, followed by information extraction and analysis. Statistical models are then built to fit experimental data, and goodness-of-fit is assessed. Finally, the models are used to design and build actual systems that may provide therapeutic benefits or augmentative capabilities to subjects.

In Chapter 2, Oweiss and Aghagolzadeh focus on the joint problem of detection, estimation, and classification of neuronal action potentials in noisy microelectrode recordings, often referred to as spike detection and sorting. The importance of this problem stems from the fact that its outcome affects virtually all subsequent analysis. In the absence of a clear consensus in the community on what constitutes the best method, spike detection and sorting have been, and will continue to be, a subject of intense research, especially as techniques for multiunit recording continue to emerge. The chapter provides an in-depth presentation of the fundamentals of detection and estimation theory as applied to this problem. It then offers an overview of traditional and novel methods that revolve around the theory, in particular contrasting the differences—and potential benefits—that arise when detecting and sorting spikes with a single-channel versus a multichannel recording device. The authors further link multiple aspects of classic and modern signal processing techniques to the unique challenges encountered in the extracellular neural recording environment. Finally, they provide a practical way to perform this task using a computationally efficient, hardware-optimized platform suitable for real-time implementation in neuroprosthetic devices and BMI applications.
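The detection step described in the chapter can be illustrated with a minimal amplitude-threshold sketch. The median-absolute-deviation noise estimate is a common heuristic in the spike-sorting literature, not necessarily the chapter's method; the function name, parameter values, and toy data below are illustrative only.

```python
import numpy as np

def detect_spikes(signal, fs, k=5.0, refractory_ms=1.0):
    """Flag threshold crossings in a bandpassed extracellular trace.

    The noise SD is estimated with the median absolute deviation,
    which is robust to the spikes themselves inflating the estimate.
    """
    sigma = np.median(np.abs(signal)) / 0.6745   # robust noise SD estimate
    threshold = k * sigma
    crossings = np.flatnonzero(np.abs(signal) > threshold)
    # enforce a refractory gap: keep only the first sample of each event
    min_gap = int(refractory_ms * 1e-3 * fs)
    spikes = []
    last = -min_gap - 1
    for idx in crossings:
        if idx - last > min_gap:
            spikes.append(idx)
        last = idx
    return np.asarray(spikes, dtype=int), threshold

# toy trace: 1 s of unit-variance Gaussian noise with three injected peaks
rng = np.random.default_rng(0)
fs = 30_000
trace = rng.normal(0.0, 1.0, fs)
for t in (5_000, 15_000, 25_000):
    trace[t] -= 10.0                 # negative-going spike peaks
spike_idx, thr = detect_spikes(trace, fs)
```

Real pipelines would precede this with bandpass filtering and follow it with waveform extraction and clustering (the sorting step); the sketch only shows the detection decision.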

In Chapter 3, Johnson provides an overview of classical information theory, rooted in the 1940s pioneering work of Claude Shannon (Shannon, 1948), as applied to the analysis of neural coding once spike trains are extracted from the recorded data. He offers an in-depth analysis of how to quantify the degree to which neurons can individually or collectively process information about external stimuli and encode it in their output spike trains. Johnson points out that extreme care must be exercised when analyzing neural systems with classical information-theoretic quantities such as entropy, because little is known about how the non-Poisson communication channels that often describe neuronal discharge patterns provide optimal performance bounds on stimulus coding. The limits of classical information theory as applied to information processing by spiking neurons are discussed as well. Finally, Johnson offers some interesting thoughts on post-Shannon information theory to address the more puzzling question of how stimulus processing by some parts of the brain conveys useful information to other parts, thereby triggering meaningful actions rather than just communicating signals between the input and the output of a communication system—the hallmark of Shannon's pioneering work.
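One concrete reason for the care Johnson urges is that the standard plug-in entropy estimate is biased on finite data. A minimal sketch, with all names and parameters chosen for illustration (a homogeneous Poisson spike-count model, which real neurons often violate):

```python
import numpy as np
from collections import Counter

def empirical_entropy(samples):
    """Plug-in (maximum-likelihood) entropy estimate in bits.

    Known to be biased downward for finite data -- one reason
    information-theoretic quantities on neural recordings need care.
    """
    counts = Counter(samples)
    n = len(samples)
    probs = np.array([c / n for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

# spike counts in 10 ms bins from a homogeneous Poisson model, 20 Hz rate
rng = np.random.default_rng(1)
spike_counts = rng.poisson(lam=0.2, size=10_000)
h_hat = empirical_entropy(spike_counts.tolist())
```

With 10,000 bins the estimate lands close to the true value (about 0.77 bits here); with few bins or many response symbols, the downward bias becomes severe, which is where bias-corrected estimators come in.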

In Chapter 4, Song and Berger focus on the system identification problem—that is, determining the input-output relationship of a multiple-input, multiple-output (MIMO) system of spiking neurons. They first describe a nonlinear multiple-input, single-output (MISO) neuron model whose components transform the neuron's input spike trains into synaptic potentials and feed the neuron's output spikes back through nonlinearities to generate spike-triggered after-potentials, with additive noise capturing system uncertainty. Next they describe how this MISO model is combined with similar models to predict the MIMO transformation taking place between the hippocampal CA3 and CA1 regions, both known to play a central role in declarative memory formation. Song and Berger suggest that such a model can serve as a computational framework for the development of memory prostheses that replace damaged hippocampal circuitry. As a means to bypass a damaged region, they demonstrate the utility of their MIMO model in predicting output spike trains from the CA1 region using input spike trains to the CA3 region. They point out that the use of hidden variables to represent the internal states of the system allows simultaneous estimation of all model parameters directly from the input and output spike trains. They conclude by suggesting extensions of their work to include nonstationary input-output transformations that are known to take place as a result of cortical plasticity—for example, during learning of new tasks—a feature not captured by their current approach.
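The ingredients of such a MISO model can be sketched in a toy simulation: feedforward filtering of input spike trains into a summed synaptic potential, a spike-triggered feedback after-potential, and a noisy threshold. This is a schematic in the spirit of the description above, not the authors' actual model; all kernels, thresholds, and names are invented for illustration.

```python
import numpy as np

def simulate_miso(inputs, ff_kernel, fb_kernel, threshold, noise_sd, rng):
    """Toy MISO spiking model: filtered inputs plus spike-triggered
    feedback, thresholded with additive Gaussian noise each bin."""
    n_in, T = inputs.shape
    u = np.zeros(T)                      # summed synaptic potential
    for i in range(n_in):                # feedforward synaptic filtering
        u += np.convolve(inputs[i], ff_kernel)[:T]
    out = np.zeros(T)
    w = np.zeros(T)                      # feedback after-potential
    for t in range(T):
        v = u[t] + w[t] + rng.normal(0.0, noise_sd)
        if v > threshold:
            out[t] = 1.0
            # add spike-triggered after-potential to future bins
            L = min(len(fb_kernel), T - t - 1)
            w[t + 1:t + 1 + L] += fb_kernel[:L]
    return out

rng = np.random.default_rng(2)
T = 200
inputs = (rng.random((3, T)) < 0.1).astype(float)   # three input trains
ff = np.array([1.0, 0.6, 0.3])                      # decaying EPSP kernel
fb = np.array([-2.0, -1.0])                         # hyperpolarizing feedback
out = simulate_miso(inputs, ff, fb, threshold=1.5, noise_sd=0.1, rng=rng)
```

Identification then runs in the opposite direction: given recorded `inputs` and `out`, estimate the kernels and noise level, which is what the chapter's hidden-variable estimation machinery addresses.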

In Chapter 5, Eldawlatly and Oweiss take a more general approach to identifying distributed neural circuits by inferring connectivity between neurons, locally within and globally across multiple brain areas, distinguishing between two types of connectivity: functional and effective. They first review techniques classically used to infer connectivity between brain regions from continuous-time signals such as fMRI and EEG data. Because inferring connectivity among neurons is more challenging, given the stochastic, discrete nature of spike trains and the large dimensionality of the neural space, the authors focus in depth on this problem using graphical techniques deeply rooted in statistics and machine learning. They demonstrate that graphical models offer a number of advantages over other techniques, for example in distinguishing between mono-synaptic and poly-synaptic connections and in inferring inhibitory connections, among other features that existing methods cannot capture. The authors apply their method to the analysis of neural activity in the medial prefrontal cortex (mPFC) of an awake, behaving rat performing a working memory task. Their results demonstrate that the networks inferred for similar behaviors are strongly consistent and exhibit a graded transition between their dynamic states during the recall process, providing additional evidence in support of the long-standing Hebbian cell assembly hypothesis (Hebb, 1949).
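The graphical-model machinery of the chapter is beyond a short snippet, but the classical baseline it improves upon, the spike-train cross-correlogram, is easy to sketch. A short-lag peak is conventionally read as evidence of a functional link; everything below (names, rates, the deterministic 2-bin delay) is a toy construction, not data from the chapter.

```python
import numpy as np

def cross_correlogram(ref, target, max_lag):
    """Count target spikes at each lag relative to every reference spike.

    ref, target: binary spike arrays on a common time grid.
    Returns lags -max_lag..+max_lag and the count at each lag.
    """
    T = len(ref)
    lags = np.arange(-max_lag, max_lag + 1)
    counts = np.zeros(len(lags), dtype=int)
    ref_times = np.flatnonzero(ref)
    for i, lag in enumerate(lags):
        for t in ref_times:
            if 0 <= t + lag < T:
                counts[i] += int(target[t + lag])
    return lags, counts

# toy pair: neuron B fires exactly 2 bins after every spike of neuron A
rng = np.random.default_rng(3)
T = 5_000
a = (rng.random(T) < 0.05).astype(int)
b = np.zeros(T, dtype=int)
b[2:] = a[:-2]                       # deterministic 2-bin-delayed follower
lags, counts = cross_correlogram(a, b, max_lag=5)
peak_lag = lags[np.argmax(counts)]
```

A limitation the chapter emphasizes: such pairwise peaks cannot by themselves separate a direct (mono-synaptic) connection from common drive or a poly-synaptic path, which is precisely where graphical models add value.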

In Chapter 6, Chen, Barbieri, and Brown discuss the application of a subclass of graphical models, the state-space models, to the analysis of neural spike trains and behavioral learning data. They first formulate point process and binary observation models for the data and review the framework of filtering and smoothing under the state-space paradigm. They provide a number of examples where this approach is useful: ensemble spike train decoding, analysis of neural receptive field plasticity, analysis of individual and population behavioral learning, and estimation of cortical UP/DOWN states. The authors demonstrate that the spiking activity of 20 to 100 hippocampal place cells can be used to decode the position of an animal foraging in an open circular environment. They also demonstrate that the evolution of the 1D and 2D place fields of hippocampal CA1 neurons, reminiscent of experience-dependent plasticity, can be tracked with steepest-descent and particle-filter algorithms. They further show how the state-space approach can be used to measure learning progress in behavioral experiments over a sequence of repeated trials. Finally, the authors demonstrate the utility of this approach in estimating neuronal UP and DOWN states—the periodic fluctuations between increased and decreased spiking activity of a neuronal population—in the primary somatosensory cortex of a rat. They derive an expectation-maximization algorithm that estimates the number of transitions between these states while compensating for the partially observed state variables. In contrast to multiple hypothesis testing, their approach dynamically assesses population learning curves in four operantly conditioned animal groups.
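The observation half of such a decoder can be illustrated without the state-space dynamics: a single-bin Poisson maximum-likelihood decode of position from simulated place cells. The Gaussian tuning curves, rates, and bin width below are invented for illustration; the chapter's full approach additionally imposes a movement model and filters across bins.

```python
import numpy as np

def decode_position(spike_counts, tuning, positions, dt):
    """Poisson maximum-likelihood position decode for one time bin.

    tuning: (n_cells, n_positions) firing rates in Hz; spike_counts:
    observed count per cell in a bin of width dt (seconds).
    """
    lam = tuning * dt                                  # expected counts
    # log Poisson likelihood summed over cells (count-only terms dropped)
    loglik = spike_counts @ np.log(lam + 1e-12) - lam.sum(axis=0)
    return positions[np.argmax(loglik)]

# Gaussian place fields tiling a 1 m linear track
positions = np.linspace(0.0, 1.0, 101)
centers = np.linspace(0.0, 1.0, 30)
tuning = 20.0 * np.exp(-((positions[None, :] - centers[:, None]) ** 2)
                       / (2 * 0.05 ** 2)) + 0.5       # 20 Hz peak + baseline
rng = np.random.default_rng(4)
true_pos = 0.63
idx = np.argmin(np.abs(positions - true_pos))
counts = rng.poisson(tuning[:, idx] * 0.25)           # one 250 ms bin
x_hat = decode_position(counts, tuning, positions, dt=0.25)
```

Chaining such per-bin likelihoods with a random-walk prior on position yields the point-process filter used for ensemble decoding in the chapter.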

In Chapter 7, Yu, Santhanam, Sahani, and Shenoy discuss the problem of decoding spike trains in the context of motor and communication neuroprosthetic systems. They closely examine two broad classes of decoders, continuous and discrete, depending on the behavioral correlate believed to be represented by the observed neural activity. For continuous decoding, the goal is to capture the moment-by-moment statistical regularities of movements exemplified by the movement trajectory; for discrete decoding, a fast and accurate classifier of the desired movement target suffices. The authors point to a trade-off between decoding accuracy and computational complexity, an important design constraint for real-time implementation. To strike the best compromise, they describe a probabilistic mixture of trajectory models (MTM) decoder and demonstrate its use in analyzing ensemble neural activity recorded in the premotor cortex of macaque monkeys as they plan and execute goal-directed arm reach movements. The authors compare their MTM approach to a number of decoders reported in the literature and demonstrate substantial improvements. In conclusion, they suggest the need for new methods for investigating the dynamics of plan and movement activity that do not smear useful information over large temporal intervals by assuming an explicit relationship between neural activity and arm movement. This becomes very useful when designing decoding algorithms for closed-loop settings where subjects continuously adapt their neurons' firing properties to attain the desired