
Neuromorphic Computing: The Next Step in

Electronics
ABSTRACT:

Taking a closer look at the ever-evolving world of computing, "Neuromorphic Computing: The Next Step in
Electronics" explores the rapid development and promising potential of neuromorphic computing. As Moore's Law,
which forecast a doubling of transistors on a dense integrated circuit roughly every two years, nears its physical
limits, the demand for processing power continues to grow. This paper argues that neuromorphic computing, through
its emulation of the human brain's structure and functions, offers a solution to this pressing challenge. By
addressing the escalating need for computational power and its accompanying energy consumption, the paper
highlights the efficiency of neuromorphic systems. Ultimately, it concludes by underscoring the transformative
impact that this technology could have on the electronics industry.

Introduction:

The unrelenting quest for greater computing power has propelled the rapid evolution of electronics. After
decades of correctly predicting the exponential increase in computing power, Moore's Law is now confronting its
physical limits. Even with the number of transistors on a dense integrated circuit doubling approximately every two
years, the demand for computing power continues to outpace supply.

Furthermore, traditional computing's energy usage is starting to pose a serious threat. The Natural Resources
Defense Council estimated that by 2020, data centres in the United States alone would use over 73 billion kWh. We
must change the way we think about computers in response to this increasing power usage.

A potential answer to these problems is neuromorphic computing, which imitates the neural architecture and
operation of the human brain. Neuromorphic computing has the ability to greatly increase computational speed and
efficiency while lowering power consumption by mimicking the brain's effective processing capabilities.

This study will examine the transformational potential of neuromorphic computing in the electronics sector,
indicating a major advancement in the field. It will explore the complexities of this technology, go over its benefits
over conventional computing techniques, and point out the difficulties that still need to be overcome before it can be
fully developed and used. The discussion will be supported by pertinent information and research results, giving a
thorough rundown of neuromorphic computing and its potential applications in electronics in the future.

Fig 1: number of transistors integrated on a single CMOS chip over the years (source: 'The Beauty and Joy of Computing—BJC')

Thematic areas:
Carver Mead first used the word "neuromorphic" at Caltech in the late 1980s to refer to very large-scale
integration (VLSI) systems using electronic circuits that imitate the neuro-biological structures seen in the nervous
system. Over time, however, the term has "morphed" from its initial meaning and begun to refer to a wider range of
ideas and methods. In addition to its original sense of "neuromorphic engineering," the term "neuromorphic
computing" came to describe technologies for modelling spiking neural networks and neural models of computation
built from conventional processors or pure digital circuitry. Simultaneously, systems made up of nanoscale
"memristive" devices created in the realm of upcoming memory technologies began to be referred to by the same
word.

1. Neural circuits and systems integrated in CMOS technology:

The primary objective of the original neuromorphic engineering approach was to replicate the
computational physics of biological neural networks. This was achieved by operating transistors in their
‘weak inversion’ or ‘subthreshold’ regime. The aim was to construct artificial neurons, synapses, networks,
and sensory systems that followed the same organizational principles as the nervous system in animal
brains.

This endeavour had a dual purpose: to understand neural computation by constructing physical replicas of
actual neural circuits, and to develop compact, low-power sensory processing and computing architectures
that were radically different from the standard digital computers of that era.

Due to the high-risk nature and fundamental research aspect of this approach, only a handful of academic
groups continue to pursue it today. These groups primarily focus on developing small-scale prototype chips
to investigate various facets of neural computation. This includes everything from sensory systems to
reconfigurable networks with biologically plausible neural dynamics, and circuits for spike-based learning
and plasticity.
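To make the neuron model underlying such circuits concrete, the following is a minimal Python sketch (a numerical simulation, not a circuit model) of the leaky integrate-and-fire dynamics that subthreshold silicon neurons emulate; all parameter values are illustrative and not taken from any particular chip:

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, integrates the input current, and emits a spike (followed by a
    reset) whenever it crosses threshold."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step of dv/dt = (v_rest - v)/tau + i_in
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:
            spike_times.append(t)  # record the spike time
            v = v_rest             # reset the membrane
    return spike_times

# A constant suprathreshold input yields a regular spike train.
print(simulate_lif([0.08] * 100))  # → [19, 39, 59, 79, 99]
```

The balance between the leak term and the input current determines whether, and how fast, the neuron fires — the same trade-off that governs the biasing of subthreshold analog neuron circuits.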

In recent years, the term "neuromorphic" has expanded to encompass both mixed-signal and pure digital
VLSI systems. These systems serve as computing platforms capable of simulating models of spiking neural
networks, driven by advancements in computing and integrated circuit technology. Notably, projects like
the EU Human Brain Project have funded the development of wafer-scale integrated systems for faithfully
reproducing neuroscience simulations at accelerated speeds.

One such example is the SpiNNaker system, also supported by the Human Brain Project, designed as a
multi-core computer to simulate vast numbers of spiking neurons in real time. Presently, the SpiNNaker
machine, constructed from 600 printed circuit boards with 48 SpiNNaker processors each, can simulate
hundreds of millions of neurons. IBM introduced an alternative approach in 2014 with the 'TrueNorth'
neuromorphic system. Featuring 4096 cores on a single chip, TrueNorth uses pure digital
asynchronous circuits to simulate neurons and synapses. This breakthrough showcased the potential of
advanced technology nodes, such as the Samsung 28 nm bulk CMOS process, to integrate large numbers of
silicon neurons with remarkably low power consumption.

Intel's more recent Loihi chip shares design similarities with the IBM TrueNorth chip, employing pure
digital asynchronous circuits for simulating neurons and synapses. Fabricated using the Intel 14 nm FinFET
process, the Loihi chip focuses on intricate neural and synaptic features, including spike-based learning
mechanisms. Although it integrates fewer cores (128) than SpiNNaker and TrueNorth, Loihi
supports the simulation of networks containing up to 130,000 spiking neurons. Like its counterparts, Loihi
serves as a research platform, emphasizing the exploration of spike-based processing architectures for
practical problem-solving. The Intel Neuromorphic Research Community (INRC) program actively
encourages researchers and students to innovate and develop novel spike-based computing solutions using
Loihi, contributing to the growth of the community and yielding promising results.
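The kind of spiking-network simulation that these platforms parallelize across many cores can be illustrated with a toy sketch. This is a hypothetical two-neuron network with made-up weights, not the actual SpiNNaker or Loihi programming model:

```python
def step_network(weights, v, inputs, v_thresh=1.0, leak=0.9):
    """One synchronous timestep of a toy spiking network.
    weights[i][j] is the synapse weight from neuron j to neuron i."""
    spikes = [1 if vi >= v_thresh else 0 for vi in v]  # who fires this step
    new_v = []
    for i, vi in enumerate(v):
        if spikes[i]:
            vi = 0.0                                   # reset after firing
        syn = sum(w * s for w, s in zip(weights[i], spikes))
        new_v.append(leak * vi + syn + inputs[i])
    return new_v, spikes

# Neuron 0 is driven externally; neuron 1 only hears neuron 0's spikes.
W = [[0.0, 0.0],
     [0.6, 0.0]]
v, history = [0.0, 0.0], []
for _ in range(30):
    v, spikes = step_network(W, v, [0.5, 0.0])
    history.append(spikes)
```

Running this, neuron 0 fires first and its spikes gradually charge neuron 1 until it fires too; event-driven hardware performs essentially this bookkeeping, but communicates only the spikes rather than the full state.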

2. Memristive devices and emerging memory technologies:

The community focused on material science and device physics has long been engaged in researching
innovative materials and technologies for memory and long-term storage applications. Recently, this
community has adopted the term 'neuromorphic' to describe emerging devices and systems. These
technologies exhibit behaviours reminiscent of biological synapses and serve as crucial components for the
advancement of large-scale AI computing systems.

Following the proposal of utilizing nano-scale devices as 'memristors,' the community swiftly embraced the
concept of employing these devices to function as synapses in artificial neural networks, storing synaptic
weights locally. These advancements open up possibilities for 'in-memory computing' in neural networks,
leveraging the physics of these devices to support complex non-linear features that emulate key properties
of biological synapses. A diverse range of research initiatives is underway, encompassing the development
of various non-volatile and volatile memristive devices.

Efforts also extend to designing spike- or pulse-based control schemes aimed at inducing biologically
plausible learning behaviours in memristive crossbar arrays. Given the absence of a singular solution to the
challenge of identifying the optimal artificial synapse, ongoing investigations explore a wide array of
materials, devices, and techniques in pursuit of innovative solutions.
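The in-memory computing idea can be sketched in a few lines: each stored conductance multiplies its row voltage (Ohm's law) and each column wire sums the resulting currents (Kirchhoff's current law), so a crossbar computes a matrix-vector product in a single step. The conductance and voltage values below are purely illustrative:

```python
def crossbar_mvm(conductances, voltages):
    """Matrix-vector multiply 'in memory' on a memristive crossbar.

    conductances[i][j]: state of the device at row i, column j (siemens).
    voltages[i]: voltage applied to row i (volts).
    Each device contributes I = V * G (Ohm's law), and each column wire
    sums its devices' currents (Kirchhoff's current law).
    """
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

G = [[0.1, 0.2],
     [0.3, 0.4]]               # illustrative conductances (synaptic weights)
V = [1.0, 0.5]                 # illustrative input voltages
currents = crossbar_mvm(G, V)  # column currents ≈ [0.25, 0.4] A
```

In a physical crossbar this sum happens in parallel in the analog domain, which is why the approach avoids shuttling weights between memory and processor.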

Related terms:

The subsequent ideas are integral to the functioning of a system designed to mimic the operations of the brain: -

1. Spiking. Neurons communicate through voltage or current spikes, a distinctive form of signaling that
contrasts with the binary signals of current digital systems and the manipulation of continuous signals in
analog implementations. Spiking signaling systems encode information temporally and transmit it through
"action potentials."

2. Plasticity. A conventional device exhibits a distinct response to a specific stimulus or input. In contrast,
standard neuromorphic architecture depends on altering the properties of an element or device based on its
history. Plasticity, a crucial property, enables complex neuromorphic circuits to undergo modifications or
"learning" as they encounter various signals.

3. Fan-in/fan-out. In traditional computational circuits, the interconnections between individual devices are
typically limited. In contrast, each neuron in the brain features a significantly larger number of dendrites,
often on the order of thousands (e.g., 10,000). Additional research is necessary to ascertain the
fundamental importance of this aspect to the computing model of neuromorphic systems.

4. Hebbian learning/dynamical resistance change. Long-term alterations in synapse resistance occur
following repeated, correlated spiking of the presynaptic and postsynaptic neurons, a phenomenon also
known as spike-timing-dependent plasticity (STDP). In Hebbian learning, an alternative description is
captured by the phrase "devices that fire together, wire together."

5. Adaptability. In biological brains, numerous connections initially form, and through a selection or learning
process, some are retained while others are discarded. This mechanism is significant for enhancing the fault
tolerance of individual devices and for determining the most efficient computational path. In contrast,
conventional computing systems have a rigid and fixed architecture from the outset.

6. Criticality. The brain usually functions in proximity to a critical point, where the system exhibits sufficient
plasticity to transition between states—neither excessively stable nor highly volatile. Simultaneously, the
system's ability to explore numerous closely adjacent states may be crucial. In the realm of materials
science, this closeness to a critical state, such as a phase transition, becomes evident.

7. Accelerators. Building an ultimate neuromorphic-based thinking machine involves progressing through
intermediate steps, focusing on small-scale applications rooted in neuromorphic concepts. Some of these
applications necessitate the integration of sensors with a certain degree of limited computation.
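The STDP rule described under Hebbian learning above can be sketched as a pair-based weight update; the amplitudes and time constant here are illustrative values, not measurements:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) strengthens the synapse (potentiation);
    post-before-pre (dt <= 0) weakens it (depression). The effect decays
    exponentially with the spike-time difference.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# Apply the rule to a few hypothetical pre/post spike pairs (times in ms).
w = 0.5
for t_pre, t_post in [(10, 15), (30, 35), (52, 50)]:
    w += stdp_dw(t_post - t_pre)
```

The asymmetry (a_minus slightly larger than a_plus) is a common modelling choice that keeps weights from growing without bound; hardware implementations realize the same exponential windows with pulse-overlap schemes on memristive devices.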
Basic Building Blocks:

In functional terms, the most basic and straightforward characteristics of the different devices and their roles in brain
areas encompass the following.

1. Soma (also known as neuron bodies), which function as integrators and threshold spiking devices

2. Synapses, which provide dynamical interconnections between neurons

3. Axons, which provide long-distance output connections from a presynaptic to a postsynaptic neuron

4. Dendrites, which provide multiple, distributed inputs into the neurons.

Realization:

1. Synapse/Memristor. The synapses represent the most sophisticated elements successfully simulated and
built to date, possessing two crucial properties: switching and plasticity. Synapse implementation is often
achieved through a two-terminal device, such as a memristor, which exhibits a pinched (at V=0), hysteretic
I-V characteristic.

Fig 2: 2-terminal memristor symbol

2. Soma/Neuristor. These devices serve two crucial functions: integration and threshold spiking. Despite
their significance, they have not been extensively explored. One potential implementation of such a device
involves a capacitor parallel to a memristor. The capacitance (representing "integration") and the spiking
function can be designed into a single two-terminal memristor.

Fig 3: PSPICE neuristor model circuit, featuring two MoS2 hard memristors, resistors, capacitors, and DC voltage sources.
3. Axon/Long wire. The axon's role has traditionally (potentially inaccurately) been seen as primarily
providing a circuit connection and a time delay line. Consequently, limited research has been conducted on
this component, even though a substantial portion of information dissipation may occur during information
transmission. Recent studies suggest that the axon plays an additional role in signal conditioning.
Consequently, further research is essential to comprehend its functions and devise a device that mimics its
capabilities.

4. Dendrite/Short wire. Dendrites are conventionally thought to serve the purpose of consolidating signals
from multiple neurons into a single neuron, highlighting the three-dimensional connectivity inherent in the
brain. Although pseudo-3D systems have been realized in multilayer (~8) CMOS-based architecture, a
genuine 3D implementation requires additional research and development. Moreover, recent advances in
neuroscience indicate that dendrites also contribute to pattern detection and subthreshold filtering. Certain
dendrites have demonstrated the capability to detect over 100 patterns.

5. Fan-in/Fan-out. Certain neurons establish connections with many thousands of other neurons, with one
axon potentially linking to ten thousand or more dendrites. Current electronic systems face limitations in
terms of fan-in/fan-out, typically accommodating only a few tens of terminals. To overcome this,
innovative approaches for high-radix connections may be necessary. While crossbars are commonly
employed in most neuromorphic systems, their scalability is currently constrained.
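The pinched hysteretic I-V characteristic that makes a memristor a candidate synapse can be reproduced with a simple linear-drift model, sketched below under one sinusoidal voltage cycle; the resistances, drive amplitude, and drift coefficient are all illustrative:

```python
import math

def memristor_iv(steps=400, r_on=100.0, r_off=16000.0, amplitude=1.0,
                 drift=10.0):
    """Linear-drift memristor sketch driven by one sinusoidal voltage cycle.

    The internal state x in [0, 1] integrates the current history, and the
    resistance interpolates between r_on (x = 1) and r_off (x = 0), so the
    I-V trace is a hysteresis loop pinched at the origin (I = 0 at V = 0).
    """
    x = 0.5
    vs, currents = [], []
    for n in range(steps):
        v = amplitude * math.sin(2 * math.pi * n / steps)
        r = r_on * x + r_off * (1 - x)
        i = v / r
        x = min(1.0, max(0.0, x + drift * i))  # state drifts with current
        vs.append(v)
        currents.append(i)
    return vs, currents

vs, currents = memristor_iv()
```

Because the state depends on the device's history, the same applied voltage later in the cycle yields a different current — exactly the plasticity property exploited when the device stores a synaptic weight.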

Numerous essential functions have been implemented in intricate CMOS circuits, yet they not only demand
substantial physical space but also prove to be energy inefficient. The latter poses a potentially critical fundamental
limitation, as highlighted earlier. Consequently, for the next phase in the development of brain-like computation, it
becomes imperative to fabricate these devices from a singular material that offers sufficient flexibility for large-scale
integration while ensuring minimal energy consumption.

Von Neumann Architecture vs. Neuromorphic Architecture:

1. System Level

Traditional computational architectures, including their parallel variants, rely on the von Neumann
architecture, which involves physically separated functional units like memory, control processing,
arithmetic/logic, and data paths. This design creates a bottleneck due to the repeated shuttling of
information between these units, limiting computational system development. In contrast, the brain, with its
collocated memory and processors, demonstrates superior energy efficiency and flexibility, adapting to
complex environments.
To overcome current limitations, a disruptive technology is needed, and the "neuromorphic" architecture
emerges as a dynamic system where computational elements are intertwined, allowing continuous
adaptation and learning based on received stimuli. While fully emulating the brain's design
remains a challenge, imitating nature can pave the way for crucial advancements.

Fig 4: Conventional computers face a "von Neumann bottleneck" between CPU and memory. Neuromorphic architectures overcome
this by distributing synapses and neurons, efficiently scaling both memory and compute elements, eliminating the bottleneck.
2. Device Level

At the device level, traditional von Neumann computing relies on transistors, resistors, capacitors, and
inductors, each with limitations like energy consumption, rigid design, and limited fault tolerance. In
contrast, the brain's neurons, with adaptable and fault-tolerant soma, synapses, axons, and dendrites, offer
greater complexity and connectivity than conventional circuitry.

Fig 5: a) In traditional circuits, interconnectivity relies on transistors, resistors, capacitors, and inductors. b) Neuronal
circuits, on the other hand, exhibit intricate interconnectivity through adaptable neurons, synapses, axons, and dendrites.

3. Performance Level:

Comparing biological and technological systems reveals that, while individual components in silicon
systems may seem superior in some aspects, the overall functionality falls short. Even with reasonable
extrapolations, conventional silicon-based systems can't match the density and power efficiency of a human
brain. We need new conceptual developments.

Table-1: A comparison between biological and silicon-based systems is illustrated in the table below. Red indicates areas where
biology excels, while black indicates areas where silicon is superior. The potential for advancement lies in harnessing the strengths of
both biological and silicon components.

We need technology that combines the benefits of both biological and engineered materials without inheriting their
drawbacks. This requires significant changes in nanoscale device designs, the use of new functional materials, and
innovative software implementations. Unlike traditional computational systems that rely on billions of uniform
devices responding predictably to specific stimuli, neuromorphic circuits, especially synapses, are expected to
respond uniquely based on their past experiences. Therefore, the design and implementation of new architectures
must undergo substantial modifications.
Impact on Modern Electronics:

Neuromorphic systems have made a significant impact on modern electronics, revolutionizing various fields and
applications. Here are some key points:

1. Efficiency and Speed: Neuromorphic systems, with their efficiency and speed, could help address the
computational challenges facing AI development. Superconducting electronics (SCE), used in some
neuromorphic systems, can provide scalability, programmability, biological fidelity, online learning
support, efficiency, and speed.

2. Overcoming the ‘von Neumann bottleneck’: Neuromorphic electronics, inspired by biological neurons
and synapses, can alleviate the ‘von Neumann bottleneck’ between memory and processor. This offers a
promising solution to reduce efforts in data storage and processing.

3. Wide Range of Applications: The impact of commercial neuromorphic computing could be enormous. It
has repercussions across various fields, including image and speech recognition, robotics and autonomous
vehicles, sensors in the Internet of Things (IoT), medical devices, and even artificial body parts.

4. Adaptive Synapses: One of the challenges in neuromorphic systems is providing effective adaptive
synapses in large numbers. New methods using VLSI technology have been proposed for implementation,
as well as novel techniques for storing adaptable weights.

Neuromorphic systems have significantly influenced modern electronics, offering solutions to some of the most
challenging problems in the field. Their potential in various applications marks them as a revolutionary step in
electronics.

Challenges:

Neuromorphic systems, which mimic the structure and function of the brain using electronic devices, face several
challenges:

1. Integration of New Memory Technologies: Incorporating new memory technologies into state-of-the-art
CMOS processes is a significant challenge.

2. Algorithm Definition: Defining the algorithms that specify the network structure and dynamics is another
hurdle.

3. Imbalanced Dataflow: Dealing with imbalanced data-flow due to spatial-temporal sparsity is a complex
issue.

4. Parameter Management: Managing multiple parameters and hyper-parameters that affect the network
state is a daunting task.

5. Replicating Neuron and Synapse Behaviour: Developing devices and circuits that can naturally replicate
the behaviour of neurons and synapses is a major challenge.

6. Security Concerns: The adoption of neuromorphic computing systems that implement neural network and
machine learning algorithms on hardware generates the need for protecting the data security in such
systems.

7. Energy Consumption: Developing algorithms and hardware capable of performing intricate computations
with minimal energy consumption is a key challenge.

8. Learning and Adaptation: Creating systems that can learn and adapt over time, and devising methods to
control the behaviour of artificial neurons and synapses in real-time is a significant challenge.

9. Information Representation: How information is represented is a crucial design challenge.


10. Adaptation to Changes: Facilitating adaptation to local and evolutionary changes is a key design
challenge.

These challenges need to be addressed for the successful implementation and advancement of neuromorphic
systems. Despite these challenges, the field of neuromorphic engineering continues to make significant strides,
offering promising solutions for the future of computing.

Future Scope:

Neuromorphic systems, which mimic the neural structure and functionality of the human brain, have a promising
future in various fields:

1. High-Performance Computing: Neuromorphic systems could be the future of high-performance
computing. They promise improved energy efficiency, an important consideration as energy
consumption and waste heat are limiting factors for conventional electronics.

2. Robotics: Neuromorphic computing can be used to develop robots that are capable of learning and
adapting to their environment.

3. Sensing: Neuromorphic computing can be used to develop sensors that can detect and analyse patterns in
data.

4. Artificial Intelligence and Machine Learning: Neuromorphic computing devices and materials are
expected to allow ICs to perform "compute in memory" (CIM) with a thousand- to a million-fold
improvement in power consumption compared to the best digital AI chips today.

5. Internet of Things (IoT): Neuromorphic computing is expected to play a significant role in the
development of responsive “edge” systems for IoT.

However, the success of neuromorphic computing hinges on our ability to integrate and scale-up components as
demonstrated by the semiconductor industry. Despite the challenges, the field of neuromorphic engineering
continues to make significant strides, offering promising solutions for the future of computing.

Conclusion:

In conclusion, neuromorphic systems, which mimic the neural structure and functionality of the human brain, have
emerged as a promising solution to the challenges faced by conventional computing systems. Despite the
complexities in integrating new memory technologies into CMOS processes and defining the algorithms that specify
the network structure and dynamics, significant strides have been made in the field of neuromorphic engineering.

The impact of neuromorphic systems on modern electronics is profound, offering solutions to some of the most
challenging problems in the field. Their potential in various applications, including high-performance computing
and artificial intelligence, marks them as a revolutionary step in electronics.

However, the success of neuromorphic computing hinges on our ability to integrate and scale-up components.
Despite the challenges, the field of neuromorphic engineering continues to make significant strides, offering
promising solutions for the future of computing. As we continue to explore and develop these systems, we can
expect to see even more innovative applications and advancements in the years to come.

Therefore, the future scope of neuromorphic systems is vast, with potential impacts in high-performance computing,
robotics, sensing, artificial intelligence, machine learning, and the Internet of Things (IoT). As we continue to
overcome the challenges and harness the potential of neuromorphic systems, we are paving the way for a new era in
electronics and computer technology.
References:

1. Report of a Roundtable Convened to Consider Neuromorphic Computing Basic Research Needs, October
29-30, 2015, Gaithersburg, MD.

2. Indiveri, Giacomo (2021). Introducing 'Neuromorphic Computing and Engineering'. Neuromorphic
Computing and Engineering, 1(1):010401.

3. What is Neuromorphic Computing and What Are Its Applications. futurescope.co.

4. Are Neuromorphic Systems the Future of High-Performance Computing? Physics World.

5. Neuromorphic Computing: Insights and Challenges. ieee.org.

6. CMOS-MEMS Integration: Why, How and What? uci.edu.
