The modern definition of artificial intelligence (AI) is "the study and design of intelligent
agents," where an intelligent agent is a system that perceives its environment and takes
actions that maximize its chances of success.
John McCarthy, who coined the term in 1956, defines it as "the science and engineering of
making intelligent machines."
Other names for the field have been proposed, such as computational intelligence, synthetic
intelligence or computational rationality.
The term artificial intelligence is also used to describe a property of machines or programs:
the intelligence that the system demonstrates.
AI research uses tools and insights from many fields, including computer science,
psychology, philosophy, neuroscience, cognitive science, linguistics, operations research,
economics, control theory, probability, optimization and logic.
AI research also overlaps with tasks such as robotics, control systems, scheduling, data
mining, logistics, speech recognition, facial recognition and many others.
Computational intelligence

Computational intelligence involves iterative development or learning (e.g., parameter
tuning in connectionist systems).
Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and
soft computing.
Subjects in computational intelligence, as defined by the IEEE Computational Intelligence
Society, mainly include:
Neural networks: trainable systems with very strong pattern recognition capabilities.
Fuzzy systems: techniques for reasoning under uncertainty that have been widely used in
modern industrial and consumer product control systems; they are capable of working with
concepts such as 'hot', 'cold', 'warm' and 'boiling'.
Evolutionary computation: applies biologically inspired concepts such as populations,
mutation and survival of the fittest to generate increasingly better solutions to a problem.
These methods most notably divide into evolutionary algorithms (e.g., genetic algorithms)
and swarm intelligence (e.g., ant algorithms).
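As a toy illustration of the evolutionary-computation idea above, the sketch below evolves bitstrings toward an all-ones target using populations, mutation, crossover, and survival of the fittest. The fitness function, population size, and rates are illustrative choices, not taken from any system described in this text.

```python
import random

random.seed(0)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.02

def fitness(genome):
    # "OneMax" toy problem: fitness is simply the count of 1 bits
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parents
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def select(population):
    # Tournament selection: survival of the fitter of a random pair
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))
```

Swarm-intelligence methods such as ant algorithms follow the same population-based pattern, but share information through the environment rather than through crossover.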
With hybrid intelligent systems, attempts are made to combine these two groups.
Expert inference rules can be generated through neural networks, or production rules can be
derived from statistical learning, as in ACT-R or CLARION.
It is thought that the human brain uses multiple techniques to both formulate and cross-check
results.
Thus, systems integration is seen as promising and perhaps necessary for true AI, especially
the integration of symbolic and connectionist models.
One of the greatest challenges facing artificial intelligence development is understanding the
human brain and figuring out how to mimic it. Now, one group reports in ACS Nano that they
have developed an artificial synapse capable of simulating a fundamental function of our
nervous system -- the release of inhibitory and stimulatory signals from the same
"pre-synaptic" terminal.
The human nervous system is made up of over 100 trillion synapses, structures that allow
neurons to pass electrical and chemical signals to one another. In mammals, these synapses
can initiate and inhibit biological messages. Many synapses just relay one type of signal,
whereas others can convey both types simultaneously or can switch between the two. To
develop artificial intelligence systems that better mimic human learning, cognition and image
recognition, researchers are imitating synapses in the lab with electronic components. Most
current artificial synapses, however, are only capable of delivering one type of signal. So,
Han Wang, Jing Guo and colleagues sought to create an artificial synapse that can
reconfigurably send stimulatory and inhibitory signals.
The researchers developed a synaptic device that can reconfigure itself based on voltages
applied at the input terminal of the device. A junction made of black phosphorus and tin
selenide enables switching between the excitatory and inhibitory signals. This new device is
flexible and versatile, which is highly desirable in artificial neural networks. In addition, the
artificial synapses may simplify the design and functions of nervous system simulations.
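The reconfiguration described above, in which an applied control voltage switches the device between excitatory and inhibitory modes, can be caricatured in software. This is a loose toy model only; the thresholds, magnitudes, and class names are illustrative assumptions, not measurements or an API from the actual paper.

```python
class ReconfigurableSynapse:
    """Toy model: one weight whose sign is set by a control voltage."""

    def __init__(self, magnitude=0.5, threshold=0.0):
        self.magnitude = magnitude   # connection strength
        self.threshold = threshold   # control voltage at which the mode flips
        self.sign = +1               # +1 = excitatory, -1 = inhibitory

    def configure(self, control_voltage):
        # The junction switches modes based on the voltage at its input terminal
        self.sign = +1 if control_voltage >= self.threshold else -1

    def transmit(self, presynaptic_spike):
        # Output: positive (stimulatory) or negative (inhibitory) signal
        return self.sign * self.magnitude * presynaptic_spike

syn = ReconfigurableSynapse()
syn.configure(1.0)        # positive control voltage -> excitatory mode
print(syn.transmit(1.0))  # 0.5
syn.configure(-1.0)       # negative control voltage -> inhibitory mode
print(syn.transmit(1.0))  # -0.5
```

A single terminal that can emit both signal types, as here, is what distinguishes the device from earlier artificial synapses that deliver only one.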
Story Source:
Materials provided by American Chemical Society. Note: Content may be edited for style
and length.
Journal Reference:
1. He Tian, Xi Cao, Yujun Xie, Xiaodong Yan, Andrew Kostelec, Don DiMarzio, Cheng Chang,
Li-Dong Zhao, Wei Wu, Jesse Tice, Judy J. Cha, Jing Guo, Han Wang. Emulating Bilingual
Synaptic Response Using a Junction-Based Artificial Synaptic Device. ACS Nano, 2017;
DOI: 10.1021/acsnano.7b03033
Passing the chemical Turing test: Making artificial and real cells talk
Date: January 25, 2017
Source: American Chemical Society
The classic Turing test evaluates a machine's ability to mimic human behavior and
intelligence. To pass, a computer must fool the tester into thinking it is human -- typically
through the use of questions and answers. But single-celled organisms can't communicate
with words. So this week in ACS Central Science, researchers demonstrate that certain
artificial cells can pass a basic laboratory Turing test by "talking" chemically with living
bacterial cells.
Sheref S. Mansy and colleagues proposed that artificial life would need to have the ability to
interact seamlessly with real cells, and this could be evaluated in much the same way as a
computer's artificial intelligence is assessed. To demonstrate their concept, the researchers
constructed nano-scale lipid vessels capable of "listening" to chemicals that bacteria give off.
The artificial cells showed that they "heard" the natural cells by turning on genes that made
them glow. These artificial cells could communicate with a variety of bacterial species,
including V. fischeri, E. coli and P. aeruginosa. The authors note that more work must be
done, however, because only one of these species engaged in a full cycle of listening and
speaking in which the artificial cells sensed the molecules coming from the bacteria, and the
bacteria could perceive the chemical signal sent in return.
Story Source:
Materials provided by American Chemical Society. Note: Content may be edited for style
and length.
Journal Reference:
1. Roberta Lentini, Noël Yeh Martín, Michele Forlin, Luca Belmonte, Jason Fontana, Michele
Cornella, Laura Martini, Sabrina Tamburini, William E. Bentley, Olivier Jousson, Sheref S.
Mansy. Two-Way Chemical Communication between Artificial and Natural Cells. ACS
Central Science, 2017; DOI: 10.1021/acscentsci.6b00330
The world's first demonstration of spintronics-based artificial intelligence
Date: December 20, 2016
Source: Tohoku University
Researchers at Tohoku University have, for the first time, successfully demonstrated the
basic operation of spintronics-based artificial intelligence.
Artificial intelligence, which emulates the brain's information-processing functions to quickly
execute complex and complicated tasks such as image recognition and weather prediction,
has attracted growing attention and has already been partly put to practical use. The
currently used artificial intelligence works within the conventional framework of
semiconductor-based integrated-circuit technology, which, however, lacks the compactness
and low-power features of the human brain. To overcome this challenge, the implementation
of a single solid-state device that plays the role of a synapse is highly promising.
The Tohoku University research group of Professor Hideo Ohno, Professor Shigeo Sato,
Professor Yoshihiko Horio, Associate Professor Shunsuke Fukami and Assistant Professor
Hisanao Akima developed an artificial neural network employing their recently developed
spintronic devices, which comprise micro-scale magnetic material. Unlike conventional
magnetic devices, these spintronic devices can memorize arbitrary values between 0 and 1
in an analogue manner, and can thus perform the learning function served by synapses in
the brain.
Using the developed network, the researchers examined an associative memory operation,
which is not readily executed by conventional computers. Through multiple trials, they
confirmed that the spintronic devices have a learning ability with which the artificial neural
network can successfully associate memorized patterns from noisy input versions, just as
the human brain can.
The proof-of-concept demonstration in this research is expected to open new horizons in
artificial intelligence technology -- one of compact size that simultaneously achieves fast
processing and ultralow power consumption. These features should enable artificial
intelligence to be used in a broad range of societal applications such as image/voice
recognition, wearable terminals, sensor networks and nursing-care robots.
Story Source:
Materials provided by Tohoku University. Note: Content may be edited for style and length.
Journal Reference:
1. William A. Borders, Hisanao Akima, Shunsuke Fukami, Satoshi Moriya, Shouta Kurihara,
Yoshihiko Horio, Shigeo Sato, Hideo Ohno. Analogue spin–orbit torque device for
artificial-neural-network-based associative memory operation. Applied Physics Express,
2017; 10 (1): 013007 DOI: 10.7567/APEX.10.013007
It is now evident that complex diseases, such as cancer, often require effective drug
combinations to make any significant therapeutic impact. As the drugs in these combination
therapies become increasingly specific to molecular targets, designing effective drug
combinations as well as choosing the right drug combination for the right patient becomes
more difficult.
Artificial intelligence is having a positive impact on drug development and personalized
medicine. With the ability to efficiently analyze small datasets that focus on the specific
disease of interest, QPOP and other small dataset-based AI platforms can rationally design
optimal drug combinations that are effective and based on real experimental data and not
mechanistic assumptions or predictive modeling. Furthermore, because of the efficiency of
the platform, QPOP can also be applied towards precious patient samples to help optimize
and personalize combination therapy.
Story Source:
Materials provided by SLAS (Society for Laboratory Automation and Screening). Note:
Content may be edited for style and length.
Journal Reference:
1. Masturah Bte Mohd Abdul Rashid, Edward Kai-Hua Chow. Artificial Intelligence-Driven
Designer Drug Combinations: From Drug Development to Personalized Medicine. SLAS
TECHNOLOGY: Translating Life Sciences Innovation, 2018; 247263031880077 DOI:
10.1177/2472630318800774
Brain-hacking & memory black market: Cybersecurity experts warn of imminent risks
of neural implants
Published time: 31 Oct, 2018 23:53. Edited time: 1 Nov, 2018 11:39
The human brain may become the next frontier in hacking, cybersecurity researchers have
warned in a paper outlining the vulnerabilities of neural implant technologies that can
potentially expose and compromise our consciousness.
While basic brain implants are already here, current science indicates we are on the brink of
mastering the chemistry of memory. Deep Brain Stimulation (DBS) therapy involves
implanting a device into the body to send electrical impulses to electrodes specifically placed
to treat neurological diseases like Parkinson’s. “Memory implants” like those featured in
dystopian TV series Black Mirror will most likely be built on existing DBS architecture.
Kaspersky Labs collaborated with Oxford University to test these systems for security
flaws before we start linking our consciousness to them, and their report is quite illuminating.
The technology to actually control memories through such implants will be available within 20
years, according to their predictions. A DARPA-funded study was recently able to isolate
memory-encoding electrical signals from humans and feed them back, boosting short-term
memory performance by 37 percent.
The researchers discovered that security in existing implants is weakest where they connect
to other systems, such as the medical management platforms used by doctors and surgeons,
and where data transfers between the implant, software, and other networks take place. The
most advanced of these devices currently in use operates over Bluetooth, which, while
popular and easy to use, is not the most secure protocol.
Their report warned that malicious actors could not only hijack the implant itself, causing pain
or otherwise making the body’s systems go haywire, but once the technology becomes
advanced enough, they could hold a person’s memories for ransom – forcing the victim to
pay up to access their own thoughts.
Any such implant will naturally require a backdoor so it can be accessed by medical
professionals in case of emergency, and the research team underscored the human problem
with such a setup: what's to stop those medical professionals from selling access to a VIP
patient's implant, or from simply forgetting to change the factory-preset password?
Even though no attacks targeting neurostimulators have been observed in the wild,
researchers warned it’s only because the technology is not yet widespread and there’s still
time to fix potential vulnerabilities before it’s too late.
Mind hacking: Scientists want new laws to stop our thoughts from being stolen
Published time: 26 Apr, 2017 20:16
Researchers have called for radical new legislation protecting people’s thoughts from being
stolen and maybe even deleted.
Biomedical ethicists Marcello Ienca and Roberto Andorno believe that while rapid advances
in neurotechnology have created opportunities in modern medicine, they also present new
challenges for human privacy.
Writing in the journal Life Sciences, Society and Policy, the pair have warned that brain-
hacking and “hazardous use of medical neurotechnology” could threaten the integrity of our
thoughts.
New laws could be used as safeguards preventing people's brains from being read or stimulated
without their consent.
Fear of cognitive intrusion is not paranoia borne out of science fiction, they say.
Last year, the US military successfully tested electrical brain stimulation technology aimed at
enhancing the performance of soldiers in high-pressure situations.
Brain hacking, freezing time & weaponized insects: Meet US military’s dystopian plans
Published time: 6 Apr, 2018 14:33. Edited time: 7 Apr, 2018 12:02
DARPA, the US Military’s research arm, has revealed it’s one step closer to achieving its
goal of an implantable human memory prosthesis – just one of several dystopian-sounding
projects in the pipeline.
The Defense Advanced Research Projects Agency (DARPA), which is part of the Pentagon,
was set up 60 years ago in response to the Soviet launch of Sputnik 1, with the stated goal of
being “the initiator and not the victim of strategic technological surprises.”
However, not all of the agency’s developments have been weapons in the traditional sense –
DARPA has also been credited with contributing to the creation of the internet and the
development of GPS.
Currently, several unsettling futuristic technologies, involving militarizing the environment and
manipulating the human body and brain, are under development at the Department of
Defense's "mad science" division. Here RT looks at some of the ways the unit is utilizing AI
and genetic engineering in the name of national security.
Brain hacking
DARPA has invested heavily in brain technologies since 2013, when it unveiled its BRAIN
(Brain Research through Advancing Innovative Neurotechnologies) initiative, consisting of
several programs dedicated to making "revolutionary" advancements in neuroscience.
These technologies are something of a double-edged sword, holding the potential for
rehabilitation but also running the risk of being applied unethically. "The brain is the next
battlespace," James Giordano, a neuroethicist at Georgetown University Medical Center,
previously told Foreign Policy.
Neuroscientists have warned of the ethical issues around such developments. In 2013, the
same year BRAIN launched, a group of scientists confirmed they had successfully implanted
false memories in a mouse’s brain.
Among other brain-hacking technologies in development is a human memory 'prosthesis'.
DARPA recently reported it was on track to achieve this implantable brain chip, which would
facilitate the formation of new memories and the retrieval of existing ones in individuals who
have lost these capacities as a result of traumatic brain injury or neurological disease.
Brain chips are also being examined by DARPA-funded researchers as a way to treat PTSD
by deep brain stimulation. They are looking at sending low doses of electricity deep into the
brain in order to alter a person’s mood.
DARPA's Next-Generation Nonsurgical Neurotechnology program seeks to make high-resolution
neural interfaces practical for able-bodied people. The physics involved are daunting, but
solutions might use ultrasound, light, magnetic or electric fields, photoacoustics, or some
combination. http://www.darpa.mil/news-events/2018-03-16 …
DARPA’s Advanced Plant Technologies (APT) Program is hoping to turn trees into spies by
genetically engineering plant-based sensors and monitoring them remotely. The plants will
be able to “detect the presence of certain chemicals, pathogens, radiation, and even
electromagnetic signals,” and by doing so keep the US military out of harm’s way.
Sea creatures are also the focus of a project hoping to utilize oceanic organisms as sensors
to track an enemy’s undersea vehicles. DARPA is looking at mollusks, crustaceans and
certain types of fish to see which are most suited to support a sensor
network monitoring threats to US naval vessels. Endangered species and intelligent
mammals, such as dolphins and whales, are excluded from the program.
Then there’s the Insect Allies program, which aims to preserve US crops from threats by
engineering insects. According to DARPA “national security can be quickly jeopardized by
naturally occurring threats to the crop system, including pathogens, drought, flooding, and
frost, but especially by threats introduced by state or non-state actors.”
Todd Kuiken, a senior research scholar at the Genetic Engineering and Society Center at
North Carolina State University, has raised concerns over a synthetic biology experiment of
this sort being funded through the Department of Defense.
“Because the US is funding these initiatives through the Department of Defense, rather than
a civilian organization, it’s not hard to see how some in the international community may
perceive these as potential bioweapons programs, rather than investments in purely
defensive technologies.
“After all, if the US is able to engineer an insect to carry a virus for protective purposes, it
wouldn’t be hard to engineer that same insect to carry a deadly virus for offensive ones. It’s a
classic dual-use technology scenario,” he wrote in Slate.
The organization has an annual budget of around $3 billion: it requested $3.17 billion for
2018, rising to $3.44 billion for 2019.
Not what you had in mind? Brain implant could help with depression, study says
Published time: 1 Dec, 2018 04:47. Edited time: 1 Dec, 2018 11:20
In case you were curious how ‘they’ were going to convince us we all need brain implants,
look no further. According to a new study, the devices may prove beneficial in treating
depression, and who hasn’t been depressed?
Increasingly common but still poorly understood, depression is a modern mental health
plague, affecting 300 million people worldwide every year. Scientists at the University of
California at San Francisco believe they’ve found a new way of treating the condition using
deep-brain stimulation (DBS). Currently used to treat Parkinson’s and epilepsy, DBS involves
implanting electrodes into specific areas of the brain and delivering controlled doses of
electricity through a pacemaker-like device. The impulses are thought to restore a healthy
electrical pattern, alleviating the patient’s symptoms.
Previous attempts at using DBS to treat depression had mixed results, sometimes
catapulting depressed patients into mania or even making them more depressed, but the UC
team zeroed in on the orbitofrontal cortex (OFC) – uncharted neurological territory that
appeared advantageous both for its role in emotion processing and its plentiful connections
to other parts of the brain involved in emotional regulation.
Given access to a small group of epileptics who already had electrodes implanted into
various parts of their brains to pinpoint seizure locations – a hyper-specialized population
that means their results are unlikely to be replicated anytime soon – the researchers
stimulated different brain regions over a period of several days, sometimes swapping in a
“sham stimulation” to act as a control.
Afterwards, subjects were asked about their mood. Depressed subjects were consistently
happier following OFC stimulation in a way non-depressed subjects were not, and the effect
did not appear following sham stimulation. Unlike in previous DBS experiments, the positive
mood appeared natural – both in how the subjects behaved and how their brainwaves looked
on an EEG.
The results of the study, published in Current Biology, provide a much-needed boost for the
ailing DBS-for-depression-treatment paradigm, which suffered a major blow after a large
2013 trial was discontinued for producing poor results. Other trials ended more ambiguously
with temporary, erratic improvements, but the results of this experiment have encouraged the
team to pursue the OFC further as a stimulation site for depression treatment.
Depression is notoriously resistant to treatment, and current drug-based therapies fail many
patients. Surprisingly little is known about the mental circuitry responsible for depression –
the serotonin-imbalance theory has been repeatedly debunked, though effective marketing
means 13 percent of Americans continue to take antidepressants devotedly.
The researchers admit little is known about how exactly DBS produces the “complex
behavioral changes” seen in their study, but what’s the harm in trying to make the novelty
tech more mainstream? Oh, right… Just last month cybersecurity experts warned about the
vulnerabilities of neural implant technologies that can potentially expose the human brain,
opening an entirely new era of brain hacking, identity theft and memory black markets.
Today, an average computer user cannot even keep the machine secured. So what will the
world look like when hacking your mind becomes as easy as infecting your machine with a
computer virus?
Synthetic biology is becoming one of the most powerful forms of technology in the world. But
many people fear that scientists’ games with the genetics of life forms could spin out of
control and open the door to a new age of bio-hacking and bio-terrorism.
Naturally occurring viruses and bacteria not only make people sick; they also control the
behavior and condition of their hosts, though without any malice. But the consequences of
exposure to an artificially created virus could be much more serious than a headache or a
fever.
“Synthetic biology will lead to new forms of bioterrorism,” security expert Marc Goodman told
the Daily Mail. “Bio-crime today is akin to computer crime in the early ’80s.”
Viruses and bacteria manipulate the chemicals inside the human body, and by programming
them to send the right agents into the brain, a bio-programmer could potentially take control
of a victim's behavior.
We are seeing the opening stages of the synthetic biology industry. Basic tasks such as
decoding DNA, inserting and excising parts of it, and relatively successful attempts at cloning
are pretty much all that modern science can currently carry out.
But in the ’80s, computer science was at a similar level of maturity. At that time, no one could
really believe that 20 years later any person would have greater power over a computer –
and not only their own – than the best present-day programmers.
Cells are living computers and DNA is a programming language that can be used to control
and influence life forms, believes Andrew Hessel of Singularity University, based at NASA's
research campus.
“Synthetic biology – the writing of life,” Hessel says. “It's growing fast. It will grow faster than
computer technologies.”
Programming DNA, however, is still largely speculative at this point. There is no development
environment or framework for manipulating the cell. Just as in computer programming, a set
of basic instructions and codes has to be developed before an average coder can perform
tasks of greater complexity.
The industry is developing rapidly and the future of DNA programming seems bright. But
drawing parallels with computer science, humankind would do well to treat the problem of
"malicious bio-programmers" with all possible seriousness and to proactively develop
defensive and counter-offensive methods.
This one’s for the books: in a jaw-dropping study, a team just turned the human brain from a
read-only memory device to a rewritable one.
“What?” you might ask. Of course the brain is rewritable. It’s constantly using electrical and
chemical signals to encode our thoughts and memories.
But that’s biology. When it comes to current neurotechnologies, humans haven’t been able to
directly encode memories into our own brains. So far, projects like the BRAIN
Initiative or connectomics have only helped us roughly glimpse the complicated neural code
buzzing in our heads—that is, to “read” the brain.
The team boosted memory performance by 37 percent—a shockingly large effect. The results
were published this month in the Journal of Neural Engineering.
“This is the first time scientists have been able to identify a patient’s own brain cell code or
pattern for memory and, in essence, ‘write in’ that code to make existing memory work
better,” said study author Dr. Robert Hampson at Wake Forest Baptist Medical Center.
The fact that the team distilled noisy neuronal chatter into “electronic” memories that the brain
can reinterpret is incredible. For the future, it’s not hard to imagine fishing out hundreds or
thousands of similar memory traces, translating them into 0s and 1s, and essentially digitizing
a memory library.
But perhaps more promising is that the technology offers a glimpse of hope for those suffering
from memory loss. People with Alzheimer’s disease, stroke, or epilepsy often struggle with
forming new memories—this new tech offers a way to give their failing brains a boost.
Hijacked Circuit
From the onset, the team took a clever approach: rather than trying to mimic all the minutiae
that goes on in the brain during memory encoding, they looked for a simple—but sufficiently
accurate—digital replacement.
At the heart of every memory is a set of electrical signals that unfold in time. To Dr. Theodore
Berger, this is the crux: it means that we can reduce something as abstract as a memory into
mathematical equations and put them into a computational framework.
“Of course people asked: can you model [memory] and put it into a device? Can you get that
device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s
too hard,” Berger said.
A biomedical engineer at the University of Southern California, Berger silenced his skeptics in
2015 when he introduced a set of mathematical theorems that powered the first memory
prosthetic based on the hippocampus.
At the heart of information processing in the hippocampus is a single pathway: electrical pulses
travel from the input node, called CA3, to the output, CA1. But memory signals are non-linear:
they’re noisy, easy to disrupt, and often overlap in time. The hippocampal network dramatically
amplifies even random changes in the signals, leading to vastly different outputs—a “chaotic
black box,” as Berger calls it.
His first success came when, together with Dr. Dong Song, he developed a new model of
hippocampal processing dubbed MIMO (multi-input, multi-output).
Using the algorithm, his team distilled electrical signals of a rat learning to pick a correct lever
into computer code. In rats given a drug to temporarily block their natural memory, the team
then pulsed CA1, the output region, with their code using implanted microelectrodes. Like
magic, the rats regained their memory of the correct lever.
Two rapid-fire successes followed soon after. In monkeys, the team convincingly showed that
the algorithm could also restore memory deficits due to hippocampal malfunction.
The time was ripe for the ultimate test: boosting human memory.
ROM to RAM
The team worked with 22 patients awaiting surgery for epilepsy. This is an enormously
valuable group of people: they already have electrodes implanted in their brains to help
pinpoint the source of their seizures. Similar to before, the team focused on the CA3 to CA1
hippocampal circuit, recruiting participants who had electrodes in those areas.
The team first showed a subset of volunteers images on a computer screen. After a brief delay
of staring at a blank screen, the volunteers then picked out the original image from a set of up
to seven others. This test was repeated roughly 100 times, during which the team carefully
recorded from the participants’ hippocampi.
These recordings allowed the team to parse out electrical patterns associated with the correct
response, which were fed into a MIMO model. The data trained the model to predict how the
output, CA1, fires in response to CA3 inputs in real-time.
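The pipeline described above—record paired CA3/CA1 activity, fit a multi-input, multi-output model, then predict CA1 patterns from new CA3 input—can be sketched with a toy linear stand-in. The actual MIMO model in the study is nonlinear and spike-based; the synthetic data, linear least-squares fit, and all dimensions below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_in, n_out = 200, 8, 4          # trials, "CA3" channels, "CA1" channels
W_true = rng.normal(size=(n_in, n_out))    # unknown input-output map to recover

# Synthetic "recordings": CA1 responses are a noisy function of CA3 activity
X = rng.normal(size=(n_trials, n_in))                        # CA3 activity per trial
Y = X @ W_true + 0.01 * rng.normal(size=(n_trials, n_out))   # CA1 responses

# Fit the multi-input, multi-output map by least squares: minimize ||XW - Y||
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The fitted model can now predict a CA1 pattern for a fresh CA3 input --
# the prediction that would be "played back" as stimulation
x_new = rng.normal(size=n_in)
y_pred = x_new @ W_fit
print(np.allclose(W_fit, W_true, atol=0.05))  # True: the fit recovers the map
```

This also illustrates why random stimulation fails where model-directed stimulation works: only the fitted map reproduces the input-specific output pattern.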
When the team fed this “memory code” back into the participants’ hippocampi during recall,
the effects were immediate. Those who received the electrical boost performed roughly 37
percent better than their normal baseline. In contrast, randomly zapping the hippocampus
using the same frequency and strength as when directed by the algorithm didn’t work.
Longer Recall
To see if the memory prosthesis also works on longer-term memories, the team repeated the
test. This time, six participants were asked to select the original image from a set of three
following a 15 to 75-minute wait.
As before, stimulating CA1 with the memory code boosted memory performance by roughly
35 percent regardless of the delay. The stimulation wasn't equally effective in all tested
patients, and on average the benefits seemed to shrink after more than an hour's wait
(although this could be an artifact of the small sample sizes).
“In one sense we were not surprised to find that this worked. We had a long history of animal
studies in which we were testing this concept in other species—in animals in the laboratory—
and we were having success,” said Hampson.
“What surprised us was how successful it was. Thirty-five percent improvement in memory is
huge; and the results that we have compared to some other techniques indicate this is a very
successful attempt to restore memory,” he explained.
Memory Way
This pilot study is obviously small, but the authors stress that the observed benefits are
consistent among the test group.
“The wide range of patient conditions that affected hippocampal function did not negate the
ability of the MIMO model-based stimulation to enhance encoding and subsequent recall, even
in patients previously classified as having memory deficits,” they said.
That’s good news: even in a broken brain, as long as the hippocampal circuitry is functional,
it’s possible to apply similar principles to develop other memory prostheses for people who
suffer from memory loss.
Right now, the technology only works in people with implanted electrodes, so it’s obviously not
going mainstream anytime soon.
But the study is a powerful proof-of-concept that we’re inching closer to capturing, storing, and
reliving our own memories—even those transformed into computer code.
“To date we’ve been trying to determine whether we can improve the memory skill people still
have. In the future, we hope to be able to help people hold onto specific memories, such as
where they live or what their grandkids look like, when their overall memory begins to fail,”
Hampson said.
Neuralink did not explain how the system translated brain activity or how the device was able to
stimulate brain cells.
"It's not like suddenly we will have this incredible neural lace and will take over people's brains," Mr
Musk said during his presentation. "It will take a long time."
But he said, for those who choose it, the system would ultimately allow for "symbiosis with artificial
intelligence".
Elon Musk creates Neuralink brain electrode firm
Previously Mr. Musk has suggested that AI could destroy the human race.
"Even in a benign AI scenario, we will be left behind," he said.
"With a high bandwidth brain machine interface, we can go along for the ride and effectively have the
option of merging with AI."
Connecting the brain to an interface would create a new layer of "super intelligence" in the human
brain, he added, something people "already have via their phones".
Later, during a question and answer session, he revealed that the device Neuralink is working on has
been tested on monkeys, with the animals able to control a computer with their brains, according to
Mr Musk.
Now the firm is putting together a submission to start human testing, which will need to be approved
by the US Food and Drug Administration.
Mr Musk is also looking to recruit more scientists to the firm, which currently has about 100
employees.
"The plans they describe will require many years of work to deal with technical and ethical challenges,
but the technology could be a big step in working to alleviate certain serious medical conditions like
epilepsy and Parkinson's."
The Kording Lab Twitter account, for scientists from the University of Pennsylvania's neuroscience
department, tweeted that there was "nothing revolutionary but a range of really creative ideas" which
seemed to suggest the firm was "on a great track".
So, it looks like @elonmusk @neuralink have quickly caught up with the incumbents (e.g. blackrock).
There is nothing revolutionary but a range of really creative ideas (love the guide tube like needle
idea). I need to see more data but they seem to be on a great track. https://t.co/Yxdibm9mpG
— KordingLab (@KordingLab) July 17, 2019
Andrew Hires, assistant professor of neurobiology at the University of Southern California, tweeted
that the company had "pushed forward" the best of existing lab technology.
Summary: Neuralink picked the best of existing lab technology and pushed it forward in a number of
important dimensions, and most impressively has an integrated implantable product that goes beyond
the current state of the art.