
Animating Dreams and Future Dream Recording

Daniel Oldis
Dream Imagery and Speech Decoding

Summary

Dream visualization using functional magnetic resonance imaging (fMRI) and transcription of sub-vocal speech using EMG have established early successes (Kamitani, 2008; Gallant, 2011; Horikawa, 2013; Jorgensen, 2005; Bandi, 2016; Khan and Jahan, 2016).

Dream image reconstruction using fMRI consists of training software to map visual activity patterns in the awake brain. If the software can then correlate the dreamed image or image features, it can reverse-engineer a graphical representation of the dreamed image. From Science, 2013: “Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. The findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.”

Dream speech transcription is a category of sub-vocal (silent or imagined) speech decoding, which uses trained pattern recognition of EMG signals emanating from the speech muscles to synthesize or transcribe words and sentences. While sub-vocal speech transcription has mostly focused on medical applications for the physically impaired or military applications for special acoustic environments, the same techniques can be applied to dreamed speech, which is generally reported as coherent (Kilroe, 2016). Decipherable EMG patterns associated with counting in dreams and with simple sentences have been observed. My own research using laryngeal EMG correlated with dream reports further suggests the intriguing possibility that we covertly vocalize other dream characters’ speech!

Current Project

Introduction

It has been well documented that dream speech elicits corresponding phasic muscle potentials in the facial, laryngeal and chin muscles (McGuigan, 1971; Shimizu, 1986), and that muscles associated with dream motor behavior (such as leg or arm movement) show corresponding muscle potentials (Dement and Kleitman, 1957; Wolpert, 1960)—though discernible speech and movement are largely inhibited. Measurement of such muscular electrical activity is the domain of electromyography (EMG), though near-infrared spectroscopy has also recently been employed.

This project, a dream animation prototype, is intended as a proof of concept for dream movement simulation and as a partial implementation of a more ambitious goal: digitally recording, i.e. reconstructing, a dream (dream imagery, transcribed dream speech and dream motor behavior—a dream movie). The animation is meant to demonstrate the feasibility of including dream motor behavior simulation in a combined protocol directed at full, though approximate, dream reconstruction.

Method

The EMG/EOG data that powers the animation program was collected at the University of Texas, Austin, Cognitive Neuroscience Lab in March 2016, under the direction of David Schnyer and funded by DreamsBook, Inc. Two sleep subjects were monitored and scored with polysomnography for one night each, for a total of seven recorded REM cycles. The right- and left-leg EMG electrodes were positioned on the quadriceps, the right-arm EMG on the lateral head of the triceps, and the [speech] EMG on the chin.
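The chin [speech] channel is the kind of signal that sub-vocal speech decoding operates on. As a toy illustration of the pattern-recognition step (this is not the decoding software used in the cited studies), the sketch below windows a 500 Hz EMG trace into RMS features and matches them against per-word centroids; every signal, feature value and label here is invented.

```python
import math

def rms_features(signal, window=250):
    """Split a 500 Hz EMG trace into 0.5 s windows; return per-window RMS."""
    return [
        math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
        for i in range(0, len(signal) - window + 1, window)
    ]

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def classify(features, centroids):
    """Return the word whose training centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda word: math.dist(features, centroids[word]))

# Invented training features: each "word" has a distinct activation envelope.
train = {
    "yes": [[0.8, 0.1], [0.9, 0.2]],   # strong burst, then quiet
    "no":  [[0.1, 0.8], [0.2, 0.9]],   # quiet, then strong burst
}
centroids = {word: centroid(vecs) for word, vecs in train.items()}

# An invented one-second chin-EMG trace: a burst followed by near-silence.
trace = [0.7] * 250 + [0.2] * 250
print(classify(rms_features(trace), centroids))
```

Real decoders use richer features (spectral energy, onset timing) and trained classifiers, but the structure is the same: learn per-utterance EMG patterns, then match new signals against them.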

UT provided the REM-scored data to me in EDF and text format, comprising six channels of 500 Hz samples (eyes, chin, arm and legs). Initially, I loaded individual channels into specific muscle-data columns of OpenSim leg and arm models by cutting and pasting into sample input templates. I was then able to visualize upper-leg muscle activity and simple arm movement. Yet this method was limited as a means of achieving full-body simulation of dream movement. I enlisted my brother, David Oldis, an iOS programmer, to create an animation from the data files provided by UT. He wanted an animation that could be played on an iPad or iPhone, so he selected Apple's 3D rendering framework, SceneKit. (The avatar here is assumed upright because of the limited sensors; in fact, the dreamer may be sitting, lying—or flying in the dream. Eye movements in some of the simulation models used in this project are represented by head movements.)
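Whatever the rendering layer, the raw channels must first be reduced to something an animation can consume. A minimal sketch of that step, assuming a rectify-and-smooth envelope mapped to a clamped joint angle (this is illustrative, not the project's actual iOS code; all signal values, gains and names are invented):

```python
def activation_envelope(samples, window=50):
    """Full-wave rectify, then moving-average: 50 samples = 100 ms at 500 Hz."""
    rectified = [abs(x) for x in samples]
    envelope, acc = [], 0.0
    for i, x in enumerate(rectified):
        acc += x
        if i >= window:
            acc -= rectified[i - window]
        envelope.append(acc / min(i + 1, window))
    return envelope

def joint_angle(activation, max_degrees=90.0, gain=2.0):
    """Map a roughly 0..1 activation level to a clamped joint rotation."""
    return max(0.0, min(max_degrees, gain * activation * max_degrees))

# Invented quadriceps channel: quiet, a 0.4 s twitch, quiet again.
emg = [0.0] * 100 + [0.4, -0.5, 0.45, -0.35] * 50 + [0.0] * 100
angles = [joint_angle(a) for a in activation_envelope(emg)]
print(f"peak knee angle during the twitch: {max(angles):.0f} degrees")
```

In a SceneKit-style renderer, each smoothed value would set the corresponding joint node's rotation frame by frame; the same envelope could equally be pasted into OpenSim muscle-activation columns.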

The Magical Mystery Dream Tour

Will lucid dreamers be the first dream video stars, escorting the world through the magical land of dreams?
