
We are seeing local synchrony.

Must preserve phase to recover dipoles

Not valid to convert to amplitude first.

Any rectification or squaring step destroys phase, so the result will not localize properly.

Strictly speaking, only raw waveforms should be projected.

In principle, this means one projection per raw sample, e.g. 256 projections/second at a 256 Hz sampling rate.
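
A minimal sketch in NumPy of why only raw waveforms may be projected; the matrix K and all sizes are illustrative stand-ins, not the real LLP inverse solution:

```python
# The inverse solution is a linear operator, so it must be applied to raw
# samples (one projection per sample); any nonlinearity (rectification,
# squaring) must come afterwards, on the voxel time series.
import numpy as np

rng = np.random.default_rng(0)
n_channels, fs = 19, 256                          # 19 channels at 256 Hz
n_voxels = 100                                    # toy voxel count
K = rng.standard_normal((n_voxels, n_channels))   # stand-in inverse operator
eeg = rng.standard_normal((n_channels, fs))       # one second of raw EEG

# Correct order: 256 projections/second on the raw waveform, then rectify.
correct = np.abs(K @ eeg)

# Invalid order: rectify the scalp signal first, then project.
wrong = K @ np.abs(eeg)

print(np.allclose(correct, wrong))                # False: phase was destroyed
```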

Until now, even LORETA training has used derived values; this is a problem.

Using FFT values does not produce valid projections.

If a filtered waveform is used, the projection can be sampled more slowly.
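
A minimal sketch, assuming SciPy: because filtering is linear, the waveform can be band-limited before projection, after which far fewer projections per second are needed. The band and rates are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256                                          # raw sampling rate, Hz
b, a = butter(4, [4, 30], btype="band", fs=fs)    # illustrative 4-30 Hz band
eeg = np.random.default_rng(1).standard_normal((19, fs * 10))

filtered = filtfilt(b, a, eeg, axis=1)            # zero-phase band-pass
slow = filtered[:, ::4]                           # keep every 4th sample: 64 Hz,
                                                  # still above Nyquist for 30 Hz
```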

Important to see the voxels rather than fly blind.

Should not use a single voxel; need to represent the whole ROI.

Can recover an equivalent dipole and equivalent magnitude for the ROI.
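
A minimal sketch of collapsing an ROI to its equivalent dipole, assuming the projection yields a 3-component moment per voxel; the vector sum gives the equivalent dipole and its norm the equivalent magnitude. Sizes are illustrative.

```python
import numpy as np

# 50 ROI voxels, each with an (x, y, z) dipole moment from the projection.
roi_moments = np.random.default_rng(2).standard_normal((50, 3))

equiv_dipole = roi_moments.sum(axis=0)            # equivalent dipole moment
equiv_magnitude = np.linalg.norm(equiv_dipole)    # equivalent magnitude
orientation = equiv_dipole / equiv_magnitude      # unit orientation vector
```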

Important to project the waveforms first, then filter the resulting voxel time series.

sLORETA vs. LORETA

Live training should show the live projection.

Useful for a training screen.

Now we can use ROI data as a signal.

Can compute coherence, phase, etc. between ROIs.
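
A minimal sketch, assuming SciPy: two ROI time series are treated as ordinary signals, giving magnitude-squared coherence and relative phase per frequency bin. The coupling model is illustrative.

```python
import numpy as np
from scipy.signal import coherence, csd

fs = 64                                           # ROI signal rate, Hz
rng = np.random.default_rng(3)
roi_a = rng.standard_normal(fs * 30)              # 30 s from ROI A
roi_b = 0.5 * roi_a + rng.standard_normal(fs * 30)  # partially coupled ROI B

f, coh = coherence(roi_a, roi_b, fs=fs, nperseg=fs * 2)  # coherence per bin
_, pxy = csd(roi_a, roi_b, fs=fs, nperseg=fs * 2)        # cross-spectrum
phase = np.angle(pxy)                             # relative phase per bin
```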

Introduce JSTFA: joint space-time frequency analysis.

Dipole stability; dynamics in space as well as time.

The orientation of a dipole provides additional detail about its precise location.

As a source moves around a gyrus, the dipole appears to rotate, because cortical dipoles lie perpendicular to the folding cortical surface.
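
A minimal sketch of tracking that rotation: the angle between successive dipole orientation vectors quantifies spatial dynamics over time. The vectors are illustrative.

```python
import numpy as np

def rotation_angle(u, v):
    """Angle in degrees between two dipole orientation vectors."""
    cosang = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Two successive equivalent-dipole orientations (illustrative values).
print(rotation_angle(np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.2, 0.0])))
```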


Change training: visualize changes instantly by snapshotting all voxels as a reference.
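
A minimal sketch of snapshot-referenced training; project() is a hypothetical stand-in for the live per-sample projection, and the feedback is each new projection's per-voxel deviation from the snapshot.

```python
import numpy as np

rng = np.random.default_rng(4)
K = rng.standard_normal((100, 19))                # stand-in inverse operator

def project(sample):
    """Hypothetical per-sample projection -> voxel magnitudes."""
    return np.abs(K @ sample)

reference = project(rng.standard_normal(19))      # snapshot all voxels at once

def feedback(sample):
    # Per-voxel change relative to the snapshot: positive where activation
    # rose, negative where it fell, visualized instantly.
    return project(sample) - reference
```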

Can even train the orientation of dipoles and specific activations.

Training can be bidirectional; we could even reinforce peaks versus valleys, or train absolute dipole orientation.
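
A minimal sketch of bidirectional orientation training: reward the cosine similarity between the current equivalent dipole and a target absolute orientation, and flip the sign to reinforce valleys instead of peaks. The target vector is illustrative.

```python
import numpy as np

target = np.array([0.0, 0.0, 1.0])                # desired absolute orientation

def orientation_reward(dipole, sign=+1):
    unit = dipole / np.linalg.norm(dipole)
    return sign * float(unit @ target)            # +1 aligned, -1 anti-aligned

print(orientation_reward(np.array([0.1, 0.2, 3.0])))  # near +1: rewarded
```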

You will be seeing our Live Loreta Projector (LLP) producing 30 sLORETA
projections per second, all 5,500+ voxels, directly usable for biofeedback,
using less than 10% of a laptop CPU.   Our algorithm is something like 100
times faster than any existing design.  This is true live functional
electrical brain imaging, for the first time ever.  We want to discuss the
theoretical underpinnings of how this is used for biofeedback.  We also
include per-voxel dipole orientation in the analysis, moving from joint
time-frequency analysis (JTFA) into joint space-time frequency analysis
(JSTFA), which looks not only at activation in time, but also in space.
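
A toy feasibility check, not the LLP algorithm itself: one projection is a matrix-vector product from the channel vector to 3 components for each of 5,500+ voxels, and timing 30 of them shows why a small fraction of a laptop CPU suffices. The channel count is an assumption.

```python
import time
import numpy as np

n_channels, n_voxels = 19, 5500                   # 19 channels assumed
rng = np.random.default_rng(5)
K = rng.standard_normal((3 * n_voxels, n_channels))  # stand-in inverse matrix
samples = rng.standard_normal((30, n_channels))      # one second at 30 Hz

t0 = time.perf_counter()
for s in samples:                                 # 30 projections, one per frame
    _ = K @ s                                     # all 5,500+ voxels at once
print(f"30 projections took {(time.perf_counter() - t0) * 1e3:.2f} ms")
```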
