
Biomedical Signal Processing and Control 77 (2022) 103818

Contents lists available at ScienceDirect

Biomedical Signal Processing and Control


journal homepage: www.elsevier.com/locate/bspc

Synthesis of EEG signals modeled using non-linear oscillator based on speech data with EKF

Guguloth Sagar*, Vijyant Agarwal, Harish Parthasarathy, Vijay Upreti
Netaji Subhas University of Technology, New Delhi, India

Keywords: EEG (Electroencephalography), Speech dynamics, Brain dynamics, Non-linear analysis, EKF (Extended Kalman filter)

A B S T R A C T

In the existing literature, one of the major issues in regression techniques is noisy training data. In this research work, a mathematical model for parameter estimation of neurological signals is developed for the investigation of brain dynamics. The proposed model is built on the non-linear dynamics of oscillator models. It is assumed that the state of the brain of a given person is characterized by an unknown parameter vector 'θ' that parameterizes a coupled nonlinear oscillator differential equation model, which generates a harmonic process serving as a model for the EEG data collected on the brain surface. It is assumed that the EEG data and speech data of the same person are correlated, so that the speech data obeys another differential equation with a parameter 'ϕ' that is a fixed function of the EEG parameter 'θ'. 'θ' and 'ϕ' are estimated by an EKF trained jointly on the EEG and speech data. This enables us to get accurate estimates of 'θ' and 'ϕ' for each person. This training process is repeated for 'N' persons, and a parameter vector is accordingly obtained for each person. This completes the training process. The validation of this model involves choosing a fresh person not belonging to the training set and estimating the EEG parameter vector for this person based only on his speech data. For this, a function 'ψ' which correlates the speech parameter 'ϕ' with the EEG parameter 'θ' as 'ϕ = ψ(θ̂)' is estimated by an affine linear relationship, which is substituted into the speech model. The EEG model is then used to generate this person's EEG data, and we compare this with the true EEG data of his brain. This validation scheme can also be used to classify the fresh person's brain by comparing his parameters with those obtained from the training set.

* Corresponding author.
E-mail address: babisagar1234@gmail.com (S. Guguloth).
https://doi.org/10.1016/j.bspc.2022.103818
Received 18 September 2021; Received in revised form 28 March 2022; Accepted 16 May 2022; Available online 31 May 2022
1746-8094/© 2022 Elsevier Ltd. All rights reserved.

1. Introduction

The aim of this research work is threefold, with the ultimate goal of synthesizing a patient's EEG signal data from only his speech data. First, we construct a mathematical model for EEG signal generation, incorporating several unknown parameters in it. Then, we construct a mathematical model for speech generation, again incorporating some unknown parameters in it. Then, we assume that for a given brain, the EEG and speech model parameters are correlated. We take a sample of patients and for each patient estimate his EEG and speech parameters from exact recordings of these processes. From these pairs of speech and EEG parameter estimates, we construct an affine linear model [1] which tells us how, for a general brain, the speech parameters depend upon the EEG parameters. By using this parametric substitution in the speech model, we take a fresh patient, estimate his EEG parameters from only his speech recordings, and then use the EEG model to synthesize his EEG data; this is the second part, namely the validation part. Finally, the testing part involves comparing the true EEG data of the fresh patient with his speech-based synthesized counterpart. The EKF has been used in the training and validation stages.

For the past few decades, BCIs (brain-computer interfaces) [2] have developed into real-life applications in the field of cognitive neuro-engineering research [3], aided by substantial improvements in EEG measurement technologies. BCI techniques are effectively used to evaluate the cognitive states of human beings. The problem identified here is to study and investigate a mathematical model for parameter estimation of neurological signals (EEG signals) using the non-linear dynamics of an oscillator model [4], and parameter estimation of speech signals using a speech dynamic model, respectively. Non-linear dynamic models [5] have succeeded in biomedical applications based on electrocardiogram (ECG), magnetoencephalogram (MEG) and electroencephalogram (EEG) data. Different types of methods are used to investigate brain dynamics [e.g. computed tomography, electroencephalography (EEG)]. Among them, EEG has an advantage over the other methods because it gives real-time
measurements, and these EEG signals are used in real-world applications [6]. EEG is a non-invasive method used to analyze human brain diseases, brain injuries and brain function. EEG signals are classified into four categories: alpha waves (8-13 Hz), beta waves (13 Hz and above), theta waves (4-8 Hz) and delta waves (0.5-4 Hz), respectively [7]. The dynamic speech communication process plays an important role in the transformation of linguistic information. Automatic speech recognition is a new trend in real-life applications. Speech dynamics are captured in hidden and dynamic acoustic models, which give a unified view of the different components of a realistic speech chain. In acoustic dynamic models, speech frames are associated with the same state or segment, which gives a model for time-varying and temporally correlated parameters. The extended Kalman filter [8] plays an important role in estimating the parameters of speech dynamics and EEG dynamics. The Kalman filter is a real-time processing algorithm which is known to be a good linear unbiased estimator, and its properties are used in signal analysis. A Kalman filter has been used to remove transcranial magnetic stimulation (TMS) artifacts [9] from EEG recordings and to estimate the neural activity of the brain. In the literature survey, we have noted that the connection between brain activation and different tasks has been investigated with the use of EEG topography. A multilayer perceptron neural network (MLPNN) model has been used for EEG signal classification in epilepsy treatment, based on the 'K'-means clustering method [10]. The EEG signal has been modeled as the output of stochastic non-linear coupled oscillator models through an optimization process which includes Shannon entropy and sample entropy [7]. The mathematical modeling of speech dynamics plays an important role in studying the speech chain: observing phenomena, formulating hypotheses, testing them, and predicting new phenomena and new theories in the scientific method [11]. A dynamic hidden Markov model (HMM) algorithm has been developed to recognize the speech signal in conjunction with a state-dependent orthogonal polynomial regression method, for optimizing variables and estimating model parameters [12]. Non-linear dynamics approaches have been introduced to capture the EEG's nonlinear properties through computationally complex time series analysis. Nonlinear modeling and analysis of the EEG signal has been addressed in the literature (see, for instance, [13] for a review). EEG abnormalities in Alzheimer's patients have been extracted using non-linear dynamic models and conventional spectral analysis [14]. Some of the applications include controlling a robotic arm for writing tasks using a hybrid BCI system, assistive robots for people with disabilities, additional channels of control in computer games, and control of devices like wheelchairs [15,16]. With computationally intensive time series analysis, nonlinear dynamics methods have been developed to capture the EEG's nonlinear properties [17,18]. The mathematical analysis of speech dynamics plays a significant part in exploring the speech chain: observing patterns, formulating hypotheses, analysing them, and estimating new phenomena and novel theories [19]. After reviewing the existing literature, the research gap with respect to this problem is identified as follows.

Existing transformation methods have not addressed the non-linear characteristics of EEG signals. This gap has been filled in this work using mathematical modelling of a non-linear oscillator model and speech dynamics. Some of the research highlights of the presented work are as follows:

1. Modelling the EEG and speech signals using stochastic differential equations incorporating unknown parameters in both signals.
2. Applying the EKF to the EEG and speech models, which gives accurate estimates of the EEG and speech parameters.
3. Postulating a strong correlation between the EEG and speech parameters, so that the speech parameters can be estimated in terms of the EEG parameters using an affine linear regression method.
4. Mathematical modeling of the EEG parameters based on the speech signal.

The overview of this paper is to estimate an EEG model based only on the speech signal; an EKF is applied to the joint EEG and speech data, which gives accurate estimates of the parameters. The model validation scheme is also applied to estimate the EEG parameters obtained and compare them with the EEG data obtained from the training set.

The paper is divided into eight sections. Section 1 is the introduction; Section 2 discusses speech dynamics and brain dynamics; Section 3 deals with the parameter estimation method using the EKF algorithm; Section 4 presents the speech and EEG signal correlation algorithm; Section 5 deals with the affine mapping used to estimate the EEG parameters from the speech model using the EKF; Section 6 discusses the simulation and implementation; Section 7 discusses the results; and Section 8 concludes the paper.

2. EEG and speech dynamics

Fig. 1. Schematic of the stochastic Coupled Duffing-van der Pol Oscillator [7].

In [7] the following coupled non-linear oscillator model (see Fig. 1) was proposed for EEG signal generation:

d²x₁/dt² + (k₁ + k₂)x₁ − k₂x₂ + b₁x₁³ + b₂(x₂ − x₁)³ − ∊₁(1 − x₁²)(dx₁/dt) = 0    (1)

d²x₂/dt² − k₂x₁ + k₂x₂ − b₂(x₂ − x₁)³ − ∊₂(1 − x₂²)(dx₂/dt) = μ(dw/dt)    (2)

This model can be linearized using the perturbation theory of non-linear differential equations, choosing k₁, k₂, b₁, b₂, ∊₁ and ∊₂ as the EEG brain parameter vector 'θ'. Speech dynamical models of first and second order were proposed for speech signal generation as [11]

Xt = rs Xt−1 + (1 − rs)Ts + ωt(s)    (3)

Xt = 2rs Xt−1 − rs² Xt−2 + (1 − rs)² Ts + ωt(s)    (4)

We propose to choose the first-order dynamical model for the speech signal generation process [11]:

XS(t+1) = rs1 XS(t) + rs2 + WS(t+1)    (5)

where rs1 and rs2 are speech parameters, collected into the vector 'ϕ':

ϕ = [rs1; rs2]    (6)

Fig. 2 represents the EEG parameters obtained using the proposed stochastic non-linear oscillator model. It should be noted that Fig. 2 is not EEG data; it represents EEG parameters. (See Fig. 3.)

3. Parameter estimation using the EKF algorithm

The EEG data 'XE(t)' are assumed in discrete time (i.e. obtained by discretizing the differential equation) as:

XE(t+1) = FE(t, XE(t), θ) + GE WE(t+1)    (7)
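The coupled oscillator model of Eqs. (1)-(2) can be integrated numerically with a stochastic Euler (Euler-Maruyama) step. The sketch below is illustrative only: the parameter values, step size, and initial conditions are arbitrary assumptions, not the values estimated in this work.

```python
import numpy as np

def simulate_oscillator(theta, mu=0.1, dt=1e-3, n_steps=5000, seed=0):
    """Euler-Maruyama integration of the coupled Duffing-van der Pol
    model of Eqs. (1)-(2). x1 is returned as the synthetic EEG trace."""
    k1, k2, b1, b2, e1, e2 = theta
    rng = np.random.default_rng(seed)
    x1, x2, v1, v2 = 0.1, 0.0, 0.0, 0.0   # illustrative initial conditions
    out = np.empty(n_steps)
    for i in range(n_steps):
        # Eq. (1) solved for the acceleration of oscillator 1
        a1 = (-(k1 + k2) * x1 + k2 * x2 - b1 * x1**3
              - b2 * (x2 - x1)**3 + e1 * (1.0 - x1**2) * v1)
        # Eq. (2) solved for the acceleration of oscillator 2; the
        # white-noise drive mu*dw/dt becomes mu*dw in the velocity update
        a2 = (k2 * x1 - k2 * x2 + b2 * (x2 - x1)**3
              + e2 * (1.0 - x2**2) * v2)
        dw = rng.normal(0.0, np.sqrt(dt))
        v1 += a1 * dt
        v2 += a2 * dt + mu * dw
        x1 += v1 * dt
        x2 += v2 * dt
        out[i] = x1
    return out

# illustrative parameter vector theta = (k1, k2, b1, b2, e1, e2)
eeg_like = simulate_oscillator((1.0, 0.5, 0.1, 0.1, 0.05, 0.05))
```

Only the noise-driven second oscillator receives the stochastic increment, matching the placement of μ dw/dt on the right-hand side of Eq. (2).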

Fig. 2. EEG parameters obtained using the oscillator model.

Fig. 3. Block diagram.

Linearizing the model around a nominal EEG amplitude 'XE0' by setting

XE(t) = XE0(t) + δXE(t)    (8)

gives us a linear state variable model

δXE(t+1) = (∂FE/∂XE)(t, XE0(t), θ) δXE(t) + GE WE(t+1)    (9)

Here XE0(t) satisfies the noiseless dynamics

XE0(t+1) = FE(t, XE0(t), θ)    (10)

In what follows, we shall use the notation XE(t) in place of δXE(t) for notational simplicity. The state equation of the speech signal is

XS(t+1) = rs1 XS(t) + rs2 + WS(t+1)

We shall now apply the EKF to the state equations of the EEG signal and the speech dynamics to estimate the speech and EEG model parameters from noisy measurements of both processes. The joint vector of EEG and speech data can be expressed as

X(t+1) = [XE(t+1); XS(t+1)]    (11)

Let

η = [θ; ϕ]    (12)

This is the joint parameter vector of the EEG and the speech models. The joint dynamics are then

X(t+1) = A(η(t)) X(t) + G Wx(t+1)    (13)

η(t+1) = η(t) + Wη(t+1)    (14)

Let us define the joint extended state vector

ζ(t) = [XE(t); XS(t); η(t)]    (15)

The joint measurement model is

Z(t) = H X(t) + V(t)    (16)

where 'H' is the observation transformation model and 'V(t)' is the observation noise, which is assumed to be zero-mean Gaussian white noise. The joint extended state equations can be expressed as

[X(t+1); η(t+1)] = [A(η(t)) X(t); η(t)] + [G 0; 0 I] [Wx(t+1); Wθ(t+1)]    (17)

This is the extended state model, which can be equivalently expressed as

ζ(t+1) = F(ζ(t)) + G̃ W(t+1)    (18)

The EKF equations are as follows. The prediction step is

[X̂(t+1|t); η̂(t+1|t)] = [A(η̂(t|t)) X̂(t|t); η̂(t|t)]    (19)
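A minimal sketch of this parameter-augmented EKF is given below for the scalar speech model of Eq. (5), with the parameters appended to the state and propagated as a random walk as in Eq. (14). The noise levels, initial guesses, and the use of the standard EKF gain and covariance updates are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def speech_param_ekf(z, q=1e-6, r=1e-2):
    """Joint EKF over the extended state zeta = [Xs, rs1, rs2]:
      Xs(t+1) = rs1*Xs(t) + rs2 + Ws(t+1)   (speech model, Eq. (5))
      rs(t+1) = rs(t) + W(t+1)              (parameter random walk, Eq. (14))
      z(t)    = Xs(t) + V(t)                (scalar measurement, as in Eq. (16))"""
    zeta = np.array([z[0], 0.0, 0.0])       # crude initial state/parameter guess
    P = np.eye(3)
    Q = np.diag([r, q, q])                  # process and parameter-walk noise
    H = np.array([[1.0, 0.0, 0.0]])         # only Xs is measured
    for zt in z[1:]:
        Xs, r1, r2 = zeta
        # prediction step (cf. Eq. (19)), Jacobian F of f(zeta)
        zeta = np.array([r1 * Xs + r2, r1, r2])
        F = np.array([[r1, Xs, 1.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        P = F @ P @ F.T + Q
        # correction step with the scalar measurement zt
        S = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T) / S                   # Kalman gain, shape (3, 1)
        zeta = zeta + K[:, 0] * (zt - zeta[0])
        P = (np.eye(3) - K @ H) @ P
    return zeta[1], zeta[2]                 # estimates of rs1, rs2
```

Because the speech model is linear in (rs1, rs2), this augmented EKF behaves much like a recursive least squares estimator of the AR coefficients.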
where

G̃ = [G 0; 0 I],  W = [Wx; Wθ]    (20)

and the predicted extended state is

ζ̂(t+1|t) = F(ζ̂(t|t))    (21)

The state transition model of the system is

F(ζ) = [A(η)X; η]    (22)

The predicted covariance estimate of the system is

P(t+1|t) = F′(ζ̂(t|t)) P(t|t) F′(ζ̂(t|t))ᵀ + G̃ ΣW G̃ᵀ    (23)

and can be decomposed as

P(t+1|t) = [Pxx(t+1|t)  Pxθ(t+1|t); Pθx(t+1|t)  Pθθ(t+1|t)]    (24)

with

F′(ζ) = [A(θ)  A′(θ)(I⊗X); 0  I]    (25)

and

ΣW = [Σwx 0; 0 Σwθ]    (26)

where

G̃ = [G 0; 0 I]    (27)

The corrector step is

ζ̂(t+1|t+1) = ζ̂(t+1|t) + K (Z(t+1) − H ζ̂(t+1|t))    (28)

where the Kalman gain matrix is obtained as

K = P(t+1|t) Hᵀ (Σv + H P(t+1|t) Hᵀ)⁻¹    (29)

The updated covariance estimate

P(t+1|t+1) = (I − KH) P(t+1|t) (I − KH)ᵀ + K Σv Kᵀ    (30)

can be decomposed as

P(t+1|t+1) = [Pxx(t+1|t+1)  Pxθ(t+1|t+1); Pθx(t+1|t+1)  Pθθ(t+1|t+1)]    (31)

Here 'H' is the observation matrix

H = [I 0]    (32)

and the gain is partitioned as

K = [Kx; Kθ]    (33)

Thus 'η(t)' can be estimated as 'η̂(t)' using the training algorithm based on the EKF with measurements of both the EEG and speech signals. We estimate the parameter vector 'θ' using a least squares method. Hence we can construct a function 'ψ' which maps the EEG parameter to the speech parameter, which shows that the EEG and speech are correlated. Once such a mapping has been constructed, we can take a fresh person, estimate his EEG parameter using an EKF applied to the speech model, and hence synthesize his EEG.

4. EEG and speech correlation algorithm

The EEG data 'XE(t)' are assumed in discrete time (i.e. obtained by discretizing the differential equation) as

XE(t+1) = FE(t, XE(t), θ) + GE WE(t+1)

The state equation of the speech system is

XS(t+1) = rs1 XS(t) + rs2 + WS(t+1)

We then determine a linear relationship between 'ϕ' and 'θ'. This is done in a two-stage process. First we take a group of 'N' persons, labelled 1, 2, …, 'N', and estimate 'θ' using the EKF applied to EEG measurements of the parametric oscillator model. Denote these parameter estimates as θ̂1, θ̂2, …, θ̂N. Likewise, for each person, estimate 'ϕ' by applying the EKF to the speech data, and denote the corresponding parameter estimates as ϕ̂1, ϕ̂2, …, ϕ̂N. The mapping

ϕ̂ = ψ(θ̂)    (34)

is taken to be affine:

ϕ̂ = ψ0 + ψ1 θ̂    (35)

Using a least squares method, we minimize the speech-dynamics residual

Σt=1..N (Xs(t+1) − rs1 Xs(t) − rs2)²    (36)

Setting the derivatives with respect to rs1 and rs2 to zero gives the normal equations

Σt (Xs(t+1) − rs1 Xs(t) − rs2) Xs(t) = 0    (37)

Σt (Xs(t+1) − rs1 Xs(t) − rs2) = 0

that is,

rs1 Σt Xs(t) + N rs2 = Σt Xs(t+1)

rs2 Σt Xs(t) + rs1 Σt Xs(t)² = Σt Xs(t+1) Xs(t)

Defining the sample moments

(1/N) Σt Xs(t) = μs,  (1/N) Σt Xs(t)² = σs²,  (1/N) Σt Xs(t+1) Xs(t) = σs12    (38)

and approximating the sample mean of Xs(t+1) by μs, these become

μs rs1 + rs2 = μs    (39)

rs2 μs + rs1 σs² = σs12    (40)

The matrix form of the above equations is

[μs 1; σs² μs] [rs1; rs2] = [μs; σs12]    (41)

Then the speech parameter values are obtained as

[rs1; rs2] = [μs 1; σs² μs]⁻¹ [μs; σs12] = [ϕ1; ϕ2]

The mapping coefficients ψ0 and ψ1 are obtained from the least squares condition

(∂/∂ψ0, ∂/∂ψ1) Σk=1..K ||ϕ̂k − ψ0 − ψ1 θ̂k||² = 0    (42)

With the optimal regression coefficients substituted, the speech model becomes

XS(t+1) = ψ0ᵀ [Xs(t); 1] + θᵀ(t) ψ1ᵀ [Xs(t); 1] + WS(t+1)

Note that the speech parameter vector is

[rs1; rs2] = [ϕ1; ϕ2]    (43)
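The moment-based least squares solution of Eqs. (36)-(41) can be sketched as follows. Note that, unlike Eq. (41), this sketch puts the sample mean of Xs(t+1) on the right-hand side rather than approximating it by μs; function and variable names are illustrative.

```python
import numpy as np

def speech_ls_params(xs):
    """Closed-form least squares estimate of (rs1, rs2) from a speech
    record, via the sample moments of Eq. (38) and the 2x2 normal
    equations of Eq. (41)."""
    x0, x1 = xs[:-1], xs[1:]       # Xs(t) and Xs(t+1)
    n = len(x0)
    mu_s   = x0.sum() / n          # (1/N) sum Xs(t)
    sig2_s = (x0 ** 2).sum() / n   # (1/N) sum Xs(t)^2
    sig12  = (x1 * x0).sum() / n   # (1/N) sum Xs(t+1) Xs(t)
    mu_s1  = x1.sum() / n          # (1/N) sum Xs(t+1)
    # Normal equations: rs1*mu_s + rs2 = mu_s1 ; rs1*sig2_s + rs2*mu_s = sig12
    A = np.array([[mu_s, 1.0],
                  [sig2_s, mu_s]])
    b = np.array([mu_s1, sig12])
    rs1, rs2 = np.linalg.solve(A, b)
    return rs1, rs2
```

The 2x2 system is exactly the ordinary least squares solution for the first-order model of Eq. (5); it is solvable whenever the speech record has nonzero variance.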

Fig. 4. Estimated speech signal.

5. Using the affine mapping to estimate the EEG parameters from the speech model using the EKF

Using the affine relationship derived between the EEG parameter and the speech parameter,

ϕ̂ = ψ0 + ψ1 θ̂

we plug this into the speech model to get

XS(t+1) = ψ0ᵀ [Xs(t); 1] + θᵀ(t) ψ1ᵀ [Xs(t); 1] + WS(t+1)

The noisy measurement of the speech data is

ZS(t+1) = XS(t+1) + WS(t+1)    (44)

From the speech state model and its measurement model, we derive the EKF for estimating 'θ' from 'ZS'. The estimated value of the EEG parameter 'θ̂', obtained using the speech model combined with the affine relationship between the speech and EEG parameters, is plugged into the EEG model to generate the EEG data. The state equations of the model can be rearranged as follows:

XE(t+1) = FE(t, XE(t), θ(t)) + WE(t+1)

θ(t+1) = θ(t) + Wθ(t+1)    (45)

where

ζ(t) = [XE(t); XS(t); θ(t)]    (46)

ζ(t+1) = F(ζ(t)) + GE WE(t+1)    (47)

F(ζ) = [FE(XE, θ); ψ0ᵀ [Xs(t); 1] + θᵀ ψ1ᵀ [Xs(t); 1]; θ]    (48)

ψ0 = [ψ01; ψ02]    (49)

The optimal regression equation (the first normal equation of the reverse fit of 'θ' on 'ϕ') is

Σk=1..K (θ̂k − ψ0 − ψ1 ϕ̂k) = 0    (50)
Fig. 5. Real EEG signal.

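The affine fit of Eq. (35) (and, with the arguments swapped, the reverse fit of 'θ' on 'ϕ' used in this section) reduces to an ordinary least squares problem over the K training pairs. The sketch below assumes the per-person estimates are stacked as arrays; the names and shapes are illustrative assumptions.

```python
import numpy as np

def fit_affine_map(theta_hats, phi_hats):
    """Least squares fit of the affine relation phi = psi0 + Psi1 @ theta
    (Eq. (35)) from K per-person estimate pairs.
    theta_hats: (K, p) array of EEG parameter estimates.
    phi_hats:   (K, q) array of speech parameter estimates."""
    K = theta_hats.shape[0]
    # Augment theta with a constant column so the intercept psi0 is
    # absorbed into a single design matrix.
    X = np.hstack([np.ones((K, 1)), theta_hats])
    coef, *_ = np.linalg.lstsq(X, phi_hats, rcond=None)
    psi0 = coef[0]        # intercept, shape (q,)
    Psi1 = coef[1:].T     # linear part, shape (q, p)
    return psi0, Psi1

def predict_phi(psi0, Psi1, theta):
    """Evaluate Eq. (35) for a fresh person's theta."""
    return psi0 + Psi1 @ theta
```

Calling `fit_affine_map(phi_hats, theta_hats)` with the arguments reversed yields the mapping from speech parameters back to EEG parameters, as required by the validation scheme.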

Fig. 7. Error between Synthesized EEG signal and real EEG signal.

together with the companion normal equation

Σk=1..K (θ̂k − ψ0 − ψ1 ϕ̂k) ϕ̂kᵀ = 0    (51)

These give the moment equations

ψ0 + ψ1 (Σk ϕ̂k)/K = (Σk θ̂k)/K    (52)

ψ0 (Σk ϕ̂kᵀ)/K + ψ1 (Σk ϕ̂k ϕ̂kᵀ)/K = (Σk θ̂k ϕ̂kᵀ)/K    (53)

6. Simulation and implementation

In this work, we assume that the brain of a given person is characterized by an unknown parameter vector 'θ' that parameterizes a coupled nonlinear oscillator differential equation model, which generates a harmonic process serving as a model for the EEG data collected on the brain surface. We assume that the EEG data and speech data of the same person are correlated, so that the speech data obeys another differential equation with a parameter 'ϕ' that can be derived by applying a fixed affine linear function to the EEG parameter 'θ'. 'θ' and 'ϕ' are estimated by an EKF-based training process. This enables us to get accurate estimates of 'θ' and 'ϕ'. This training process is repeated for 'N' persons, and accordingly a parameter vector '(θ, ϕ)' is obtained for each person. This completes the training process. The validation of this model involves choosing a fresh person not belonging to the training set and estimating the parameter vector 'θ' for this person based only on his speech data, with the 'ϕ' in the speech model replaced by the affine linear function. Fig. 4 represents the estimated speech signal using the parameters obtained from the non-linear oscillator model. Fig. 5 shows the real EEG data recorded on a 32-channel EEG machine. For clear visualisation, only the first 500 samples are taken. It should be borne in mind that the function 'ψ', which correlates the speech parameter 'ϕ' with the EEG parameter 'θ' as 'ϕ = ψ(θ̂)', should be estimated by a least squares method based on minimizing, over the ordered pairs of estimates,

Σk=1..K ||ϕ̂k − ψ(θ̂k)||²

where (θ̂k, ϕ̂k) are the EEG and speech parameters obtained during the training process. Finally, using the EEG model, we simulated the EEG process corresponding to 'θ̂'. Fig. 7 is an error plot between the synthesised EEG and the real EEG. The error plot indicates that the error is larger over the first 200 time samples than over the later samples; it decreases significantly after the first 200 training samples. The EEG model can thus be used to generate this person's model EEG data, and by comparing it with the true EEG data of his brain, we validate it. This scheme can also be used to classify the fresh person's brain by comparing his EEG parameters with those obtained from the training set.

6.1. Flow chart

1. Construct a difference equation model for the EEG data, obtained by discretizing the oscillator differential equation model and linearizing it around a nominal value:

XE(t+1) = A(θ) XE(t) + WE(t+1)    (54)

with 'θ' the unknown EEG parameter.

2. Construct a difference equation model for the speech data:

XS(t+1) = ϕᵀ [Xs(t); 1] + WS(t+1)    (55)

with 'ϕ' the unknown speech parameter.

3. Combine these two models to obtain a joint linear model for the EEG and speech data:

[XE(t+1); XS(t+1)] = A(θ, ϕ) [XE(t); XS(t)] + [WE(t+1); WS(t+1)]

4. Construct the extended state vector model with the extended state equation:

ζ(t) = [XE(t); XS(t); θ(t); ϕ(t)]    (56)

[XE(t+1); XS(t+1); θ(t+1); ϕ(t+1)] = [A(θ(t), ϕ(t)) 0; 0 I] [XE(t); XS(t); θ(t); ϕ(t)] + [b(θ(t), ϕ(t)); 0] + W(t+1)

5. The joint noisy measurement model of the EEG and speech data is

Z(t) = H [XE(t); XS(t)] + [WE(t); WS(t)]    (57)

Fig. 6. Synthesized EEG signal obtained using the oscillator model.

6. Express the measurement model in the form

Z(t) = H ζ(t) + W(t)    (58)

7. Taking the EEG and speech measurements, derive the EKF equations for estimating the extended state on a real-time basis.

8. Repeat the scheme for the kth person and estimate the EEG and speech parameters for each person, denoting these estimates as (θ̂k, ϕ̂k) for 1 ⩽ k ⩽ K.

9. Construct the EEG-to-speech parameter mapping model

ϕ̂ = ψ0 + ψ1 θ̂

and calculate 'ψ0', 'ψ1' by the least squares minimization of

Σk=1..K ||ϕ̂k − ψ0 − ψ1 θ̂k||²

10. Substitute this mapping into the speech model to obtain it in the form

XS(t+1) = (ψ0 + ψ1 θ)ᵀ [Xs(t); 1] + WS(t+1)

11. Take the speech data of a fresh person corrupted by speech measurement noise,

Zs(t) = Xs(t) + Ws(t)

and estimate his EEG parameter 'θ' using the above model with the parameter substitution.

12. With the EEG parameter 'θ' thus estimated, substitute it into the EEG model to synthesize the fresh person's EEG data.

7. Results

In this work, we have first modeled the EEG data by a set of linear differential equations after incorporating the unknown EEG parameter 'θ' into the model. The objective of including these unknown parameters is that they are disease dependent and act as classifiers of brain diseases. We model the speech data of the same person by a set of speech dynamics after incorporating the unknown speech parameter in a similar way. We postulated that the speech parameters are correlated with the brain parameters of the same person, and hence we construct a linear function that maps the speech parameters to the EEG parameters. The fitting of this mapping is done through a training set as follows: we take a sample of 'K' persons and for each person measure his EEG data and estimate his EEG parameters from the EEG model, and likewise, for the

Fig. 8. Synthesized EEG, Real EEG and Error plot.

Fig. 9. Tracking plot between the actual EEG and Synthesized EEG.

same person, measure his speech data and estimate his speech parameters from the speech model. For validation, we take a fresh person and estimate his EEG parameters only from the speech model by substituting for the speech parameters. We construct an affine linear regression mapping that fits each person's speech parameters in terms of his EEG parameters, match the fresh person to the EEG classes to calculate his EEG parameters, and synthesize his EEG data from the EEG model.

Fig. 6 shows the synthesised EEG data obtained using the proposed model. Fig. 8 represents the synthesized EEG signal and the real EEG signal in a single plot. Fig. 9 represents the tracking plot between the actual EEG and the synthesized EEG. The mean square error between the synthesised EEG and the real EEG over the first 500 samples is 2.4301e−03.

8. Conclusion

In this research, an algorithm is developed which synthesizes an EEG signal, taking a speech signal as input. This synthesized EEG is then compared with real EEG to validate the proposed algorithm. Since the speech data is noisy, an EKF is used to estimate the EEG parameters. When comparing the synthesized EEG signal obtained by the proposed non-linear oscillator model with the actual EEG signal, the magnitude of the mean square error is found to be in the acceptable range of 10⁻³. An extension of this algorithm can be used to analyze neurological disorders from speech; another application of the proposed algorithm is to classify a person's mental state at any given instant.

Funding

This study was not funded by any agency or source.

Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] C. Fraser, T. Yamakawa, Insights into the affine model for high-resolution satellite sensor orientation, ISPRS Journal of Photogrammetry and Remote Sensing 58 (5-6) (2004) 275-288.
[2] V. Upreti, H. Parthasarathy, V. Agarwal, Estimating EEG parameters in the presence of random time delays for brain computer interfaces, in: International Conference on Industrial Electronics Research and Applications (ICIERA), 2021, pp. 1-6, https://doi.org/10.1109/ICIERA53202.2021.9726764.
[3] C.-T. Lin, L.-W. Ko, T.-K. Shen, Computational intelligent brain computer interaction and its applications on driving cognition, IEEE Computational Intelligence Magazine 4 (4) (2009) 32-46.
[4] G. Sagar, H. Parthasarathy, V. Agarwal, V. Upreti, Synthesizing EEG signals from speech signals using BCIs with EKF-based training, in: International Conference on Industrial Electronics Research and Applications (ICIERA), 2021, pp. 1-5, https://doi.org/10.1109/ICIERA53202.2021.9726748.
[5] J. Langford, R. Salakhutdinov, T. Zhang, Learning nonlinear dynamic models, in: Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 593-600.
[6] X.-W. Wang, D. Nie, B.-L. Lu, Emotional state classification from EEG data using machine learning approach, Neurocomputing 129 (2014) 94-106.
[7] P. Ghorbanian, S. Ramakrishnan, H. Ashrafiuon, Stochastic non-linear oscillator models of EEG: the Alzheimer's disease case, Frontiers in Computational Neuroscience 9 (2015) 48.
[8] R.E. Kalman, et al., Contributions to the theory of optimal control, Bol. Soc. Mat. Mexicana 5 (2) (1960) 102-119.
[9] P.M. Rossini, S. Rossi, Transcranial magnetic stimulation: diagnostic, therapeutic, and research potential, Neurology 68 (7) (2007) 484-488.
[10] U. Orhan, M. Hekim, M. Ozer, EEG signals classification using the K-means clustering and a multilayer perceptron neural network model, Expert Systems with Applications 38 (10) (2011) 13475-13481.
[11] L. Deng, Dynamic speech models: theory, algorithms, and applications, Synthesis Lectures on Speech and Audio Processing 2 (1) (2006) 1-118.
[12] L. Deng, M. Aksmanovic, X. Sun, C.J. Wu, Speech recognition using hidden Markov models with polynomial regression functions as nonstationary states, IEEE Transactions on Speech and Audio Processing 2 (4) (1994) 507-520.
[13] C.J. Stam, Nonlinear dynamical analysis of EEG and MEG: review of an emerging field, Clinical Neurophysiology 116 (10) (2005) 2266-2301.
[14] J. Jeong, EEG dynamics in patients with Alzheimer's disease, Clinical Neurophysiology 115 (7) (2004) 1490-1505.
[15] C.-T. Lin, S.-F. Tsai, L.-W. Ko, EEG-based learning system for online motion sickness level estimation in a dynamic vehicle environment, IEEE Transactions on Neural Networks and Learning Systems 24 (10) (2013) 1689-1700.
[16] F.-C. Lin, L.-W. Ko, C.-H. Chuang, T.-P. Su, C.-T. Lin, Generalized EEG-based drowsiness prediction system by using a self-organizing neural fuzzy system, IEEE Transactions on Circuits and Systems I: Regular Papers 59 (9) (2012) 2044-2055.
[17] S. Sun, Y. Lu, Y. Chen, The stochastic approximation method for adaptive Bayesian classifiers: towards online brain-computer interfaces, Neural Computing and Applications 20 (1) (2011) 31-40.
[18] E.D. Übeyli, Analysis of EEG signals by implementing eigenvector methods/recurrent neural networks, Digital Signal Processing 19 (1) (2009) 134-143.
[19] J. Long, Y. Li, H. Wang, T. Yu, J. Pan, F. Li, A hybrid brain computer interface to control the direction and speed of a simulated or real wheelchair, IEEE Transactions on Neural Systems and Rehabilitation Engineering 20 (5) (2012) 720-729.
