Keywords: EEG (Electroencephalography), Speech dynamics, Brain dynamics, Non-linear analysis, EKF (Extended Kalman filter)

Abstract

In the existing literature, one of the major issues in regression techniques is noisy training data. In this research work, a mathematical model for parameter estimation of neurological signals is developed for the investigation of brain dynamics. The proposed model is developed using the non-linear dynamics of oscillator models. It is assumed that the state of the brain of a given person is characterized by an unknown parameter vector 'θ' that parameterizes a coupled nonlinear oscillator differential equation model, which generates a harmonic process serving as a model for the EEG data collected on the brain surface. It is assumed that the EEG data and speech data of the same person are correlated, so that the speech data obeys another differential equation with a parameter 'ϕ' that is a fixed function of the EEG parameter 'θ'. 'θ' and 'ϕ' are estimated by an EKF in a training process based on the joint EEG and speech data. This enables us to obtain accurate estimates of 'θ' and 'ϕ' for each person. This training process is repeated for 'N' persons, and accordingly a parameter vector is obtained for each person. This completes the training process. The validation of this model involves choosing a fresh person not belonging to the training set and estimating the EEG parameter vector for this person based only on his speech data. For this, a function 'ψ', which correlates the speech parameter 'ϕ' with the EEG parameter 'θ' as 'ϕ = ψ(θ̂)', is estimated by an affine linear relationship, which is substituted into the speech model. The EEG model is then used to generate this person's EEG data, which we compare with the true EEG data of his brain. This validation scheme can also be used to classify the fresh person's brain by comparing his parameters with those obtained from the training set.
1. Introduction

The aim of this research work is threefold, with the ultimate goal of synthesizing a patient's EEG signal data from only his speech data. First, we construct a mathematical model for EEG signal generation, incorporating several unknown parameters in it. Then, we construct a mathematical model for speech generation, again incorporating some unknown parameters in it. Then, we assume that for a given brain, the EEG and speech model parameters are correlated. We take a sample of patients and, for each patient, estimate his EEG and speech parameters from exact recordings of these processes. From these pairs of speech and EEG parameter estimates, we construct an affine linear model [1] which tells us how, for a general brain, the speech parameters will depend upon the EEG parameters. Using this parametric substitution in the speech model, we take a fresh patient, estimate his EEG parameters from only his speech recordings, and then use the EEG model to synthesize his EEG data; this is the second part, namely the validation part. Finally, the testing part involves comparing the true EEG data of the fresh patient with its speech-based synthesized counterpart. The EKF has been used in the training and validation stages. For the past few decades, BCIs (brain-computer interfaces) [2] have developed into real-life applications in the field of cognitive neuroengineering research [3], together with substantial improvements in EEG measurement technologies. BCI techniques are effectively used to evaluate the cognitive states of human beings. The problem identified is to study and investigate a mathematical model for parameter estimation of neurological signals (EEG signals) using the non-linear dynamics of an oscillator model [4], and parameter estimation of speech signals using a speech dynamics model, respectively. Non-linear dynamic models [5] have succeeded in biomedical applications based on electrocardiogram (ECG), magnetoencephalogram (MEG), and electroencephalogram (EEG) data. Different types of methods are used to investigate brain dynamics (e.g. computed tomography, electroencephalography (EEG)). Among them, EEG has an advantage over the other methods because it gives the real-time
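As a concrete illustration of the affine linear step described above, the sketch below fits a scalar affine map ϕ ≈ ψ0 + ψ1·θ to hypothetical training pairs by ordinary least squares and inverts it for a fresh subject. All data, noise levels, and function names here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def fit_affine(theta, phi):
    """Least-squares fit of phi ~ psi0 + psi1*theta over training pairs.

    theta, phi: 1-D arrays of per-person parameter estimates (taken as
    scalars here for brevity; in the paper they are vectors)."""
    A = np.column_stack([np.ones_like(theta), theta])  # design matrix [1, theta]
    (psi0, psi1), *_ = np.linalg.lstsq(A, phi, rcond=None)
    return psi0, psi1

def invert_affine(phi_new, psi0, psi1):
    """Recover an EEG-parameter estimate from a fresh person's speech
    parameter by inverting the fitted affine map."""
    return (phi_new - psi0) / psi1

# hypothetical training set: phi = 0.5 + 2*theta plus small noise
rng = np.random.default_rng(0)
theta = rng.uniform(0.1, 1.0, size=20)
phi = 0.5 + 2.0 * theta + 0.01 * rng.standard_normal(20)
psi0, psi1 = fit_affine(theta, phi)
theta_fresh = invert_affine(0.5 + 2.0 * 0.7, psi0, psi1)  # noise-free phi of a fresh person
```

With vector-valued parameters, ψ1 becomes a matrix and the inversion is itself a least squares solve.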
* Corresponding author.
E-mail address: babisagar1234@gmail.com (S. Guguloth).
https://doi.org/10.1016/j.bspc.2022.103818
Received 18 September 2021; Received in revised form 28 March 2022; Accepted 16 May 2022
Available online 31 May 2022
1746-8094/© 2022 Elsevier Ltd. All rights reserved.
S. Guguloth et al. Biomedical Signal Processing and Control 77 (2022) 103818
This is the joint parameter vector of the EEG and the speech models.
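As a toy illustration of this state-augmentation idea, the sketch below runs an EKF predictor–corrector on a hypothetical scalar model x(t+1) = θ·x(t) + w(t+1), with the unknown parameter appended to the state as in the paper's extended-state construction. The dynamics, noise levels, and names are stand-ins, not the paper's oscillator model.

```python
import numpy as np

def ekf_step(zeta, P, z, Q, R):
    """One EKF predict/correct step for the augmented state zeta = [x, theta],
    with scalar dynamics x(t+1) = theta*x(t) + w and measurement z = x + v.
    The constant parameter theta is propagated as part of the state."""
    x, th = zeta
    zeta_pred = np.array([th * x, th])          # predictor
    F = np.array([[th, x], [0.0, 1.0]])         # Jacobian of the state map
    P_pred = F @ P @ F.T + Q                    # predicted covariance
    H = np.array([[1.0, 0.0]])                  # only x is observed
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    zeta_new = zeta_pred + (K @ (z - H @ zeta_pred)).ravel()
    I2 = np.eye(2)
    P_new = (I2 - K @ H) @ P_pred @ (I2 - K @ H).T + K @ R @ K.T  # Joseph form
    return zeta_new, P_new

# track a trajectory generated with true theta = 0.9 (all values hypothetical)
rng = np.random.default_rng(1)
theta_true, x = 0.9, 1.0
zeta, P = np.array([1.0, 0.5]), np.eye(2)       # initial theta guess: 0.5
Q, R = np.diag([1e-2, 1e-8]), np.array([[1e-4]])
for _ in range(300):
    x = theta_true * x + 0.1 * rng.standard_normal()   # process noise keeps x excited
    z = np.array([x + 0.01 * rng.standard_normal()])   # noisy measurement
    zeta, P = ekf_step(zeta, P, z, Q, R)
```

The parameter estimate zeta[1] converges toward the true value as the filter accumulates measurements.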
The state transition model of the system is

F(\zeta) = \begin{bmatrix} A(\eta)X \\ \eta \end{bmatrix} \quad (22)

The predicted covariance estimate of the system is

P(t+1|t) = F'(\hat{\zeta}(t|t)) \, P(t|t) \, F'(\hat{\zeta}(t|t))^T + \tilde{G} \Sigma_W \tilde{G}^T \quad (23)

which can be decomposed as

P(t+1|t) = \begin{bmatrix} P_{xx}(t+1|t) & P_{x\theta}(t+1|t) \\ P_{\theta x}(t+1|t) & P_{\theta\theta}(t+1|t) \end{bmatrix} \quad (24)

with

F'(\zeta) = \begin{bmatrix} A(\theta) & A'(\theta)(I \otimes X) \\ 0 & I \end{bmatrix} \quad (25)

\Sigma_W = \begin{bmatrix} \Sigma_{wx} & 0 \\ 0 & \Sigma_{w\theta} \end{bmatrix} \quad (26)

where

\tilde{G} = \begin{bmatrix} G & 0 \\ 0 & I \end{bmatrix} \quad (27)

The corrector is then

\hat{\zeta}(t+1|t+1) = \hat{\zeta}(t+1|t) + K \left( Z(t+1) - H \hat{\zeta}(t+1|t) \right) \quad (28)

where the Kalman gain matrix is obtained as

K = P H^T \left( \Sigma_v + H P(t+1|t) H^T \right)^{-1} \quad (29)

The updated covariance estimate (in Joseph form)

P(t+1|t+1) = (I - KH) P(t+1|t) (I - KH)^T + K \Sigma_v K^T \quad (30)

can be decomposed as

P(t+1|t+1) = \begin{bmatrix} P_{xx}(t+1|t+1) & P_{x\theta}(t+1|t+1) \\ P_{\theta x}(t+1|t+1) & P_{\theta\theta}(t+1|t+1) \end{bmatrix} \quad (31)

where 'H' is the measurement matrix

H = \begin{bmatrix} I & 0 \end{bmatrix} \quad (32)

and the gain partitions as

K = \begin{bmatrix} K_x \\ K_\theta \end{bmatrix} \quad (33)

Thus 'η(t)' can be estimated as 'η̂(t)' using the training algorithm based on the EKF with measurements of both the EEG and speech signals. We estimate the parameter vector 'θ' by a least squares method. Hence we can construct a function 'ψ' which maps the EEG parameter to the speech parameter, showing that the EEG and speech are correlated. Once such a mapping has been constructed, we can take a fresh person, estimate his EEG parameter using an EKF applied to the speech model, and hence synthesize his EEG.

The state equation of the speech system is

X_S(t+1) = r_{s1} X_S(t) + r_{s2} + W_S(t+1)

We then determine a linear relationship between 'ϕ' and 'θ'. This is done in a two-stage process. First we take a batch of 'N' persons, labelled 1, 2, ..., N, and estimate 'θ' using the EKF applied to EEG measurements of the parameterized oscillator model. Denote these parameter estimates as \hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_N. Likewise, for each person, estimate 'ϕ' by applying the EKF to the speech data and denote the corresponding estimates as \hat{\phi}_1, \hat{\phi}_2, \ldots, \hat{\phi}_N. The mapping

\hat{\phi} = \psi(\hat{\theta}) \quad (34)

is taken to be affine:

\hat{\phi} = \psi_0 + \psi_1 \hat{\theta} \quad (35)

Using the least squares method, minimize the speech-dynamics residual

\sum_{t=1}^{N} \left( X_s(t+1) - r_{s1} X_s(t) - r_{s2} \right)^2 \quad (36)

Setting the derivatives with respect to r_{s1} and r_{s2} to zero gives

\sum_{t=1}^{N} \left( X_s(t+1) - r_{s1} X_s(t) - r_{s2} \right) X_s(t) = 0 \quad (37)

\sum_{t=1}^{N} \left( X_s(t+1) - r_{s1} X_s(t) - r_{s2} \right) = 0

that is, the normal equations

r_{s1} \sum X_s(t) + N r_{s2} = \sum X_s(t+1)

r_{s2} \sum X_s(t) + r_{s1} \sum X_s(t)^2 = \sum X_s(t+1) X_s(t)

Defining the sample moments

\frac{1}{N}\sum X_s(t) = \mu_s, \quad \frac{1}{N}\sum X_s(t)^2 = \sigma_s^2, \quad \frac{1}{N}\sum X_s(t+1) X_s(t) = \sigma_{s12} \quad (38)

these become

\mu_s r_{s1} + r_{s2} = \mu_s \quad (39)

r_{s2} \mu_s + r_{s1} \sigma_s^2 = \sigma_{s12} \quad (40)

In matrix form,

\begin{bmatrix} \mu_s & 1 \\ \sigma_s^2 & \mu_s \end{bmatrix} \begin{bmatrix} r_{s1} \\ r_{s2} \end{bmatrix} = \begin{bmatrix} \mu_s \\ \sigma_{s12} \end{bmatrix} \quad (41)

so the speech parameter values are obtained as

\begin{bmatrix} r_{s1} \\ r_{s2} \end{bmatrix} = \begin{bmatrix} \mu_s & 1 \\ \sigma_s^2 & \mu_s \end{bmatrix}^{-1} \begin{bmatrix} \mu_s \\ \sigma_{s12} \end{bmatrix} = \begin{bmatrix} \phi_1 \\ \phi_2 \end{bmatrix}

The regression coefficients \psi_0, \psi_1 follow from

\left( \frac{\partial}{\partial \psi_0}, \frac{\partial}{\partial \psi_1} \right) \sum_{k=1}^{K} \left\| \hat{\phi}_k - \psi_0 - \psi_1 \hat{\theta}_k \right\|^2 = 0 \quad (42)

Substituting this regression into the speech model, the speech model becomes

X_S(t+1) = \psi_0^T \begin{bmatrix} X_s(t) \\ 1 \end{bmatrix} + \theta^T(t) \, \psi_1^T \begin{bmatrix} X_s(t) \\ 1 \end{bmatrix} + W_S(t+1)
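The moment equations (38)–(41) amount to solving a 2×2 linear system built from sample moments. A minimal numerical sketch on a synthetic first-order series (the data, noise level, and names are assumptions for illustration):

```python
import numpy as np

def estimate_speech_params(Xs):
    """Solve the 2x2 normal-equation system of Eqs. (38)-(41) for (r_s1, r_s2)
    from the sample moments of the speech series."""
    x, x_next = Xs[:-1], Xs[1:]
    N = len(x)
    mu_s = x.sum() / N                    # (1/N) sum Xs(t)
    sigma2_s = (x * x).sum() / N          # (1/N) sum Xs(t)^2
    sigma_s12 = (x_next * x).sum() / N    # (1/N) sum Xs(t+1) Xs(t)
    mu_next = x_next.sum() / N            # the paper approximates this by mu_s (stationarity)
    A = np.array([[mu_s, 1.0], [sigma2_s, mu_s]])   # coefficient matrix of Eq. (41)
    b = np.array([mu_next, sigma_s12])
    rs1, rs2 = np.linalg.solve(A, b)
    return rs1, rs2

# synthetic first-order speech-like series with rs1 = 0.8, rs2 = 0.3
rng = np.random.default_rng(2)
Xs = np.empty(5000)
Xs[0] = 1.5                               # stationary mean 0.3 / (1 - 0.8)
for t in range(4999):
    Xs[t + 1] = 0.8 * Xs[t] + 0.3 + 0.01 * rng.standard_normal()
rs1, rs2 = estimate_speech_params(Xs)
```

The recovered (rs1, rs2) approach the generating values as the series length grows.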
Note that the speech parameter vector is

\begin{bmatrix} r_{s1} \\ r_{s2} \end{bmatrix} = \begin{bmatrix} \phi_1 \\ \phi_2 \end{bmatrix} \quad (43)

and the general (non-linear) EEG state equation is

X_E(t+1) = F_E(t, X_E(t), \theta(t)) + W_E(t+1)
Fig. 7. Error between synthesized EEG signal and real EEG signal.

Componentwise, the least squares condition yields the normal equations

\sum_{k=1}^{K} \left( \hat{\theta}_k - \psi_0 - \psi_1 \hat{\phi}_k \right) \hat{\phi}_k^T = 0 \quad (51)

\psi_0 = \frac{1}{K}\sum_k \hat{\theta}_k - \psi_1 \frac{1}{K}\sum_k \hat{\phi}_k \quad (52)

\psi_0 \frac{1}{K}\sum_k \hat{\phi}_k^T + \psi_1 \frac{1}{K}\sum_k \hat{\phi}_k \hat{\phi}_k^T = \frac{1}{K}\sum_k \hat{\theta}_k \hat{\phi}_k^T \quad (53)

6. Simulation and implementation

In this work, we assume that the brain of a given person is characterized by an unknown parameter vector 'θ' that parameterizes a coupled nonlinear oscillator differential equation model which generates a harmonic process serving as a model for the EEG data collected on the brain surface. We assume that the EEG data and speech data of the same person are correlated, so that the speech data obeys another differential equation with a parameter 'ϕ' that can be derived by applying a fixed affine linear function to the EEG parameter 'θ'. 'θ' and 'ϕ' are estimated by an EKF-based training process. This enables us to get accurate estimates of 'θ' and 'ϕ'. This training process is repeated for 'N' persons, and accordingly a parameter vector '(θ, ϕ)' is obtained for each person. This completes the training process. The validation of this model involves choosing a fresh person not belonging to the training set and estimating the parameter vector 'θ' for this person based only on his speech data, with the 'ϕ' in the speech model replaced by the affine linear function. Fig. 4 represents the estimated speech signal using the parameters obtained from the non-linear oscillator model. Fig. 5 shows the real EEG data recorded with a 32-channel EEG machine. For clear visualisation, only the first 500 samples are taken. It should be borne in mind that the function 'ψ', which correlates the speech parameter 'ϕ' with the EEG parameter 'θ' as 'ϕ = ψ(θ̂)', should be estimated by a least squares method based on minimizing, over the ordered pairs of estimates,

\sum_{k=1}^{K} \left\| \hat{\phi}_k - \psi(\hat{\theta}_k) \right\|^2

where (\hat{\theta}_k, \hat{\phi}_k) are the EEG and speech parameters obtained during the training process. Finally, using the EEG model, we simulated the EEG process corresponding to 'θ̂'. Fig. 7 is an error plot between the synthesized EEG and the real EEG. The error plot indicates that the error over the first 200 time samples is larger than over the subsequent samples, decreasing significantly after the first 200 training samples. The EEG model can thus be used to generate this person's EEG data and, by comparing it with the true EEG data of his brain, we validate the scheme; it can also be used to classify the fresh person's brain by comparing his EEG parameters with those obtained from the training set.

6.1. Flow chart

1. Construct a differential equation model for EEG data, obtained by discretizing the oscillator differential equation model and linearizing the non-linear differential equation model around a nominal value:

X_E(t+1) = A(\theta) X_E(t) + W_E(t+1) \quad (54)

with 'θ' the unknown EEG parameter.

2. Construct a differential equation model for speech data:

X_S(t+1) = \phi_0^T \begin{bmatrix} X_s(t) \\ 1 \end{bmatrix} + W_S(t+1) \quad (55)

with 'ϕ' the unknown speech parameter.

3. Club these two differential equation models into a joint linear differential equation model for the EEG and speech data:

\begin{bmatrix} X_E(t+1) \\ X_S(t+1) \end{bmatrix} = A(\theta, \phi) \begin{bmatrix} X_E(t) \\ X_S(t) \end{bmatrix} + \begin{bmatrix} W_E(t+1) \\ W_S(t+1) \end{bmatrix}

4. Construct the extended state vector model with the extended state

\zeta(t) = \begin{bmatrix} X_E(t) \\ X_S(t) \\ \theta(t) \\ \phi(t) \end{bmatrix} \quad (56)

and extended state equation

\begin{bmatrix} X_E(t+1) \\ X_S(t+1) \\ \theta(t+1) \\ \phi(t+1) \end{bmatrix} = \begin{bmatrix} A(\theta(t), \phi(t)) & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} X_E(t) \\ X_S(t) \\ \theta(t) \\ \phi(t) \end{bmatrix} + \begin{bmatrix} b(\theta(t), \phi(t)) \\ 0 \end{bmatrix} + W(t+1)

5. The joint EEG and speech noisy measurement model is

Z(t) = H \begin{bmatrix} X_E(t) \\ X_S(t) \end{bmatrix} + \begin{bmatrix} W_E(t) \\ W_S(t) \end{bmatrix} \quad (57)
6. Express the measurement model in the form

Z(t) = H \zeta(t) + W(t) \quad (58)

7. Taking EEG and speech measurements, derive the EKF equations for estimating the extended state on a real-time basis.

8. Repeat the scheme for the k-th person and estimate the EEG and speech parameters for each person, denoting these estimates as (\hat{\theta}_k, \hat{\phi}_k) for 1 \le k \le K.

9. Construct the EEG-to-speech parameter mapping model

\hat{\phi} = \psi_0 + \psi_1 \hat{\theta}

and calculate 'ψ0', 'ψ1' by least squares minimization of

\sum_{k=1}^{K} \left\| \hat{\phi}_k - \psi_0 - \psi_1 \hat{\theta}_k \right\|^2

10. Substitute this mapping into the speech model to obtain it in the form

X_s(t+1) = (\psi_0 + \psi_1 \theta)^T \begin{bmatrix} X_s(t) \\ 1 \end{bmatrix} + W_S(t+1)

11. Take the speech data of a fresh person, corrupted by speech measurement noise,

Z_s(t) = X_s(t) + W_s(t)

and estimate his EEG parameter 'θ' using the above model with the parameter substitution.

12. With the EEG parameter 'θ' thus estimated, substitute it into the EEG model to synthesize the fresh person's EEG data.

7. Results

In this work, we have first modeled the EEG data by a set of linear differential equations after incorporating the unknown EEG parameter 'θ' into the model. The objective of including these unknown parameters is that they are disease dependent and act as classifiers of brain diseases. We model the speech data of the same person by a set of speech dynamics after incorporating an unknown speech parameter in a L/P way. We postulated that the speech parameters are correlated with the brain parameters of the same person, and hence we construct a linear function that maps the speech parameter to the EEG parameter. The parameters of this mapping are determined through a training set as follows: we take a sample of 'K' persons and for each person measure his EEG data and estimate his EEG parameter from the EEG model, and likewise for the
Fig. 9. Tracking plot between the actual EEG and Synthesized EEG.
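The mean-square-error figure quoted in the Results can be computed as below; the signals here are synthetic stand-ins for the recorded and synthesized EEG, not the paper's data.

```python
import numpy as np

def mse(real, synth, n=500):
    """Mean square error over the first n samples, as used to compare the
    synthesized EEG against the recorded EEG."""
    r, s = np.asarray(real[:n], float), np.asarray(synth[:n], float)
    return float(np.mean((r - s) ** 2))

# stand-in signals: a sinusoidal 'recording' and a slightly perturbed 'synthesis'
t = np.linspace(0.0, 2.0 * np.pi, 1000)
real_eeg = np.sin(8.0 * t)
synth_eeg = real_eeg + 0.05 * np.cos(3.0 * t)
err = mse(real_eeg, synth_eeg)
```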
same person measure his speech data and estimate his speech parameter from the speech model. For validation, we take a fresh person and estimate his EEG parameter only from the speech model by substituting for the speech parameter. Then we construct an affine linear regression mapping that fits each person's speech parameter in terms of his EEG parameters, match it to the EEG classes to calculate his EEG parameter, and synthesize his EEG data from the EEG model.

Fig. 6 shows the synthesized EEG data obtained using the proposed model. Fig. 8 represents the synthesized EEG signal and the real EEG signal in a single plot. Fig. 9 represents the tracking plot between the actual EEG and the synthesized EEG. The mean square error between the synthesized EEG and the real EEG over the first 500 samples is 2.4301e−03.

8. Conclusion

In this research, an algorithm is developed which synthesizes an EEG signal taking a speech signal as input. This synthesized EEG is then compared with the real EEG to validate the proposed algorithm. Since the speech data is noisy, an EKF is used to estimate the EEG parameters. When comparing the synthesized EEG signal obtained by the proposed non-linear oscillator model with the actual EEG signal, the magnitude of the mean square error is found to be in the acceptable range of 10^−3. An extension of this algorithm can also be used to analyze neurological disorders using speech; another application of the proposed algorithm is to classify a person's mental state at any given instant.

Funding

This study was not funded by any agency or source.

Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] C. Fraser, T. Yamakawa, Insights into the affine model for high-resolution satellite sensor orientation, ISPRS Journal of Photogrammetry and Remote Sensing 58 (5–6) (2004) 275–288.
[2] V. Upreti, H. Parthasarathy, V. Agarwal, Estimating EEG parameters in the presence of random time delays for brain computer interfaces, International Conference on Industrial Electronics Research and Applications (ICIERA), 2021, pp. 1–6, https://doi.org/10.1109/ICIERA53202.2021.9726764.
[3] C.-T. Lin, L.-W. Ko, T.-K. Shen, Computational intelligent brain computer interaction and its applications on driving cognition, IEEE Computational Intelligence Magazine 4 (4) (2009) 32–46.
[4] G. Sagar, H. Parthasarathy, V. Agarwal, V. Upreti, Synthesizing EEG signals from speech signals using BCIs with EKF-based training, International Conference on Industrial Electronics Research and Applications (ICIERA), 2021, pp. 1–5, https://doi.org/10.1109/ICIERA53202.2021.9726748.
[5] J. Langford, R. Salakhutdinov, T. Zhang, Learning nonlinear dynamic models, in: Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 593–600.
[6] X.-W. Wang, D. Nie, B.-L. Lu, Emotional state classification from EEG data using machine learning approach, Neurocomputing 129 (2014) 94–106.
[7] P. Ghorbanian, S. Ramakrishnan, H. Ashrafiuon, Stochastic non-linear oscillator models of EEG: the Alzheimer's disease case, Frontiers in Computational Neuroscience 9 (2015) 48.
[8] R.E. Kalman, et al., Contributions to the theory of optimal control, Bol. Soc. Mat. Mexicana 5 (2) (1960) 102–119.
[9] P.M. Rossini, S. Rossi, Transcranial magnetic stimulation: diagnostic, therapeutic, and research potential, Neurology 68 (7) (2007) 484–488.
[10] U. Orhan, M. Hekim, M. Ozer, EEG signals classification using the k-means clustering and a multilayer perceptron neural network model, Expert Systems with Applications 38 (10) (2011) 13475–13481.
[11] L. Deng, Dynamic speech models: theory, algorithms, and applications, Synthesis Lectures on Speech and Audio Processing 2 (1) (2006) 1–118.
[12] L. Deng, M. Aksmanovic, X. Sun, C.J. Wu, Speech recognition using hidden Markov models with polynomial regression functions as nonstationary states, IEEE Transactions on Speech and Audio Processing 2 (4) (1994) 507–520.
[13] C.J. Stam, Nonlinear dynamical analysis of EEG and MEG: review of an emerging field, Clinical Neurophysiology 116 (10) (2005) 2266–2301.
[14] J. Jeong, EEG dynamics in patients with Alzheimer's disease, Clinical Neurophysiology 115 (7) (2004) 1490–1505.
[15] C.-T. Lin, S.-F. Tsai, L.-W. Ko, EEG-based learning system for online motion sickness level estimation in a dynamic vehicle environment, IEEE Transactions on Neural Networks and Learning Systems 24 (10) (2013) 1689–1700.
[16] F.-C. Lin, L.-W. Ko, C.-H. Chuang, T.-P. Su, C.-T. Lin, Generalized EEG-based drowsiness prediction system by using a self-organizing neural fuzzy system, IEEE Transactions on Circuits and Systems I: Regular Papers 59 (9) (2012) 2044–2055.
[17] S. Sun, Y. Lu, Y. Chen, The stochastic approximation method for adaptive Bayesian classifiers: towards online brain–computer interfaces, Neural Computing and Applications 20 (1) (2011) 31–40.
[18] E.D. Übeyli, Analysis of EEG signals by implementing eigenvector methods/recurrent neural networks, Digital Signal Processing 19 (1) (2009) 134–143.
[19] J. Long, Y. Li, H. Wang, T. Yu, J. Pan, F. Li, A hybrid brain computer interface to control the direction and speed of a simulated or real wheelchair, IEEE Transactions on Neural Systems and Rehabilitation Engineering 20 (5) (2012) 720–729.