
Biomedical Signal Processing

UNIT IV
CONTENTS
• Biomedical signal processing by wavelet analysis
• Multi Resolution Analysis
• Chaotic Signals
• Principal Component Analysis
• Independent Component Analysis
• Coherent treatment of various biomedical signal
processing methods and applications.
• Application areas of Bio-Signals
• Correlation
• Regression
BIOMEDICAL SIGNAL PROCESSING BY
WAVELET ANALYSIS
• Mathematical transformations are applied to signals to obtain
further information from that signal that is not readily available in
the raw signal.
• There are a number of transformations that can be applied, among which the Fourier transform is probably by far the most popular.
• In many cases, the most distinguished information is hidden in the
frequency content of the signal.
• The frequency spectrum of a signal is basically the frequency
components (spectral components) of that signal.
• The frequency spectrum of a signal shows what frequencies exist
in the signal. Often times, the information that cannot be readily
seen in the time-domain can be seen in the frequency domain
• Although the Fourier transform tells us how much of each frequency exists in a signal, it does not tell us when in time these frequency components occur.
• This information is required when the signal is non-stationary. Most real-world signals are non-stationary, since their frequency content changes with time.
• A wavelet is a wave-like oscillation that is localized in the sense that it grows from zero, reaches a maximum amplitude, and then decreases back to zero amplitude again.
• It thus has a location where it maximizes, a characteristic
oscillation period, and also a scale over which it amplifies
and declines. Wavelets can be used in signal analysis,
image processing and data compression.
• They are useful for sorting out scale information, while still maintaining some degree of time or space locality. Because the structure functions are obtained by scaling and translating one or two "mother functions", time-scale wavelets are particularly appropriate for analyzing fields that are fractal.
• Wavelets can be appropriate for analyzing non-stationary time
series, whereas Fourier analysis generally is not. They can be
applied to time series as a sort of fusion (or compromise)
between filtering and Fourier analysis.
• Wavelet Transform (WT) is a useful tool for a variety of signal
processing and compression applications.
• One type of wavelet transform is designed to be easily
reversible (invertible); that means the original signal can be
easily recovered after it has been transformed.
• This kind of wavelet transform is used for image or signal compression and for cleaning or denoising (noise and blur reduction).
• The wavelet transform is an emerging signal processing technique that can be used to represent real-life non-stationary signals with high efficiency.
• The wavelet transform has emerged over recent years as the
most favored tool by researchers for analyzing problematic
signals across a wide variety of areas in science, engineering
and medicine.
• Wavelets are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale.
• Signal information in a wavelet decomposition is conveyed by a comparatively small number of large coefficients. This property of the wavelet transform makes wavelets particularly well suited to signal estimation.
• It has been revealed that wavelets can eliminate noise more
effectively than previously used methods.
• Wavelet transforms can decompose a signal into numerous scales that represent different frequency bands, and at each scale the position of the signal’s instantaneous structures can be determined approximately.
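As a concrete illustration of such a multi-scale decomposition and of wavelet denoising, the sketch below splits a signal into several frequency bands and soft-thresholds the detail coefficients. It is a minimal example assuming the PyWavelets package (pywt) is available; the synthetic signal, the 'db4' wavelet, the decomposition level and the threshold value are illustrative choices, not taken from the original material.

    import numpy as np
    import pywt  # PyWavelets, assumed to be installed

    # Synthetic noisy test signal (placeholder for a real biomedical signal)
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
    noisy = clean + 0.3 * np.random.randn(t.size)

    # Decompose into several scales (frequency bands) with a Daubechies wavelet
    coeffs = pywt.wavedec(noisy, 'db4', level=4)   # [cA4, cD4, cD3, cD2, cD1]

    # Simple denoising: soft-threshold the detail coefficients, keep the approximation
    threshold = 0.3
    denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode='soft')
                                     for c in coeffs[1:]]

    # Reconstruct the signal from the thresholded coefficients
    denoised = pywt.waverec(denoised_coeffs, 'db4')[:noisy.size]

In practice the threshold would normally be chosen from an estimate of the noise level (for example, from the finest-scale detail coefficients) rather than fixed by hand.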
• A wavelet must satisfy certain mathematical requirements and is used in representing data or other functions.
• In the wavelet transform, the signals are examined and expressed as a linear combination (sum) of the products of the wavelet coefficients and the scaled and shifted mother wavelet.
Figure: Illustration of Wavelet Transform
• A wavelet is simply a small wave, which has energy
concentrated in time to give a tool for the analysis of transient,
non-stationary or time-varying phenomena.
• Basically, a wavelet function is one that decomposes a signal into various multi-resolution parts in accordance with a basis function.
• In the wavelet transform, the original signal (1-D, 2-D, 3-D) is
transformed using predefined wavelets. The wavelets are
orthogonal, orthonormal or bi-orthogonal, scalar or multi-
wavelets.
• A wavelet is a “small wave” having oscillating wave-like characteristics and the ability to allow simultaneous time and frequency analysis by way of a time-frequency localization of the signal.
• The wavelet transform is built from scaled and shifted versions of the mother wavelet (a signal with tiny oscillations), as expressed below.
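For reference, a standard textbook form of the continuous wavelet transform (CWT) of a signal x(t) with respect to a mother wavelet psi(t) is given below; this is the usual definition and is not reproduced from the original slides:

    W_x(a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(t) \, \psi^{*}\!\left(\frac{t - b}{a}\right) dt

where a is the scale (dilation) parameter, b is the translation (shift) parameter, and psi* denotes the complex conjugate of the mother wavelet. The values W_x(a, b) are the wavelet coefficients referred to above.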
• The mother wavelet function is the main base of wavelet
transforms that would permit identification of correlated
coefficients across multiple signals.
• The more similar the mother wavelet function is to the wavelet coefficients across signals, the more precisely the signal of interest can be identified and isolated; hence, selection of a suitable mother wavelet function is of paramount significance.
MULTI RESOLUTION ANALYSIS
• The big disadvantage of a Fourier expansion is that it has only
frequency resolution and no time resolution. This means that
although we might be able to determine all the frequencies present
in a signal, we do not know when they are present.
• To overcome this problem, several multiresolution methods have been developed over the past decades which are more or less able to represent a signal in the time and frequency domains at the same time, as discussed in detail in the previous chapter.
• The idea behind the time-frequency joint representations is to cut the
signal of interest into several parts and then analyze the parts
separately.
• It is clear that analyzing a signal this way will give more
information about the when and where of different frequency
components.
MRA
• Short time Fourier transform (STFT) provides a time-frequency
representation that is not possible in FT.
• In STFT, a window function is chosen in such a way that the portion of a non-stationary signal covered by the window function appears stationary.
• This window function is then multiplied with the original signal so that only the part of the signal or data covered by the window is selected.
• FT is then applied to the windowed (approximately stationary) segment. The window is then moved to the next slot of the signal and the previously mentioned steps are applied repeatedly until the whole signal is completely analyzed. For a 1-D signal x(t) and window w(t) (and analogously for an image f(x, y) with a 2-D window), the STFT can be defined as shown below.
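For reference, a standard continuous-time form of the STFT for a 1-D signal (the exact expression from the original slide is not reproduced here) is:

    \mathrm{STFT}_x(\tau, f) = \int_{-\infty}^{\infty} x(t) \, w^{*}(t - \tau) \, e^{-j 2\pi f t} \, dt

where tau is the position of the window along the time axis, f is frequency, and w* denotes the complex conjugate of the window function; for an image the same idea is applied with a 2-D window sliding over the spatial coordinates.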
• Compared to FT, STFT can show where the frequency components are, as shown in the figure below.

Figure: Different tilings in time-frequency space

• The main drawback with this approach is in determining the size of the window. The
frequency representation in the transformed domain is controlled by the width of the
chosen window.
• When the window is wide, it provides a better frequency resolution (close frequency components can be separated) but a poor time resolution, and vice versa when the window is narrow.
• For this reason, a window size may be appropriate for one image but not for another.
Therefore, STFT is not an ideal tool to extract features from images as the exact
locations of spectral components cannot be obtained with it.
• Multiresolution methods or hierarchical approaches attempt to address the main shortcoming of FT and STFT: finding a specific frequency at a specific location. However, it is not possible to resolve a specific frequency at a specific location exactly at the same time. Therefore, multiresolution methods are created as a trade-off between time and frequency resolution.
• Multiresolution methods are designed to obtain a good time resolution but less
accurate frequency resolution at high frequencies and a good frequency resolution but
less accurate time resolution at low frequencies.
• This approach is useful when the signal contains high-frequency components for short durations and low-frequency components for long durations.
• Hence multiresolution approaches are more effective in image analysis and they
overcome the limitations of frequency and location resolutions found in FT and STFT.
• A set of coefficients is obtained from a multiresolution transform. These coefficients correspond to the frequency information at a given resolution, location and sometimes orientation of the image.
• Many different researchers have developed face authentication systems that consist of
two modules: facial feature extraction and face recognition modules. PCA, ICA and
LDA are well known approaches to face recognition that use feature sub-spaces.
• Wavelet Transform is a popular tool in image processing and computer vision because of its ability to capture localized time-frequency information during image feature extraction.
• The decomposition of the data into different frequency ranges allows us to isolate the
frequency components introduced by intrinsic deformations due to expression or
extrinsic factors (like illumination) into certain sub bands.
• Wavelet-based methods prune away these variable sub bands, and focus on the sub bands that contain the most relevant information to better represent the data.
• By applying wavelet transform on an image we gain three benefits which are very
useful in face recognition. Those are 1) Dimensionality reduction, which will lead to less
computational complexity, 2) Multiresolution data approximation, and 3) Insensitive
feature extraction.
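As an illustration of how the 2-D wavelet transform yields a reduced-dimension, multiresolution representation of an image, the sketch below performs a single-level 2-D DWT. It assumes the PyWavelets package is available; the random array standing in for a face image is a placeholder.

    import numpy as np
    import pywt  # PyWavelets, assumed to be installed

    # Placeholder grayscale image (in practice, load a real face image)
    face_image = np.random.rand(128, 128)

    # Single-level 2-D DWT: one approximation and three detail sub-bands
    cA, (cH, cV, cD) = pywt.dwt2(face_image, 'haar')

    # cA is a roughly quarter-size, low-frequency approximation of the image and can
    # serve as a reduced-dimension feature representation, while cH, cV and cD carry
    # the horizontal, vertical and diagonal details.
    print(face_image.shape, cA.shape)   # e.g. (128, 128) -> (64, 64)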
• The success of wavelets is mainly due to the good performance for piecewise smooth
functions in one dimension.
• Unfortunately, such is not the case in two dimensions. In essence, wavelets are good at
catching zero-dimensional or point singularities, but two-dimensional piecewise smooth
signals resembling images have one-dimensional singularities.
• That is, smooth regions are separated by edges, and while edges are discontinuous across their profile, they are typically smooth curves along their length.
• Hence wavelets in two dimensions are obtained by a tensor-product of one dimensional
wavelets and they are thus good at isolating the discontinuity across an edge, but will
not see the smoothness along the edge.
Chaotic Signals

• Chaotic signals are neither periodic nor stochastic, which makes them very
difficult to predict beyond a short time into the future.
• The difficulty in prediction is due to their extreme sensitivity to initial conditions,
characteristic of these nonlinear systems.
• The essential problem in nonlinear bio signal analysis is to determine whether a
given bio signal (a time series) is a deterministic signal from a dynamical system.
• Frequency Analysis:
• The statistical analysis of chaotic signals includes spectral analysis to confirm
the absence of any spectral lines, since chaotic signals do not have any periodic
deterministic component.
• Absence of spectral lines would indicate that the signal is either chaotic
or stochastic. However, chaos is a complicated non-periodic motion distinct
from stochastic processes in that the amplitude of the high-frequency spectrum
shows an exponential decline. The frequency spectrum can be evaluated using FFT-based methods (see the sketch below).
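As an illustration of FFT-based spectral estimation, the sketch below computes a Welch periodogram of a chaotic series. It assumes SciPy is available; the logistic map (with parameter 3.9) is used only as a convenient synthetic chaotic signal and is not part of the original material.

    import numpy as np
    from scipy.signal import welch

    # Generate an illustrative chaotic series with the logistic map (r = 3.9)
    N = 4096
    x = np.empty(N)
    x[0] = 0.4
    for n in range(N - 1):
        x[n + 1] = 3.9 * x[n] * (1.0 - x[n])

    # Welch periodogram: a broadband spectrum without sharp spectral lines is
    # consistent with chaotic or stochastic behaviour
    f, Pxx = welch(x - x.mean(), fs=1.0, nperseg=512)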

Estimating Correlation Dimension of a Chaotic Signal:
• One of the ways to measure the dimensionality of the phase portrait of a chaotic signal is through what is known as the correlation dimension D2. It is defined as

D2 = lim_{r -> 0} log C(r) / log r

where r is the length of the side of the hypercubes that are needed to cover the phase plot and C(r) is the correlation sum. The correlation sum is defined as

C(r) = sum_{i=1}^{M(r)} p_i^2

where p_i is the probability of finding a point of the phase plot within the i-th hypercube, and M(r) is the number of m-dimensional cells or hypercubes of side r needed to cover the entire phase plot.
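As a numerical illustration of these definitions, the sketch below estimates the correlation sum C(r) = sum of p_i^2 over hypercubes of side r and then estimates D2 as the slope of log C(r) versus log r. The placeholder time series, the two-dimensional delay embedding and the range of r values are illustrative assumptions, not taken from the original material.

    import numpy as np
    from collections import Counter

    def correlation_sum(points, r):
        # Assign each m-dimensional point to the hypercube (cell) of side r containing it
        cells = [tuple(idx) for idx in np.floor(points / r).astype(int)]
        counts = np.array(list(Counter(cells).values()), dtype=float)
        p = counts / len(points)          # occupation probability of each cell
        return np.sum(p ** 2)             # C(r) = sum_i p_i^2

    # Example: 2-D delay embedding of a scalar time series x (placeholder data)
    x = np.random.randn(5000)                   # stands in for a real biosignal
    points = np.column_stack([x[:-1], x[1:]])   # (x[n], x[n+1]) phase portrait

    radii = np.logspace(-1.5, 0, 10)
    C = np.array([correlation_sum(points, r) for r in radii])

    # D2 is estimated from the slope of log C(r) versus log r for small r
    D2_estimate = np.polyfit(np.log(radii), np.log(C), 1)[0]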
Principal component analysis (PCA)
Definition:
• Principal component analysis (PCA) is a method of decomposition of multichannel data from one particular time period into components that are linearly independent, that is, spatially and temporally uncorrelated. Depending on the field of application, it is also named the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD).
• The geometrical intuition behind this method is as follows: treat the samples from all channels at a given time instant as a point in a space of dimension equal to the number of channels.
• One epoch of multichannel data forms a cloud of points in that space. This cloud can be spread more in some directions than in others. The measure of the spread is the variance of the points’ locations in a given direction.
• The PCA finds a set of orthogonal axes, a base, such that each consecutive axis spans
the directions with consecutively decreasing variance.
• The projections of the points onto these axes constitute the components. Each
component can be visualized as a time series.
• On the other hand the original time series can be recovered as a linear combination of
these components. Components corresponding to the smallest variance can be
neglected and in this way a reduction of data dimensionality can be achieved.
Computation
• PCA can be performed by means of the singular value decomposition (SVD) algorithm. A particular period of k-channel data of length m can be represented as an m x k matrix X. It is always possible to find three matrices P, A, and M such that:
• X = P A M^T
where P is the m x k matrix containing the k normalized principal component waveforms (P^T P = I), A is a k x k diagonal matrix of component amplitudes, and M is a k x k matrix mapping components to the original data such that M_ij is the contribution of the j-th component to the i-th channel; M^T M = I. The SVD algorithm is implemented in MATLAB as the function svd.
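As an illustration of the computation described above, the sketch below performs PCA by SVD in NumPy, following the X = P A M^T notation; the random data matrix and the number of retained components are placeholders, not taken from the original material.

    import numpy as np

    # Placeholder data: m = 1000 time samples, k = 8 channels
    m, k = 1000, 8
    X = np.random.randn(m, k)
    X = X - X.mean(axis=0)            # remove the mean of each channel

    # Thin SVD: X = P @ diag(a) @ M^T, with P^T P = I and M^T M = I
    P, a, Mt = np.linalg.svd(X, full_matrices=False)
    A = np.diag(a)                    # diagonal matrix of component amplitudes

    # Dimensionality reduction: keep only the q largest-variance components
    q = 3
    X_reduced = P[:, :q] @ A[:q, :q] @ Mt[:q, :]   # rank-q approximation of X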
Applications

• PCA can be used as a tool for decomposition of multichannel data into a linear combination of components characterized by different spatial, temporal, and amplitude distributions, and as a preprocessing step in source localization techniques.
• Factor analysis (FA), which leads to a reduction of the dimensionality of the data, is related to PCA but not identical to it.
• PCA performs a variance-maximizing rotation of the variable space, thus it takes into account all the variability in the data.
Independent components analysis (ICA)
Definition:
• Independent component analysis (ICA) is a statistical signal
processing method that decomposes a multichannel signal into
components that are statistically independent.
• In simple words, two components s1 and s2 are independent when knowledge of the value of s1 does not give any information about the value of s2. The ICA can be represented by a simple generative model:
x = Ds
• where x = {x1, x2, ..., xn} is the measured n-channel signal, D is the mixing matrix, and s = {s1, s2, ..., sn} is the activity of the n sources.
• The main assumption about s is that the si are statistically independent.
• It is further assumed that the independent components have non-Gaussian distributions.
This model implies the following:
• The signal is a linear mixture of the activities of the sources.
• The signals due to each source are independent.
• The process of mixing sources and the sources themselves are
stationary.
• The variances of the independent components cannot be determined unambiguously.
• The natural choice to resolve this ambiguity is to fix the magnitude of the independent components so that they have unit variance: E[si^2] = 1.
• The order of the components is arbitrary. If we reorder both in the
same way: the components in s, and the columns in D, we obtain
exactly the same signal x.
• The main computational issue in ICA is the estimation of the
mixing matrix D. Once it is known, the independent components
can be obtained by:
s = D^-1 x
Estimation
• Finding the independent components can be considered, in the light of the central limit theorem, as finding the components with the least Gaussian distributions. To comprehend this approach let us follow a heuristic argument.
• Let us define y = w^T x.
• Please note that if w^T is one of the rows of the matrix D^-1, then y is one of the sought components. With the change of variables z = D^T w,
• we can write y = w^T x = w^T D s = z^T s. This exposes the fact that y is a linear combination of the components si with the weights given by zi.
• It stems from the central limit theorem that the sum of independent random variables has a more Gaussian character than each of the variables alone.
• The linear combination becomes least Gaussian when z has only one nonzero element.
• Therefore the problem of estimation of the ICA model can be formulated as the problem of finding the w which maximizes the non-Gaussianity of y = w^T x.
Computation:
• The intuitive heuristics of maximum non-Gaussianity can be used to derive practical estimation algorithms, such as the widely used FastICA algorithm.
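As an illustration, the sketch below applies the FastICA implementation from scikit-learn (assumed to be available) to a synthetic two-channel mixture; the sources and the mixing matrix are illustrative assumptions, not part of the original material.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Two synthetic, statistically independent, non-Gaussian sources
    t = np.linspace(0, 8, 2000)
    s1 = np.sin(2 * np.pi * 1.0 * t)              # sinusoid
    s2 = np.sign(np.sin(2 * np.pi * 3.0 * t))     # square wave
    S = np.column_stack([s1, s2])

    # Mix them with a known mixing matrix D: x = D s
    D = np.array([[1.0, 0.5],
                  [0.4, 1.0]])
    X = S @ D.T                                   # observed 2-channel signal

    # Estimate the unmixing and recover the sources (up to order and scale)
    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                  # estimated independent components
    D_est = ica.mixing_                           # estimated mixing matrix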
Applications
• Used in the field of biomedical signal analysis, related to EEG and MEG artifact reduction.
• Feature extraction in studies of event-related brain dynamics.
• A preprocessing step in the search of sources of EEG and MEG activity localization.
• ICA is applied as well in analysis of heart and muscle signals
Coherent Treatment Of Various Biomedical Signal
Processing Methods And Applications
• Brain signals such as local field potentials (LFP), the electrocorticogram (ECoG), the electroencephalogram (EEG), the magnetoencephalogram (MEG), event-related responses (ERP), and evoked fields (EF).
• Electroencephalogram (EEG) is a record of the electric signal generated by the cooperative
action of brain cells, or more precisely, the time course of neuronal extracellular field
potentials generated by their synchronous action.
• EEG recorded in the absence of stimuli is called spontaneous EEG.
• A brain electric field generated as a response to external or internal stimulus is called an
event-related potential (ERP). EEG can be measured by means of electrodes placed on the
scalp or directly on the cortex.
• In the latter case it is called an electrocorticogram (ECoG); lately the abbreviation iEEG (intracranial EEG) is also used.
– Electric fields measured intracortically with electrodes implanted in the brain structures are named local field potentials (LFP). The same electrodes (if small enough) can also record action potentials (spikes).
• The amplitude of EEG of a normal subject in the awake state, recorded with the scalp
electrodes, is 10-100 μV.
• In case of epilepsy the EEG amplitudes may increase by almost an order of magnitude.
• Scalp potentials are largely independent of electrode size which is due to severe space
averaging by volume conduction between brain and scalp.
Applications of EEG
• EEG found application in the investigation of the information processing in the
brain and in medical diagnosis. In particular it is helpful in solving the following
medical problems and diseases:
• Epilepsy
• Brain tumors, head injury, stroke
• Psychiatric diseases
• Sleep disorders
• CNS disorders, e.g., cerebral anoxia, cerebral inflammatory processes, cerebral palsy.
• Testing of psychotropic and anticonvulsant drugs
• Monitoring alertness, e.g., controlling anaesthesia depth
• Testing sensory pathways
Another application of EEG concerns the design of brain-machine interfaces for direct communication between brain and computer, and then possibly with specific devices. In the investigation of brain processes, especially perception and cognitive functions, ERPs are usually analyzed, since they supply information on the reaction of the brain to specific external or internal stimuli. ERPs are also used for testing the
sensory pathways, in psychiatric diseases, and developmental disorders, e.g., in
dyslexia diagnosis.
MEG (Magnetoencephalogram):
• A time-varying electric field generates a magnetic field, which follows from Maxwell's equations.
• The recording of the magnetic field of the brain is called a magnetoencephalogram (MEG).
• The amplitude of the MEG is less than 0.5 picotesla (pT) and
its frequency range is similar to that of the EEG.
• Since electric and magnetic fields are orthogonal, radial
sources (dipoles oriented perpendicular to the skull) are better
visible in EEG and tangential sources in MEG.
• Radial sources, due to geometry, hardly contribute to MEG,
but the advantage of this technique is that the magnetic field is
weakly influenced by the structures of the head.
• The figure below shows schematic representations of the electric and magnetic fields generated by a current dipole.
Figure: Isolines of the electric (a) and magnetic (b) fields generated by a current dipole.
Applications of MEG:
• Magnetoencephalography (MEG), in which magnetic fields generated by brain
activity are recorded outside of the head, is now in routine clinical practice
throughout the world.
• MEG has become a recognized and vital part of the pre-surgical evaluation of
patients with epilepsy and patients with brain tumors.
• The big advantage of the MEG technique is that the magnetic field generated inside
the brain is affected to a much lesser extent by the conductivities of the skull and
scalp than the electric field generated by the same source.
• In the limiting case of a spherical head model, concentric inhomogeneities do not affect the magnetic field at all, whereas they have to be taken into account in the analysis of EEG data; this property of MEG is very advantageous when localizing sources of activity.
• Magnetic evoked fields (EF) are a counterpart of ERP; they are an effect of event-
related brain activity which can be elicited by visual, auditory, sensory, or internal
stimuli. Their analysis usually concerns identification of the brain region
responsible for their generation.
• The MEG sensors are not directly attached to the subject’s head. This implies that during the MEG measurement the subject’s head should not move with respect to the sensors. This requirement limits the possibility of long-session measurements or long-term monitoring.
Application Areas of Bio-Signals
• Bio signals represent space-time records with one or multiple independent or
dependent variables that capture some aspect of a biological event. They can be
either deterministic or random in nature.
• Deterministic signals can very often be described compactly by syntactic techniques, while random signals are mainly described by statistical techniques.
• The most common biosignals, and the events from which they are generated, are presented here. The table below describes these signals. Biosignals are usually divided into the following groups:

Event                                          Signal
Heart electrical conduction at limb surfaces   Electrocardiogram (ECG)
Surface CNS electrical activity                Electroencephalogram (EEG)
Magnetic fields of neural activity             Magnetoencephalogram (MEG)
Muscle electrical activity                     Electromyogram (EMG)
Bioelectrical (electrophysiological) signals:
Electrical and chemical transmissions form the electrophysiological
communication between neural and muscle cells. Signal transmission between cells takes
place as each cell becomes depolarized relative to its resting membrane potential.
• These changes are recorded by electrodes in contact with the physiological tissue that
conducts electricity.
• While surface electrodes capture bioelectric signals of groups of correlated nerve or muscle cell potentials, intracellular electrodes show the difference in electric potential across an individual cell membrane.
Biomechanical signals:
They are produced by tissue motion or force with highly correlated time-series
from sample to sample, enabling an accurate modeling of the signal over long time
periods.
Biomagnetic signals:
Body organs produce weak magnetic fields as they undergo electrical changes,
and these biosignals can be used to produce three-dimensional images.
Biochemical signals:
They provide functional physiological information and show the levels and
changes of various biochemicals. Chemicals such as glucose and metabolites can be also
measured
Correlation
• Correlation is a measure of association between two variables. The
variables are not designated as dependent or independent.
• The two most popular correlation coefficients are Spearman’s correlation coefficient rho and Pearson’s product-moment correlation coefficient.
• When calculating a correlation coefficient for ordinal data, use Spearman’s technique. For interval or ratio-type data, use Pearson’s technique.
• The value of a correlation coefficient can vary from minus one (perfect negative correlation) to plus one (perfect positive correlation).
• A correlation of zero means there is no relationship between the two
variables.
• For negative correlation, the variables work opposite each other.
• For positive correlation, the variables move together.
• The standard error of a correlation coefficient is used to
determine the confidence intervals around a true correlation of
zero.
• The standard error can be calculated for interval or ratio-type
data.
• The significance of a correlation coefficient is determined from the t-statistic.
• The probability of the t-statistic indicates whether the observed correlation coefficient could have occurred by chance if the true correlation is zero.
• Correlation can also be defined between two signals: if x(t) and h(t) are two signals, their cross-correlation measures the similarity between x(t) and time-shifted copies of h(t).
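As an illustration, the sketch below computes Pearson's and Spearman's coefficients together with their p-values using SciPy (assumed to be available); the paired data arrays are synthetic placeholders.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Placeholder paired measurements (e.g., two physiological variables)
    x = np.random.randn(100)
    y = 0.7 * x + 0.3 * np.random.randn(100)      # positively correlated with x

    r, p_r = pearsonr(x, y)        # Pearson's r for interval/ratio data, with p-value
    rho, p_rho = spearmanr(x, y)   # Spearman's rho for ordinal (ranked) data

    # Small p-values indicate that the observed correlation is unlikely to have
    # occurred by chance if the true correlation were zero
    print(f"Pearson r = {r:.3f} (p = {p_r:.3g}), Spearman rho = {rho:.3f} (p = {p_rho:.3g})")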
Regression
• Simple regression is used to examine the relationship between one dependent variable and one independent variable.
• After performing an analysis, the regression statistics can be
used to predict the dependent variable when the independent
variable is known.
• Regression goes beyond correlation by adding prediction
capabilities.
• Quantitative regression adds precision by developing a
mathematical formula that can be used for predictive purposes.
• For example, the method of ordinary least squares computes the unique line that minimizes the sum of squared distances between the observed data points and the fitted line.
• Regression analysis is primarily used for two conceptually distinct purposes.
• First, regression analysis is widely used for prediction and forecasting.
• Second, it can be used to examine the strength of the relationship between the independent and dependent variables.

• In the regression line of y on x, written y = a_yx + b_yx * x, the quantities a_yx and b_yx are the regression coefficients (intercept and slope); the slope b_yx is related to the correlation coefficient r by b_yx = r * (s_y / s_x).
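As an illustration, the sketch below fits the regression line y = a_yx + b_yx * x by ordinary least squares with NumPy and uses it for prediction; the data are synthetic placeholders, not taken from the original material.

    import numpy as np

    # Placeholder data: independent variable x and dependent variable y
    x = np.linspace(0, 10, 50)
    y = 2.0 + 1.5 * x + np.random.randn(50)       # noisy straight line

    # Ordinary least squares fit of y = a_yx + b_yx * x
    b_yx, a_yx = np.polyfit(x, y, 1)              # polyfit returns [slope, intercept]

    # Prediction of the dependent variable for a new value of x
    x_new = 12.0
    y_pred = a_yx + b_yx * x_new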


