
PROJECT REPORT ON BLIND SOURCE SEPARATION USING H-INFINITY FILTER

PROJECT REPORT SUBMITTED TOWARDS PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF TECHNOLOGY IN APPLIED ELECTRONICS & INSTRUMENTATION ENGINEERING UNDER BIJU PATTNAIK UNIVERSITY OF TECHNOLOGY, ROURKELA

BY

VIJAY PRASAD POUDEL   0701209230
SAURAV SAMANTRAY      0701209357
NAIRITA BANERJEE      0701209374
DEEPAK KUMAR SWAIN    0701209376
JYOTI RANJAN SAHOO    0821209029

UNDER THE GUIDANCE OF
Er. SUDHANSHU MOHAN BISWAL

DEPARTMENT OF APPLIED ELECTRONICS & INSTRUMENTATION ENGINEERING
SILICON INSTITUTE OF TECHNOLOGY
SILICON HILLS, PATIA, BHUBANESWAR-751024

DEPARTMENT OF APPLIED ELECTRONICS AND INSTRUMENTATION
SILICON INSTITUTE OF TECHNOLOGY, BHUBANESWAR

BIJU PATTNAIK UNIVERSITY OF TECHNOLOGY

CERTIFICATE
This is to certify that the project entitled BLIND SOURCE SIGNAL SEPARATION BY H-INFINITY FILTER is submitted by VIJAY PRASAD POUDEL, SAURAV SAMANTRAY, NAIRITA BANERJEE, DEEPAK KUMAR SWAIN & JYOTI RANJAN SAHOO, bearing Regd. No. 0701209230, 0701209357, 0701209374, 0701209376 & 0821209029 respectively, in partial fulfillment of the Bachelor of Technology in Applied Electronics and Instrumentation Engineering under Biju Pattnaik University of Technology, Rourkela, and is a bonafide work carried out by them under the supervision and guidance of Er. SUDHANSHU MOHAN BISWAL during the 7th semester of the academic session 2010-2011.

H.O.D

PROJECT GUIDE

EXTERNAL

ACKNOWLEDGEMENT
The satisfaction that accompanies the successful completion of any task would be incomplete without mentioning the people whose constant guidance and encouragement crown all efforts with success.

We take this opportunity to express our sincere thanks to the management of SILICON INSTITUTE OF TECHNOLOGY, which has been a constant source of inspiration and strength to us throughout our study.

It gives us immense pleasure to have the privilege of expressing our indebtedness and gratitude to our respected H.O.D., Er. NARAYAN NAYAK, for always being a motivational force.

We express our gratitude to our project guide, Er. SUDHANSHU MOHAN BISWAL, whose positive criticism, valuable suggestions and guidance helped us complete our work successfully.

Lastly, words fall short to express our gratitude to all the lecturers and friends for their co-operation, constructive criticism and valuable suggestions during the project work.

Thanks to all…

CONTENTS
CHAPTER-1
- What Is Blind Signal Separation
- Why Blind Signal Separation?
- Applications of BSS
- Assumptions Made in Blind Source Separation

CHAPTER-2
- Techniques Used in Blind Source Separation
- Principal Component Analysis
- Independent Component Analysis
- Sparse Component Analysis

CHAPTER-3
- Adaptive Filtering Technique
- Least Mean Square Filters
- Kalman Filter
- Why H-Infinity Filtering

CHAPTER-4
- BSS Problem Formulation
- MATLAB Code

CHAPTER-5
- Applications



CHAPTER-1 BLIND SIGNAL SEPARATION


WHAT IS BLIND SIGNAL SEPARATION?
In blind signal separation, multiple streams of information are extracted from linear mixtures of these signal streams. This process is blind if examples of the source signals, along with their corresponding mixtures, are unavailable for training.

Figure 1: An illustration of blind source separation. This figure shows three source signals, or independent components.

Figure 2: Due to some external circumstances, only linear mixtures of the source signals in Fig. 1, as depicted here, can be observed.


Figure 3: Using only the linear mixtures in Fig. 2, the source signals in Fig. 1 can be estimated, up to some multiplying factors. This figure shows the estimates of the source signals.

WHY BLIND SIGNAL SEPARATION?

Interest in blind signal separation has developed recently for three reasons: (1) the development of statistical frameworks for understanding the BSS task, (2) a corresponding development of several useful BSS methods, and (3) the identification of many potential applications of BSS.

APPLICATIONS OF BSS
1. Blind I/Q Signal Separation for Receiver Signal Processing
2. Blind Separation of Co-Channel Signals Using an Antenna Array
3. Separation of Overlapping Radio Frequency Identification (RFID) Signals by Antenna Arrays
4. Blind Signal Separation in Biomedical Applications, such as Electroencephalography (EEG), Magnetoencephalography (MEG), Electrocardiography (ECG/EKG) and Functional Magnetic Resonance Imaging (fMRI)
5. Communication Systems: to separate/reduce intersymbol interference (ISI)

ASSUMPTIONS MADE IN BLIND SOURCE SEPARATION


1. Assumes no information about the mixing process or the sources: BLIND.
2. Does not assume knowledge of the direction of arrival (DOA) of the sources.
3. Does not require the signals to be separable in the frequency domain.
4. Mutual statistical independence of the sources, i.e. they do not share mutual information.

CHAPTER-2 TECHNIQUES USED IN BLIND SIGNAL SEPARATION


TECHNIQUES USED IN BLIND SOURCE SEPARATION
1. Principal Component Analysis (PCA)
2. Independent Component Analysis (ICA)
3. Sparse Component Analysis (SCA)

PRINCIPAL COMPONENT ANALYSIS
Principal component analysis is an unsupervised learning method. PCA projects the data into a new space spanned by the principal components. Each successive principal component is selected to be orthonormal to the previous ones and to capture the maximum variance that is not already present in the previous components. The constraint of mutual orthogonality of components implied by classical PCA, however, may not be appropriate for biomedical data. Moreover, since PCA relies on second-order statistics to derive its correlation-based learning rules, and only covariances between the observed variables are used in the estimation, its features are sensitive only to second-order statistics. The shortcoming of correlation-based learning algorithms is that they reflect only the amplitude spectrum of the signal and ignore the phase spectrum. Extracting and characterizing the most informative features of the signals, however, requires higher-order statistics.

INDEPENDENT COMPONENT ANALYSIS
The goal of ICA is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation can be used to capture the essential structure of the data in many applications, including feature extraction and signal separation. ICA is already widely used for performing blind source separation (BSS) in signal processing. ICA is a fairly new and generally applicable method for several challenges in signal processing. It raises a diversity of theoretical questions and opens a variety of potential applications. Successful results in EEG, fMRI, speech recognition and face recognition systems indicate the power of, and the optimism surrounding, the new paradigm.

SPARSE COMPONENT ANALYSIS
The blind separation technique includes two steps: the first is to estimate the mixing matrix (the basis matrix in the sparse representation), and the second is to estimate the sources (the coefficient matrix). If the sources are sufficiently sparse, blind separation can be carried out directly in the time domain. Otherwise, blind separation can be implemented in the time-frequency domain after applying a wavelet packet transform as preprocessing to the observed mixtures. In these cases, sparse component analysis is used.

PRINCIPAL COMPONENT ANALYSIS (COVARIANCE METHOD)

The following is a detailed description of PCA using the covariance method.

Organize the data set
Suppose we have data comprising a set of observations of M variables, and we want to reduce the data so that each observation can be described with only L variables, L < M. Suppose further that the data are arranged as a set of N data vectors $x_1, \ldots, x_N$, with each $x_n$ representing a single grouped observation of the M variables. Write the $x_n$ as column vectors, each of which has M rows, and place the column vectors into a single matrix X of dimensions M × N.

Calculate the empirical mean
Find the empirical mean along each dimension m = 1, ..., M, and place the calculated mean values into an empirical mean vector u of dimensions M × 1:
$$u_m = \frac{1}{N}\sum_{n=1}^{N} X_{m,n}$$

Calculate the deviations from the mean
Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data [6]. Hence we proceed by centering the data as follows:
- Subtract the empirical mean vector u from each column of the data matrix X.
- Store the mean-subtracted data in the M × N matrix B:
$$B = X - u\,h$$
where h is a 1 × N row vector of all 1s.

Find the covariance matrix
Find the M × M empirical covariance matrix C from the outer product of matrix B with itself:
$$C = \mathrm{E}\left[B \otimes B\right] = \frac{1}{N}\,B\,B^{*}$$
where $\mathrm{E}$ is the expected value operator, $\otimes$ is the outer product operator, and $*$ is the conjugate transpose operator. Note that if B consists entirely of real numbers, which is the case in many applications, the "conjugate transpose" is the same as the regular transpose.

Find the eigenvectors and eigenvalues of the covariance matrix
Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C:
$$V^{-1} C\, V = D$$
where D is the diagonal matrix of eigenvalues of C. Matrix D takes the form of an M × M diagonal matrix whose mth diagonal element $D_{m,m} = \lambda_m$ is the mth eigenvalue of the covariance matrix C.


Matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired: the mth eigenvalue corresponds to the mth eigenvector.

Rearrange the eigenvectors and eigenvalues
Sort the columns of the eigenvector matrix V and the eigenvalue matrix D in order of decreasing eigenvalue, making sure to maintain the correct pairings between the columns in each matrix.

Compute the cumulative energy content for each eigenvector
The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m:
$$g_m = \sum_{q=1}^{m} D_{q,q}, \qquad m = 1, \ldots, M$$

Select a subset of the eigenvectors as basis vectors
Save the first L columns of V as the M × L matrix W:
$$W_{m,l} = V_{m,l}, \qquad m = 1, \ldots, M, \quad l = 1, \ldots, L$$
Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent. In this case, choose the smallest value of L such that
$$\frac{g_L}{g_M} \ge 0.9$$

Convert the source data to z-scores
Create an M × 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C:
$$s_m = \sqrt{C_{m,m}}$$
Calculate the M × N z-score matrix (divide element-by-element):
$$Z = \frac{B}{s \cdot h}$$

Project the z-scores of the data onto the new basis
The projected vectors are the columns of the matrix
$$Y = W^{*} \cdot Z$$
where $W^{*}$ is the conjugate transpose of the eigenvector matrix W.
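The steps above map directly onto a few lines of MATLAB. The following is a minimal sketch of the covariance method; the random data matrix and the 90 percent threshold are illustrative assumptions:

% Minimal PCA sketch following the steps above.
X = randn(3,500);                 % example data: M = 3 variables, N = 500 observations
[M,N] = size(X);
u = mean(X,2);                    % empirical mean, M x 1
B = X - u*ones(1,N);              % mean-subtracted data
C = (B*B')/N;                     % empirical covariance matrix
[V,D] = eig(C);                   % eigenvectors and eigenvalues
[lambda,idx] = sort(diag(D),'descend');
V = V(:,idx);                     % sort by decreasing eigenvalue
g = cumsum(lambda);               % cumulative energy content
L = find(g/g(end) >= 0.9, 1);     % smallest L capturing 90% of the energy
W = V(:,1:L);                     % basis vectors
s = sqrt(diag(C));                % empirical standard deviations
Z = B./(s*ones(1,N));             % z-scores
Y = W'*Z;                         % projected data, L x N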

LIMITATIONS OF PRINCIPAL COMPONENT ANALYSIS

There are, however, some limitations of PCA that we should take into consideration. First of all, it is a linear method. PCA still tries to produce components ordered by variance, but it fails when the largest variance lies not along a single vector but along a non-linear path. Neural networks, on the other hand, are perfectly capable of dealing with non-linear problems. In addition, they can perform scaling directly, so that the principal components can be scaled by their importance (eigenvalues). So while PCA is in theory an optimal linear feature extractor, it can perform poorly on non-linear problems.

LINEAR INDEPENDENT COMPONENT ANALYSIS

Independent component analysis (ICA) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals. Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. In the definitions below, the observed m-dimensional random vector is denoted by $x = (x_1, \ldots, x_m)^T$.

General definition
The data are represented by the random vector $x = (x_1, \ldots, x_m)^T$ and the components as the random vector $s = (s_1, \ldots, s_n)^T$. The task is to transform the observed data x, using a linear static transformation W, as
$$s = W x,$$

into maximally independent components s, measured by some function $F(s_1, \ldots, s_n)$ of independence.

Noisy ICA model
ICA of a random vector $x$ consists of estimating the following generative model for the data:
$$x = A\,s + n$$
where the latent variables (components) $s_i$ in the vector $s = (s_1, \ldots, s_n)^T$ are assumed independent. The matrix $A$ is a constant m × n 'mixing' matrix, and $n$ is an m-dimensional random noise vector. This definition reduces the ICA problem to ordinary estimation of a latent variable model.

Noise-free ICA model
ICA of a random vector $x$ consists of estimating the following generative model for the data:
$$x = A\,s$$
where $A$ and $s$ are as in the noisy model. Here the noise vector has been omitted, and the natural relation n = m (a square mixing matrix) is used.

One of the applications of ICA is in solving the cocktail party problem.


FIG-4

A simple application of ICA is the "cocktail party problem", where the underlying speech signals are separated from sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. An important point to consider is that if N sources are present, at least N observations (e.g. microphones) are needed to recover the original signals. This constitutes the square case (M = N, where M is the input dimension of the data and N is the dimension of the model).

FIG-5: Mixing model x = As, with n sources s1, s2, … mixed by the matrix A into m = n observations x1, x2, …
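To make the square mixing model concrete, the following is a minimal sketch; the two synthetic "speakers" and the mixing matrix are illustrative assumptions, and the unmixing step uses the known A, whereas BSS must estimate it blindly:

% Square mixing model x = As with n = m = 2.
fs = 8000; t = (0:fs-1)/fs;
s = [sin(2*pi*440*t); sign(sin(2*pi*233*t))];   % two source "speakers"
A = [0.6 0.4; 0.45 0.55];                       % mixing matrix (the room)
x = A*s;                                        % two microphone observations
s_hat = A\x;                                    % ideal recovery when A is known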


CHAPTER-3 ADAPTIVE FILTERING TECHNIQUE


ADAPTIVE FILTERING TECHNIQUE

An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm. Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing and adapt their performance based on the input signal. A conventional fixed filter, which is used to extract information from an input time sequence, is linear and time invariant. An adaptive filter, by contrast, automatically adjusts its coefficients to optimize an objective function. A conceptual adaptive filter is shown in Fig. 6, where the filter minimizes the objective function of mean square error by modifying itself and is thus a time-varying system. An adaptive filter is useful when the exact filtering operation required is unknown and/or mildly non-stationary.

FIG-6


e(k) = d(k) − y(k)
where:
w1, w2, w3 = the adjustable weights
d(k) = desired signal
y(k) = filter output signal
e(k) = error signal

Least mean square filters
Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). LMS is a stochastic gradient descent method, in that the filter is adapted based only on the error at the current time.

FIG-7

Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt the filter ĥ(n) to make it as close as possible to h(n), while using only the observable signals x(n), d(n) and e(n); the signals y(n), v(n) and h(n) are not directly observable.

Definition of symbols:
x(n) = input signal
y(n) = output of the unknown system h(n)
v(n) = additive interference (noise)
d(n) = y(n) + v(n), the desired (reference) signal
e(n) = error signal used to adapt ĥ(n)

LMS algorithm summary
The LMS algorithm for a pth-order filter can be summarized as:

Parameters: p = filter order, μ = step size
Initialisation: $\hat{h}(0) = 0$
Computation: for n = 0, 1, 2, ...
$$x(n) = \left[x(n),\, x(n-1),\, \ldots,\, x(n-p+1)\right]^T$$
$$e(n) = d(n) - \hat{h}^{H}(n)\, x(n)$$
$$\hat{h}(n+1) = \hat{h}(n) + \mu\, e^{*}(n)\, x(n)$$
where $\hat{h}^{H}(n)$ denotes the Hermitian transpose of $\hat{h}(n)$ and $e^{*}(n)$ is the complex conjugate of the error.
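In MATLAB, the recursion above can be sketched for the real-valued system-identification setup of Fig. 7; the unknown system, the signal lengths and the step size are illustrative assumptions:

% Minimal LMS sketch: identify an unknown FIR system h from x(n) and d(n).
p  = 8;  mu = 0.05;                      % filter order and step size
N  = 2000;
h  = randn(p,1);                         % unknown system (hidden from the filter)
x  = randn(N,1);                         % input signal
d  = filter(h,1,x) + 0.01*randn(N,1);    % desired signal d(n) = y(n) + v(n)
h_hat = zeros(p,1);                      % initialisation
for n = p:N
    xn = x(n:-1:n-p+1);                  % regressor [x(n) ... x(n-p+1)]'
    e  = d(n) - h_hat'*xn;               % error e(n)
    h_hat = h_hat + mu*e*xn;             % coefficient update
end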

KALMAN FILTER
The purpose of the Kalman filter is to use measurements that are observed over time and contain noise (random variations) and other inaccuracies, and to produce values that tend to be closer to the true values of the measurements and their associated calculated values. The Kalman filter has many applications in technology and is an essential part of the development of space and military technology. The Kalman filter produces estimates of the true values of measurements and their associated calculated values by predicting a value, estimating the uncertainty of the predicted value, and computing a weighted average of the predicted value and the measured value. The most weight is given to the value with the least uncertainty. The estimates produced by the method tend to be closer to the true values than the original measurements, because the weighted average has a better estimated uncertainty than either of the values that went into it. The Kalman filter is a recursive estimator. This means that only the estimated state from


the previous time step and the current measurement are needed to compute the estimate for the current state.
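As a concrete illustration of this predict/update recursion, the following is a minimal scalar Kalman filter sketch in MATLAB; the model, the noise variances and the simulation loop are illustrative assumptions:

% Minimal scalar Kalman filter: predict, then correct with a weighted average.
A = 1; H = 1; Q = 1e-4; R = 0.01;        % state model and noise variances
x_true = 0; x_hat = 0; P = 1;            % initial state, estimate and uncertainty
for k = 1:100
    x_true = A*x_true + sqrt(Q)*randn;   % true state evolves with process noise
    z = H*x_true + sqrt(R)*randn;        % noisy measurement
    x_pred = A*x_hat;                    % predict the state
    P_pred = A*P*A' + Q;                 % predict its uncertainty
    K = P_pred*H'/(H*P_pred*H' + R);     % gain: weight by relative uncertainty
    x_hat = x_pred + K*(z - H*x_pred);   % weighted average of prediction and z
    P = (1 - K*H)*P_pred;                % updated uncertainty
end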

WHY H-INFINITY FILTERING

H2 filtering, also known as Kalman filtering, is an estimation method which minimizes the "average" estimation error. More precisely, the Kalman filter minimizes the variance of the estimation error. But there are a couple of serious limitations to the Kalman filter:
- The Kalman filter assumes that the noise properties are known. What if we don't know anything about the system noise?
- The Kalman filter minimizes the "average" estimation error. What if we would prefer to minimize the worst-case estimation error?

These limitations gave rise to H-infinity filtering, also known as minimax filtering. Minimax filtering minimizes the "worst-case" estimation error. More precisely, the minimax filter minimizes the maximum singular value of the transfer function from the noise to the estimation error. While the Kalman filter requires knowledge of the noise statistics of the filtered process, the minimax filter requires no such knowledge.


CHAPTER-4 BSS PROBLEM FORMULATION


BSS PROBLEM FORMULATION

FIG-8: Two-channel BSS network. The sources x1[n] and x2[n] are mixed through the coefficients g11, g12, g21, g22 to produce the observations y1[n] and y2[n]; a separating network with coefficients h11, h12, h21, h22 maps these to the outputs u1[n] and u2[n].

R unobserved source signals x[n] = (x1[n], x2[n], …, xR[n]) are observed as R random signals y[n] = (y1[n], y2[n], …, yR[n]) in the form of linear instantaneous mixtures, given as:
$$y[n] = W\, x[n]$$

Consider the problem of estimating the variables of some system. In dynamic systems (that is, systems which vary with time) the system variables are often denoted by the term state variables. Assume that the system variables, represented by the vector x, are governed by the equation
$$x_{k+1} = A\, x_k + w_k$$
where $w_k$ is random process noise, and the subscripts on the vectors represent the time step. Now suppose we can measure some combination of the states. Then our measurement can be represented by the equation
$$z_k = H\, x_k + v_k$$
where $v_k$ is random measurement noise. Now suppose we want to find an estimator for the state x based on the measurements z and our knowledge of the system equation. The estimator structure is assumed to be in the following predictor-corrector form:
$$\hat{x}_{k+1} = A\,\hat{x}_k + K_k\,(z_k - H\,\hat{x}_k)$$


where $K_k$ is some gain which we need to determine. If we want to minimize the 2-norm (the variance) of the estimation error, then we choose $K_k$ based on the Kalman filter. However, if we want to minimize the infinity-norm (the "worst-case" value) of the estimation error, then we choose $K_k$ based on the minimax filter. Several minimax filtering formulations have been proposed. The one we consider here is the following: find a filter gain $K_k$ such that the maximum singular value of the transfer function from the noise to the estimation error is less than $\gamma$. This is a way of minimizing the worst-case estimation error. This problem will have a solution for some values of $\gamma$ but not for values of $\gamma$ which are too small. If we choose a $\gamma$ for which the stated problem has a solution, then the minimax filtering problem can be solved by a constant gain K which, in one standard steady-state form, is found by solving the following simultaneous equations:
$$L = \left(I - \gamma\, Q\, P + H^T V^{-1} H\, P\right)^{-1}$$
$$K = A\, P\, L\, H^T V^{-1}$$
$$P = A\, P\, L\, A^T + W$$
where W, V and Q are designer-chosen symmetric weighting matrices for the process noise, the measurement noise and the estimation error, respectively.

In the above equations, the superscript −1 indicates matrix inversion, the superscript T indicates matrix transposition, and I is the identity matrix. The simultaneous solution of these three equations is a problem in itself, but once we have a solution, the matrix K gives the minimax filtering solution. If $\gamma$ is too small, then the equations will not have a solution. One method to solve the three simultaneous equations is to use an iterative approach (a sketch is given at the end of this section). A more analytical approach is as follows:
- Form a certain 2n × 2n matrix Z from the system and weighting matrices.
- Find the eigenvectors of Z. Denote those eigenvectors corresponding to eigenvalues outside the unit circle as $c_i$ (i = 1, . . . , n).

If we have a square matrix D, then the eigenvalues of D are defined as all values of $\lambda$ that satisfy the equation $D g = \lambda g$ for some vector g. It turns out that if D is an n × n matrix, then there are always n values of $\lambda$ that satisfy this equation; that is, D has n eigenvalues.


For instance, consider a D matrix whose eigenvalues are 1 and 3: the two solutions of the $D g = \lambda g$ equation are then the eigenvalue-eigenvector pairs for $\lambda = 1$ and $\lambda = 3$. Eigenvalues are difficult to compute in general, but for a 2 × 2 matrix they can be written in closed form, as shown below.
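For a general 2 × 2 matrix, the two eigenvalues follow from the quadratic formula applied to the characteristic polynomial:
$$D = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad \Longrightarrow \quad \lambda = \frac{a+d}{2} \pm \sqrt{\left(\frac{a-d}{2}\right)^2 + b\,c}$$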

- Arrange the eigenvectors $c_i$ as the columns of a 2n × n matrix and partition it as $\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}$, where X1 and X2 are n × n matrices.
- Compute $M = X_2\, X_1^{-1}$.

This method only works if X1 has an inverse. If X1 does not have an inverse, that means that the chosen value of $\gamma$ is too small. At this point we see that both Kalman and minimax filtering have their pros and cons. The Kalman filter assumes that the noise statistics are known. The minimax filter does not make this assumption, but instead assumes that absolutely nothing is known about the noise.
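The iterative approach mentioned above can be sketched in MATLAB as a fixed-point iteration on P. This is a minimal sketch under the gain equations as written above; the example system, the weighting matrices and the value of γ are illustrative assumptions, and convergence should be checked for the chosen γ:

% Minimal sketch: iterate the three minimax gain equations to a fixed point.
A = [1 0.1; 0 1];  H = [1 0];             % example system and measurement matrices
W = 0.01*eye(2);   V = 0.1;  Q = eye(2);  % noise and estimation-error weights
gam = 0.01;                               % gamma: must be small enough for a solution
P = eye(2);
for i = 1:500
    L = inv(eye(2) - gam*Q*P + H'*(V\H)*P);
    K = A*P*L*H'/V;                       % candidate constant minimax gain
    P = A*P*L*A' + W;
end
disp(K)                                   % gain for the predictor-corrector estimator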



MATLAB CODE

1. FOR SIGNAL SEPARATION (INPUTS: A SINE WAVE, A SAWTOOTH WAVE AND A NOISE SIGNAL)

function [sources,mixtures,A] = input_data(samples)
% Get sources and mixtures.
no_sources  = 3;
no_mixtures = no_sources;
fc = 50e6;
t  = 0:1/100/fc:60/fc;
y  = sin(2*pi*fc*t);                  % sine source
sources(1,:) = y(1:samples);
y  = sawtooth(4*pi*fc*t);             % sawtooth source
sources(2,:) = y(1:samples);
load gong;                            % loads the gong sound into y: noise-like source
sources(3,:) = y(1:samples)';
sources = sources';                   % one source per column
% Make mixing matrix A.
A = randn(no_mixtures,no_sources);
mixtures = sources*A;
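For example, calling [sources, mixtures, A] = input_data(1000); returns 1000-sample source signals (one per column), their random linear mixtures, and the mixing matrix used.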


BLIND SOURCE SEPARATION USING H-INFINITY FILTER BY PCA

clear all; clc;
no_of_sources  = 3;
no_of_mixtures = no_of_sources;
% INPUT DATA (one mixture per column).
[sources, mixtures, A] = input_data_test();
% Evaluation of R and P: build short-term and long-term smoothing filters.
[gamma1, gamma2] = gamma();
% Filter each column of the mixtures array.
S = filter(gamma1, 1, mixtures);
L = filter(gamma2, 1, mixtures);
% Find short-term and long-term covariance matrices.
R = cov(S, 1);
P = cov(L, 1);
% Find eigenvectors W and eigenvalues d of the generalized eigenproblem.
[W, d] = eig(R, P);
W = real(W);
% Recover source signals.
ys = mixtures*W;
% PLOT RESULTS.
figure(1);
subplot(3,1,1); plot(sources(:,1));
subplot(3,1,2); plot(sources(:,2));
subplot(3,1,3); plot(sources(:,3));
figure(2);
subplot(3,1,1); plot(mixtures(:,1));
subplot(3,1,2); plot(mixtures(:,2));
subplot(3,1,3); plot(mixtures(:,3));
figure(3);
subplot(3,1,1); plot(ys(:,2));
subplot(3,1,2); plot(ys(:,1));
subplot(3,1,3); plot(ys(:,3));

FUNCTION TO INPUT VOICE FILES
function [sources,mixtures,A,f] = input_data_test()
% Get sources and mixtures from recorded voice files.
no_sources  = 3;
no_mixtures = no_sources;
[temp,f] = wavread('voice1.wav'); temp = temp';
sources   = temp(1,:);
N = length(sources);
[temp,f] = wavread('voice2.wav'); temp = temp';
sources(2,:) = temp(1,1:N);
[temp,f] = wavread('voice3.wav'); temp = temp';
sources(3,:) = temp(1,1:N);
sources = sources';                   % one source per column
% Make mixing matrix A.
A = randn(no_mixtures,no_sources);
mixtures = sources*A;

CODE FOR THE GAMMA ESTIMATION
function [gamma1,gamma2] = gamma()
% Build short-term and long-term exponential smoothing filters whose
% half-lives are set by lower_bound and upper_bound.
lower_bound = 8;
upper_bound = 900000;
max_len = 5000;
n = 1;

% Short-term filter (half-life h = lower_bound).
h = lower_bound;
t = n*h;
gamma = 2^(-1/h);
temp  = (0:t-1)';
mask  = gamma.^temp;
mask(1) = 0;
mask  = mask/sum(abs(mask));
mask(1) = -1;        % first tap -1: output is the smoothed past minus the current sample
gamma1 = mask;
s_len  = length(gamma1);

% Long-term filter (half-life h = upper_bound), truncated to max_len taps.
h = upper_bound;
t = n*h;
t = min(t,max_len);
t = max(t,1);
gamma = 2^(-1/h);
temp  = (0:t-1)';
mask  = gamma.^temp;
mask(1) = 0;
mask  = mask/sum(abs(mask));
mask(1) = -1;
gamma2 = mask;
l_len  = length(gamma2);


INPUT VOICE SIGNALS

LINEAR MIXTURE OF THE SOURCE SIGNALS


SIGNALS RECOVERED THROUGH BSS

BLIND SOURCE SEPARATION USING H-INFINITY FILTER BY ICA

clc;
fc = 50;
t  = 0:1/1000/fc:6/fc;
y  = sin(2*pi*fc*t);                  % sine source
sources(1,:) = y;
samples = length(y);
y  = sawtooth(4*pi*fc*t);             % sawtooth source
sources(2,:) = y(1:samples);
y  = sin(200*pi*fc*t);                % high-frequency sine source
y  = awgn(y,10,'measured');           % add white Gaussian noise (10 dB SNR)
sources(3,:) = y(1:samples);

A = [23 4 50; 3 21 43; 67 26 7];      % fixed mixing matrix
r = A*sources;                         % mixtures
c = r*r';
c = c^(-1/2);                          % whitening transform
w = c*r;                               % whitened (decorrelated) signals
fNorm = 50/(6000/2);                  % 50 Hz cutoff frequency, 6 kHz sample rate
[b,a] = butter(2, fNorm, 'low');      % low-pass Butterworth filter
w(1,:) = filtfilt(b, a, w(1,:));
w(2,:) = filtfilt(b, a, w(2,:));
w(3,:) = filtfilt(b, a, w(3,:));
figure(1);
subplot(3,1,1); plot(sources(1,:));
subplot(3,1,2); plot(sources(2,:));
subplot(3,1,3); plot(sources(3,:));
figure(2);
subplot(3,1,1); plot(r(1,:));
subplot(3,1,2); plot(r(2,:));
subplot(3,1,3); plot(r(3,:));
figure(3);
subplot(3,1,1); plot(w(3,:));
subplot(3,1,2); plot(w(2,:));
subplot(3,1,3); plot(w(1,:));
32

Input signals

LINEAR MIXTURE OF THE SOURCE SIGNALS


SIGNALS RECOVERED THROUGH BSS

H-Infinity Performance

The modern approach to characterizing closed-loop performance objectives is to measure the size of certain closed-loop transfer function matrices using various matrix norms. Matrix norms provide a measure of how large output signals can get for certain classes of input signals. Optimizing these types of performance objectives over the set of stabilizing controllers is the main thrust of recent optimal control theory, such as L1, H2 and H∞ optimal control.

Vector norms
A vector norm is defined as
$$\|x\| = \sqrt{\sum_{i=1}^{n} x_i^2}.$$
In order to avoid the square root sign in the rest of this section, we will use the square of the norm:
$$\|x\|^2 = x^T x = \sum_{i=1}^{n} x_i^2.$$
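The corresponding measure for a transfer function matrix T is its H-infinity norm, the worst-case amplification over all frequencies:
$$\|T\|_{\infty} = \sup_{\omega}\ \bar{\sigma}\big(T(j\omega)\big)$$
where $\bar{\sigma}(\cdot)$ denotes the maximum singular value. This is the quantity that the minimax filter of Chapter 4 keeps below the bound $\gamma$.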

CHAPTER-5

APPLICATIONS

Blind I/Q Signal Separation for Receiver Signal Processing
In order to increase receiver flexibility while also emphasizing receiver integrability and other implementation-related aspects, the design of radio receivers is no longer dominated by the traditional superheterodyne architecture. Instead, alternative receiver structures, like the direct conversion and low-IF architectures, are receiving more and more attention. The analog front-end of these types of receivers is partially based on complex or I/Q signal processing. In direct-conversion receivers, the image signal is inherently a self-image (the desired signal itself at negative frequencies), and the analog front-end image attenuation might be sufficient with low-order modulations. However, practical analog implementations of the needed I/Q signal processing have mismatches in the amplitude and phase responses of the I and Q branches, leading to finite attenuation of the image-band signal. With higher-order modulations, such as 16- or 64-QAM, the distortion due to self-image cannot be neglected, and again some kind of compensation is needed. The I/Q mismatches and carrier offsets, as well as the linear distortion due to general bandpass channels, have been shown to create crosstalk between the transmitted I and Q signals. Using blind signal separation (BSS) techniques, the crosstalk or mixing of the I and Q is removed in receiver signal processing. Combining the presented I/Q mismatch


and carrier offset compensation and the channel equalizer principles into a single I/Q separator (or a cascade of two) results in a versatile receiver building block for future radio communication systems. Generally speaking, this idea gives new views for applying complex or I/Q signal processing efficiently in radio receiver design and for taking full advantage of the rich signal structure inherent to complex-valued communications signals.

FIG-9

Blind Separation of Co-Channel Signals Using an Antenna Array

Wireless communication systems are witnessing rapid advances in the volume and range of services. A major challenge for these systems today is the limited radio frequency spectrum available. Approaches that increase spectrum efficiency are therefore of great interest. The key idea involved here is that if one can make a sufficient number of measurements of independent linear combinations of the message signals from several sources, separation of the signals can be accomplished by solving a system of linear equations. One way to make such independent measurements is to use an antenna array at the base station. Array processing techniques can then be used to receive and transmit multiple signals that are separated in space. Hence, multiple co-channel users can be supported per cell to increase capacity. We study the problem of separating multiple synchronous digital signals received at an antenna array. The goal is to reliably demodulate each signal in the presence of other co-channel signals and noise. Several algorithms have been proposed in the array processing literature for separating co-channel signals based on the availability of prior spatial or temporal information. The traditional spatial algorithms combine high-resolution direction-finding techniques such as MUSIC and ESPRIT with optimum beamforming to estimate the signal waveforms. However, these algorithms require that the number of signal wavefronts, including multipath reflections, be less than the number of sensors, which restricts their applicability in a wireless setting. In the recent past, several techniques have been developed that exploit the temporal structure of communication signals while assuming no prior spatial knowledge. These


techniques take advantage of signal properties such as constant modulus (CM), discrete alphabet, self-coherence, and high-order statistical properties. We developed a light-weight iterative least-squares (ILS) method for blind separation of co-channel signals using an antenna array. This algorithm can blindly recover the co-channel digital signals without requiring a training signal or knowledge of the direction of arrival. This is particularly useful in situations where training signals are not available. For example, in communications intelligence, training signals are not accessible. In cellular applications, blind algorithms can be used to reject interference from adjacent cells.

Separation of Overlapping Radio Frequency Identification (RFID) Signals by Antenna Arrays

Radio frequency identification (RFID) is a generic term used to describe a system that transmits the identity (in the form of a unique serial number) of an object or person wirelessly, using radio waves. RFID has become a key technology in mainstream applications that help the efficient tracking of manufactured goods and materials, enabled by achievements in microelectronics and communications. Unlike barcode technology, RFID does not require a line of sight. Some uses of RFID technology can be found in general application areas such as security and access control, transportation and supply chain management. An RFID system includes three primary components: a transponder (tag), a transceiver (reader) and a data collection device. The operation of RFID systems often involves a situation in which numerous transponders are present in the reading zone of a single reader at the same time, so the reader's ability to process a great quantity of tags simultaneously for data collection is important. If multiple tags are activated simultaneously, their messages can collide and cancel each other at the reader. This situation requires a retransmission of the tag IDs, which results in a waste of bandwidth and increases the overall delay in identifying the objects. A mechanism for handling tag collisions is therefore necessary. Current solutions are based on collision avoidance using MAC protocols (e.g. slotted ALOHA and binary tree algorithms). If tags collide, they are instructed to wait a random time up to a certain maximum, which is doubled at each iteration until no collisions are reported. In other standards, spread spectrum or similar techniques are used to deterministically separate reader and tag transmissions, when permitted by local regulations. This can be a time-consuming process. The collision problem has hardly been studied from a signal processing perspective. If the reader is equipped with an antenna array, we arrive at a MIMO problem


("multiple input-multiple output"), and it may be possible to separate the overlapping collisions based on differences in the spatial locations of the tags. Therefore an antenna array in combination with blind source separation techniques can be used to separate multiple overlapping tag signals. The source signals can be modeled as Zero Constant Modulus (ZCM) signals, and with antenna arrays, the blind source separation algorithms can efficiently separate overlapping RFID signals.

Blind Signal Separation in Biomedical Applications

The term Blind Signal Separation (BSS) refers to a wide class of problems in signal and image processing where one needs to extract the underlying sources from a set of mixtures. Almost no prior knowledge about the sources or about the mixing is available, hence the name blind. In practice, the sources can be one-dimensional (e.g. acoustic signals), two-dimensional (images) or three-dimensional (volumetric data). The mixing can be linear or nonlinear, and instantaneous or convolutive; in the latter case, the problem is referred to as multichannel blind deconvolution (BD) or convolutive BSS. In many medical applications the instantaneous linear mixing model holds, hence the most common situation is when the mixtures are formed by superposition of sources with different scaling coefficients. These coefficients are usually referred to as mixing or crosstalk coefficients, and can be arranged into a mixing (crosstalk) matrix. The number of mixtures can be smaller than, larger than or equal to the number of sources. In medical signal and image processing, the BSS problem arises in the analysis of electroencephalogram (EEG), magnetoencephalogram (MEG) and electrocardiogram (ECG/EKG) signals and functional magnetic resonance images (fMRI). In these applications, the linear mixture assumption is usually justified by the physical principles of signal formation, and the high signal propagation velocity allows the use of the instantaneous mixture model. Otherwise, nonlinear BSS or BD methods are used.

Electroencephalography (EEG)
The brain cortex can be thought of as a field of K tiny sources, which in turn are modeled as current dipoles. The j-th dipole is characterized by the location vector r_j and the dipole moment vector q_j. The electromagnetic field produced by the neural activity determines the potential on the scalp surface, sampled at a set of M sensors.


Magnetoencephalography (MEG)
Similarly to EEG, the forward model in MEG is also essentially linear. The sensors measure the vector of the magnetic field b_j around the scalp. The forward field at sensor j due to dipole i can be expressed as b_ij = G_ij q_j, where G_ij = G(r_j, r'_i) is the matrix kernel depending on the geometry and the electromagnetic properties of the head. BSS can be used for separation of independent temporal components in the same way it is used in EEG.

Electrocardiography (ECG/EKG)
The mechanical action of the heart is initiated by a quasi-periodic electrical stimulus, which causes an electrical current to propagate through the body tissues and results in potential differences. The potential differences measured by electrodes on the skin (cutaneous recording) as a function of time are termed the electrocardiogram (ECG/EKG). The measured ECG/EKG signal can be considered as a superposition of several independent processes resulting, for example, from electromyographic activity (electrical potentials generated by muscles), 50 Hz or 60 Hz mains interference, or the electrical activity of a fetal heart (FECG/FEKG). The latter contains important indications about the fetus' health. BSS methods have been successfully used for separation of interference in ECG/EKG data.

Functional Magnetic Resonance Imaging (fMRI)
The principle of fMRI is based on the different magnetic properties of oxygenated and deoxygenated hemoglobin, which allows us to obtain a Blood Oxygenation Level Dependent (BOLD) signal. The observed spatio-temporal signal q(r,t) of magnetic induction can be considered as a superposition of N spatially independent components, each associated with a unique time course and a spatial map. Each source represents the loci of concurrent neural activity and can be either task-related or non-task-related (e.g., physiological pulsations, head movements, background brain activity, etc.). The spatial map corresponding to each source determines its influence in each volume element (voxel), and is assumed to be fixed in time. Spatial maps can be overlapping. The main advantage of BSS techniques over other fMRI analysis tools is that there is no need to assume any a priori information about the time course of the processes contributing to the measured signals.


BIBLIOGRAPHY
1. Simon Haykin: Adaptive Filter Theory, Prentice Hall.
2. Bernard Widrow, Samuel D. Stearns: Adaptive Signal Processing, Prentice Hall.
3. Simon, D. (2006): Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches, Wiley.
4. Elsewhere.
