
Digital Speech Processing

Dr. Sudharsan P
sudharsan@nitt.edu

October 10, 2020

Predict x[n] from previous samples

Predicting the nth sample as a weighted sum of the previous p samples:

    x̂[n] = Σ_{k=1}^p α_k x[n−k]

Error = Observed − Calculated:

    e[n] = x[n] − Σ_{k=1}^p α_k x[n−k]

In the z-domain:

    E(z) = X(z) − Σ_{k=1}^p α_k z^{−k} X(z)

    X(z) = E(z) / ( 1 − Σ_{k=1}^p α_k z^{−k} )

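As a minimal sketch (numpy assumed; the helper name is illustrative), the prediction error above can be computed directly:

```python
import numpy as np

def prediction_error(x, alpha):
    """e[n] = x[n] - sum_{k=1}^p alpha_k x[n-k]; samples before n = 0
    are taken as zero. alpha[k-1] holds the coefficient for lag k."""
    x = np.asarray(x, dtype=float)
    e = x.copy()
    for k, a in enumerate(alpha, start=1):
        e[k:] -= a * x[:-k]          # subtract alpha_k * x[n-k]
    return e

e = prediction_error([1.0, 2.0, 4.0, 8.0], [2.0])
# the first-order predictor x_hat[n] = 2 x[n-1] matches this doubling
# signal exactly for n >= 1, so e = [1, 0, 0, 0]
```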
Minimize mean squared error

    M = Σ_n e²[n] = Σ_n ( x[n] − Σ_{k=1}^p α_k x[n−k] )²

Find the α coefficients that will minimize this:

    ∂M/∂α_j = −2 Σ_n ( x[n] − Σ_{k=1}^p α_k x[n−k] ) x[n−j] = 0

Rearranging (the factor x[n−j] multiplied through and summed over n):

    Σ_{k=1}^p α_k Σ_n x[n−j] x[n−k] = Σ_n x[n] x[n−j]

p equations with p unknowns!

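These p equations can be formed and solved directly (a sketch assuming numpy; the function name and the choice of summation range are illustrative):

```python
import numpy as np

def solve_normal_equations(x, p):
    """Build Phi[j,k] = sum_n x[n-j] x[n-k] and psi[j] = sum_n x[n] x[n-j]
    over n = p .. len(x)-1 (so every lagged sample exists), then solve
    the p x p system for the alpha coefficients."""
    x = np.asarray(x, dtype=float)
    n = np.arange(p, len(x))
    Phi = np.array([[np.dot(x[n - j], x[n - k]) for k in range(1, p + 1)]
                    for j in range(1, p + 1)])
    psi = np.array([np.dot(x[n], x[n - j]) for j in range(1, p + 1)])
    return np.linalg.solve(Phi, psi)

# the doubling signal is predicted exactly by alpha_1 = 2
alpha = solve_normal_equations([1.0, 2.0, 4.0, 8.0, 16.0], p=1)
```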
Replace x[n] with s_n[m] and j with i:

    Σ_{k=1}^p α_k Σ_m s_n[m−i] s_n[m−k] = Σ_m s_n[m] s_n[m−i]

Non-stationary signals (autocovariance method):

    Σ_{k=1}^p α_k Φ_n(i, k) = Φ_n(i, 0),   i = 1, 2, ..., p

    | Φ_n(1,1)  Φ_n(1,2)  ...  Φ_n(1,p) | | α_1 |   | Φ_n(1,0) |
    | Φ_n(2,1)  Φ_n(2,2)  ...  Φ_n(2,p) | | α_2 | = | Φ_n(2,0) |
    |   ...       ...     ...    ...    | |  .. |   |    ...   |
    | Φ_n(p,1)  Φ_n(p,2)  ...  Φ_n(p,p) | | α_p |   | Φ_n(p,0) |

Φα = φ

Φ is a symmetric matrix, so it admits an LDLᵀ factorisation:

    Φ = V D Vᵀ

    Φ_n(i, j) = Σ_{k=1}^j v_{ik} d_k v_{jk},   1 ≤ j ≤ i − 1        (1)

Splitting off the k = j term:

    Φ_n(i, j) = Σ_{k=1}^{j−1} v_{ik} d_k v_{jk} + v_{ij} d_j v_{jj},   1 ≤ j ≤ i − 1

Since v_{jj} = 1:

    v_{ij} d_j = Φ_n(i, j) − Σ_{k=1}^{j−1} v_{ik} d_k v_{jk},   1 ≤ j ≤ i − 1

https://en.wikipedia.org/wiki/Cholesky_decomposition#LDL_decomposition_2
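The recurrences above can be sketched as follows (numpy assumed, function name illustrative; `scipy.linalg.ldl` offers a library implementation):

```python
import numpy as np

def ldl_decompose(Phi):
    """Factor a symmetric positive-definite Phi as V D V^T with V unit
    lower triangular, following the slide recurrences."""
    p = Phi.shape[0]
    V = np.eye(p)
    d = np.zeros(p)
    for i in range(p):
        # d_i = Phi(i,i) - sum_{k<i} v_ik^2 d_k
        d[i] = Phi[i, i] - np.sum(V[i, :i] ** 2 * d[:i])
        for j in range(i + 1, p):
            # v_ji d_i = Phi(j,i) - sum_{k<i} v_jk d_k v_ik
            V[j, i] = (Phi[j, i] - np.sum(V[j, :i] * d[:i] * V[i, :i])) / d[i]
    return V, d

Phi = np.array([[4.0, 2.0], [2.0, 3.0]])
V, d = ldl_decompose(Phi)    # V = [[1, 0], [0.5, 1]], d = [4, 2]
```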

When i = j, from equation 1,

    Φ_n(i, i) = Σ_{k=1}^i v_{ik}² d_k

Since v_{ii} = 1,

    d_i = Φ_n(i, i) − Σ_{k=1}^{i−1} v_{ik}² d_k

For example:

    d_1 = Φ_n(1, 1)
    v_21 d_1 = Φ_n(2, 1)   (v_21 must be solved for at this step; v_{j,j−1} is needed in the final equation)
    d_2 = Φ_n(2, 2) − v_21² d_1

    V D Vᵀ α = φ

Let

    D Vᵀ α = y

so that

    Vᵀ α = D⁻¹ y        (2)

and

    V y = φ             (3)

Using equation 3, since V is a lower triangular matrix with unit
diagonal elements,

    Σ_{j=1}^i v_{ij} y_j = φ_i,   1 ≤ i ≤ p

Since v_{ii} = 1,

    Σ_{j=1}^{i−1} v_{ij} y_j + y_i = φ_i,   1 ≤ i ≤ p

    y_i = φ_i − Σ_{j=1}^{i−1} v_{ij} y_j

For example:

    y_1 = φ_1
    y_2 = φ_2 − v_21 y_1

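Forward substitution for equation 3 can be sketched as (numpy assumed, name illustrative):

```python
import numpy as np

def forward_substitute(V, phi):
    """Solve V y = phi for unit lower triangular V:
    y_i = phi_i - sum_{j<i} v_ij y_j."""
    p = len(phi)
    y = np.zeros(p)
    for i in range(p):
        y[i] = phi[i] - np.dot(V[i, :i], y[:i])
    return y

V = np.array([[1.0, 0.0], [0.5, 1.0]])
y = forward_substitute(V, np.array([2.0, 3.0]))   # y = [2, 2]
```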
Now equation 2 can be solved for α:

    Σ_{j=i}^p v_{ji} α_j = y_i / d_i

Since v_{ii} = 1,

    α_i = y_i / d_i − Σ_{j=i+1}^p v_{ji} α_j,   1 ≤ i ≤ p

Initially

    α_p = y_p / d_p

    α_{p−1} = y_{p−1} / d_{p−1} − v_{p,p−1} α_p

Important: the recursion must start from the last coefficient α_p and work backwards.

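Back substitution for equation 2, starting from α_p as the slide stresses (a sketch continuing the earlier numpy notation):

```python
import numpy as np

def back_substitute(V, d, y):
    """Solve V^T alpha = D^{-1} y from the bottom up:
    alpha_i = y_i / d_i - sum_{j>i} v_ji alpha_j."""
    p = len(y)
    alpha = np.zeros(p)
    for i in range(p - 1, -1, -1):
        alpha[i] = y[i] / d[i] - np.dot(V[i + 1:, i], alpha[i + 1:])
    return alpha

V = np.array([[1.0, 0.0], [0.5, 1.0]])
d = np.array([4.0, 2.0])
alpha = back_substitute(V, d, np.array([2.0, 2.0]))   # alpha = [0, 1]
```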
Stationary signals: autocorrelation method

Φ_n(i, k) becomes a function of the lag difference only:

    Φ_n(i, k) = R_n(|i − k|)

    Σ_{k=1}^p α_k R_n(|i − k|) = R_n(i),   1 ≤ i ≤ p

    | R_n(0)    R_n(1)    ...  R_n(p−1) | | α_1 |   | R_n(1) |
    | R_n(1)    R_n(0)    ...  R_n(p−2) | | α_2 | = | R_n(2) |
    |   ...       ...     ...    ...    | |  .. |   |   ...  |
    | R_n(p−1)  R_n(p−2)  ...  R_n(0)   | | α_p |   | R_n(p) |

Toeplitz matrix: each descending diagonal from left to right is constant.

https://en.wikipedia.org/wiki/Toeplitz_matrix

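A sketch of exploiting this structure (scipy assumed available; the autocorrelation values are illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# The Toeplitz structure lets the system be solved in O(p^2) rather than
# O(p^3); scipy's solve_toeplitz uses the Levinson recursion internally.
R = np.array([1.0, 0.5, 0.25])            # R(0), R(1), R(2) for p = 2
alpha = solve_toeplitz(R[:-1], R[1:])     # first column R(0..p-1), rhs R(1..p)
```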
Levinson-Durbin recursion algorithm:

1. E^(0) = R(0)

2. K_i = ( R(i) − Σ_{j=1}^{i−1} α_j^(i−1) R(i−j) ) / E^(i−1),   1 ≤ i ≤ p

3. α_i^(i) = K_i

4. α_j^(i) = α_j^(i−1) − K_i α_{i−j}^(i−1),   1 ≤ j < i

5. E^(i) = (1 − K_i²) E^(i−1)

Repeat steps 2 to 5 for i = 1, ..., p. Finally, α_j = α_j^(p).

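The five steps can be sketched directly (numpy assumed; the function name is illustrative):

```python
import numpy as np

def levinson_durbin(R, p):
    """Solve the autocorrelation normal equations following steps 1-5.
    R[0..p] are autocorrelation values; returns (alpha, E)."""
    E = R[0]                                           # step 1
    alpha = np.zeros(p)
    for i in range(1, p + 1):
        # step 2: K_i = (R(i) - sum_{j<i} alpha_j R(i-j)) / E
        K = (R[i] - np.dot(alpha[:i - 1], R[i - 1:0:-1])) / E
        new = alpha.copy()
        new[i - 1] = K                                 # step 3
        new[:i - 1] = alpha[:i - 1] - K * alpha[:i - 1][::-1]   # step 4
        alpha = new
        E = (1.0 - K ** 2) * E                         # step 5
    return alpha, E

alpha, E = levinson_durbin(np.array([1.0, 0.5, 0.25]), p=2)
```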
Example: p = 2

    | R(0)  R(1) | | α_1 |   | R(1) |
    | R(1)  R(0) | | α_2 | = | R(2) |

1. E^(0) = R(0)

2. K_1 = R(1) / R(0)

3. α_1^(1) = K_1 = R(1) / R(0)

4. E^(1) = (1 − K_1²) E^(0) = ( R(0)² − R(1)² ) / R(0)

5. K_2 = ( R(2) − R(1)·R(1)/R(0) ) / E^(1) = ( R(2) R(0) − R(1)² ) / ( R(0)² − R(1)² )

6. α_2 = α_2^(2) = K_2

7. α_1 = α_1^(2) = α_1^(1) − K_2 α_1^(1) = ( R(1) R(0) − R(2) R(1) ) / ( R(0)² − R(1)² )

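With illustrative values R(0) = 1, R(1) = 0.5, R(2) = 0.25, the closed forms above can be checked numerically:

```python
import numpy as np

R0, R1, R2 = 1.0, 0.5, 0.25
alpha2 = (R2 * R0 - R1 ** 2) / (R0 ** 2 - R1 ** 2)     # step 6 (= K_2)
alpha1 = (R1 * R0 - R2 * R1) / (R0 ** 2 - R1 ** 2)     # step 7

# the alphas should satisfy the original 2x2 Toeplitz system
A = np.array([[R0, R1], [R1, R0]])
lhs = A @ np.array([alpha1, alpha2])   # should equal [R(1), R(2)]
```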