Lecture 10 - Linear Prediction

Author: Zahra Moussavi
Course: Digital Signal Processing
Institution: University of Manitoba

Forward Linear Prediction

If we assume an AR model for a stochastic process, it means that we can predict future values from a limited number of observations of its past values:

$$\hat{x}(n) = -\sum_{k=1}^{P} a_p(k)\, x(n-k) \qquad (*1)$$

where P is the order of the system (filter) and the tap weights $\{-a_p(k)\}$ are called the prediction coefficients of the one-step forward linear prediction error filter (PEF).

[Figure: block diagram of the PEF — x(n) enters the one-step predictor (with delays z^{-1}) producing x̂(n), which is subtracted from x(n) at a summing node to give the error f_p(n).]

The prediction error:

$$f_p(n) = x(n) - \hat{x}(n) \;\Rightarrow\; f_p(n) = x(n) + \sum_{k=1}^{P} a_p(k)\, x(n-k)$$

using Eq. (*1), i.e.

$$f_p(n) = \sum_{k=0}^{P} a_p(k)\, x(n-k) \quad \text{with } a_p(0) = 1. \qquad (*2)$$

Comparing this equation with the AR model leads to: $f_p(n) = w(n)$.

Taking the Z-transform of both sides of Eq. (*2):

$$F_p(z) = A_p(z)\, X(z) \;\Rightarrow\; A_p(z) = \frac{F_p(z)}{X(z)} = \frac{F_p(z)}{F_0(z)}$$

Note that $X(z)$ is in fact $F_0(z)$, since for order zero Eq. (*2) reduces to $f_0(n) = x(n)$.

$$A_p(z) = \sum_{k=0}^{P} a_p(k)\, z^{-k} \quad \text{with } a_p(0) = 1.$$

This is an FIR filter (all-zeros).

[Figure: direct-form FIR realization of the PEF — x(n) drives a tapped delay line (z^{-1} elements) with weights a_p(0) = 1, a_p(1), a_p(2), …, a_p(P), whose outputs are summed to produce f_p(n).]
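As a sanity check of $f_p(n) = w(n)$ when the model matches, the following sketch (ours, not part of the original notes; it assumes NumPy/SciPy and borrows the AR(2) coefficients from the worked example later in this lecture) synthesizes an AR(2) process and passes it through the matched PEF:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Synthesize the AR(2) process of the worked example later in these notes:
# x(n) - 0.1 x(n-1) - 0.8 x(n-2) = w(n), with sigma_w^2 = 0.27.
c = [-0.1, -0.8]
w = rng.normal(scale=np.sqrt(0.27), size=200_000)
x = lfilter([1.0], [1.0] + c, w)        # all-pole synthesis filter 1 / A_2(z)

# The PEF is the all-zero filter A_2(z) with a_2(0) = 1:
# f_2(n) = x(n) + sum_k a_2(k) x(n - k)
f = lfilter([1.0] + c, [1.0], x)

print(np.var(f))                        # ~0.27: the prediction error is w(n)
```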

How do we find the filter coefficients $a_p(k)$? One way is to minimize the variance of the error $f_p(n)$. That is:

$$\varepsilon_p^f = E\left\{\left|f_p(n)\right|^2\right\} = E\left\{\left[x(n) + \sum_{k=1}^{P} a_p(k)\, x(n-k)\right]\left[x^*(n) + \sum_{l=1}^{P} a_p^*(l)\, x^*(n-l)\right]\right\}$$

$$\Rightarrow\; \varepsilon_p^f = \gamma_{xx}(0) + 2\,\mathrm{Re}\left\{\sum_{k=1}^{P} a_p^*(k)\, \gamma_{xx}(k)\right\} + \sum_{l=1}^{P}\sum_{k=1}^{P} a_p^*(l)\, a_p(k)\, \gamma_{xx}(l-k)$$

This is a quadratic function of the tap weights $\{a_p(k)\}$: a bowl-shaped surface in (P+1) dimensions with a unique minimum. At the minimum point the gradient vector $\nabla_k \varepsilon_p^f = 0$ for each $k = 1, 2, \ldots, P$ independently. If we let $a_p(k) = \alpha_k + j\beta_k$ in general, then

$$\nabla_k \varepsilon_p^f = \frac{\partial \varepsilon_p^f}{\partial \alpha_k} + j\,\frac{\partial \varepsilon_p^f}{\partial \beta_k}.$$

Setting this gradient vector equal to zero leads to:

$$\gamma_{xx}(l) = -\sum_{k=1}^{P} a_p(k)\, \gamma_{xx}(l-k), \qquad l = 1, 2, \ldots, P$$

These are called the "Normal Equations". Equivalently:

$$\sum_{k=0}^{P} a_p(k)\, \gamma_{xx}(l-k) = 0, \qquad l = 1, 2, \ldots, P, \quad a_p(0) = 1$$

or, in matrix form, $\Gamma_{xx}\, \mathbf{a}_p = \mathbf{0}$. With this solution, the minimum mean-square prediction error will be:

$$\min \varepsilon_p^f = E_p^f = \gamma_{xx}(0) + \sum_{k=1}^{P} a_p(k)\, \gamma_{xx}(-k)$$
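Because the normal equations form a symmetric Toeplitz system, they can be solved directly; here is a minimal sketch (ours; it assumes a real-valued process, uses SciPy's solve_toeplitz, and the helper name forward_pef is our own):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def forward_pef(gamma, P):
    """Solve the normal equations of a real WSS process for a_p(1..P)
    and the minimum MSE E_p.  gamma = [gamma(0), ..., gamma(P)]."""
    gamma = np.asarray(gamma, dtype=float)
    # sum_{k=1..P} a_p(k) gamma(l-k) = -gamma(l), l = 1..P:
    # a symmetric Toeplitz system with first column gamma(0..P-1).
    a_tail = solve_toeplitz(gamma[:P], -gamma[1:P + 1])
    a = np.concatenate(([1.0], a_tail))          # prepend a_p(0) = 1
    # E_p = gamma(0) + sum_k a_p(k) gamma(-k); gamma(-k) = gamma(k) here.
    E_p = gamma[0] + a[1:] @ gamma[1:P + 1]
    return a, E_p

# Autocorrelation of the AR(2) example solved later in these notes
# (gamma(1) = 0.5 and gamma(2) = 0.85 follow from its Yule-Walker equations):
print(forward_pef([1.0, 0.5, 0.85], P=2))        # a = [1, -0.1, -0.8], E_2 = 0.27
```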

Backward Linear Prediction

One-step backward predictor of order p:

$$\hat{x}(n-p) = -\sum_{k=0}^{p-1} b_p(k)\, x(n-k)$$

Backward prediction error: $g_p(n) = x(n-p) - \hat{x}(n-p)$


$$\Rightarrow\; g_p(n) = x(n-p) + \sum_{k=0}^{p-1} b_p(k)\, x(n-k) = \sum_{k=0}^{p} b_p(k)\, x(n-k) \quad \text{where } b_p(p) = 1$$

Therefore, the backward linear prediction filter can be realized either by a direct-form FIR structure similar to the forward linear prediction filter, or as a lattice structure.

Note that $b_p(k) = a_p^*(p-k)$.

Also, we can write $G_p(z) = B_p(z)\, X(z)$, so

$$B_p(z) = \frac{G_p(z)}{X(z)} = \frac{G_p(z)}{G_0(z)}$$

Also,

$$B_p(z) = \sum_{k=0}^{p} b_p(k)\, z^{-k} = \sum_{k=0}^{p} a_p^*(p-k)\, z^{-k}$$

Letting $p - k = k'$ (so $-k = k' - p$):

$$B_p(z) = \sum_{k'=0}^{p} a_p^*(k')\, z^{k'-p} = z^{-p} \sum_{k=0}^{p} a_p^*(k)\, z^{k} = z^{-p} A_p^*\!\left(z^{-1}\right)$$

$$\therefore\; B_p(z) = z^{-p} A_p^*\!\left(z^{-1}\right)$$

This implies that the zeros of the FIR filter with system function Bp(z) are simply the conjugate reciprocals of the zeros of Ap(z). Hence, Bp(z) is called the reciprocal or reverse polynomial of Ap(z).
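A quick numerical illustration of this zero relationship (our sketch, assuming NumPy; the coefficient values are made up):

```python
import numpy as np

# For an arbitrary A_2(z), build B_2(z) via b_p(k) = a_p*(p - k) and check
# that its zeros are the conjugate reciprocals of the zeros of A_2(z).
a = np.array([1.0, 0.5 - 0.2j, 0.3 + 0.1j])   # coefficients of A_2(z)
b = np.conj(a)[::-1]                          # coefficients of B_2(z)

za = np.roots(a)                              # zeros of A_2(z)
zb = np.roots(b)                              # zeros of B_2(z)

print(np.sort_complex(zb))
print(np.sort_complex(1.0 / np.conj(za)))     # identical up to rounding
```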

FIR Lattice Structure

[Figure: p-stage lattice — x(n) = f_0(n) = g_0(n) enters stage 1; each stage m applies the reflection coefficients k_m and k_m* around a unit delay z^{-1} in the lower branch, producing f_m(n) on the upper path and g_m(n) on the lower path, through f_P(n) and g_P(n) at stage P.]


$$f_0(n) = g_0(n) = x(n)$$
$$f_m(n) = f_{m-1}(n) + k_m\, g_{m-1}(n-1) \qquad m = 1, 2, \ldots, p \qquad (*)$$
$$g_m(n) = k_m^*\, f_{m-1}(n) + g_{m-1}(n-1)$$

The $k_m$ are called the reflection coefficients. Note: $k_m = a_m(m)$.
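The recursions in (*) translate directly into a time-domain routine. A minimal sketch (ours; it assumes NumPy and real reflection coefficients, and fir_lattice is our own name):

```python
import numpy as np

def fir_lattice(x, k):
    """Run the FIR lattice recursions (*) over a real signal x.
    k = [k_1, ..., k_p]; returns the forward error f_p(n) for every n."""
    p = len(k)
    g_prev = np.zeros(p)                  # g_prev[m] holds g_m(n-1)
    f_out = np.empty(len(x))
    for n, xn in enumerate(x):
        f = g = float(xn)                 # f_0(n) = g_0(n) = x(n)
        for m in range(p):
            f_next = f + k[m] * g_prev[m]        # f_{m+1}(n)
            g_next = k[m] * f + g_prev[m]        # g_{m+1}(n); k real, so k* = k
            g_prev[m] = g                        # save g_m(n) for time n+1
            f, g = f_next, g_next
        f_out[n] = f
    return f_out
```

With k = [0.6, 0.3, 0.5, 0.9], the output should match direct-form FIR filtering with the $A_4(z)$ coefficients obtained in Example P. 11.7 below.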

In order to derive $a_p(k)$ from $k_m$, take the Z-transform of Equations (*):

$$F_0(z) = G_0(z) = X(z)$$
$$F_m(z) = F_{m-1}(z) + k_m z^{-1} G_{m-1}(z) \qquad m = 1, 2, \ldots, p$$
$$G_m(z) = k_m^* F_{m-1}(z) + z^{-1} G_{m-1}(z)$$

Now substitute $F_m(z) = A_m(z)\, X(z)$ and $G_m(z) = B_m(z)\, X(z)$ and cancel $X(z)$ from both sides. Then we get:

$$A_0(z) = B_0(z) = 1$$
$$A_m(z) = A_{m-1}(z) + k_m z^{-1} B_{m-1}(z)$$
$$B_m(z) = k_m^* A_{m-1}(z) + z^{-1} B_{m-1}(z)$$

or

$$\begin{bmatrix} A_m(z) \\ B_m(z) \end{bmatrix} = \begin{bmatrix} 1 & k_m z^{-1} \\ k_m^* & z^{-1} \end{bmatrix} \begin{bmatrix} A_{m-1}(z) \\ B_{m-1}(z) \end{bmatrix}$$

$$\Rightarrow\; A_{m-1}(z) = \frac{A_m(z) - k_m B_m(z)}{1 - |k_m|^2}$$

Recalling that $a_m(0) = 1$ and $a_m(m) = k_m$, we can also write this coefficient by coefficient:

$$a_{m-1}(k) = \frac{a_m(k) - k_m\, b_m(k)}{1 - |k_m|^2} = \frac{a_m(k) - a_m(m)\, a_m^*(m-k)}{1 - |a_m(m)|^2}$$
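This step-down recursion is easy to mechanize; a minimal sketch (ours, assuming NumPy and real coefficients; fir_to_lattice is our own name), checked against the "More Examples" exercise later in these notes:

```python
import numpy as np

def fir_to_lattice(a):
    """Step-down recursion: direct-form coefficients a_p(0..p), with
    a[0] = 1, to reflection coefficients [k_1, ..., k_p] (real case).
    Breaks down if some |k_m| = 1, as noted below."""
    a = np.asarray(a, dtype=float)
    ks = []
    while len(a) > 1:
        km = a[-1]                        # k_m = a_m(m)
        ks.append(km)
        b = a[::-1]                       # b_m(k) = a_m(m - k) for real a_m
        a = (a - km * b)[:-1] / (1 - km**2)
    return ks[::-1]                       # collected from m = p down to 1

print(fir_to_lattice([1, 3/8, 1/2]))      # [0.25, 0.5]: k_1, k_2 of the example
```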

The point is that realizing all the stages $A_1(z), A_2(z), \ldots, A_p(z)$ in direct FIR form requires $\frac{p(p+1)}{2}$ filter coefficients, while the lattice structure needs only the $p$ coefficients $\{k_1, k_2, \ldots, k_p\}$. Also:

$$\varepsilon_p^b = E\left\{\left|g_p(n)\right|^2\right\} \quad \text{and} \quad \min \varepsilon_p^b = E_p^b = E_p^f, \quad \text{with} \quad |k_m| \le 1.$$

If $|k_m| = 1$, the recursive equations break down; $|k_m| = 1$ indicates that $A_{m-1}(z)$ has roots on the unit circle. Also note that

$$E_m^f = \left(1 - |k_m|^2\right) E_{m-1}^f,$$

which is a monotonically decreasing sequence.


Relationship Between AR Process and Linear Prediction Error Filter (important)

If a process x(n) is truly an AR process, then the coefficients $a_p(k)$ of the Prediction Error Filter (PEF) are in fact the same as the AR parameters in the Yule-Walker equations, the minimum MSE at the p-th order equals $\sigma_w^2$, and the PEF is therefore optimal. If x(n) is not an AR process, the PEF coefficients are still the best approximation to AR parameters that can represent x(n).

Example

Consider the following AR process:

$$x(n) + c_1 x(n-1) + c_2 x(n-2) = w(n)$$

where $c_1 = -0.1$, $c_2 = -0.8$, and $\sigma_w^2 = 0.27$.

a) Find $\sigma_x^2$.
b) Find the reflection coefficients $k_m$.
c) Find the minimum mean-squared errors $E_m$.

Solution

Note that $a_2(0) = 1$, $a_2(1) = c_1 = -0.1$, and $a_2(2) = c_2 = -0.8$.

a) $\sigma_x^2 = \gamma_{xx}(0)$. Using the Yule-Walker equations, we have:

$$\begin{bmatrix} \gamma(0) & \gamma(1) & \gamma(2) \\ \gamma(1) & \gamma(0) & \gamma(1) \\ \gamma(2) & \gamma(1) & \gamma(0) \end{bmatrix} \begin{bmatrix} 1 \\ c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} \sigma_w^2 \\ 0 \\ 0 \end{bmatrix}$$

To solve for $\gamma(0)$, $\gamma(1)$, and $\gamma(2)$, rewrite it as:

$$\begin{bmatrix} 1 & c_1 & c_2 \\ c_1 & 1+c_2 & 0 \\ c_2 & c_1 & 1 \end{bmatrix} \begin{bmatrix} \gamma(0) \\ \gamma(1) \\ \gamma(2) \end{bmatrix} = \begin{bmatrix} \sigma_w^2 \\ 0 \\ 0 \end{bmatrix}$$

(call the left-hand matrix A, with $\Delta = |A|$). Using Cramer's rule:

$$\gamma(0) = \frac{1}{\Delta}\begin{vmatrix} \sigma_w^2 & c_1 & c_2 \\ 0 & 1+c_2 & 0 \\ 0 & c_1 & 1 \end{vmatrix} = \frac{\sigma_w^2\,(1+c_2)}{\Delta}$$

$$\gamma(0) = \frac{(1+c_2)\,\sigma_w^2}{(1-c_2)\left[(1+c_2)^2 - c_1^2\right]} = 1 \text{ for this example.}$$

b) $k_2 = a_2(2) = c_2 = -0.8$. Using the recursive equation:

$$a_2(1) = -0.1 = a_1(1) + k_2\, a_1(1) \;\Rightarrow\; -0.1 = a_1(1)(1 - 0.8) \;\Rightarrow\; a_1(1) = k_1 = -\tfrac{1}{2}$$

c)

$$E_0 = \sigma_x^2 = \gamma(0) = 1$$

$$E_1 = E_0\left(1 - |k_1|^2\right) = 1 - \tfrac{1}{4} = \tfrac{3}{4}$$

$$E_2 = E_1\left(1 - |k_2|^2\right) = \tfrac{3}{4}\left(1 - \tfrac{64}{100}\right) = \tfrac{3}{4} \times \tfrac{36}{100} = 0.27 = \sigma_w^2$$
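All three parts can be verified numerically; a minimal sketch (ours, assuming NumPy):

```python
import numpy as np

c1, c2, var_w = -0.1, -0.8, 0.27

# a) Rearranged Yule-Walker system A [g(0) g(1) g(2)]^T = [var_w 0 0]^T
A = np.array([[1.0, c1,       c2 ],
              [c1,  1.0 + c2, 0.0],
              [c2,  c1,       1.0]])
g = np.linalg.solve(A, [var_w, 0.0, 0.0])
print(g[0])                              # sigma_x^2 = gamma(0) = 1.0

# b) Step-down: k2 = a2(2); a1(1) = (a2(1) - k2 a2(1)) / (1 - k2^2) = k1
k2 = c2
k1 = (c1 - k2 * c1) / (1 - k2**2)
print(k1, k2)                            # -0.5, -0.8

# c) E0 = gamma(0); E_m = (1 - k_m^2) E_{m-1}
E0 = g[0]
E1 = (1 - k1**2) * E0
E2 = (1 - k2**2) * E1
print(E1, E2)                            # 0.75, 0.27 (= sigma_w^2)
```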

More Examples

Determine the lattice coefficients corresponding to the FIR filter with system function

$$H(z) = A_2(z) = 1 + \frac{3}{8} z^{-1} + \frac{1}{2} z^{-2}$$

Solution

$$p = 2 \quad \text{and} \quad a_p = \left[\underbrace{1}_{a_2(0)},\; \underbrace{\tfrac{3}{8}}_{a_2(1)},\; \underbrace{\tfrac{1}{2}}_{a_2(2)}\right] \;\Rightarrow\; k_2 = a_2(2) = \frac{1}{2}$$

$$B_2(z) = z^{-2} A_2\!\left(z^{-1}\right) = z^{-2}\left[1 + \frac{3}{8} z + \frac{1}{2} z^{2}\right] = \frac{1}{2} + \frac{3}{8} z^{-1} + z^{-2}$$

$$A_1(z) = \frac{A_2(z) - k_2 B_2(z)}{1 - |k_2|^2} = \frac{1}{1 - \frac{1}{4}}\left[1 + \frac{3}{8} z^{-1} + \frac{1}{2} z^{-2} - \frac{1}{2}\left(\frac{1}{2} + \frac{3}{8} z^{-1} + z^{-2}\right)\right]$$

$$= \frac{4}{3}\left[\frac{3}{4} + \frac{3}{16} z^{-1}\right] = 1 + \frac{1}{4} z^{-1} \;\Rightarrow\; k_1 = a_1(1) = \frac{1}{4}$$


Example P. 11.7

Determine the impulse response of the FIR filter described by the lattice coefficients $k_1 = 0.6$, $k_2 = 0.3$, $k_3 = 0.5$, $k_4 = 0.9$.

Solution

$$A_0(z) = B_0(z) = 1$$

$$\begin{cases} A_1(z) = A_0(z) + k_1 z^{-1} B_0(z) = 1 + 0.6 z^{-1} \\ B_1(z) = k_1 A_0(z) + z^{-1} B_0(z) = 0.6 + z^{-1} \end{cases}$$

$$\begin{cases} A_2(z) = A_1(z) + k_2 z^{-1} B_1(z) = 1 + 0.6 z^{-1} + 0.3 z^{-1}\left(0.6 + z^{-1}\right) = 1 + 0.78 z^{-1} + 0.3 z^{-2} \\ B_2(z) = k_2 A_1(z) + z^{-1} B_1(z) = 0.3 + 0.78 z^{-1} + z^{-2} \end{cases}$$

With the same routine:

$$\begin{cases} A_3(z) = 1 + 0.93 z^{-1} + 0.69 z^{-2} + 0.5 z^{-3} \\ B_3(z) = 0.5 + 0.69 z^{-1} + 0.93 z^{-2} + z^{-3} \end{cases}$$

Finally,

$$A_4(z) = H(z) = 1 + 1.38 z^{-1} + 1.311 z^{-2} + 1.337 z^{-3} + 0.9 z^{-4}.$$

If we were asked to determine an all-pole filter corresponding to the same lattice coefficients, then H(z) would instead be $\frac{1}{A_4(z)}$. See Problem 11.22.
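The step-up recursion used here can likewise be mechanized; a minimal sketch (ours, assuming NumPy and real coefficients; lattice_to_fir is our own name) reproduces $A_4(z)$:

```python
import numpy as np

def lattice_to_fir(k):
    """Step-up recursion (real case): A_m(z) = A_{m-1}(z) + k_m z^{-1} B_{m-1}(z),
    with B_{m-1} the reverse of A_{m-1}; returns the a_p(0..p) of A_p(z)."""
    a = np.array([1.0])                            # A_0(z) = 1
    for km in k:
        b = a[::-1]                                # B_{m-1}(z) coefficients
        a = np.append(a, 0.0) + km * np.append(0.0, b)   # the z^{-1} shifts B_{m-1}
    return a

print(lattice_to_fir([0.6, 0.3, 0.5, 0.9]))
# [1.0, 1.38, 1.311, 1.337, 0.9] -- the A_4(z) derived above
```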

