Solution Manual for Adaptive Filter Theory, 5th Edition


Solution Manual for Adaptive Filter Theory, 5e
Created by Simon Haykin and Kelvin Hall
McMaster University, Hamilton, Ontario, Canada


Table of Contents

Acknowledgments
Corrections for Question-Descriptions in the Textbook
Notes on Computer Simulations and Provided Programs
Chapter 1: Stochastic Processes and Models
Chapter 2: Wiener Filters
Chapter 3: Linear Prediction
Chapter 4: Method of Steepest Descent
Chapter 5: Method of Stochastic Gradient Descent
Chapter 6: The Least-Mean-Square (LMS) Algorithm
Chapter 7: Normalized Least-Mean-Square (LMS) Algorithm and Its Generalization
Chapter 8: Block-Adaptive Filters
Chapter 9: Method of Least-Squares
Chapter 10: The Recursive Least-Squares (RLS) Algorithm
Chapter 11: Robustness
Chapter 12: Finite-Precision Effects
Chapter 13: Adaptation in Nonstationary Environments
Chapter 14: Kalman Filters
Chapter 15: Square-Root Adaptive Filtering Algorithms
Chapter 16: Order-Recursive Adaptive Filtering Algorithm
Chapter 17: Blind Deconvolution
Appendix


Notes on the Computer Simulations and Provided Programs

The computer experiments for this solutions manual were completed almost exclusively in Matlab®, for ease of readability. To keep the solutions accessible to users of this manual, no specialized signal-processing toolkits were used in the included programs. Graphical solutions are provided along with the .m files, so that a reader who prefers to complete the exercises in a different programming language still has a graphical solution available for comparison.

The solutions to the computer problems in Chapter 13 were completed by Ashique Rupam Mahmood (Computer Science, University of Alberta), the creator of the Autostep algorithm. Having been completed prior to the rest of the manual, these solutions are available only in Python, which is similar enough to Matlab® to remain quite readable.


Chapter 1

Problem 1.1

Let

    r_u(k) = E[u(n) u(n - k)]    (1)
    r_y(k) = E[y(n) y(n - k)]    (2)

We are given that

    y(n) = u(n + a) - u(n - a)    (3)

Hence, substituting Equation (3) into Equation (2), and then using Equation (1), we get

    r_y(k) = E[(u(n + a) - u(n - a)) (u(n + a - k) - u(n - a - k))]
           = 2 r_u(k) - r_u(k + 2a) - r_u(k - 2a)

Problem 1.2

We know that the correlation matrix R is Hermitian; that is to say,

    R^H = R

Given that the inverse matrix R^{-1} exists, we may write

    R^{-1} R^H = I

where I is the identity matrix. Taking the Hermitian transpose of both sides:

    R (R^{-1})^H = I


Hence,

    (R^{-1})^H = R^{-1}

That is, the inverse matrix R^{-1} is itself Hermitian.

Problem 1.3

For the case of a two-by-two matrix, we may write

    R_u = R_s + R_ν
        = [r_11  r_12; r_21  r_22] + [σ²  0; 0  σ²]
        = [r_11 + σ²  r_12; r_21  r_22 + σ²]

For R_u to be nonsingular, we require

    det(R_u) ≠ 0, that is, (r_11 + σ²)(r_22 + σ²) - r_12 r_21 ≠ 0

With r_12 = r_21 for real-valued data, this condition reduces to

    (r_11 + σ²)(r_22 + σ²) - r_12² ≠ 0

Since the left-hand side is a quadratic in σ², nonsingularity of R_u requires that σ² avoid both roots of that quadratic:

    σ² ≠ (1/2)(r_11 + r_22) (±sqrt(1 - 4Δr/(r_11 + r_22)²) - 1)

where

    Δr = r_11 r_22 - r_12²

Problem 1.4

We are given

    R = [1  1; 1  1]


This matrix satisfies the condition

    a^T R a = [a_1  a_2] [1  1; 1  1] [a_1; a_2]
            = a_1² + 2 a_1 a_2 + a_2²
            = (a_1 + a_2)² ≥ 0

for all a_1 and a_2, so R is nonnegative definite (the quadratic form vanishes along a_2 = -a_1, so R is not strictly positive definite). At the same time, the matrix R is singular because

    det(R) = (1)² - (1)² = 0

Hence, it is possible for a matrix to be both nonnegative definite and singular at the same time.

Problem 1.5

a)

    R_{M+1} = [r(0)  r^H; r  R_M]    (1)

Let

    R_{M+1}^{-1} = [a  b^H; b  C]    (2)

where the scalar a, the vector b, and the matrix C are to be determined. Multiply Equation (1) by Equation (2):

    I_{M+1} = [r(0)  r^H; r  R_M] [a  b^H; b  C]

where I_{M+1} is the identity matrix. Therefore,

    r(0) a + r^H b = 1        (3)
    r a + R_M b = 0           (4)
    r b^H + R_M C = I_M       (5)
    r(0) b^H + r^H C = 0^H    (6)

Equation (4) can be rearranged to solve for b as:

    b = -R_M^{-1} r a    (7)


Hence, from Equations (3) and (7):

    a = 1 / (r(0) - r^H R_M^{-1} r)    (8)

Correspondingly,

    b = -R_M^{-1} r / (r(0) - r^H R_M^{-1} r)    (9)

From Equation (5):

    C = R_M^{-1} - R_M^{-1} r b^H
      = R_M^{-1} + R_M^{-1} r r^H R_M^{-1} / (r(0) - r^H R_M^{-1} r)    (10)

As a check, the results of Equations (9) and (10) should satisfy Equation (6):

    r(0) b^H + r^H C = -r(0) r^H R_M^{-1} / (r(0) - r^H R_M^{-1} r) + r^H R_M^{-1}
                       + r^H R_M^{-1} r r^H R_M^{-1} / (r(0) - r^H R_M^{-1} r)
                     = 0^H

We have thus shown that

    R_{M+1}^{-1} = [0  0^H; 0  R_M^{-1}] + a [1  -r^H R_M^{-1}; -R_M^{-1} r  R_M^{-1} r r^H R_M^{-1}]
                 = [0  0^H; 0  R_M^{-1}] + a [1; -R_M^{-1} r] [1  -r^H R_M^{-1}]

where the scalar a is defined by Equation (8).

b)

    R_{M+1} = [R_M  r^B; r^{BT}  r(0)]    (11)

Let

    R_{M+1}^{-1} = [D  e; e^H  f]    (12)


where the matrix D, the vector e, and the scalar f are to be determined. Multiplying Equation (11) by Equation (12), you get:

    I_{M+1} = [R_M  r^B; r^{BT}  r(0)] [D  e; e^H  f]

Therefore:

    R_M D + r^B e^H = I_M        (13)
    R_M e + r^B f = 0            (14)
    r^{BT} e + r(0) f = 1        (15)
    r^{BT} D + r(0) e^H = 0^H    (16)

From Equation (14):

    e = -R_M^{-1} r^B f    (17)

Hence, from Equations (15) and (17):

    f = 1 / (r(0) - r^{BT} R_M^{-1} r^B)    (18)

Correspondingly,

    e = -R_M^{-1} r^B / (r(0) - r^{BT} R_M^{-1} r^B)    (19)

From Equation (13):

    D = R_M^{-1} - R_M^{-1} r^B e^H
      = R_M^{-1} + R_M^{-1} r^B r^{BT} R_M^{-1} / (r(0) - r^{BT} R_M^{-1} r^B)    (20)

As a check, the results of Equations (19) and (20) must satisfy Equation (16):

    r^{BT} D + r(0) e^H = r^{BT} R_M^{-1}
                          + r^{BT} R_M^{-1} r^B r^{BT} R_M^{-1} / (r(0) - r^{BT} R_M^{-1} r^B)
                          - r(0) r^{BT} R_M^{-1} / (r(0) - r^{BT} R_M^{-1} r^B)
                        = 0^H

We have thus shown that

    R_{M+1}^{-1} = [R_M^{-1}  0; 0^T  0] + f [-R_M^{-1} r^B; 1] [-r^{BT} R_M^{-1}  1]

where the scalar f is defined by Equation (18).
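The manual's provided programs are written in Matlab; as an informal cross-check that is not part of the provided .m files, the partitioned-inverse formulas of Problem 1.5 can be verified numerically in Python with NumPy. The matrix size, random seed, and diagonal shift below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4

# Part (a): forward partitioning R = [r(0) r^H; r R_M], complex Hermitian case
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R = A @ A.conj().T + (M + 1) * np.eye(M + 1)   # Hermitian positive definite
r0, r, RM = R[0, 0].real, R[1:, 0], R[1:, 1:]
RM_inv = np.linalg.inv(RM)

a = 1.0 / (r0 - r.conj() @ RM_inv @ r)         # Eq. (8)
v = np.concatenate(([1.0], -RM_inv @ r))       # the vector [1; -R_M^{-1} r]
R_inv = np.zeros_like(R)
R_inv[1:, 1:] = RM_inv
R_inv = R_inv + a * np.outer(v, v.conj())      # claimed partitioned inverse
assert np.allclose(R_inv, np.linalg.inv(R))

# Part (b): backward partitioning S = [S_M r^B; r^{BT} s(0)], real symmetric case
B = rng.standard_normal((M + 1, M + 1))
S = B @ B.T + (M + 1) * np.eye(M + 1)          # symmetric positive definite
SM, rB, s0 = S[:M, :M], S[:M, M], S[M, M]
SM_inv = np.linalg.inv(SM)

f = 1.0 / (s0 - rB @ SM_inv @ rB)              # Eq. (18)
u = np.concatenate((-SM_inv @ rB, [1.0]))      # the vector [-S_M^{-1} r^B; 1]
S_inv = np.zeros_like(S)
S_inv[:M, :M] = SM_inv
S_inv = S_inv + f * np.outer(u, u)             # claimed partitioned inverse
assert np.allclose(S_inv, np.linalg.inv(S))
```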


Problem 1.6

a)

We express the difference equation describing the first-order AR process u(n) as

    u(n) = ν(n) + w_1 u(n - 1)

where w_1 = -a_1. Solving the equation by repeated substitution, we get

    u(n) = ν(n) + w_1 ν(n - 1) + w_1² u(n - 2)
         = ν(n) + w_1 ν(n - 1) + w_1² ν(n - 2) + ... + w_1^{n-1} ν(1)    (1)

Here we used the initial condition u(0) = 0. Taking the expected value of both sides of Equation (1) and using E[ν(n)] = μ, we get the geometric series

    E[u(n)] = μ + w_1 μ + w_1² μ + ... + w_1^{n-1} μ
            = μ (1 - w_1^n)/(1 - w_1),   w_1 ≠ 1
            = μ n,                       w_1 = 1

This result shows that if μ ≠ 0, then E[u(n)] is a function of the time n. Accordingly, the AR process u(n) is not stationary. If, however, the AR parameter satisfies the condition

    |a_1| < 1, or equivalently |w_1| < 1,

then

    E[u(n)] → μ/(1 - w_1)   as n → ∞

Under this condition, we say that the AR process is asymptotically stationary to order one.

b)

When the white noise process ν(n) has zero mean, the AR process u(n) will likewise have zero mean. Then, writing

    var[ν(n)] = σ_ν²


we have

    var[u(n)] = E[u²(n)]    (2)

Substituting Equation (1) into Equation (2), and recognizing that for the white noise process

    E[ν(n) ν(k)] = σ_ν²  for n = k
                 = 0     for n ≠ k    (3)

we get the geometric series

    var[u(n)] = σ_ν² (1 + w_1² + w_1⁴ + ... + w_1^{2(n-1)})
              = σ_ν² (1 - w_1^{2n})/(1 - w_1²),   w_1 ≠ ±1
              = σ_ν² n,                           w_1 = ±1

When |a_1| < 1, or equivalently |w_1| < 1, we therefore have

    var[u(n)] → σ_ν²/(1 - w_1²) = σ_ν²/(1 - a_1²)   for large n

c)

The autocorrelation function of the AR process u(n) equals E[u(n) u(n - k)]. Substituting Equation (1) into this formula, and using Equation (3), we get

    E[u(n) u(n - k)] = σ_ν² (w_1^k + w_1^{k+2} + ... + w_1^{2n-k-2})
                     = σ_ν² w_1^k (1 - w_1^{2(n-k)})/(1 - w_1²),   w_1 ≠ ±1

For |a_1| < 1, or equivalently |w_1| < 1, we may therefore express this autocorrelation function as

    r(k) = E[u(n) u(n - k)] ≈ σ_ν² w_1^k / (1 - w_1²)   for large n
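As an informal numerical aside (not part of the manual's provided Matlab programs), the asymptotic results of Problem 1.6 can be checked deterministically: squaring the difference equation and taking expectations gives the exact moment recursion P(n) = w_1² P(n-1) + σ_ν², whose fixed point should be σ_ν²/(1 - a_1²). The parameter values below are illustrative.

```python
# Exact second-moment recursion for the AR(1) process u(n) = w1 u(n-1) + v(n)
a1 = -0.8          # AR parameter with |a1| < 1 (illustrative value)
w1 = -a1           # w1 = -a1, as in the solution
sigma2_v = 1.0     # white-noise variance

# Iterate P(n) = w1^2 P(n-1) + sigma2_v from the initial condition P(0) = 0
P = 0.0
for _ in range(200):
    P = w1**2 * P + sigma2_v

# Claimed asymptotic variance: sigma_v^2 / (1 - a1^2)
P_inf = sigma2_v / (1 - a1**2)
assert abs(P - P_inf) < 1e-12

# Asymptotic autocorrelation r(k) = sigma_v^2 w1^k / (1 - w1^2) obeys the
# AR(1) recursion r(k) = w1 r(k-1) for k >= 1
r = lambda k: sigma2_v * w1**k / (1 - w1**2)
for k in range(1, 6):
    assert abs(r(k) - w1 * r(k - 1)) < 1e-12
```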


Case 1: 0 < a_1 < 1. In this case, w_1 = -a_1 is negative, and r(k) alternates in sign as the lag k increases.

Case 2: -1 < a_1 < 0. In this case, w_1 = -a_1 is positive, and r(k) keeps the same sign for all k.

[Figure: r(k) plotted against the lag k, for k = -4, ..., +4, for each of the two cases.]

Problem 1.7

a)

The second-order AR process u(n) is described by the difference equation:

    u(n) = u(n - 1) - 0.5 u(n - 2) + ν(n)

Hence

    w_1 = 1,   w_2 = -0.5

and the AR parameters equal

    a_1 = -1,   a_2 = 0.5

Accordingly, we may write the Yule-Walker equations as:

    [r(0)  r(1); r(1)  r(0)] [1; -0.5] = [r(1); r(2)]

b)

Writing the Yule-Walker equations in expanded form:

    r(0) - 0.5 r(1) = r(1)


    r(1) - 0.5 r(0) = r(2)

Solving the first relation for r(1):

    r(1) = (2/3) r(0)    (1)

Solving the second relation for r(2):

    r(2) = (1/6) r(0)    (2)

c)

Since the noise ν(n) has zero mean, the associated AR process u(n) will also have zero mean. Hence,

    var[u(n)] = E[u²(n)] = r(0)

It is known that

    σ_ν² = Σ_{k=0}^{2} a_k r(k),   with a_0 = 1
         = r(0) + a_1 r(1) + a_2 r(2)    (3)

Substituting Equations (1) and (2) into Equation (3), and solving for r(0), we get:

    r(0) = σ_ν² / (1 + (2/3) a_1 + (1/6) a_2) = (12/5) σ_ν² = 1.2

Problem 1.8

By definition,

    P_0 = average power of the AR process u(n)
        = E[|u(n)|²]
        = r(0)    (1)

where r(0) is the autocorrelation function of u(n) for zero lag. We note that there is a one-to-one correspondence

    {a_1, a_2, ..., a_M} ⇌ {r(1)/r(0), r(2)/r(0), ..., r(M)/r(0)}


Equivalently, except for the scaling factor r(0),

    {a_1, a_2, ..., a_M} ⇌ {r(1), r(2), ..., r(M)}    (2)

Combining Equations (1) and (2):

    {P_0, a_1, a_2, ..., a_M} ⇌ {r(0), r(1), r(2), ..., r(M)}    (3)

Problem 1.9

a)

The transfer function of the MA model of Fig. 1.3 is

    H(z) = 1 + b_1 z^{-1} + b_2 z^{-2} + ... + b_K z^{-K}

b)

The transfer function of the ARMA model of Fig. 1.4 is

    H(z) = (b_0 + b_1 z^{-1} + b_2 z^{-2} + ... + b_K z^{-K}) / (1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_M z^{-M})

c)

The ARMA model reduces to an AR model when

    b_1 = b_2 = ... = b_K = 0

so that only the constant b_0 remains in the numerator. The ARMA model reduces to an MA model when

    a_1 = a_2 = ... = a_M = 0
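Because the relations derived in Problem 1.7 are exact rational identities, they can be verified with exact arithmetic. The following short sketch, using Python's fractions module (not part of the manual's provided Matlab programs), reproduces r(1) = (2/3) r(0), r(2) = (1/6) r(0), and σ_ν² = (5/12) r(0).

```python
from fractions import Fraction as F

# AR(2) process of Problem 1.7: u(n) - u(n-1) + 0.5 u(n-2) = v(n),
# i.e. a1 = -1 and a2 = 1/2; work relative to r(0) = 1
a1, a2 = F(-1), F(1, 2)
r0 = F(1)

# Yule-Walker, k = 1:  r(1) + a1 r(0) + a2 r(1) = 0   (using r(-1) = r(1))
r1 = -a1 * r0 / (1 + a2)
# Yule-Walker, k = 2:  r(2) + a1 r(1) + a2 r(0) = 0
r2 = -a1 * r1 - a2 * r0
assert r1 == F(2, 3) and r2 == F(1, 6)

# Noise-variance relation, Equation (3): sigma_v^2 = r(0) + a1 r(1) + a2 r(2)
sigma2 = r0 + a1 * r1 + a2 * r2
assert sigma2 == F(5, 12)

# Hence r(0) = (12/5) sigma_v^2; with sigma_v^2 = 1/2 this gives r(0) = 1.2
assert float(F(12, 5) * F(1, 2)) == 1.2
```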


PROBLEM 1.10.CHAPTER 1.Problem 1.10Taking thez-transform of both sides of the correct equation:X(z) = (1 + 0.75z1+ 0.25z2)V(z)Hence, the transfer function of the MA model is:X(z)V(z) =1 + 0.75z1+ 0.75z1=1(1 + 0.75z1+ 0.75z1)1(1)Using long division we may perform the following expansion of the denominator in Equa-tion (1):(1 + 0.75z1+ 0.75z1)1= 134z1+ 516z2364z311256z4451024z5914096z6+9316283z78565536z8627262144z9+15411048576z10+. . .10.75z1+ 0.3125z20.0469z30.043z40.0439z50.0222z6+ 0.0057z70.0013z80.0024z9+ 0.0015z10(2)a)M= 2Retaining terms in Equation (2) up toz2, we may approximate the MA model with anAR model of order two as follows:X(z)V(z)110.75z1+ 0.3125z2Correction: the question was meant to ask the reader to consider an MA processx(n)of order twodescribed by the difference equationx(n) =ν(n) + 0.75ν(n1) + 0.25ν(n2)not the equationx(n) =ν(n) + 0.75ν(n1) + 0.75ν(n2)11


b) M = 5

Retaining terms in Equation (2) up to z^{-5}, we may approximate the MA model by an AR model of order five as follows:

    X(z)/V(z) ≈ 1 / (1 - 0.75 z^{-1} + 0.3125 z^{-2} - 0.0469 z^{-3} - 0.0430 z^{-4} + 0.0439 z^{-5})

c) M = 10

Retaining terms in Equation (2) up to z^{-10}, we may approximate the MA model by an AR model of order ten as follows:

    X(z)/V(z) ≈ 1/D(z)

where D(z) is the polynomial on the right-hand side of Equation (2).

Problem 1.11

a)

The filter output is

    x(n) = w^H u(n)

where u(n) is the tap-input vector. The average power of the filter output is therefore

    E[|x(n)|²] = E[w^H u(n) u^H(n) w]
               = w^H E[u(n) u^H(n)] w
               = w^H R w

b)

If u(n) is extracted from a zero-mean white noise process with variance σ², then

    R = σ² I

where I is the identity matrix. Hence,

    E[|x(n)|²] = σ² w^H w
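Finally, the power expressions of Problem 1.11 can be sanity-checked numerically. The NumPy sketch below (illustrative sizes and seed, not part of the manual's provided Matlab programs) confirms that the quadratic form w^H R w is real and nonnegative for a Hermitian positive-semidefinite R, and that it collapses to σ² ||w||² when R = σ² I.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 5
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # tap-weight vector

# (a) Average output power is w^H R w; for any correlation matrix R
# (Hermitian positive semidefinite), the quadratic form is real and >= 0
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T                         # Hermitian positive semidefinite
power = w.conj() @ R @ w
assert abs(power.imag) < 1e-10 and power.real >= 0

# (b) For zero-mean white noise with variance sigma^2, R = sigma^2 I and the
# average power reduces to sigma^2 * w^H w = sigma^2 * ||w||^2
sigma2 = 2.0
white_power = w.conj() @ (sigma2 * np.eye(M)) @ w
assert np.isclose(white_power.real, sigma2 * np.linalg.norm(w)**2)
```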