x(i) = a e^{j(2πφ_0 i + θ)} + u(i)    (2.3)

where φ_0 is the sinusoidal frequency normalized by the sampling frequency. From this definition of the input signal, the values of E|x(i)|^2, p_x, and R_x can be easily calculated. As each of these three terms is based on the input signal's auto-correlation, start by determining an equation for the auto-correlation:

r_xx(k) = E[x(i) x*(i-k)]
        = E[(a e^{j(2πφ_0 i + θ)} + u(i)) (a e^{j(2πφ_0 (i-k) + θ)} + u(i-k))*]
        = a^2 e^{j2πφ_0 k} + r_uu(k).

Applying this result yields the value of each of the three expectation terms in (2.2).

[Figure 3.3. Prediction filter frequency response (magnitude response vs. normalized frequency) before and after a step in the input frequency.]

Figure 3.3 shows the prediction filter's frequency response before and after a step in the input sinusoid's frequency. As can be seen, the filter adapts quickly to the new frequency and completely forgets the old one. One might also notice the wider filter bandwidth shortly after the step: the bandwidth of the filter is temporarily increased after the step in order to shorten the adaptive line enhancer's convergence time. A fuller explanation of this process is given later.

A somewhat less important benefit is that the complexity of calculating the filter output samples does not increase when the filter bandwidth is decreased, as it does with the FIR prediction filter. This results in more noise rejection for a given amount of computational complexity. The main drawback of using an IIR prediction filter is that analyzing its tracking and convergence performance is mathematically very difficult. This limits the order of filter that can be used, and consequently the amount of noise rejection that the filter can achieve.
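The auto-correlation result above can be checked numerically. The sketch below is illustrative only: the parameter values (a, φ_0, θ, and the noise variance σ^2) are hypothetical, and u(i) is taken to be white complex Gaussian noise, so that r_xx(k) should approach a^2 e^{j2πφ_0 k} for k ≠ 0 and a^2 + σ^2 at k = 0.

```python
import numpy as np

# Numerically estimate r_xx(k) = E[x(i) x*(i-k)] for the model input
# x(i) = a*exp(j*(2*pi*phi0*i + theta)) + u(i), u(i) white complex Gaussian.
# All parameter values here are illustrative, not the thesis's settings.
rng = np.random.default_rng(0)
N = 200_000
a, phi0, theta, sigma2 = 1.0, 0.1, 0.3, 0.5
i = np.arange(N)
u = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = a * np.exp(1j * (2 * np.pi * phi0 * i + theta)) + u

def autocorr(x, k):
    """Sample estimate of r(k) = E[x(i) x*(i-k)]."""
    if k == 0:
        return np.mean(np.abs(x) ** 2)
    return np.mean(x[k:] * np.conj(x[:len(x) - k]))

# Theory: r(0) = a^2 + sigma2, and r(k) = a^2 * exp(j*2*pi*phi0*k) for k != 0.
print(abs(autocorr(x, 0) - (a**2 + sigma2)))                      # small residual
print(abs(autocorr(x, 5) - a**2 * np.exp(1j * 2 * np.pi * phi0 * 5)))  # small residual
```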
3.3 Adapting the Prediction Filter Coefficients

The purpose of the next few sections is to develop an algorithm for adapting the prediction filter coefficients for a given input signal. The development of this algorithm can be separated into two processes: first, the process of adapting the value of φ; and second, the process of adapting the value of a. The effect of adapting φ is to place the center frequency of the prediction filter's pass-band at the frequency of the input sinusoid. The effect of adapting a is to minimize the filter bandwidth for maximum noise rejection, and to speed up the process of determining the optimal value of φ.

The optimal value of each of the prediction filter's coefficients can be found using the LMS adaptive algorithm. This algorithm was discussed in Chapter 2, and is an iterative algorithm that approximates the gradient-descent algorithm. As a reminder, the gradient-descent algorithm works by adapting each filter coefficient according to the update equation:

w(i+1) = w(i) - μ∇ξ    (3.2)

where w(i) is one of the filter coefficients, μ is the step size, and ξ is a continuously differentiable function that has a single minimum, or maximum, at the optimal value of the coefficient. In the general case of adaptive filtering algorithms, ξ, also called the cost function, is defined as:

ξ = E|e(i)|^2.    (3.3)

Due to the random nature of the input signal, the value of ξ cannot be exactly determined at runtime, and must be approximated. The LMS algorithm approximates the value of ξ by removing the expectation operator:

ξ̂ = |e(i)|^2.    (3.4)

Based on the above information, the steps in developing an algorithm to adapt each of the filter coefficients are:
1. Determine a cost function, ξ, that is continuously differentiable with respect to each coefficient, and has a single minimum, or maximum, at the optimal value of each coefficient.

2. Determine the LMS approximation to that cost function, ξ̂.

3. Determine the gradient of ξ̂ with respect to each filter coefficient.

4. Substitute the results into (3.2), the general gradient-descent update equation.

This process will be followed in order to develop an algorithm for adapting φ and a in the subsequent sections. Because the purposes of the two coefficients are different, a distinct cost function and update algorithm will be used for each.

3.4 Algorithm for Adapting φ

The purpose of adapting φ is to place the center of the prediction filter's pass-band at the same frequency as that of the input sinusoid. If the pass-band center frequency is not at the correct location, the sinusoid will be attenuated by the prediction filter, and the optimal noise rejection will not be attained. As noted earlier, in order to determine the optimal value for φ, a cost function is required that has a single minimum at the frequency of the input sinusoid. The mean-square of the adaptive line enhancer's error signal is just such a function, and is defined as:

ξ = E|e(i)|^2.    (3.5)

The following derivation will show that this cost function has a single minimum, and is continuously differentiable with respect to φ. The value of the cost function will first be calculated in the frequency domain; an equation for the mean-square error is then obtained by integrating the power-spectral density over the entire frequency spectrum. This integral is defined as [2]:

ξ = E|e(i)|^2 = φ_ee(0) = ∫_{-0.5}^{0.5} Φ_ee(e^{j2πθ}) dθ    (3.6)
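Before continuing the derivation, the four-step recipe above can be illustrated on a toy problem. The sketch below adapts a single complex weight using the LMS approximation ξ̂ = |e(i)|^2 in place of ξ = E|e(i)|^2; the signal model d(i) = w_opt·x(i) + noise and all parameter values are hypothetical, chosen only to show the update of step 4 in action, and are not the thesis's prediction-filter structure.

```python
import numpy as np

# Toy LMS illustration: adapt one complex weight w toward w_opt by stepping
# along the negative gradient of the instantaneous cost |e(i)|^2.
# w_opt, mu, and the noise level are illustrative assumptions.
rng = np.random.default_rng(1)
w_opt = 0.5 + 0.5j
mu = 0.05
w = 0.0 + 0.0j
for _ in range(2000):
    x = rng.standard_normal() + 1j * rng.standard_normal()
    d = w_opt * x + 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
    e = d - w * x                   # error signal e(i)
    w = w + mu * e * np.conj(x)     # step 4: gradient step on |e(i)|^2
print(w)  # converges near w_opt
```

Note that removing the expectation operator makes each step noisy, but on average the updates still point downhill on the true cost surface.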
where Φ_ee(z) is the error signal's power-spectral density. Φ_ee(z) is calculated according to the following equation:

Φ_ee(z) = Φ_xx(z)|H(z)|^2    (3.7)

where H(z) is the Z-domain transfer function of the entire adaptive line enhancer. It has the form:

H(z) = 1 - z^{-1}W(z).    (3.8)

Substitute this equation, and (3.1), into (3.7) and simplify. This results in:

Φ_ee(z) = Φ_xx(z) |1 - z^{-1}W(z)|^2
        = Φ_xx(z) |1 - ((1-a) e^{j2πφ} z^{-1}) / (1 - a e^{j2πφ} z^{-1})|^2
        = Φ_xx(z) |1 - e^{j2πφ} z^{-1}|^2 / |1 - a e^{j2πφ} z^{-1}|^2.
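Assuming the simplified error transfer function H(z) = (1 - e^{j2πφ}z^{-1}) / (1 - a e^{j2πφ}z^{-1}), its behavior can be seen by evaluating it on the unit circle. The sketch below uses hypothetical values of φ and a; it shows the null at the sinusoid frequency (where the sinusoid is cancelled from the error) and a magnitude near one elsewhere (where the noise passes through).

```python
import numpy as np

# Evaluate H(e^{j2*pi*f}) = (1 - e^{j2*pi*phi} z^-1)/(1 - a e^{j2*pi*phi} z^-1)
# on the unit circle. phi and a are illustrative values only.
phi, a = 0.1, 0.9
f = np.linspace(-0.5, 0.5, 1001)          # normalized frequency grid
zinv = np.exp(-1j * 2 * np.pi * f)        # z^{-1} on the unit circle
H = (1 - np.exp(1j * 2 * np.pi * phi) * zinv) / \
    (1 - a * np.exp(1j * 2 * np.pi * phi) * zinv)
mag = np.abs(H)
print(mag[np.argmin(np.abs(f - phi))])    # null at f = phi (numerically ~0)
print(mag[np.argmin(np.abs(f + 0.4))])    # close to 1 away from the notch
```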

∇̂_a(i) = e(i) ( a(i) ∂y*(i-1)/∂a(i-1) - e*(i-1) ).    (3.33)

After each update, a(i+1) is clamped to the allowable range:

if (a(i+1) >= a_max)   a(i+1) = a_max
else if (a(i+1) < 0)   a(i+1) = 0.    (3.34)

3.6 Summary of Algorithm and Results

In summary, this chapter has developed an IIR adaptive line enhancer algorithm that is effective at tracking changes in the input sinusoid's frequency. This algorithm does not exhibit a memory of the input sinusoid's past frequencies, as was shown with the FIR adaptive line enhancer. Also, an algorithm has been developed that simultaneously adapts φ and a in a way that both maximizes the noise rejection and minimizes the convergence time. Finally, the algorithms for updating the two coefficients can be combined in such a way that several of the operations are shared. The complete algorithm is summarized in Table 3.1. Note that the initial value of a is set to a value much less than one, in order to speed the initial convergence of the algorithm. Updating a in the positive direction of the gradient fulfills the requirement that the maximum of the cost function be found.

[Table 3.1. Complete Adaptive Line Enhancer Algorithm: the filter output y(i), the error e(i), and the φ and a update equations, including the clamp of (3.34).]

Initial conditions should be set as follows:

a(0) = 0.7.    (3.45)
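The clamp in (3.34) translates directly into code. A minimal sketch follows; the function name clamp_a and the sample values are hypothetical, but the logic is exactly that of (3.34): keep the pole radius a inside [0, a_max] so the filter remains stable and the cost-function slope remains usable.

```python
# Sketch of the coefficient clamp in (3.34). After each gradient step,
# the updated pole radius is limited to the range [0, a_max].
def clamp_a(a_next, a_max):
    if a_next >= a_max:
        return a_max
    if a_next < 0.0:
        return 0.0
    return a_next

print(clamp_a(0.997, 0.995))  # -> 0.995 (clipped at a_max)
print(clamp_a(-0.01, 0.995))  # -> 0.0   (clipped at zero)
print(clamp_a(0.5, 0.995))    # -> 0.5   (unchanged)
```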

[Figure 5.5. Magnitude of ξ_excess for a given value of μ_a (curves: Vehicle-Base Station, SNR=1; Vehicle-Base Station, SNR=10; Vehicle-Vehicle, SNR=1; Vehicle-Vehicle, SNR=10).]

Next, there are two parameters that affect how the IIR adaptive algorithm's a coefficient is adapted. These are: μ_a, which is the a step size; and a_max, which is the maximum allowed value of a. First, μ_a determines how quickly the bandwidth of the prediction filter can be adjusted. It also determines how quickly a can be adapted to track changes in the input sinusoid's frequency, as described in Chapter 3. Values of μ_a that are too large cause ξ_excess to be increased. Figure 5.5 shows how μ_a affects the value of ξ_excess. From this figure, a conservative value of μ_a = 0.005 was chosen for the simulations.

The second parameter, a_max, determines the maximum value of a. As mentioned in Chapter 3, there are two reasons for this parameter: first, to keep a within the allowable range of zero to one; and second, to keep the slope of the cost function large enough that the filter can quickly respond to changes in the input sinusoid's frequency.

[Figure 5.4. Magnitude of ξ_excess for a given value of η (same four channel/SNR cases).]
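The trade-off that a_max controls can be made concrete. Assuming the simplified error transfer function H(z) = (1 - e^{j2πφ}z^{-1}) / (1 - a e^{j2πφ}z^{-1}), the sketch below (with hypothetical φ and a values) measures the fraction of the normalized-frequency band attenuated by more than 3 dB for several pole radii: as a approaches one the notch narrows, giving more noise rejection but a slower, less responsive filter, which is why a is capped at a_max.

```python
import numpy as np

# Measure the -3 dB notch width of H(z) for several pole radii a.
# phi and the a values are illustrative only.
phi = 0.1
f = np.linspace(-0.5, 0.5, 100001)
zinv = np.exp(-1j * 2 * np.pi * f)
fracs = []
for a in (0.9, 0.99, 0.999):
    H = (1 - np.exp(1j * 2 * np.pi * phi) * zinv) / \
        (1 - a * np.exp(1j * 2 * np.pi * phi) * zinv)
    fracs.append(np.mean(np.abs(H) < 1 / np.sqrt(2)))  # fraction below -3 dB
    print(a, fracs[-1])  # the notch narrows as a -> 1
```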