To see how close, f is reevaluated at u_1 and its magnitude tested. If it is not small enough, the process is repeated. We thus need a measure, ε, of how small is small enough. Since the function f represents, in practice, the Y coordinate on the screen, the maximum error can be chosen in screen units. The value .01 has been used successfully for the pictures here. The initial algorithm, then, for solving f(u) = 0 is:

    while |f(u)| > ε
        u = u - f(u)/f'(u)

We must now see what conditions are necessary to guarantee that the iteration converges to the desired solution. To see how the error in u decreases at each step we look at a second order expansion:

\[ 0 = f(u_0) + (u - u_0)\,f'(u_0) + \frac{(u - u_0)^2}{2}\,f''(u_0) \]

Dividing by f'(u_0) and using our definition of u_1,

\[ 0 = (u - u_1) + (u - u_0)^2\,\frac{f''(u_0)}{2 f'(u_0)} \]

so that

\[ u_1 - u = (u_0 - u)^2\,\frac{f''(u_0)}{2 f'(u_0)} \]

That is, the error at step 1 is proportional to the square of the error at step 0. This is called quadratic convergence and shows that, if the iteration converges at all, it will do so increasingly rapidly at each step. The ratio of errors is

\[ \frac{u_1 - u}{u_0 - u} = (u_0 - u)\,\frac{f''(u_0)}{2 f'(u_0)} \approx \frac{f''(u_0)\,f(u_0)}{2\,(f'(u_0))^2} \]
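The iteration described above can be sketched in code. This is a minimal sketch, not the paper's implementation: the helper name `newton`, the step cap guarding against non-convergence, and the example function u^2 - 2 are all assumptions for illustration; eps = 0.01 matches the screen-unit tolerance used in the text.

```python
def newton(f, fp, u, eps=0.01, max_steps=50):
    """Solve f(u) = 0 by repeating u = u - f(u)/f'(u) until |f(u)| <= eps."""
    for _ in range(max_steps):      # cap the steps: the iteration may diverge
        if abs(f(u)) <= eps:
            break
        u = u - f(u) / fp(u)        # one Newton step
    return u

# Illustrative example: a root of f(u) = u^2 - 2, starting from u = 1.
root = newton(lambda u: u * u - 2.0, lambda u: 2.0 * u, 1.0)
```

With a good starting estimate the tolerance is met in only a few steps, which is what the convergence analysis in the text predicts.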
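The quadratic-convergence claim can also be checked numerically. The sketch below, again assuming the illustrative function u^2 - 2 (whose exact root is sqrt(2)), records the error after each step; the ratio e_{k+1} / e_k^2 should settle near f''/(2 f').

```python
import math

f  = lambda u: u * u - 2.0       # illustrative function; exact root is sqrt(2)
fp = lambda u: 2.0 * u
exact = math.sqrt(2.0)

u = 1.0
errors = []
for _ in range(4):
    errors.append(abs(u - exact))
    u = u - f(u) / fp(u)         # one Newton step

# Quadratic convergence: e_{k+1} / e_k^2 approaches f''/(2 f'), which
# evaluated at the root is 2 / (2 * 2 * sqrt(2)), about 0.354, here.
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(len(errors) - 1)]
```

The errors shrink faster at every step, and the ratios converge toward the constant f''/(2 f') derived in the text.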