The Berkeley algorithm was simulated by Van Tuyl under various assumptions, among them P = 10 pages, S = 32 milliseconds (one drum revolution), M = 32 pages, and T = 10 milliseconds. Table IV-2 shows that U1 = 15 < U2 = 20 with these data, so that the first algorithm really behaves better than the second algorithm, and the bottleneck really lies in the channel. If the memory size and the speed of the channel are decreased to P = 5, S = 64, M = 10, and T = 10, the second algorithm performs much better than the first one: U1 = 23 > U2 = 10, and the bottleneck of the first algorithm lies in the memory.

Now follows a study of how the resource utilization would change if the characteristics of the available hardware were to change. The size of the memory, M, the length of a drum revolution, S, or the bandwidth of the channel, B, could be varied. B was supposed to be equal to 1 in the previous computations. More generally:

U1 = max(T, 2S/B, ...),  U2 = max(T, ...)

1) Effect of bandwidth. Figure IV-3 shows the effect of bandwidth. For high bandwidth, algorithm #2 performs better than algorithm #1, as expected. Note that this is not true if T were high (in which case both algorithms would be CPU bound); but the assumption is made that CPUs are getting faster and cheaper, and are not the critical resources of modern computer systems.
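The comparisons above rest on taking, for each algorithm, the maximum of the per-page times demanded by the CPU, the channel, and the memory, and naming the resource that dominates as the bottleneck. The following is a minimal sketch of that test; the resource-time inputs are illustrative assumptions (the exact expressions for U1 and U2 in terms of S, B, M, and P are not reproduced here), and only the max-of-resources structure is taken from the text.

```python
# Sketch of the bottleneck analysis: per-page completion time is
# bounded below by the slowest of the three resources.
# ASSUMPTION: channel_time and memory_time are illustrative inputs,
# not the thesis's actual formulas in S, B, M, and P.

def utilization_bound(cpu_time, channel_time, memory_time):
    """Per-page time bound: the busiest resource dominates."""
    return max(cpu_time, channel_time, memory_time)

def bottleneck(cpu_time, channel_time, memory_time):
    """Name the resource whose demand equals the bound."""
    times = {"CPU": cpu_time, "channel": channel_time, "memory": memory_time}
    return max(times, key=times.get)

# First parameter set from the text: T = 10 ms and U1 = 15 with the
# channel as bottleneck. A channel time of 15 ms and a (hypothetical)
# memory time of 10 ms reproduce that bound.
print(utilization_bound(10, 15, 10))  # -> 15
print(bottleneck(10, 15, 10))         # -> channel
```

Raising the channel bandwidth B shrinks the channel term, which is why, in the figure discussed next, the algorithm that is channel-bound at low bandwidth eventually becomes limited by T (CPU bound) instead.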