[Figure 8. Speedup of central server queueing model simulator with one classical server. Axes: speedup vs. message population (1 to 1024).]

[Figure 9. Overhead of central server queueing network simulator. Axes: deadlock ratio vs. message population (1 to 1024).]

[Figure 10. Speedup of detection and recovery simulator with one classical server. Axes: speedup vs. message population (1 to 1024).]

[Figure 11. Overhead of detection and recovery simulator with one classical server. Axes: deadlock ratio vs. message population (1 to 1024). The legends in these figures distinguish the classical central server, classical secondary servers carrying 10, 50, and 90 percent of the message traffic, and a "classical" FCFS process with deterministic or exponentially distributed service times.]

program described earlier, and the remaining servers used the optimized program. The resulting simulator is not unlike one that would result if one of the servers was (say) a prioritized queue while the others were FCFS.

The speedup and efficiency of the deadlock detection and recovery simulator are shown in figures 10 and 11. When the central server (the process receiving messages from the merge process) has poor lookahead properties, performance is almost as poor as when all of the servers have poor lookahead. When one of the secondary servers (the servers receiving messages from the fork process) has poor lookahead, performance is better, but still well below that of the simulator using only optimized servers. These results are consistent with those obtained using synthetic workloads, and demonstrate that a few processes with poor lookahead can significantly degrade overall performance in the deadlock detection and recovery simulator.

When the classical program was used to implement a secondary server, the routing probabilities in the fork were modified so that 10, 50, and finally 90 percent of the message traffic was routed to the classical server. It is interesting to note that performance improves as more traffic is routed toward the server with poor lookahead. If little traffic is directed toward this server, the simulator deadlocks constantly because the merge process is forced to block: it cannot determine whether or not it is safe to proceed without first receiving a message from this server. Routing additional message traffic toward this server helps the simulator to overcome (somewhat) the server's poor lookahead characteristics.
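As a concrete illustration of this experiment, the sketch below shows one way a fork process might split traffic probabilistically between the classical (poor-lookahead) server and the optimized servers. This is a minimal sketch under stated assumptions: the names (route_message, p_classical, NUM_OPTIMIZED) and the even split among the optimized servers are illustrative, not details taken from the simulator described here.

    #include <stdlib.h>

    #define NUM_OPTIMIZED 3   /* assumed number of optimized secondary servers */

    /* Route one message: index 0 denotes the classical server; p_classical
       is the fraction of traffic sent to it (0.10, 0.50, or 0.90 in the
       experiments above). */
    int route_message(double p_classical)
    {
        double r = (double)rand() / RAND_MAX;  /* uniform draw on [0,1] */
        if (r < p_classical)
            return 0;                          /* classical server */
        return 1 + rand() % NUM_OPTIMIZED;     /* one of the optimized servers */
    }

Varying p_classical then reproduces the 10/50/90 percent routing configurations while leaving the rest of the network model unchanged.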
Speedup and overhead curves for the deadlock avoidance simulator are shown in figures 12 and 13. The deadlock avoidance simulator tends to be more forgiving of processes with poor lookahead. Poor performance results when the central server process has poor lookahead. However, performance begins to approach that of the optimized simulator in some situations where one of the secondary servers has poor lookahead. In particular, good performance is obtained if a significant fraction of the message traffic (50 to 90 percent) is routed around the process with poor lookahead. Unlike the deadlock detection and recovery simulator, null message traffic is generated by the classical server to allow the merge process to proceed. Because processes with poor lookahead tend to buffer messages rather than immediately forwarding them, it is best to minimize the amount of traffic routed to the classical server, since this traffic only detracts from the available parallelism.

7. Communication Network Simulations

Simulations of the message passing subsystem of a hypothetical multicomputer were also performed. The multicomputer is organized in a hypercube topology, and Sullivan's algorithm is used to route messages to their respective destinations [Sul77a]. Like the queueing network and synthetic workload experiments, a fixed message population was used to control the amount of available parallelism. Initially, each message is assigned a destination to which it is to be routed, and a message length. The destination is selected from a uniform distribution (excluding the
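To make the routing model concrete, the sketch below assigns a message a uniformly drawn destination and computes a single hop on a binary hypercube. Dimension-order (e-cube) routing is used here as a stand-in; whether it matches Sullivan's algorithm in detail is not established by the text, and the names (DIM, pick_destination, next_hop), the cube size, and the choice to exclude the message's current node from the destination draw are illustrative assumptions.

    #include <stdlib.h>

    #define DIM   4            /* assumed 16-node hypercube */
    #define NODES (1 << DIM)

    /* Draw a destination uniformly over all nodes other than src. */
    int pick_destination(int src)
    {
        int dst;
        do {
            dst = rand() % NODES;
        } while (dst == src);
        return dst;
    }

    /* One routing step: flip the lowest-order address bit in which the
       current node and the destination differ. */
    int next_hop(int current, int dst)
    {
        int diff = current ^ dst;
        for (int bit = 0; bit < DIM; bit++)
            if (diff & (1 << bit))
                return current ^ (1 << bit);
        return current;        /* message has arrived */
    }

Repeatedly applying next_hop moves a message one dimension closer to its destination per step, so a message crosses at most DIM links, which is the property that makes hypercube routing attractive in this class of simulation.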