shown for measured programs is logarithmic or better in the size of the problem. The effectiveness of the history cache depends much more on the program itself than on the program size. It follows that the performance of the history cache scales well for increasingly large programs and that even small history caches are effective for large programs.

5.4.6 Overhead Estimate

This section summarizes the time overhead imposed on the computer system by the support for the history. Saving the history influences the speed of the processor, since every Store to memory must be implemented as a Read/Modify/Write operation. To estimate this overhead in systems without a cache, the following assumptions are made in the model [31, 67]: (a) a Read/Modify/Write operation takes twice the time of a Store, (b) Loads from memory represent 20% of executed instructions, and (c) Stores to memory represent 10% of executed instructions.

Under these assumptions, the traffic from the processor to the memory for n executed instructions consists of n Load operations to fetch instructions, 0.2n Load operations to fetch operands, and 0.1n Store operations to store operands. In total, n + 0.2n + 0.1n = 1.3n operations on the main memory are performed by n instructions. The support for the history performs a Read/Modify/Write for each Store, so the memory traffic increases to at most n + 0.2n + 2 × 0.1n = 1.4n operations. The overhead of the history is thus at most 1.4n / 1.3n ≈ 1.077, or around 8%.

Assumption (a) represents the worst case, valid for static RAM. Since in most dynamic RAM a Read/Modify/Write can be executed with around 30% overhead over a Write, the 8% figure is an upper bound and can be significantly reduced in practice. The history cache can be efficiently implemented in systems with a memory cache as well. One of the methods for implementing precise interrupts in pipelined
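The overhead estimate above can be sketched as a short calculation. The function below is illustrative only; the instruction-mix fractions (20% Loads, 10% Stores) and the cost factor for Read/Modify/Write (2x a plain Store in the static-RAM worst case, about 1.3x for dynamic RAM) are the model's assumptions, not measured values.

```python
def memory_traffic(n, load_frac=0.2, store_frac=0.1, rmw_cost=2.0):
    """Return (baseline, with_history) memory-operation counts for n instructions.

    baseline:     n instruction fetches + operand Loads + operand Stores
    with_history: same, but each Store becomes a Read/Modify/Write
                  costing rmw_cost plain-Store operations.
    """
    baseline = n + load_frac * n + store_frac * n
    with_history = n + load_frac * n + rmw_cost * store_frac * n
    return baseline, with_history

# Static-RAM worst case: Read/Modify/Write costs twice a Store.
base, hist = memory_traffic(1_000_000)
print(f"baseline = {base:.0f}, with history = {hist:.0f}")   # 1.3n vs 1.4n
print(f"overhead factor = {hist / base:.3f}")                # ≈ 1.077, around 8%

# Dynamic RAM: Read/Modify/Write adds only ~30% over a Write.
base, hist = memory_traffic(1_000_000, rmw_cost=1.3)
print(f"DRAM overhead factor = {hist / base:.3f}")           # ≈ 1.023
```

This reproduces the 1.3n and 1.4n totals and the ≈ 1.077 upper bound from the text, and shows how the dynamic-RAM assumption shrinks the overhead to roughly 2%.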