UCSC-CRL-93-41: SELECTIVE VICTIM CACHING: A METHOD TO IMPROVE THE PERFORMANCE OF DIRECT-MAPPED CACHES

10/01/1994 09:00 AM
Computer Engineering
Although direct-mapped caches suffer from higher miss ratios than set-associative caches, they are attractive for today's high-speed pipelined processors, which require very low access times. Victim caching was proposed by Jouppi (Jouppi-91) as an approach to improve the miss rate of direct-mapped caches without affecting their access time. This approach augments the direct-mapped main cache with a small fully-associative cache, called the victim cache, that stores cache blocks evicted from the main cache as a result of replacements. We propose and evaluate an improvement of this scheme, called 'selective victim caching'. In this scheme, incoming blocks into the first-level cache are placed selectively in the main cache or a small victim cache by means of a prediction scheme based on their past history of use. In addition, interchanges of blocks between the main cache and the victim cache are performed selectively. We show that the scheme results in significant improvements in both miss rate and average memory access time, for small and large caches alike (4 Kbytes -- 128 Kbytes). For example, simulations with 10 instruction traces from the SPEC '92 benchmark suite showed an average improvement of approximately 21 percent in miss rate over simple victim caching for a 16-Kbyte cache with a block size of 32 bytes; the number of blocks interchanged between the main and victim caches was reduced by approximately 70 percent. Implementation alternatives for the scheme in an on-chip processor cache are also described.
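
The placement and interchange decisions described above can be sketched as a small trace-driven simulator. The sketch below uses a hypothetical single "reuse" bit per main-cache block as the predictor, purely to illustrate the idea of selective placement and selective interchange; the report's actual prediction algorithm and its hardware implementation may differ.

    # A minimal sketch of selective victim caching as a trace-driven simulator.
    # The one-bit "reuse" predictor is an assumption for illustration only.

    class SelectiveVictimCache:
        def __init__(self, main_blocks, victim_blocks, block_size):
            self.block_size = block_size
            self.main_blocks = main_blocks
            self.main = [None] * main_blocks       # direct-mapped main cache (one tag per set)
            self.reuse = [False] * main_blocks     # hypothetical per-block prediction bit
            self.victim = []                       # small fully-associative victim cache (FIFO)
            self.victim_blocks = victim_blocks
            self.accesses = 0
            self.misses = 0
            self.interchanges = 0

        def access(self, addr):
            self.accesses += 1
            tag = addr // self.block_size
            index = tag % self.main_blocks

            if self.main[index] == tag:            # hit in the main cache
                self.reuse[index] = True
                return

            if tag in self.victim:                 # hit in the victim cache
                self.victim.remove(tag)
                if not self.reuse[index]:
                    # Selective interchange: swap only when the resident
                    # main-cache block is not predicted to be reused.
                    self.interchanges += 1
                    evicted = self.main[index]
                    self.main[index] = tag
                    self.reuse[index] = True
                    if evicted is not None:
                        self._insert_victim(evicted)
                else:
                    self._insert_victim(tag)       # serve the block without a swap
                return

            self.misses += 1                       # miss in both caches
            if self.reuse[index] and self.main[index] is not None:
                # Selective placement: the incoming block goes to the victim
                # cache when the resident block is predicted to be reused.
                self._insert_victim(tag)
            else:
                evicted = self.main[index]
                self.main[index] = tag
                self.reuse[index] = False
                if evicted is not None:
                    self._insert_victim(evicted)

        def _insert_victim(self, tag):
            if len(self.victim) >= self.victim_blocks:
                self.victim.pop(0)                 # FIFO replacement in the victim cache
            self.victim.append(tag)

Driving the simulator with an address trace and comparing its miss and interchange counts against a plain victim cache (one that always places incoming blocks in the main cache and always interchanges on a victim-cache hit) reproduces the kind of comparison summarized in the abstract, though with this simplified predictor rather than the report's scheme:

    cache = SelectiveVictimCache(main_blocks=512, victim_blocks=16, block_size=32)
    for addr in instruction_trace:                 # instruction_trace: iterable of byte addresses
        cache.access(addr)
    print(cache.misses / cache.accesses)           # miss rate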
