The following describes what the entries should be for the tool to work properly.
Ø Select the policies to simulate by checking all of the corresponding boxes.
Ø If LRU-K is selected, enter a numerical value for K in the space provided and press the Enter key. The LRU-K policy keeps track of the last K references to each page; see the LRU-K description below for details, including how this tool breaks ties between candidate pages.
Ø Enter a numerical value for the number of page frames wanted.
*Note: spaces before this value, or before the value entered for K above, could cause errors.
Ø Enter a sequence of page references, with the references separated by spaces.
For example: 4 0 2 1 3 5 4 1 3 221 2 21
Ø Once all values are entered, either press the Step button to step through the results one by one, or press Final to skip the steps and show the final result. The Final button can be pressed at any time to show the final results.
The final results show a complete walkthrough of the cache pages and the Hit Ratio, Number of Misses, and Number of Hits for each policy selected.
Once the final results are displayed, the user can change the values; upon pressing either Step or Final, the tool restarts with the values currently at the inputs.
A Phantom Hit can occur in both the ARC and CAR policies. It is a miss in the physical cache that is nevertheless a hit in the secondary cache, and it causes a change in that policy's allocation of page frames. See the ARC and CAR descriptions, or the references below, for more information.
Caution: if the Step or Final button is pressed without proper inputs, runtime errors may occur.
*Note: The values for the sequence, the number of page frames, and K can be changed at any point, either before starting the tool or after the final results are displayed to the right.
First-In First-Out (FIFO)
FIFO is a very simple replacement policy, also known as round robin. Once the cache is full, it replaces the first page that was placed in the cache with the desired page; the next replacement victim is the second page placed in the cache, and so on.
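The FIFO behavior described above can be sketched in a few lines. This is an illustrative model, not the simulator's own code; the class and method names are invented for the example.

```python
from collections import deque

class FIFOCache:
    """Minimal sketch of FIFO replacement: evict pages in arrival order."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()  # arrival order; leftmost is the oldest page

    def access(self, page):
        if page in self.queue:
            return True  # hit: FIFO does not reorder pages on a hit
        if len(self.queue) == self.capacity:
            self.queue.popleft()  # evict the page that arrived first
        self.queue.append(page)
        return False
```

Note that a hit leaves the queue untouched; only the order of arrival decides the victim, which is exactly why a scan can flush useful pages out of a FIFO cache.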
Random
Random is just that: when the cache is full, it randomly picks which page will be replaced.
Ideal
This is the ideal case, and it is impossible to implement in practice, because the system must look ahead at all the references that will happen in the future. The page chosen for replacement is the one that is either never referenced again, or is referenced again the farthest into the future compared to the others in the cache.
Least Frequently Used (LFU)
LFU keeps a list of all the pages referenced in the cache and how many times each has been referenced in the past. Once the cache becomes full, it replaces the page that has been referenced the fewest times with the new one.
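A minimal sketch of the ideal LFU described above, keeping lifetime reference counts even for pages not currently cached. The names are illustrative, not taken from the simulator:

```python
from collections import Counter

class LFUCache:
    """Sketch of ideal LFU: evict the cached page with the fewest
    lifetime references. Counts persist even after a page is evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = []          # pages currently in the cache
        self.counts = Counter()  # lifetime reference counts per page

    def access(self, page):
        self.counts[page] += 1
        if page in self.pages:
            return True
        if len(self.pages) == self.capacity:
            # Evict the resident page referenced the fewest times.
            victim = min(self.pages, key=lambda p: self.counts[p])
            self.pages.remove(victim)
        self.pages.append(page)
        return False
```

Keeping a counter for every value ever seen is exactly the unbounded bookkeeping that makes ideal LFU impractical, as noted in the CAR discussion later in this document.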
Least Recently Used (LRU)
An ideal LRU policy keeps track of how long ago each page was referenced. When a page needs to be replaced to make room for a new one, the page that was least recently used is replaced; in other words, the page in the cache that hasn't been used for the longest time. On every reference, all of the pages' history counters are incremented by one, and when a hit occurs the history counter for that page is reset to zero.
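Rather than maintaining per-page counters, an equivalent way to model ideal LRU is to keep the pages in use order, which is what this sketch does (the class name and use of `OrderedDict` are implementation choices for the example, not the simulator's):

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of ideal LRU: a hit moves the page to most-recently-used;
    a miss with a full cache evicts the least-recently-used page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # ordered oldest use -> newest use

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)  # refresh recency on a hit
            return True
        if len(self.pages) == self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
        self.pages[page] = None
        return False
```

The reordering on every hit is the per-hit overhead that ARC and CAR, described below, set out to reduce.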
Least Recently Used K (LRU-K)
LRU-K is a practical implementation of the LRU policy. The basic idea behind the LRU-K replacement policy is that it keeps track of the last K references to each page. When the cache becomes full, it replaces the page whose Kth-most-recent reference lies furthest in the past. However, several pages can tie for this maximum, and when this happens several different routes can be taken to decide which one to choose. This tool cycles through the pages so that the same page is not replaced every time this situation arises.
LRU-K does not have a higher hit ratio than the ideal LFU policy in most cases. However, LRU-K is more practical because it limits the number of bits used to monitor how recently a value was last used. One example where it does outperform the other policies is the following:
e.g. 1 2 3 4 5 6 8 4 1 2 13 10 15 2 7 9 5 1 2 3 4 5 12 8 6 4 1 2 3 4 5 12 4 8 7 4 6
number of pages = 10, K = 4
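A sketch of the LRU-K bookkeeping described above. It records the times of each page's last K references and evicts the page whose Kth-most-recent reference is oldest, treating pages with fewer than K references as oldest. The simulator's round-robin tie-breaking is replaced here by an arbitrary choice, and all names are illustrative:

```python
from collections import deque

class LRUKCache:
    """Sketch of LRU-K: evict the page whose Kth-most-recent reference
    lies furthest in the past (its largest backward K-distance)."""
    def __init__(self, capacity, k):
        self.capacity, self.k = capacity, k
        self.history = {}  # page -> deque of its last K reference times
        self.clock = 0

    def access(self, page):
        self.clock += 1
        hit = page in self.history
        if not hit and len(self.history) == self.capacity:
            def kth_time(p):
                # Time of the Kth-most-recent reference; pages referenced
                # fewer than K times count as infinitely old.
                h = self.history[p]
                return h[0] if len(h) == self.k else float("-inf")
            victim = min(self.history, key=kth_time)
            del self.history[victim]
        refs = self.history.setdefault(page, deque(maxlen=self.k))
        refs.append(self.clock)  # maxlen=K keeps only the last K times
        return hit
```

Because only K timestamps are kept per resident page, the state is bounded, which is the practicality advantage over ideal LFU noted above.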
Adaptive Replacement Cache (ARC)
ARC was developed in response to two downfalls of LRU policies: LRU captures only recency and not frequency, and it can easily be "polluted" by a scan. A scan is a sequence of one-time-use page requests, and it leads to lower performance. ARC overcomes these two downfalls by using four doubly linked lists. Lists T1 and T2 hold what is actually in the cache at any given time, while B1 and B2 act as a second level: they contain the pages that have been thrown out of T1 and T2, respectively. The total number of page entries needed to implement these lists is therefore 2*C, where C is the number of pages in the cache. T2 and B2 contain only pages that have been used more than once. Both pairs of lists use LRU replacement, in which the page removed from T2 is placed into B2; T1 and B1 work together similarly, except that when there is a hit in T1 or B1 the page is moved to T2. What makes this policy adaptive is that the sizes of the lists change: the sizes of T1 and T2 always add up to the total number of pages in the cache, but a hit in B1, also known as a Phantom Hit (i.e., not a real hit in the cache), increases the target size of T1 by one and decreases that of T2 by one. In the other direction, a hit in B2 (also a Phantom Hit) increases the target size of T2 by one and decreases that of T1 by one. This allows the cache to adapt toward either more recency or more frequency depending on the workload.
The following sequence shows how ARC outperforms all of the other policies.
e.g. 0 1 2 3 0 4 1 2 3 0 1 2 3 0 1 2 4 3
number of pages = 3
The sequence below demonstrates that ARC doesn't always have the best performance; however, in many cases it reduces the number of cold-start misses.
e.g. 1 2 5 4 3 2 1 5 4 6 8 4 5 2 7 10 1 2 15 15 2 4 6 3 2 21 8 78 6 1 2 3 4 5 6 7 8 9 1 3 22 5 4 9 15 12 9
number of pages = 10
Clock with Adaptive Replacement (CAR)
CAR works in much the same way as ARC. However, CAR aims to avoid two further downfalls of LRU that ARC does not: 1) the overhead of moving a page to the most-recently-used position on every page hit, and 2) serialization, in that while pages are moved to the most-recently-used position the cache is locked so that it does not change during the transition, which causes delays in a multi-threaded environment. In reality CAR only overcomes the first downfall; the second still has a presence. CAR uses two clocks instead of lists, where a clock is a circular buffer. The two circular buffers are similar in nature to T1 and T2, and CAR keeps B1 and B2 lists as well. The main difference is that each page in T1 and T2 carries a reference bit, which is set when a hit occurs. T1 and T2 still vary in size the same way they do in ARC (i.e., Phantom Hits cause the changes). When the cache is full, the corresponding clock hand sweeps around the buffer and replaces the first page whose reference bit is zero, setting reference bits back to zero as it passes pages whose bits are one. The hand continues around until it finds a page with a reference bit of zero to replace.
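The clock scan at the heart of CAR can be illustrated on its own. This sketch models a single clock buffer with reference bits (essentially the classic second-chance algorithm), not the full CAR policy with its B1/B2 lists and adaptation; the names are invented for the example:

```python
class ClockBuffer:
    """Sketch of the clock scan CAR uses within each circular buffer:
    every slot holds a page and a reference bit that grants the page
    one 'second chance' before it is replaced."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []  # list of [page, reference_bit]
        self.hand = 0    # current clock-hand position

    def access(self, page):
        for slot in self.slots:
            if slot[0] == page:
                slot[1] = 1  # hit: set the reference bit, no reordering
                return True
        if len(self.slots) < self.capacity:
            self.slots.append([page, 0])
            return False
        # Miss with a full buffer: sweep the hand, clearing set bits,
        # until a page with reference bit 0 is found and replaced.
        while self.slots[self.hand][1] == 1:
            self.slots[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        self.slots[self.hand] = [page, 0]
        self.hand = (self.hand + 1) % self.capacity
        return False
```

Setting a single bit on a hit is what removes the per-hit reordering overhead: pages stay where they are until the hand reaches them.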
The CAR policy reduces overhead compared to LRU. This cannot be seen in the simulator; however, because CAR uses a circular buffer, it removes the overhead of moving a page to the most-recently-used position on every page hit. The following sequence shows that it can outperform the practical policies in use today (ARC, LRU, LRU-K). CAR does not outperform the ideal LFU in terms of hit ratio, but LFU has much more overhead because it keeps track of the number of times each value is used, for an unbounded number of values.
e.g. 1 2 3 4 5 6 1 2 5 4 8 9 6 2 5 4 10 2 5 4
number of pages = 4, K = 3
E.J. O'Neil, P.E. O'Neil, and G. Weikum, "An Optimality Proof of the LRU-K Page Replacement Algorithm," J. ACM, vol. 46, no. 1, pp. 92-112, 1999.
Nimrod Megiddo and Dharmendra S. Modha, "ARC: A Self-Tuning, Low Overhead Replacement Cache," USENIX Conference on File and Storage Technologies (FAST), San Francisco, CA, March 31, 2003.
Nimrod Megiddo and Dharmendra S. Modha, "One Up on LRU," The Magazine of the USENIX Association, vol. 28, no. 4, pp. 7-11, August 2003.
Nimrod Megiddo and Dharmendra S. Modha, "Outperforming LRU with an Adaptive Replacement Cache Algorithm," IEEE Computer, pp. 58-65, April 2004.
Sorav Bansal and Dharmendra S. Modha, "CAR: Clock with Adaptive Replacement," USENIX Conference on File and Storage Technologies (FAST), San Francisco, CA, March 31-April 2, 2004.