You will throw away items that you have not used for a long time, and keep the ones that you use frequently. An evolution of that algorithm (an improvement over simple LRU) would be to throw away items that have not been used for a long time and are not expensive to replace if you need them after all. Am I right?
Yes, that is correct. LRU is Least Recently Used: the cache element that hasn't been used for the longest time is evicted, on the hunch that it won't be needed again soon.
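For concreteness, here is a minimal LRU cache sketch in Python; the class name, capacity, and example keys are illustrative, not taken from the answer above:

    from collections import OrderedDict

    class LRUCache:
        """Keeps at most `capacity` entries; evicts the least recently used one."""

        def __init__(self, capacity):
            self.capacity = capacity
            self._items = OrderedDict()   # ordered from least to most recently used

        def get(self, key):
            if key not in self._items:
                return None
            self._items.move_to_end(key)  # a hit makes the entry most recently used
            return self._items[key]

        def put(self, key, value):
            if key in self._items:
                self._items.move_to_end(key)
            self._items[key] = value
            if len(self._items) > self.capacity:
                self._items.popitem(last=False)  # drop the least recently used entry

    cache = LRUCache(2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")      # "a" is now the most recently used entry
    cache.put("c", 3)   # evicts "b", the entry that has gone unused the longest

Touching an entry on every hit is what makes it "recently used"; eviction always removes the entry at the stale end of that order.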
A random eviction policy degrades gracefully as the loop gets too big: once a loop's working set slightly exceeds the cache size, LRU evicts exactly the entries it is about to need again, while random eviction still gets some hits.
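A quick way to see that degradation is to simulate both policies on a loop that is one element larger than the cache. This is only a sketch under assumed parameters (9 keys, an 8-entry cache, simple list/set bookkeeping), not a model of a real hardware cache:

    import random

    def miss_rate(accesses, capacity, evict):
        """Replay an access trace and count misses; evict(cache, recency) picks a victim."""
        cache, recency, misses = set(), [], 0
        for key in accesses:
            if key in cache:
                recency.remove(key)
                recency.append(key)          # most recently used entries sit at the end
                continue
            misses += 1
            if len(cache) >= capacity:
                victim = evict(cache, recency)
                cache.remove(victim)
                recency.remove(victim)
            cache.add(key)
            recency.append(key)
        return misses / len(accesses)

    def evict_lru(cache, recency):
        return recency[0]                    # least recently used entry

    def evict_random(cache, recency):
        return random.choice(recency)        # uniform random resident entry

    accesses = list(range(9)) * 1000         # loop over 9 keys...
    print(miss_rate(accesses, 8, evict_lru))     # ...with an 8-entry cache: 1.0, every access misses
    print(miss_rate(accesses, 8, evict_random))  # noticeably lower: random keeps part of the loop resident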
In practice, on real workloads, random tends to do worse than the other algorithms. But what if we take two random choices and just use LRU between those two choices? These are ratios (algorithm miss rate : random miss rate); lower is better. Each cache uses the same policy at all levels. To see if anything odd is going on in any individual benchmark, we can look at the raw results on each sub-benchmark. The L1, L2, and L3 miss rates are all plotted in the same column for each benchmark, below.
As we might expect, LRU does worse than 2-random when the miss rates are high, and better when the miss rates are low.
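For reference, here is a minimal sketch of the 2-random policy described above: pick two resident entries at random and evict whichever was used longer ago. The class name and the logical-clock bookkeeping are assumptions made for illustration, not the implementation benchmarked in the post:

    import random

    class TwoRandomCache:
        """On eviction, samples two resident entries and drops the one used longer ago."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.values = {}
            self.last_used = {}   # key -> logical time of the most recent access
            self.clock = 0

        def get(self, key):
            self.clock += 1
            if key not in self.values:
                return None
            self.last_used[key] = self.clock
            return self.values[key]

        def put(self, key, value):
            self.clock += 1
            if key not in self.values and len(self.values) >= self.capacity:
                # Pick two random resident keys and evict the less recently used one.
                candidates = random.sample(list(self.values), min(2, len(self.values)))
                victim = min(candidates, key=self.last_used.get)
                del self.values[victim]
                del self.last_used[victim]
            self.values[key] = value
            self.last_used[key] = self.clock

The appeal of this design is that it needs only two lookups per eviction instead of a full recency ordering, yet it avoids random eviction's worst habit of throwing out very hot entries.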