Extensive research has been carried out to improve cache replacement policies, yet designing an efficient cache replacement policy that incurs low hardware overhead remains a challenging and time-consuming task. Given the surging interest in applying machine learning (ML) to challenging computer architecture design problems, we use ML as an offline tool to design a cost-effective cache replacement policy. We demonstrate that ML is capable of guiding and expediting the generation of a cache replacement policy that is competitive with state-of-the-art hand-crafted policies. In this work, we use Reinforcement Learning (RL) to learn a cache replacement policy. After analyzing the learned model, we are able to focus on a few critical features that might impact system performance. Using the insights provided by RL, we successfully derive a new cache replacement policy, Reinforcement Learned Replacement (RLR). Compared to state-of-the-art policies, RLR has low hardware overhead, and it can be implemented without modifying the processor's control and data path to propagate information such as the program counter. On average, RLR improves single-core and four-core system performance by 3.25% and 4.86%, respectively, over LRU, with an overhead of 16.75KB for a 2MB last-level cache (LLC) and 67KB for an 8MB LLC.
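The abstract contrasts a feature-scored replacement policy with LRU. The following is a minimal illustrative sketch of that contrast, not the paper's actual RLR design: the choice of features (hit count, time since last access) and the weight `0.1` are assumptions made purely for illustration.

```python
from collections import OrderedDict

class LRUCache:
    """Classic LRU replacement: evict the least-recently-used line."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> None, ordered by recency

    def access(self, addr):
        """Return True on hit, False on miss; fill the line on a miss."""
        if addr in self.lines:
            self.lines.move_to_end(addr)  # refresh recency on a hit
            return True
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[addr] = None
        return False

class ScoredCache:
    """Illustrative feature-scored replacement (NOT the paper's RLR):
    evict the line with the lowest score, where the score rewards reuse
    (hit count) and penalizes staleness (time since last access)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.meta = {}   # addr -> {"hits": int, "last": int}
        self.clock = 0   # logical access counter

    def access(self, addr):
        self.clock += 1
        if addr in self.meta:
            m = self.meta[addr]
            m["hits"] += 1
            m["last"] = self.clock
            return True
        if len(self.meta) >= self.capacity:
            # score = hits - 0.1 * age; evict the minimum-scored line
            victim = min(
                self.meta,
                key=lambda a: self.meta[a]["hits"]
                              - 0.1 * (self.clock - self.meta[a]["last"]),
            )
            del self.meta[victim]
        self.meta[addr] = {"hits": 0, "last": self.clock}
        return False
```

In the paper's setting the scoring function is derived offline from a trained RL model rather than hand-picked, which is what keeps the runtime hardware overhead low: only the final lightweight heuristic is implemented in the cache controller.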
|Original language||English (US)|
|Title of host publication||Proceedings - 27th IEEE International Symposium on High Performance Computer Architecture, HPCA 2021|
|Publisher||IEEE Computer Society|
|Number of pages||13|
|State||Published - Feb 2021|
|Event||27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021 - Virtual, Seoul, Korea, Republic of|
Duration: Feb 27 2021 → Mar 1 2021
|Name||Proceedings - International Symposium on High-Performance Computer Architecture|
|Conference||27th Annual IEEE International Symposium on High Performance Computer Architecture, HPCA 2021|
|Country/Territory||Korea, Republic of|
|Period||2/27/21 → 3/1/21|
|Bibliographical note||Publisher Copyright: © 2021 IEEE.|