Deep Reinforcement Learning for Adaptive Caching in Hierarchical Content Delivery Networks

Alireza Sadeghi, Gang Wang, Georgios B. Giannakis

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

Caching is envisioned to play a critical role in next-generation content delivery infrastructure, cellular networks, and Internet architectures. By smartly storing the most popular contents at storage-enabled network entities during off-peak demand periods, caching can benefit both the network infrastructure and end users during on-peak periods. In this context, distributing the limited storage capacity across network entities calls for decentralized caching schemes. Many practical caching systems involve a parent caching node connected to multiple leaf nodes that serve user file requests. To model the two-way interactive influence between caching decisions at the parent and leaf nodes, a reinforcement learning (RL) framework is put forth. To handle the large continuous state space, a scalable deep RL approach is pursued. The novel approach relies on a hyper-deep Q-network to learn the Q-function, and thus the optimal caching policy, in an online fashion. Endowing the parent node with the ability to learn and adapt to the unknown policies of the leaf nodes, as well as to the spatio-temporal dynamics of file requests, results in remarkable caching performance, as corroborated through numerical tests.
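To make the abstract's setting concrete, the following is a minimal sketch of RL-based caching at a single parent node. It is not the paper's hyper-deep Q-network: it uses plain Q-learning with linear function approximation, and the catalog size, cache capacity, Zipf-like popularity, and hit-ratio reward are all illustrative assumptions. The state is the observed request-frequency vector, and the agent learns online which files are worth caching.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FILES = 10    # catalog size (illustrative)
CAPACITY = 3    # parent cache holds 3 files (illustrative)
GAMMA = 0.9     # discount factor
ALPHA = 0.05    # learning rate
EPS = 0.1       # epsilon-greedy exploration rate

# Linear Q-function: one weight vector per file, Q(s, a) = w_a . s,
# where s is the normalized request-count vector from the last slot.
weights = np.zeros((N_FILES, N_FILES))

def q_values(state):
    return weights @ state

def choose_cache(state):
    # Epsilon-greedy: occasionally cache a random set of files,
    # otherwise cache the CAPACITY files with the highest Q-values.
    if rng.random() < EPS:
        return rng.choice(N_FILES, size=CAPACITY, replace=False)
    return np.argsort(q_values(state))[-CAPACITY:]

# Synthetic Zipf-like popularity the agent must discover online.
popularity = 1.0 / np.arange(1, N_FILES + 1)
popularity /= popularity.sum()

state = np.full(N_FILES, 1.0 / N_FILES)
for _ in range(3000):
    cached = choose_cache(state)
    requests = rng.multinomial(50, popularity)  # one slot of user requests
    next_state = requests / requests.sum()
    next_q = q_values(next_state)
    for a in cached:
        # Per-file reward: fraction of this slot's requests that file a served.
        r_a = requests[a] / requests.sum()
        td_target = r_a + GAMMA * next_q[a]
        weights[a] += ALPHA * (td_target - weights[a] @ state) * state
    state = next_state

# After training, the greedy cache should concentrate on popular files.
learned = set(int(a) for a in np.argsort(q_values(state))[-CAPACITY:])
print(sorted(learned))
```

In the paper's hierarchical setting the parent additionally reacts to the (unknown) caching policies of the leaf nodes; here that interaction is collapsed into a single node purely to illustrate the state/action/reward loop.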

Original language: English (US)
Article number: 8807260
Pages (from-to): 1024-1033
Number of pages: 10
Journal: IEEE Transactions on Cognitive Communications and Networking
Volume: 5
Issue number: 4
DOI: 10.1109/TCCN.2019.2936193
State: Published - Dec 2019

Bibliographical note

Funding Information:
Manuscript received February 26, 2019; revised June 30, 2019; accepted August 10, 2019. Date of publication August 20, 2019; date of current version December 12, 2019. This work was supported in part by NSF grants 1711471, 1514056, and 1901134. The associate editor coordinating the review of this article and approving it for publication was H. T. Dinh. (Corresponding author: Gang Wang.) The authors are with the Digital Technology Center, University of Minnesota, Minneapolis, MN 55455 USA, and also with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455 USA (e-mail: sadeghi@umn.edu; gangwang@umn.edu; georgios@umn.edu). Digital Object Identifier 10.1109/TCCN.2019.2936193

Keywords

  • Caching
  • deep Q-network
  • deep RL
  • function approximation
  • next-generation networks

