Cooperative Scenarios for Multi-agent Reinforcement Learning in Wireless Edge Caching

Navneet Garg, Tharm Ratnarajah

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

Abstract

Wireless edge caching is an important strategy for meeting the demands of next-generation wireless systems. Recent studies have indicated that, among a network of small base stations (SBSs), joint content placement via reinforcement learning improves cache hit performance, since content requests are correlated across SBSs and files. In this paper, we investigate multi-agent reinforcement learning (MARL) and identify four cooperation scenarios: full cooperation (S1), episodic cooperation (S2), distributed cooperation (S3), and independent operation (no cooperation). MARL algorithms are presented for each scenario. Simulation results for averaged normalized cache hits show that cooperation with one neighbor (S3) significantly improves performance, approaching that of full cooperation (S1). Scenario S2 shows the importance of frequent cooperation when the level of cooperation, which depends on the number of SBSs, is high.
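
The abstract does not spell out the MARL procedure, so the following is only a minimal Python sketch of how cooperative cache placement across SBSs might look: stateless (bandit-style) Q-value estimates per SBS, a common Zipf popularity profile for requests, a unit cache per SBS, and a one-neighbor sharing rule as an assumed stand-in for scenario S3. All names, parameters, and the demand model are illustrative assumptions, not taken from the paper.

    # Illustrative sketch only (not the paper's algorithm): one learning agent per SBS
    # choosing which single file to cache each slot; "cooperation" is modelled as also
    # learning from a neighbour's observed (action, reward) pair.
    import numpy as np

    rng = np.random.default_rng(0)
    N_SBS, N_FILES, T = 4, 20, 5000          # assumed network and catalogue sizes
    zipf_p = 1.0 / np.arange(1, N_FILES + 1) ** 0.8   # assumed Zipf popularity profile
    zipf_p /= zipf_p.sum()

    # Q[s, f]: estimated cache-hit value of SBS s caching file f.
    Q = np.zeros((N_SBS, N_FILES))
    eps, alpha = 0.1, 0.05

    def reward_fn(cached, requests):
        """Reward = 1 if the requested file is cached at that SBS, else 0."""
        return (cached == requests).astype(float)

    hits = 0.0
    for t in range(T):
        # epsilon-greedy content placement per agent
        explore = rng.random(N_SBS) < eps
        action = np.where(explore,
                          rng.integers(0, N_FILES, N_SBS),
                          Q.argmax(axis=1))
        requests = rng.choice(N_FILES, size=N_SBS, p=zipf_p)  # common popularity profile
        reward = reward_fn(action, requests)
        hits += reward.sum()

        # Assumed form of scenario S3 (one-neighbour cooperation): each SBS also
        # updates its estimates using its ring neighbour's action and reward.
        for s in range(N_SBS):
            Q[s, action[s]] += alpha * (reward[s] - Q[s, action[s]])
            nb = (s + 1) % N_SBS
            Q[s, action[nb]] += alpha * (reward[nb] - Q[s, action[nb]])

    print("averaged normalized cache hits:", hits / (T * N_SBS))

Under these assumptions, independent operation corresponds to dropping the neighbour update, while full cooperation (S1) would share all agents' observations every slot; the sketch is meant only to make the cooperation scenarios concrete.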
Original language: English
Title of host publication: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing
Publisher: Institute of Electrical and Electronics Engineers
DOIs
Publication status: Published - 13 May 2021

Publication series

Name: International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Publisher: IEEE
ISSN (Print): 1520-6149
ISSN (Electronic): 2379-190X
