HAShCache: Heterogeneity-Aware Shared DRAMCache for Integrated Heterogeneous Systems

Adarsh Patil, Ramaswamy Govindarajan

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Integrated Heterogeneous System (IHS) processors pack throughput-oriented General-Purpose Graphics Processing Units (GPGPUs) alongside latency-oriented Central Processing Units (CPUs) on the same die, sharing certain resources, e.g., the shared last-level cache, Network-on-Chip (NoC), and the main memory. The demands for memory accesses and other shared resources from GPU cores can exceed those of CPU cores by two to three orders of magnitude. This disparity poses significant problems in exploiting the full potential of these architectures. In this article, we propose adding a large-capacity stacked DRAM, used as a shared last-level cache, for IHS processors. However, adding the DRAMCache naively leaves significant performance on the table due to the disparate demands from CPU and GPU cores for DRAMCache and memory accesses. In particular, the imbalance can significantly reduce the performance benefits that the CPU cores would have otherwise enjoyed with the introduction of the DRAMCache, necessitating a heterogeneity-aware management of this shared resource for improved performance. We propose three simple techniques to enhance the performance of CPU applications while ensuring very little to no performance impact on the GPU. Specifically, we propose (i) PrIS, a prioritization scheme for scheduling CPU requests at the DRAMCache controller; (ii) ByE, a selective and temporal bypassing scheme for CPU requests at the DRAMCache; and (iii) Chaining, an occupancy-controlling mechanism for GPU lines in the DRAMCache through pseudo-associativity. The resulting cache, Heterogeneity-Aware Shared DRAMCache (HAShCache), is heterogeneity-aware and can adapt dynamically to address the inherent disparity of demands in an IHS architecture. Experimental evaluation of the proposed HAShCache results in an average system performance improvement of 41% over a naive DRAMCache and over 200% improvement over a baseline system with no stacked DRAMCache.
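To make the first idea concrete, the sketch below illustrates a generic CPU-over-GPU request prioritization at a cache controller, in the spirit of the PrIS scheme described above. All class and method names here are hypothetical illustrations, not the paper's actual implementation; the only property modeled is that pending CPU requests are served before pending GPU requests, with FCFS order within each class.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Lower value = higher scheduling priority (CPU ahead of GPU).
CPU_PRIO, GPU_PRIO = 0, 1

@dataclass(order=True)
class Request:
    priority: int
    seq: int                          # FCFS tiebreaker within a priority class
    source: str = field(compare=False)  # "cpu" or "gpu"
    addr: int = field(compare=False)

class PrioritizedScheduler:
    """Toy DRAMCache-controller queue that serves CPU requests first."""

    def __init__(self):
        self._queue = []
        self._seq = count()  # monotonically increasing arrival order

    def enqueue(self, source, addr):
        prio = CPU_PRIO if source == "cpu" else GPU_PRIO
        heapq.heappush(self._queue, Request(prio, next(self._seq), source, addr))

    def next_request(self):
        # Pop the highest-priority (then oldest) pending request.
        return heapq.heappop(self._queue) if self._queue else None
```

For example, if two GPU requests arrive before a CPU request, the CPU request is still dispatched first, and the GPU requests then drain in arrival order. A real controller would also have to bound CPU-induced starvation of the GPU, which this sketch deliberately omits.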

Original language: English
Article number: 51
Pages (from-to): 1-26
Journal: ACM Transactions on Architecture and Code Optimization
Issue number: 4
Publication status: Published - 18 Dec 2017

Keywords

  • 3D-stacked memory
  • Cache sharing
  • DRAM cache
  • Integrated CPU-GPU processors
