Automatic Skeleton-Driven Memory Affinity for Transactional Worklist Applications

Luís Fabrício Wanderley Góes, Christiane Pousa Ribeiro, Márcio Castro, Jean-François Méhaut, Murray Cole, Marcelo Cintra

Research output: Contribution to journal › Article › peer-review

Abstract

Memory affinity has become a key element in achieving scalable performance on multi-core platforms. Mechanisms such as thread scheduling, page allocation and cache prefetching are commonly employed to enhance memory affinity, which keeps data close to the cores that access it. In particular, software transactional memory (STM) applications exhibit irregular memory access behavior, which makes it harder to determine which data will be needed by each core and when. Additionally, existing STM runtime systems are decoupled from issues such as thread and memory management. In this paper, we thus propose a skeleton-driven mechanism that employs a two-level approach to improve memory affinity in STM applications that fit the worklist pattern. First, it addresses memory affinity at the DRAM level by automatically selecting page allocation policies. Then, it employs data-prefetching helper threads to improve affinity at the cache level. The mechanism relies on a skeleton framework that exploits the application pattern to provide automatic memory page allocation and cache prefetching. Our experimental results on the STAMP benchmark suite show that the proposed mechanism achieves performance improvements of up to 46%, with an average of 11%, over a baseline version on two NUMA multi-core machines.
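The two-level approach can be pictured with a small sketch: the worklist skeleton allocates the shared work pool under a NUMA page allocation policy (DRAM level) and spawns a helper thread that prefetches upcoming work items into cache (cache level). The sketch below is illustrative only, assuming libnuma and GCC builtins; the type worklist_t, the interleaved policy choice, the prefetch distance and the helper functions are hypothetical and are not taken from the paper's framework.

/*
 * Illustrative sketch only (not the paper's actual skeleton or STM code).
 * Build assumption: compiled with gcc/clang and linked with -lnuma -lpthread.
 */
#include <numa.h>         /* libnuma: NUMA-aware allocation                */
#include <pthread.h>      /* helper thread                                 */
#include <sched.h>        /* sched_yield                                   */
#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    long         *items;  /* work items, stored in NUMA-interleaved pages  */
    size_t        n;
    atomic_size_t next;   /* index of the next unclaimed work item         */
} worklist_t;             /* hypothetical type, for illustration only      */

/* DRAM level: allocate the shared work pool with an interleaved page policy
 * so that memory traffic is spread across NUMA nodes (one possible policy). */
static worklist_t *wl_create(size_t n)
{
    worklist_t *wl = numa_alloc_interleaved(sizeof *wl);
    wl->items = numa_alloc_interleaved(n * sizeof *wl->items);
    wl->n = n;
    atomic_init(&wl->next, 0);
    return wl;
}

/* Cache level: helper thread that walks ahead of the workers and issues
 * prefetch hints for the next few items, so they are cache-resident when
 * a worker claims them. Workers would claim items with
 * atomic_fetch_add(&wl->next, 1) and process each inside a transaction;
 * that STM part is omitted here. */
static void *prefetch_helper(void *arg)
{
    worklist_t *wl = arg;
    for (;;) {
        size_t cur = atomic_load(&wl->next);
        if (cur >= wl->n)
            break;
        for (size_t i = cur; i < cur + 8 && i < wl->n; i++)
            __builtin_prefetch(&wl->items[i], 0 /* read */, 1 /* low locality */);
        sched_yield();
    }
    return NULL;
}

static pthread_t wl_start_prefetcher(worklist_t *wl)
{
    pthread_t tid;
    pthread_create(&tid, NULL, prefetch_helper, wl);
    return tid;
}

In the paper, the skeleton selects among several page allocation policies automatically; the interleaved policy shown here is just one option, chosen for illustration.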
Original language: English
Pages (from-to): 365-382
Number of pages: 18
Journal: International Journal of Parallel Programming
Volume: 42
Issue number: 2
DOIs
Publication status: Published - 2014

Keywords

  • Memory affinity
  • Software transactional memory
  • Parallel algorithmic skeleton
  • Multi-core platforms
