Level Graphs: Generating Benchmarks for Concurrency Optimizations in Compilers

Andres Goens, Sebastian Ertel, Justus Adam, Jeronimo Castrillon

Research output: Contribution to conference › Paper › peer-review

Abstract

Benchmarks are needed to test compiler- and language-based approaches to optimizing concurrency. Such benchmarks have to be varied, yield reproducible results, and allow comparison between different approaches. In this paper, we propose a framework for generating synthetic benchmarks that aims to attain these goals. Based on generating code from random graphs, our framework operates at a high level of abstraction. We evaluate our benchmarking framework with a use case, comparing three state-of-the-art systems for optimizing I/O concurrency in microservice-based software architectures. We show how our benchmarks enable reliable comparisons between approaches, and even between different coding styles within the same approach.
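The paper itself is not reproduced on this page, but the core construction named in the abstract, generating benchmark code from random graphs, can be sketched. A level graph is a directed acyclic graph whose nodes are partitioned into levels and whose edges only run from a level to a strictly deeper one, so acyclicity holds by construction. The sketch below is a minimal illustration in Python, not the authors' implementation; the function name, the parameters (edge_prob, seed), and the rule that every non-final node receives at least one outgoing edge are assumptions for illustration only.

```python
import random

def random_level_graph(num_levels, nodes_per_level, edge_prob, seed=None):
    """Generate a random level graph (hypothetical sketch).

    Nodes are grouped into levels; edges only point from a node to
    nodes on strictly deeper levels, so the result is a DAG by
    construction."""
    rng = random.Random(seed)
    # Identify each node by a (level, index) pair.
    levels = [[(lvl, i) for i in range(nodes_per_level)]
              for lvl in range(num_levels)]
    edges = []
    for lvl in range(num_levels - 1):
        # Candidate targets: every node on a strictly deeper level.
        deeper = [n for deeper_level in levels[lvl + 1:] for n in deeper_level]
        for src in levels[lvl]:
            targets = [dst for dst in deeper if rng.random() < edge_prob]
            if not targets:
                # Assumed connectivity rule: every non-final node keeps
                # at least one outgoing edge.
                targets = [rng.choice(deeper)]
            edges.extend((src, dst) for dst in targets)
    return levels, edges

levels, edges = random_level_graph(num_levels=4, nodes_per_level=3,
                                   edge_prob=0.3, seed=42)
print(len(edges), "edges across", len(levels), "levels")
```

Benchmark code would then presumably be emitted by mapping each node to a unit of work (e.g., a computation or an I/O call, such as a microservice request) and each edge to a data dependency; the abstract confirms only the high-level idea of generating code from random graphs, so the concrete mapping here is an assumption.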
Original language: English
Number of pages: 6
Publication status: Published - 24 Jan 2018
Event: 11th International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2018) - Manchester, United Kingdom
Duration: 24 Jan 2018 → 24 Jan 2018
Conference number: 11
https://research.ac.upc.edu/multiprog/multiprog2018/

Workshop

Workshop: 11th International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2018)
Abbreviated title: MULTIPROG 2018
Country/Territory: United Kingdom
City: Manchester
Period: 24/01/18 → 24/01/18
Internet address: https://research.ac.upc.edu/multiprog/multiprog2018/
