Abstract
Benchmarks are needed to test compiler- and language-based approaches to optimizing concurrency. They must be varied, yield reproducible results, and allow comparison between different approaches. In this paper, we propose a framework for generating synthetic benchmarks that aims to attain these goals. Based on generating code from random graphs, our framework operates at a high level of abstraction. We evaluate our benchmarking framework with a use case in which we compare three state-of-the-art systems for optimizing I/O concurrency in microservice-based software architectures. We show how, using our benchmarks, we can reliably compare approaches, and even the same approach implemented in different coding styles.
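The paper itself is not reproduced on this page, so the following is only a hypothetical sketch of what "generating code from random graphs" could look like in practice, not the authors' actual framework. The Python script below (names such as `random_dag`, `emit_benchmark`, and the generated `service_N` coroutines are invented for illustration) builds a random DAG and emits a self-contained asyncio program whose available I/O concurrency is dictated by the graph's shape.

```python
import random


def random_dag(n_nodes: int, edge_prob: float, seed: int = 0):
    """Build a random DAG as an adjacency list. Edges only go from lower to
    higher node indices, which guarantees acyclicity."""
    rng = random.Random(seed)
    return {
        i: [j for j in range(i + 1, n_nodes) if rng.random() < edge_prob]
        for i in range(n_nodes)
    }


def emit_benchmark(dag) -> str:
    """Emit Python source for a synthetic benchmark: each graph node becomes a
    coroutine that performs one simulated I/O call, then awaits its successors
    concurrently. (Simplification: a node shared by several predecessors is
    awaited once per predecessor.)"""
    lines = ["import asyncio, random, time", ""]
    for node, succs in dag.items():
        calls = ", ".join(f"service_{s}()" for s in succs)
        body = f"await asyncio.gather({calls})" if succs else "pass"
        lines += [
            f"async def service_{node}():",
            "    await asyncio.sleep(random.uniform(0.01, 0.05))  # simulated I/O latency",
            f"    {body}",
            "",
        ]
    # Entry points: nodes that no other node points to.
    roots = sorted(set(dag) - {s for succs in dag.values() for s in succs})
    root_calls = ", ".join(f"service_{r}()" for r in roots)
    lines += [
        "async def main():",
        f"    await asyncio.gather({root_calls})",
        "",
        "if __name__ == '__main__':",
        "    start = time.perf_counter()",
        "    asyncio.run(main())",
        "    print(f'elapsed: {time.perf_counter() - start:.3f} s')",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    # Print one generated benchmark; varying the seed yields a family of
    # structurally different but reproducible workloads.
    print(emit_benchmark(random_dag(n_nodes=8, edge_prob=0.3)))
```

Because the graph is produced from a fixed seed, the same benchmark can be regenerated for each system under comparison, which is one plausible way to obtain the varied yet reproducible workloads the abstract calls for.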
Original language | English |
---|---|
Number of pages | 6 |
Publication status | Published - 24 Jan 2018 |
Event | 11th International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2018) - Manchester, United Kingdom. Duration: 24 Jan 2018 → 24 Jan 2018. Conference number: 11. https://research.ac.upc.edu/multiprog/multiprog2018/ |
Workshop
Workshop | 11th International Workshop on Programmability and Architectures for Heterogeneous Multicores (MULTIPROG-2018) |
---|---|
Abbreviated title | MULTIPROG 2018 |
Country/Territory | United Kingdom |
City | Manchester |
Period | 24/01/18 → 24/01/18 |
Internet address | https://research.ac.upc.edu/multiprog/multiprog2018/ |