A Large-scale Cross-architecture Evaluation of Thread-coarsening

Alberto Magni, Christophe Dubach, Michael F. P. O'Boyle

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


OpenCL has become the de facto data-parallel programming model for parallel devices in today's high-performance supercomputers. OpenCL was designed with the goal of guaranteeing program portability across hardware from different vendors. However, achieving good performance is hard, requiring manual tuning of the program and expert knowledge of each target device.

In this paper we consider a data-parallel compiler transformation, thread-coarsening, and evaluate its effects across a range of devices by developing a source-to-source OpenCL compiler based on LLVM. We thoroughly evaluate this transformation on 17 benchmarks and five platforms with different coarsening parameters, giving over 43,000 different experiments. We achieve speedups over 9x on individual applications and average speedups ranging from 1.15x on the Nvidia Kepler GPU to 1.50x on the AMD Cypress GPU. Finally, we use statistical regression to analyse and explain program performance in terms of hardware-based performance counters.
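The idea behind thread-coarsening can be sketched in plain C: where an OpenCL kernel originally assigns one element of work to each work-item, the coarsened kernel makes each work-item compute several consecutive elements, so the launch needs proportionally fewer work-items. The kernel body (a vector add), the function names, and the fixed coarsening factor below are illustrative assumptions, not taken from the paper, which instead explores a range of coarsening parameters per device.

```c
#include <assert.h>
#include <stddef.h>

/* Original form: work-item i computes exactly one output element
 * (stand-in for an OpenCL kernel body indexed by get_global_id(0)). */
static void vec_add_one(const float *a, const float *b, float *c, size_t i) {
    c[i] = a[i] + b[i];
}

/* Coarsened form with an illustrative factor of 4: work-item wi now
 * computes FACTOR consecutive elements, so only n/FACTOR work-items
 * are launched. The loop over k is what the compiler pass introduces. */
#define FACTOR 4
static void vec_add_coarsened(const float *a, const float *b, float *c,
                              size_t wi) {
    for (size_t k = 0; k < FACTOR; ++k) {
        size_t i = wi * FACTOR + k;
        c[i] = a[i] + b[i];
    }
}
```

Whether coarsening pays off depends on the trade-off it creates: fewer work-items means less parallelism to hide latency, but each work-item does more work and redundant per-item computation can be shared, which is why the paper finds the best factor varies across GPUs.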
Original language: English
Title of host publication: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis
Place of publication: New York, NY, USA
Number of pages: 11
ISBN (Print): 978-1-4503-2378-9
Publication status: Published - 2013


  • Keywords: GPU, OpenCL, regression trees, thread coarsening

