BenchPress: A Deep Active Benchmark Generator

Foivos Tsimpourlas*, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim Hazelwood, Ajitha Rajan, Hugh Leather

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Finding the right heuristics to optimize code has always been a difficult and largely manual task for compiler engineers. Today this task is near-impossible, as hardware-software complexity has scaled up exponentially. Predictive models for compilers have recently emerged that require little human effort yet far surpass humans at finding near-optimal heuristics. Like any machine learning technique, they are only as good as the data they are trained on, but there is a severe shortage of code for training compilers. Researchers have tried to remedy this with code generation, but their synthetic benchmarks, although numbering in the thousands, are small, repetitive, and poor in features, and therefore ineffective. This indicates that the shortage is one of feature quality more than of corpus size. It is more important than ever to develop a directed program generation approach that produces benchmarks with valuable features for training compiler heuristics.
We develop BenchPress, the first ML benchmark generator for compilers that is steerable within feature-space representations of source code. BenchPress synthesizes compiling functions by adding new code at any point in an empty or existing sequence, jointly observing its left and right context, and achieves an excellent compilation rate. BenchPress steers benchmark generation towards desired target features that have been impossible for state-of-the-art synthesizers (or indeed humans) to reach. It targets the features of Rodinia benchmarks in three different feature spaces better than (a) CLgen, a state-of-the-art ML synthesizer; (b) the CLSmith fuzzer; (c) the SRCIROR mutator; or even (d) human-written code from GitHub. BenchPress is the first generator to search the feature space with active learning in order to generate benchmarks that improve a downstream task. We show that, using BenchPress, Grewe et al.'s CPU vs. GPU heuristic model obtains a higher speedup when trained on BenchPress's benchmarks than on those of other techniques. BenchPress is a powerful code generator: its generated samples compile at a rate of 86%, compared to CLgen's 2.33%. Starting from an empty fixed input, BenchPress produces 10× more unique, compiling OpenCL benchmarks than CLgen, and they are significantly larger and more feature-diverse.
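The feature-space steering described above can be pictured as a search loop: generate candidate programs, map each into a compiler feature space, and keep the candidate closest to a target feature vector. The following is a minimal illustrative sketch, not the paper's implementation: the token-count "feature extractor", the random-insertion "generator" (standing in for BenchPress's bidirectional infilling model), and the Euclidean distance are all hypothetical simplifications.

```python
import random

def extract_features(code: str) -> list[float]:
    # Toy stand-in for a compiler feature space: counts of a few
    # instruction-like tokens in the source string.
    return [float(code.count(tok)) for tok in ("+", "*", "if", "for")]

def distance(a: list[float], b: list[float]) -> float:
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def generate_candidates(seed: str, n: int) -> list[str]:
    # Stand-in for a learned generator: insert one random token at a
    # random position in the seed sequence.
    toks = ["+", "*", "if", "for", ";"]
    cands = []
    for _ in range(n):
        pos = random.randrange(len(seed) + 1)
        cands.append(seed[:pos] + random.choice(toks) + seed[pos:])
    return cands

def steer(seed: str, target: list[float], steps: int = 50) -> str:
    # Greedy steering loop: at each step, keep whichever candidate
    # (or the current best) lies closest to the target features.
    best = seed
    for _ in range(steps):
        cands = generate_candidates(best, 8)
        best = min(cands + [best],
                   key=lambda c: distance(extract_features(c), target))
    return best

random.seed(0)
target = [3.0, 2.0, 1.0, 1.0]
result = steer("", target)
print(distance(extract_features(result), target))
```

Because the loop never discards the current best, the distance to the target is non-increasing; BenchPress's actual active-learning search and feature spaces are, of course, far richer than this toy.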
Original language: English
Title of host publication: PACT '22: Proceedings of the International Conference on Parallel Architectures and Compilation Techniques
Publisher: Association for Computing Machinery (ACM)
Pages: 505-516
Number of pages: 12
ISBN (Electronic): 9781450398688
ISBN (Print): 9781450398688
DOIs
Publication status: Published - 27 Jan 2023
Event: The 31st International Conference on Parallel Architectures and Compilation Techniques, 2022 - Chicago, United States
Duration: 10 Oct 2022 - 12 Oct 2022
Conference number: 31
https://pact22.cs.illinois.edu/

Publication series

Name: ACM Conferences
Publisher: Association for Computing Machinery
ISSN (Electronic): 1089-795X

Conference

Conference: The 31st International Conference on Parallel Architectures and Compilation Techniques, 2022
Abbreviated title: PACT 2022
Country/Territory: United States
City: Chicago
Period: 10/10/22 - 12/10/22