Synthesizing Benchmarks for Predictive Modeling

Chris Cummins, Pavlos Petoumenos, Zheng Wang, Hugh Leather

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time, the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space.

We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code.
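A minimal sketch of this idea, assuming a character-level LSTM language model written in PyTorch rather than the authors' actual CLgen pipeline: the tiny inline corpus below stands in for the OpenCL code mined from open source repositories, and a real setup would also reject sampled programs that do not compile or run, as the paper describes.

```python
# Sketch only: train a character-level LSTM on OpenCL-like text and sample
# new kernel-like programs from it. The inline corpus is a placeholder for
# a mined open-source corpus; CLgen itself is not reproduced here.
import torch
import torch.nn as nn

corpus = (
    "kernel void add(global float* a, global float* b, global float* c) {\n"
    "  int i = get_global_id(0);\n  c[i] = a[i] + b[i];\n}\n"
    "kernel void scale(global float* a, float k) {\n"
    "  int i = get_global_id(0);\n  a[i] = a[i] * k;\n}\n"
)

vocab = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(vocab)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus])

class CharLM(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
seq_len = 64

# Next-character prediction training loop (shortened for illustration).
for step in range(200):
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad(); loss.backward(); opt.step()

# Sample a new "program" character by character from the learned model.
model.eval()
out, state = [stoi["k"]], None
with torch.no_grad():
    for _ in range(300):
        logits, state = model(torch.tensor([[out[-1]]]), state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        out.append(torch.multinomial(probs, 1).item())
print("".join(itos[i] for i in out))
```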

We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state-of-the-art predictive model by 1.27×. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30×.
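As a rough illustration of the downstream use of such synthesized benchmarks, the sketch below trains a decision-tree classifier of the kind used by the CPU/GPU device-mapping heuristic the paper evaluates (after Grewe et al.); the feature names and the randomly generated training data are placeholders, not the paper's actual features or measurements.

```python
# Sketch only: a decision-tree predictive model that maps static kernel
# features to the better device (0 = CPU, 1 = GPU). Features and data below
# are illustrative placeholders; in the paper, features come from real and
# CLgen-synthesized kernels and labels from measured runtimes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000  # pretend each row is one (kernel, input size) training point

X = np.column_stack([
    rng.uniform(0.1, 20.0, n),   # compute / memory-access ratio
    rng.uniform(1e3, 1e8, n),    # bytes transferred host <-> device
    rng.uniform(0.0, 1.0, n),    # fraction of coalesced accesses
    rng.integers(32, 1024, n),   # work-group size
])
# Placeholder labels standing in for "which device ran faster".
y = (X[:, 0] * X[:, 2] > 2.0).astype(int)

model = DecisionTreeClassifier(max_depth=5, random_state=0)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
model.fit(X, y)
```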
Original language: English
Title of host publication: 2017 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)
Place of Publication: Austin, TX, USA
Publisher: Institute of Electrical and Electronics Engineers
Pages: 86-99
Number of pages: 14
ISBN (Electronic): 978-1-5090-4931-8
ISBN (Print): 978-1-5090-4932-5
DOIs
Publication status: Published - 28 Feb 2017
Event: International Symposium on Code Generation and Optimization (CGO) 2017 - Austin, Texas, United States
Duration: 4 Feb 2017 - 8 Feb 2017

Publication series

Name: International Symposium on Code Generation and Optimization
Publisher: IEEE
ISSN (Print): 2164-2397

Conference

Conference: International Symposium on Code Generation and Optimization (CGO) 2017
Country/Territory: United States
City: Austin, Texas
Period: 4/02/17 - 8/02/17

Keywords

  • Synthetic program generation
  • OpenCL
  • Benchmarking
  • Deep Learning
  • GPUs
