Abstract
Graphics Processing Units (GPUs) are notoriously hard to optimize for manually. What is needed are good automatic code generators and optimizers. Accelerate, Futhark, and Lift have demonstrated that a functional approach is well suited for this challenge. Lift, for instance, uses a system of rewrite rules in a multi-stage approach: algorithmic optimizations are explored first, followed by hardware-specific optimizations such as using shared memory and mapping parallelism.
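The rewrite-rule idea can be illustrated with a toy example. Below is a minimal sketch, in Python, of one algorithmic rewrite (map fusion) on a tiny functional IR; the `Map` node and `fuse_maps` helper are illustrative assumptions, not Lift's actual encoding.

```python
# A toy functional IR with a single algorithmic rewrite rule.
# `Map` and `fuse_maps` are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Map:
    f: Callable[[Any], Any]  # function applied to every element
    arg: Any                 # input expression (array or nested Map)

def fuse_maps(expr: Any) -> Any:
    """Rewrite rule: map f (map g xs)  ==>  map (f . g) xs."""
    if isinstance(expr, Map) and isinstance(expr.arg, Map):
        inner = expr.arg
        composed = lambda x, f=expr.f, g=inner.f: f(g(x))
        return Map(composed, inner.arg)  # intermediate array removed
    return expr  # rule does not apply; expression is unchanged

# Both sides denote the same computation, so applying the rule
# preserves semantics: correctness by construction.
prog = Map(lambda x: x + 1, Map(lambda x: x * 2, [1, 2, 3]))
fused = fuse_maps(prog)
```

Because both sides of the rule compute the same function, every program reachable in this algorithmic phase is correct by construction, which is exactly the property the hardware-specific phase lacks.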
While the algorithmic exploration leads to correct transformed programs by construction, the same is not necessarily true of the latter phase. Exploiting shared memory and mapping parallelism while ensuring correct synchronization is a delicate balancing act, and it is hard to encode in a rewrite system. Currently, Lift relies on heuristics with ad-hoc mechanisms to check for correctness. Although this practical approach eventually produces high-performance code, it is not an ideal state of affairs.
This paper proposes to extract parallelization constraints automatically from a functional IR and to use a solver to identify valid rewritings. Using a convolutional neural network on a mobile GPU as a use case, this approach matches the performance of the ARM Compute Library GEMM convolution and the TVM-generated kernel while consuming between 2.7x and 3.6x less memory on average. Furthermore, it achieves a speedup of 12x over the ARM Compute Library direct convolution implementation.
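The solver-driven mapping can be sketched with an off-the-shelf SMT solver. The following Python fragment uses z3 to pick a parallelism mapping for a hypothetical nest of three maps; the decision variables and constraints here are illustrative assumptions, not the paper's actual constraint system.

```python
# A minimal sketch of solver-driven parallelism mapping, using z3.
# The IR shape (three nested maps) and the constraints are assumed
# for illustration; they are not the paper's actual formulation.
from z3 import Int, Solver, And, sat

# One decision variable per nested map: which OpenCL level it runs at.
# 0 = workgroup, 1 = local thread, 2 = sequential.
levels = [Int(f"map{i}") for i in range(3)]

s = Solver()
for lvl in levels:
    s.add(And(0 <= lvl, lvl <= 2))

# Constraint: an outer map must be mapped to a coarser (smaller-
# numbered) level than any map nested inside it, which also rules
# out mapping two nested maps to the same level.
for outer, inner in zip(levels, levels[1:]):
    s.add(outer < inner)

if s.check() == sat:
    model = s.model()
    print({str(v): model[v] for v in levels})
    # e.g. {'map0': 0, 'map1': 1, 'map2': 2}: workgroups over the
    # outer map, local threads over the middle, sequential inside.
```

A realistic model would add further constraints, for instance tying shared-memory accesses to barrier placement within a workgroup; any assignment satisfying all constraints then corresponds to a valid rewriting, removing the need for ad-hoc correctness checks.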
Original language | English |
---|---|
Title of host publication | Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction |
Editors | Bernhard Egger, Aaron Smith |
Publisher | ACM Association for Computing Machinery |
Pages | 218-230 |
Number of pages | 13 |
ISBN (Print) | 9781450391832 |
DOIs | |
Publication status | Published - 19 Mar 2022 |
Event | ACM SIGPLAN 2022 International Conference on Compiler Construction - Online Conference |
Duration | 2 Apr 2022 → 3 Apr 2022 |
Conference number | 31 |
Conference
Conference | ACM SIGPLAN 2022 International Conference on Compiler Construction |
---|---|
Abbreviated title | CC 2022 |
Period | 2/04/22 → 3/04/22 |
Keywords
- code generation
- convolution
- mobile GPU
- parallelism