Abstract
With the end of Dennard scaling and Moore's law, it is becoming increasingly difficult to build hardware for emerging applications that meets power and performance targets while remaining flexible and programmable for end users. This is particularly true for domains with frequently changing algorithms and applications involving mixed sparse/dense data structures, such as machine learning and graph analytics. To overcome this, we present a flexible accelerator called Transmuter, in a novel effort to bridge the gap between General-Purpose Processors (GPPs) and Application-Specific Integrated Circuits (ASICs). Transmuter adapts to changing kernel characteristics, such as data reuse and control divergence, through the ability to reconfigure the on-chip memory type, resource sharing and dataflow at run-time within a short latency. This is facilitated by a fabric of lightweight cores connected to a network of reconfigurable caches and crossbars. Transmuter addresses a rapidly growing set of algorithms exhibiting dynamic data movement patterns, irregularity, and sparsity, while delivering GPU-like efficiencies for traditional dense applications. Finally, in order to support programmability and ease of adoption, we prototype a software stack composed of low-level runtime routines and a high-level language library called TransPy, which cater to expert programmers and end-users, respectively.
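The abstract's core idea, selecting an on-chip memory and dataflow configuration per kernel based on traits such as data reuse and control divergence, can be illustrated with a minimal sketch. This is not the Transmuter or TransPy API; the class, function, and thresholds below are invented purely to make the decision concrete.

```python
# Hypothetical sketch (NOT the actual Transmuter/TransPy API): models the idea
# of picking an on-chip memory configuration per kernel launch based on
# coarse kernel traits, as the abstract describes.

from dataclasses import dataclass

@dataclass
class KernelProfile:
    """Illustrative kernel traits that could drive reconfiguration."""
    data_reuse: float          # fraction of accesses hitting recently used data
    control_divergence: float  # fraction of branches diverging across cores

def choose_memory_mode(profile: KernelProfile) -> str:
    """Pick a memory configuration for the next kernel launch.

    High-reuse kernels (e.g. dense GEMM) tend to favor a shared cache;
    low-reuse, irregular kernels (e.g. sparse matrix-vector multiply) tend
    to favor private scratchpads. The 0.5 threshold is invented for
    illustration only.
    """
    if profile.data_reuse >= 0.5:
        return "shared-cache"
    return "private-scratchpad"

dense_gemm = KernelProfile(data_reuse=0.9, control_divergence=0.05)
spmv = KernelProfile(data_reuse=0.2, control_divergence=0.4)

print(choose_memory_mode(dense_gemm))  # shared-cache
print(choose_memory_mode(spmv))        # private-scratchpad
```

In the paper's setting this decision is made in hardware at run-time with short latency; the sketch only conveys the policy shape, not the mechanism.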
Our evaluations with Transmuter demonstrate average throughput (energy-efficiency) improvements of 5.0× (18.4×) and 4.2× (4.0×) over a high-end CPU and GPU, respectively, across a diverse set of kernels predominant in graph analytics, scientific computing and machine learning. Transmuter achieves energy-efficiency gains averaging 3.4× and 2.0× over prior FPGA and CGRA implementations of the same kernels, while remaining on average within 9.3× of state-of-the-art ASICs.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the ACM International Conference on Parallel Architectures and Compilation Techniques |
| Publisher | ACM Association for Computing Machinery |
| Pages | 175-190 |
| Number of pages | 16 |
| ISBN (Electronic) | 9781450380751 |
| DOIs | |
| Publication status | Published - 30 Sept 2020 |
| Event | 29th International Conference on Parallel Architectures and Compilation Techniques, Virtual conference, 3 Oct 2020 → 7 Oct 2020, https://pact20.cc.gatech.edu/ |
Conference
| Conference | 29th International Conference on Parallel Architectures and Compilation Techniques |
|---|---|
| Abbreviated title | PACT 2020 |
| City | Virtual conference |
| Period | 3/10/20 → 7/10/20 |
| Internet address | https://pact20.cc.gatech.edu/ |
Keywords
- reconfigurable architectures
- memory reconfiguration
- dataflow reconfiguration
- hardware acceleration
- general-purpose acceleration
Profiles
- Murray Cole
  - School of Informatics - Personal Chair of Patterned Parallel Computing
  - Institute for Computing Systems Architecture
  - Computer Systems
  - Person: Academic: Research Active