Towards Mapping Lift to Deep Neural Network Accelerators

Naums Mogers, Michel Steuwer, Aaron Smith, Christophe Dubach, Dimitrios Vytiniotis, Ryota Tomioka

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

Deep Neural Network (DNN) accelerators enjoy a rise in popularity due to the ubiquity of DNN applications. Devices that accelerate DNNs – CPUs, GPUs, ASICs, FPGAs – vary significantly, which makes extracting performance from them increasingly difficult. Approaches proposed to address this problem lack either portability or extensibility.

Lift is a novel approach that produces performance-portable GPU and CPU code for linear algebra, sparse matrix and stencil computations. Lift uses rewrite rules to detect patterns and transform them to match the parallelism, memory configuration and instruction set of the target hardware. This paper presents preliminary work on applying Lift to the generation of optimised code for DNN accelerators by mapping expressions to coarse-grained ISA primitives; a discussion of the additions to the IR, type system, code generation and rewrite rules makes the case for the extensibility of Lift.
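To make the rewrite-rule idea concrete, the sketch below illustrates one classic rule of the kind Lift-style compilers apply: map fusion, which rewrites `map(f) ∘ map(g)` into `map(f ∘ g)` and so eliminates an intermediate array. This is a minimal, self-contained Python illustration of the general technique, not actual Lift code; the `Map`/`Compose` node names and the `rewrite_map_fusion` helper are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List


# Hypothetical two-node expression IR: Map applies a function elementwise,
# Compose chains two expressions (outer after inner).
@dataclass
class Map:
    f: Callable


@dataclass
class Compose:
    outer: object
    inner: object


def rewrite_map_fusion(expr):
    """If expr matches map(f) . map(g), rewrite it to map(f . g); else return it unchanged."""
    if (isinstance(expr, Compose)
            and isinstance(expr.outer, Map)
            and isinstance(expr.inner, Map)):
        f, g = expr.outer.f, expr.inner.f
        return Map(lambda x: f(g(x)))
    return expr


def evaluate(expr, xs: List):
    """Reference interpreter used to check that a rewrite preserves semantics."""
    if isinstance(expr, Map):
        return [expr.f(x) for x in xs]
    if isinstance(expr, Compose):
        return evaluate(expr.outer, evaluate(expr.inner, xs))
    raise TypeError(f"unknown node: {expr!r}")


# map(+1) . map(*2) fuses into a single map, with identical results.
prog = Compose(Map(lambda x: x + 1), Map(lambda x: x * 2))
fused = rewrite_map_fusion(prog)
print(evaluate(prog, [1, 2]), evaluate(fused, [1, 2]))  # [3, 5] [3, 5]
```

In a real system such rules are applied repeatedly over a richer IR, with the search guided toward the parallelism and memory hierarchy of the target device; the paper's contribution is extending that IR and rule set toward coarse-grained accelerator ISA primitives.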
Original language: English
Publication status: Published - 21 Jan 2019
Event: 1st HiPEAC Workshop on Emerging Deep Learning Accelerators (EDLA) - Valencia, Spain
Duration: 21 Jan 2019 - 21 Jan 2019
Abbreviated title: HiPEAC EDLA 2019

Keywords / Materials (for Non-textual outputs)

  • deep learning
  • compilation
  • performance portability


