The efficiency of tensor contractions is of great practical importance, yet general-purpose compilers cannot optimize them well enough to come close to the performance of expert-tuned implementations, and all existing approaches that do provide competitive performance rely on optimized external code. We introduce a compiler optimization that reaches the performance of optimized BLAS libraries without the need for an external implementation or automatic tuning. Our approach provides competitive performance across hardware architectures and can be generalized to deliver the same benefits for algebraic path problems. By making fast linear algebra kernels available to everyone, we expect productivity gains wherever optimized libraries are unavailable.
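To make the connection between tensor contractions and BLAS concrete, the following sketch (an illustration only, not the paper's compiler transformation) shows how a contraction over two shared indices can be mapped onto a single matrix-matrix multiplication (GEMM) by flattening the contracted dimensions; all array names and shapes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
I, K, L, J = 4, 3, 5, 6
A = rng.standard_normal((I, K, L))
B = rng.standard_normal((K, L, J))

# Direct tensor contraction: C[i,j] = sum over k and l of A[i,k,l] * B[k,l,j].
C_ref = np.einsum('ikl,klj->ij', A, B)

# The same contraction expressed as one GEMM: fold the contracted indices
# (k, l) into a single axis of size K*L on both operands. No data movement
# is needed here because k and l are already adjacent and in matching order;
# in general, a transpose may be required first.
C_gemm = A.reshape(I, K * L) @ B.reshape(K * L, J)

assert np.allclose(C_ref, C_gemm)
```

This reshape-then-multiply mapping is what lets a tensor contraction inherit the performance of a highly tuned GEMM kernel; the cost of any required transposes is one reason a compiler-level approach that avoids external BLAS calls is attractive.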
Number of pages: 27
Journal: ACM Transactions on Architecture and Code Optimization
Publication status: Published - 4 Sep 2018
Keywords:
- Tensor contractions
- High-performance computing
- Matrix-matrix multiplication