Tile Low-Rank GEMM Using Batched Operations on GPUs

A. Charara, D.E. Keyes, H. Ltaief
Euro-Par 2018: Parallel Processing, Lecture Notes in Computer Science, vol. 11014, pp. 811–825, Springer, 2018

Keywords

Hierarchical low-rank matrix computations, Matrix multiplication (GEMM), High performance computing, GPU computing, KBLAS

Abstract

Dense general matrix-matrix multiplication (GEMM) is a core operation of the Basic Linear Algebra Subprograms (BLAS) library and therefore often resides at the bottom of the traditional software stack for many scientific applications. In fact, chip manufacturers pay special attention to the GEMM kernel implementation, since this is exactly where most high-performance software libraries extract hardware performance. With the emergence of big data applications involving large data-sparse, hierarchically low-rank matrices, the off-diagonal tiles can be compressed to reduce the algorithmic complexity and the memory footprint. The resulting tile low-rank (TLR) data format is composed of small data structures, which retain the most significant information for each tile. However, to operate on low-rank tiles, a new GEMM operation and its corresponding API have to be designed for GPUs so that the data sparsity structure of the matrix can be exploited while leveraging the underlying TLR compression format. The main idea is to aggregate all operations into a single kernel launch to compensate for their low arithmetic intensities and to mitigate the data transfer overhead on GPUs. The new TLR-GEMM kernel outperforms the cuBLAS dense batched GEMM by more than an order of magnitude and creates new opportunities for TLR advanced algorithms.

DOI: 10.1007/978-3-319-96983-1_57

Code
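
The work builds on the authors' KBLAS library (listed in the keywords above). As a rough illustration of the algebra behind the TLR-GEMM kernel, the sketch below shows why multiplying two compressed tiles is cheap and why batching is needed. It is a minimal sketch, not the KBLAS API: the type TlrTile, both function names, and the uniform-rank simplification in the batched call are hypothetical.

// tlr_gemm_sketch.cu -- illustrative only; names and layout are
// hypothetical and do not reflect the actual KBLAS interface.
#include <cublas_v2.h>

// An nb x nb tile stored in low-rank form: tile ~= U * V^T, where
// U and V are nb x k and k is the numerical rank retained by the
// compression (k << nb for admissible off-diagonal tiles).
typedef struct {
    float *U;  // device pointer, nb x k, column-major
    float *V;  // device pointer, nb x k, column-major
    int    k;  // retained rank of this tile
} TlrTile;

// Product of two low-rank tiles A ~= Au*Av^T and B ~= Bu*Bv^T.
// Associativity gives A*B = Au * (Av^T * Bu) * Bv^T, so the nb x nb
// dense product collapses into two small GEMMs, and the result is
// again low rank: (Au*W) * Bv^T, with factors of width B->k.
void lr_tile_product(cublasHandle_t h, int nb,
                     const TlrTile *A, const TlrTile *B,
                     float *W,     // scratch, A->k x B->k (device)
                     float *Uout)  // output U factor, nb x B->k (device)
{
    const float one = 1.0f, zero = 0.0f;

    // W = Av^T * Bu  (A->k x B->k): a tiny GEMM with low arithmetic
    // intensity -- far too small to saturate a GPU on its own.
    cublasSgemm(h, CUBLAS_OP_T, CUBLAS_OP_N, A->k, B->k, nb,
                &one, A->V, nb, B->U, nb, &zero, W, A->k);

    // Uout = Au * W  (nb x B->k); the product A*B is then Uout * Bv^T.
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, nb, B->k, A->k,
                &one, A->U, nb, W, A->k, &zero, Uout, nb);

    // Accumulating Uout * Bv^T into a low-rank C tile requires a
    // low-rank addition and recompression step, not shown here.
}

// The batching idea: issue the middle products for all tile pairs of
// the matrix in one launch instead of one launch per pair. For
// simplicity this sketch assumes every tile was compressed to the
// same rank k, since cublasSgemmBatched requires uniform sizes; the
// paper's kernels also handle variable ranks per tile.
void lr_middle_products_batched(cublasHandle_t h, int nb, int k,
                                const float *const Av[],  // device array of device pointers
                                const float *const Bu[],
                                float *const W[],
                                int batchCount)
{
    const float one = 1.0f, zero = 0.0f;
    cublasSgemmBatched(h, CUBLAS_OP_T, CUBLAS_OP_N, k, k, nb,
                       &one, Av, nb, Bu, nb, &zero, W, k, batchCount);
}

Aggregating the per-tile products into a single batched launch is what the abstract refers to: one kernel launch over all tiles amortizes launch overhead and compensates for the low arithmetic intensity of each individual small GEMM.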
