KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators

A. Abdelfattah, D. Keyes, H. Ltaief
ACM Transactions on Mathematical Software 42(3), 18:1–18:31, 2016

Keywords

Basic linear algebra subroutines, memory-bound kernels, GPU accelerators, CUDA optimizations

Abstract

KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since the performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS runs efficiently on various GPU architectures without code rewriting, while remaining compliant with the standard BLAS API. Another optimization technique ensures coalesced memory access when dealing with submatrices, which is especially important for high-level dense linear algebra algorithms. All KBLAS kernels have been extended to multi-GPU environments, which required the introduction of new APIs. For general matrices, KBLAS is very competitive with existing state-of-the-art kernels and delivers smoother performance across a wide range of matrix dimensions. For symmetric and Hermitian matrices, KBLAS outperforms existing state-of-the-art implementations on all matrix sizes, achieving asymptotic speedups of up to 50% and 60% over the best competitor on single-GPU and multi-GPU systems, respectively. Performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS), starting from version 6.0, for wider dissemination.
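
To make the double-buffering idea concrete, the sketch below alternates two shared-memory buffers holding tiles of the input vector: while one buffer is consumed by the multiply, the next tile is prefetched into the other, overlapping data motion with computation. This is a minimal illustration under simplifying assumptions (column-major storage, one thread per output row, a matrix dimension that is a multiple of the block size, and the hypothetical names dgemv_db and TB); it is not the KBLAS kernel itself.

    // dgemv_db.cu -- minimal sketch of a double-buffered matrix-vector
    // multiply (y = A * x, column-major A). Illustrative names and sizes;
    // this shows the overlap technique, not the actual KBLAS implementation.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define TB 128   // threads per block; one thread computes one row of y

    __global__ void dgemv_db(int n, const double *A, int lda,
                             const double *x, double *y)
    {
        // Two shared-memory buffers for tiles of x: while buffer `buf` is
        // consumed, the next tile is prefetched into the other buffer.
        __shared__ double xs[2][TB];
        int row = blockIdx.x * TB + threadIdx.x;
        double sum = 0.0;
        int buf = 0;

        xs[buf][threadIdx.x] = x[threadIdx.x];        // prefetch first tile
        __syncthreads();

        for (int tile = 0; tile < n / TB; ++tile) {   // assume n % TB == 0
            if (tile + 1 < n / TB)                    // prefetch next tile
                xs[1 - buf][threadIdx.x] = x[(tile + 1) * TB + threadIdx.x];
            for (int j = 0; j < TB; ++j)              // consume current tile
                sum += A[row + (size_t)(tile * TB + j) * lda] * xs[buf][j];
            __syncthreads();                          // tile fully consumed
            buf = 1 - buf;                            // swap buffers
        }
        y[row] = sum;
    }

    int main()
    {
        const int n = 1024;                           // multiple of TB
        double *A, *x, *y;
        cudaMallocManaged(&A, (size_t)n * n * sizeof(double));
        cudaMallocManaged(&x, n * sizeof(double));
        cudaMallocManaged(&y, n * sizeof(double));
        for (size_t i = 0; i < (size_t)n * n; ++i) A[i] = 1.0;
        for (int i = 0; i < n; ++i) x[i] = 1.0;

        dgemv_db<<<n / TB, TB>>>(n, A, n, x, y);
        cudaDeviceSynchronize();
        printf("y[0] = %.1f (expected %d)\n", y[0], n); // each row sums n ones

        cudaFree(A); cudaFree(x); cudaFree(y);
        return 0;
    }

Because the optimized kernels keep the standard BLAS argument list, they are reachable through the usual interfaces; for instance, after the cuBLAS 6.0 integration mentioned above, a symmetric matrix-vector multiply goes through the standard cublasDsymv entry point, whose arguments mirror the legacy BLAS dsymv.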

DOI: 10.1145/2818311
