A native tensor-vector multiplication algorithm for high performance computing
  • Pedro J. Martinez-Ferrer, Barcelona Supercomputing Center (BSC)
    Corresponding Author: [email protected]
  • A. N. Yzelman
  • Vicenç Beltran


Tensor computations are important mathematical operations for applications that rely on multidimensional data. The tensor-vector multiplication (TVM) is the most memory-bound tensor contraction in this class of operations. This paper proposes an open-source TVM algorithm that is much simpler and more efficient than previous approaches, making it suitable for integration into the most popular BLAS libraries available today. Our algorithm has been written from scratch and features unit-stride memory accesses, cache awareness, mode obliviousness, full vectorization, multi-threading, and NUMA awareness for non-hierarchically stored dense tensors. Numerical experiments are carried out on tensors up to order 10, using various compilers and hardware architectures equipped with traditional DDR and high-bandwidth memory (HBM). For large tensors, the average performance of the TVM ranges between 62% and 76% of the theoretical bandwidth on NUMA systems with DDR memory and remains independent of the contraction mode. On NUMA systems with HBM, the TVM exhibits some mode dependency but reaches performance figures close to peak values. Finally, the higher-order power method is benchmarked with the proposed TVM kernel and delivers on average between 58% and 69% of the theoretical bandwidth for large tensors.
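To make the operation concrete: a mode-n TVM contracts the n-th mode of a dense tensor with a vector, producing a tensor of one order lower. The sketch below is only an illustration of the mathematical operation using NumPy's `tensordot`; it is not the paper's algorithm, which is a hand-written, cache-aware, vectorized kernel.

```python
import numpy as np

def tvm(tensor, vec, mode):
    """Mode-`mode` tensor-vector multiplication: contract the given
    mode of `tensor` with `vec`, reducing the tensor order by one.
    Illustrative only; the paper's kernel is a custom BLAS-style routine."""
    return np.tensordot(tensor, vec, axes=([mode], [0]))

# Example: an order-3 tensor of shape (2, 3, 4) contracted along mode 1
# yields an order-2 result of shape (2, 4).
T = np.arange(24.0).reshape(2, 3, 4)
v = np.ones(3)
Y = tvm(T, v, mode=1)
print(Y.shape)  # (2, 4)
```

Because contracting with a vector of ones simply sums along the chosen mode, `Y` equals `T.sum(axis=1)` here; the choice of `mode` determines the memory-access pattern, which is why mode obliviousness (uniform performance across modes) is a key property of the proposed algorithm.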
Published 01 Dec 2022 in IEEE Transactions on Parallel and Distributed Systems, volume 33, issue 12, pages 3363-3374. DOI: 10.1109/TPDS.2022.3153113