
Batched Cholesky Factorization for tiny matrices

Abstract: Many linear algebra libraries, such as Intel MKL, MAGMA, or Eigen, provide fast Cholesky factorization. These libraries are tuned for large matrices but perform poorly on small ones. Although state-of-the-art studies have begun to address small matrices, the matrices they consider typically still have a few hundred rows, whereas fields like computer vision or high-energy physics use tiny matrices. In this paper we show that the Cholesky factorization of tiny matrices can be sped up by grouping them in batches and using highly specialized code. We provide high-level transformations that accelerate the factorization on current Intel SIMD architectures (SSE, AVX2, KNC, AVX512). Combined with SIMD, these transformations yield a speedup of 13 to 31 for the whole resolution over the naive code on a single AVX2 core, and of 15 to 33 with multithreading over the multithreaded naive code.

Cited literature: 13 references
Contributor: Lionel Lacassagne
Submitted on: Tuesday, September 6, 2016 - 5:53:28 PM
Last modification on: Friday, January 8, 2021 - 5:32:08 PM
Long-term archiving on: Wednesday, December 7, 2016 - 1:25:31 PM


Files produced by the author(s)


  • HAL Id: hal-01361204, version 1


Florian Lemaitre, Lionel Lacassagne. Batched Cholesky Factorization for tiny matrices. Design and Architectures for Signal and Image Processing (DASIP), ECSI, Oct 2016, Rennes, France. pp. 1-8. ⟨hal-01361204⟩


