Batched Cholesky Factorization for tiny matrices

Abstract: Many linear algebra libraries, such as the Intel MKL, MAGMA, or Eigen, provide fast Cholesky factorization. These libraries are tuned for large matrices but perform poorly on small ones. Even though state-of-the-art studies have begun to take an interest in small matrices, they usually consider matrices with a few hundred rows. Fields like computer vision or high energy physics use tiny matrices. In this paper we show that it is possible to speed up the Cholesky factorization for tiny matrices by grouping them in batches and using highly specialized code. We provide high-level transformations that accelerate the factorization on current Intel SIMD architectures (SSE, AVX2, KNC, AVX512). Combined with SIMD, these transformations yield a speedup from ×13 to ×31 for the whole resolution compared to the naive code on a single-core AVX2 machine, and a speedup from ×15 to ×33 with multithreading compared to the multithreaded naive code.

https://hal.archives-ouvertes.fr/hal-01361204
Contributor: Lionel Lacassagne
Submitted on: Tuesday, September 6, 2016 - 5:53:28 PM
Last modification on: Thursday, March 21, 2019 - 1:03:51 PM
Long-term archiving on: Wednesday, December 7, 2016 - 1:25:31 PM

File

dasip_2016_final_draft.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01361204, version 1

Citation

Florian Lemaitre, Lionel Lacassagne. Batched Cholesky Factorization for tiny matrices. Design and Architectures for Signal and Image Processing (DASIP), ECSI, Oct 2016, Rennes, France. pp. 1-8. ⟨hal-01361204⟩
