
Choosing a twice more accurate dot product implementation

Abstract: The fused multiply-add (FMA) operation computes a floating-point multiplication followed by an addition or a subtraction as a single floating-point operation. The Intel IA-64, IBM RS/6000 and PowerPC architectures implement this FMA operation. The aim of this talk is to study how the FMA improves the computation of the dot product with classical and compensated algorithms; the latter double the accuracy of the former at the same working precision. Six algorithms are considered, and we present the associated theoretical error bounds. Numerical experiments illustrate the actual efficiency in terms of accuracy and running time. We show that the FMA does not significantly improve the accuracy of the result, whereas it significantly increases the actual speed of the algorithms.
Document type: Conference papers

Contributor: Lip6 Publications
Submitted on: Wednesday, November 21, 2018 - 1:05:07 PM
Last modification on: Friday, January 8, 2021 - 5:40:03 PM
Long-term archiving on: Friday, February 22, 2019 - 2:19:25 PM

HAL Id: hal-01351480, version 1


Stef Graillat, Philippe Langlois, Nicolas Louvet. Choosing a twice more accurate dot product implementation. ICNAAM: International Conference of Numerical Analysis and Applied Mathematics, Sep 2006, Hersonnisos, Crete, Greece. pp.498-499. ⟨hal-01351480⟩
