Efficient GPU Implementation of the Linearly Interpolated Bounce-Back Boundary Condition

Abstract: Interpolated bounce-back boundary conditions for the lattice Boltzmann method (LBM) enable the accurate representation of complex geometries. In the present work, we describe an implementation of a linearly interpolated bounce-back (LIBB) boundary condition for graphics processing units (GPUs). To validate our code, we simulated the flow past a sphere in a square channel. At low Reynolds numbers, the results are in good agreement with experimental data. Moreover, we give an estimate of the critical Reynolds number for the transition from steady to periodic flow. Performance recorded on a single-node server with eight GPU-based computing devices reached up to 2.63 × 10⁹ fluid node updates per second. Comparison with a simple bounce-back version of the solver shows that the impact of LIBB on performance is fairly low.
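As a sketch of the kind of scheme the abstract refers to, the widely used linear interpolation of Bouzidi, Firdaouss and Lallemand reconstructs the population reflected off a wall that cuts a lattice link at fraction q of its length. The function below is an illustrative formulation only; the variable names and the exact variant implemented in the paper are assumptions here:

```python
def libb(q, f_i_post, f_i_neigh_post, f_opp_post):
    """Linearly interpolated bounce-back (Bouzidi-type linear scheme).

    q              : fraction of the link from the boundary fluid node to
                     the wall (0 < q <= 1)
    f_i_post       : post-collision population at the boundary fluid node,
                     pointing toward the wall
    f_i_neigh_post : the same population at the next fluid node away from
                     the wall (used when the wall is close, q < 1/2)
    f_opp_post     : opposite-direction population at the boundary node
                     (used when the wall is far, q >= 1/2)
    Returns the incoming (wall-reflected) population after streaming.
    """
    if q < 0.5:
        # Wall close to the node: interpolate between the node and its
        # upstream fluid neighbour before bouncing back.
        return 2.0 * q * f_i_post + (1.0 - 2.0 * q) * f_i_neigh_post
    # Wall far from the node: interpolate between the bounced population
    # and the already-reflected opposite-direction population.
    return f_i_post / (2.0 * q) + (2.0 * q - 1.0) / (2.0 * q) * f_opp_post
```

For q = 1/2 both branches reduce to plain bounce-back (the reflected population equals the post-collision outgoing one), which is consistent with the paper's comparison against a simple bounce-back solver.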

https://hal.archives-ouvertes.fr/hal-00731150
Contributor: Laboratoire Cethil
Submitted on: Monday, June 9, 2014 - 2:49:03 PM

File: ACL31.pdf (produced by the author(s))


Citation

C. Obrecht, F. Kuznik, B. Tourancheau, J.-J. Roux. Efficient GPU Implementation of the Linearly Interpolated Bounce-Back Boundary Condition. Computers and Mathematics with Applications, Elsevier, 2013, 65 (6). ⟨10.1016/j.camwa.2012.05.014⟩. ⟨hal-00731150⟩
