Journal article in Physical Review Fluids, 2021

Single-step deep reinforcement learning for open-loop control of laminar and turbulent flows

Hassan Ghraieb
Jonathan Viquerat
Aurélien Larcher
P. Meliga
Elie Hachem

Abstract

This research gauges the ability of deep reinforcement learning (DRL) techniques to assist the optimization and control of fluid mechanical systems. It relies on introducing single-step proximal policy optimization (PPO), a “degenerate” version of the PPO algorithm intended for situations where the optimal policy to be learnt by a neural network does not depend on state, as is notably the case in open-loop control problems. The numerical reward fed to the neural network is computed with an in-house stabilized finite element environment implementing the variational multiscale method. Several prototypical two-dimensional separated flows serve as a testbed. The method is first applied to two relatively simple optimization cases (maximizing the mean lift of a NACA 0012 airfoil and the fluctuating lift of two side-by-side circular cylinders, both in laminar regimes) to assess convergence and accuracy against in-house direct numerical simulation (DNS) data. The potential of single-step PPO for reliable black-box optimization of computational fluid dynamics systems is then showcased on several open-loop control problems whose parameter spaces are large enough to rule out DNS. The approach proves relevant for mapping the best positions at which to place a small control cylinder so as to reduce drag in laminar and turbulent cylinder flows. All results are consistent with in-house data obtained by the adjoint method, and the drag of a square cylinder at Reynolds numbers of a few thousand is reduced by 30%, in good agreement with reference experimental data from the literature. The method also successfully reduces the drag of the fluidic pinball, an equilateral-triangle arrangement of rotating cylinders immersed in a turbulent stream.
Consistent with reference machine learning results from the literature, drag is reduced by almost 60% using a so-called boat-tailing actuation made up of a slowly rotating front cylinder and two downstream cylinders rotating in opposite directions so as to reduce the gap flow between them.
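The core idea described in the abstract (a "degenerate" PPO whose policy is a state-independent Gaussian over open-loop control parameters, updated from batches of single-step episodes) can be sketched as follows. This is a minimal illustration and not the authors' implementation: the quadratic `reward_fn` in the usage example stands in for the flow solver, and the function name, hyperparameters, and sigma-preconditioned update are assumptions made for the sketch.

```python
import numpy as np

def single_step_ppo(reward_fn, dim, iters=200, batch=32, clip=0.2,
                    lr=0.05, inner=10, seed=0):
    """Sketch of single-step PPO: a state-independent Gaussian policy
    N(mu, diag(sigma^2)) over open-loop control parameters, updated with
    the PPO clipped-surrogate objective on batches of sampled actions."""
    rng = np.random.default_rng(seed)
    mu, log_sigma = np.zeros(dim), np.zeros(dim)
    for _ in range(iters):
        old_mu, old_sigma = mu.copy(), np.exp(log_sigma)
        # One single-step "episode" per sample: draw an action, query the
        # (black-box) environment once for its reward.
        actions = old_mu + old_sigma * rng.standard_normal((batch, dim))
        rewards = np.asarray([reward_fn(a) for a in actions])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        logp_old = (-0.5 * ((actions - old_mu) / old_sigma) ** 2
                    - np.log(old_sigma)).sum(axis=1)
        for _ in range(inner):  # a few ascent steps on the surrogate
            sigma = np.exp(log_sigma)
            eps = (actions - mu) / sigma
            logp_new = (-0.5 * eps ** 2 - log_sigma).sum(axis=1)
            ratio = np.exp(logp_new - logp_old)
            clipped = np.clip(ratio, 1.0 - clip, 1.0 + clip)
            # Gradient flows only through the un-clipped branch of min().
            mask = (ratio * adv <= clipped * adv).astype(float)
            w = (mask * adv * ratio)[:, None]
            # Sigma-preconditioned (natural-gradient-style) steps keep the
            # update size proportional to the current exploration scale.
            mu += lr * (w * eps).mean(axis=0) * sigma
            log_sigma += lr * (w * (eps ** 2 - 1.0)).mean(axis=0)
            log_sigma = np.maximum(log_sigma, np.log(0.05))  # keep exploring
    return mu, np.exp(log_sigma)

# Usage on a toy surrogate of the CFD reward (optimum at a = [1, 1]):
mu, sigma = single_step_ppo(lambda a: -np.sum((a - 1.0) ** 2), dim=2)
```

In the paper's setting, each reward evaluation would be one flow simulation, so the batch of single-step episodes is what makes the method a black-box optimizer rather than a closed-loop controller.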
Main file

main6.pdf (3.91 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03027908, version 1 (27-11-2020)
hal-03027908, version 2 (17-11-2021)

Identifiers

  • HAL Id: hal-03027908, version 2

Cite

Hassan Ghraieb, Jonathan Viquerat, Aurélien Larcher, P. Meliga, Elie Hachem. Single-step deep reinforcement learning for open-loop control of laminar and turbulent flows. Physical Review Fluids, 2021. ⟨hal-03027908v2⟩
174 views
374 downloads
