3–4 Nov 2022
Max Planck Institute for Dynamics of Complex Technical Systems
Europe/Berlin timezone

Cancelled - Optimizable Large Eddy Simulation: Coupling PDE solvers and Reinforcement Learning

4 Nov 2022, 14:15
1h
Main/groundfloor-V0.05/2+3 - Prigogine (Max Planck Institute for Dynamics of Complex Technical Systems)
Sandtorstr. 1, 39106 Magdeburg

Talk

Speaker

Andrea Beck (University of Stuttgart)

Description

In recent years, reinforcement learning (RL) has been identified as a powerful optimization method for stochastic control problems. The mathematical underpinning of RL is the Markov Decision Process (MDP), which provides a formal framework for devising policies for optimal decision making under uncertainty. While RL is just one method for finding policies that solve the MDP, it is particularly useful when the reward signals are sampled, evaluative, and sequential, which makes it the machine-learning method of choice for learning strategies in the context of dynamical systems. An early application of RL in fluid mechanics is flow control, although research in this area is still in its infancy.
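
To make the MDP vocabulary above concrete, here is a minimal sketch, not taken from the talk, of a policy-gradient (REINFORCE) update on a toy stochastic control problem; the scalar environment, the Gaussian policy, and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch, not from the talk: the MDP ingredients (state, action,
# reward, transition) and a REINFORCE-style policy-gradient update on a toy
# stochastic control problem. Everything here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def transition(s, a):
    """Toy stochastic transition: the action nudges a scalar state."""
    s_next = s + 0.1 * a + 0.05 * rng.normal()
    return s_next, -s_next**2        # evaluative reward: stay near zero

# Gaussian policy pi(a|s) = N(theta * s, sigma^2) with one parameter theta.
theta, sigma, lr = 0.0, 0.3, 2e-3

for iteration in range(200):
    returns, grads = [], []
    for episode in range(16):                 # sample a batch of episodes
        s, G, grad = 1.0, 0.0, 0.0
        for t in range(20):                   # sequential decisions
            a = theta * s + sigma * rng.normal()
            grad += (a - theta * s) * s / sigma**2   # d/dtheta log pi(a|s)
            s, r = transition(s, a)
            G += r                            # sampled, cumulative reward
        returns.append(G)
        grads.append(grad)
    b = np.mean(returns)                      # baseline reduces variance
    # REINFORCE: ascend E[(G - b) * grad log pi]; here this drives theta
    # negative, i.e. towards actions that counteract the state.
    theta += lr * np.mean([(G - b) * g for G, g in zip(returns, grads)])
```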
In this talk, I will give a brief introduction to RL, in particular to policy gradient methods and their features. I will then discuss the problem of turbulence modelling, in particular for the discretization-filtered Navier-Stokes equations, and highlight the mathematical difficulties. As a possible remedy, I will show how to formulate the task of finding an optimal closure model as an MDP, which we can solve via RL once the reward, state, and action spaces are defined. The environment and its transition function are given by FLEXI, a Discontinuous Galerkin Spectral Element solver for the compressible Navier-Stokes equations. Since such optimization problems are resource-intensive, and since classical PDE schemes and RL methods place disparate demands on hardware and HPC-aware software design, I will briefly discuss the parallelization on hybrid architectures (on HAWK and its AI extension at HLRS) and its potential for training at scale. For this coupled RL-DG framework, I will present how the RL optimization yields discretization-aware model approaches for the LES equations that outperform the current state of the art. While this is not a classical flow control example, it demonstrates the great potential of “solver in the loop” optimization based on RL.
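
The coupling described above can be pictured as an MDP environment whose transition function is a call into the PDE solver. The following is a hypothetical interface sketch, not the actual FLEXI coupling: every solver method name and the spectrum-based reward are placeholder assumptions.

```python
# Hypothetical sketch of the "solver in the loop" coupling: an MDP
# environment whose transition function is the PDE solver's time stepper.
# The solver interface (restart_from_filtered_dns, set_closure_coefficients,
# advance, flow_statistics, energy_spectrum) is assumed for illustration,
# not the actual FLEXI API.
import numpy as np

class LESClosureEnv:
    """State: flow statistics from the LES; action: closure-model
    coefficients; reward: agreement of LES statistics with DNS data."""

    def __init__(self, solver, dns_spectrum, horizon=10):
        self.solver = solver              # handle to the external DG solver
        self.dns_spectrum = dns_spectrum  # precomputed reference statistics
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        self.solver.restart_from_filtered_dns()  # assumed solver call
        return self.solver.flow_statistics()     # state, e.g. modal energies

    def step(self, action):
        # Action: e.g. per-element eddy-viscosity coefficients of the closure.
        self.solver.set_closure_coefficients(action)
        self.solver.advance(n_steps=100)         # transition: run the solver
        state = self.solver.flow_statistics()
        # Reward: negative log-error of the LES energy spectrum vs. DNS.
        les = self.solver.energy_spectrum()
        reward = -np.mean((np.log(les) - np.log(self.dns_spectrum)) ** 2)
        self.t += 1
        return state, reward, self.t >= self.horizon
```

An RL agent then interacts with this environment like with any other MDP via reset() and step(); since every step runs the expensive solver, the hardware and parallelization concerns mentioned in the abstract dominate the training cost.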

Primary author

Andrea Beck (University of Stuttgart)
