GAMM Juniors' Summer School on Applied Mathematics and Mechanics:
Virtual event
The aim of this summer school is to study recent developments in the area of learning models from data, which will be presented by three top-level experts in lectures and exercise classes. The combination and interplay of different techniques from physics-based modeling, data-driven modeling, model reduction, system identification and machine learning techniques shall bring together researchers from different disciplines.
In particular, the summer school is tailored for young researchers, i.e., master's students in their final phase, PhD students, and postdoctoral researchers.
Each participant is invited to present a poster on their own research, with an emphasis on how learning techniques are used or are intended to be used.
See the event page for more details.
Introduction to Stochastic Dynamics and Transition Operators
- Markov Property and Transition Kernels
- Perron-Frobenius and Koopman Operator
- Examples (ODEs, Brownian Motion)
- Stationarity, Reversibility
Introduction to traditional (intrusive, projection-based) model reduction
- POD/PCA, greedy
- Galerkin ROMs
- DEIM
Introduction to data-driven modeling
- DMD
- Koopman
- Time-delay embeddings (HAVOK)
We propose a supervised learning methodology for use of the random feature model as a data-driven surrogate for operators mapping between spaces of functions. Although our methodology is quite general, we consider operators defined by partial differential equations (PDEs); here, the inputs and outputs are themselves functions, with the input parameters being functions required to specify a well-posed problem and the outputs being solutions of the problem. Upon discretization, the model inherits several desirable attributes from this function space viewpoint, including mesh-invariant approximation error and the capability to be trained at one mesh resolution and then deployed at different mesh resolutions. We demonstrate the random feature model's ability to cheaply and accurately approximate the nonlinear parameter-to-solution maps of prototypical PDEs arising in physical science and engineering applications, which suggests the applicability of the method as a surrogate for expensive full-order forward models arising in many-query problems.
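The mechanism at the heart of the random feature model can be sketched in a scalar setting: a frozen random feature map with only the linear output weights trained by ridge regression. The snippet below is a minimal illustration on a toy 1D target (the target function, feature count, and all hyperparameters are illustrative assumptions, not the paper's function-space formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(2*pi*x) on [0, 1] from noisy samples.
x = rng.uniform(0.0, 1.0, size=200)
y = np.sin(2 * np.pi * x) + 0.01 * rng.normal(size=x.size)

# Random feature map phi_j(x) = cos(w_j*x + b_j): the weights w, b are
# drawn once and frozen; only the linear readout is trained.
n_features = 100
w = rng.normal(0.0, 10.0, size=n_features)
b = rng.uniform(0.0, 2 * np.pi, size=n_features)

def features(x):
    return np.cos(np.outer(x, w) + b)

# Train the readout by ridge regression.
Phi = features(x)
lam = 1e-3
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ y)

# Evaluate away from the training points.
x_test = np.linspace(0.05, 0.95, 50)
err = np.max(np.abs(features(x_test) @ coef - np.sin(2 * np.pi * x_test)))
```

The same two-stage structure, frozen random features plus a trained linear readout, is what the abstract lifts to maps between function spaces.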
Partial differential equations (PDEs) are commonly used to model complex systems in the applied sciences. Methods for estimating PDE parameters require numerically solving the PDE under thousands of candidate parameter values, so the computational load is high. To make these problems tractable, we use reduced-order models (ROMs) to reduce the cost of the PDE solves. PDE models of fluid flow, or of any other advection-dominated physics, may produce discontinuous solutions. We construct a bijection that aligns features in a fixed reference domain so that snapshots have jump locations at the same coordinates, independent of the parameters. Aligning features in the reference domain accelerates the decay of the N-widths and explicitly accounts for discontinuities in the construction and definition of the ROM. To perform the alignment, we convert the discretized conservation law into a PDE-constrained optimization problem. We then build a projection-based ROM in the reference domain where the discontinuities are aligned. During the offline stage, computationally expensive training tasks compute a representative basis for the system state. During the inexpensive online stage, we solve an optimization problem to compute approximate solutions for an arbitrary parameter; the solution for a new parameter is aligned in the reference domain with the solutions for the parameters encountered during the offline stage.
The formation and oscillation of bubbles is important in cavitation related to turbomachinery, and in biomedical applications, such as contrast-enhanced ultrasound imaging and drug delivery for cancer treatment. There is an extensive literature on the modeling and analysis of bubble oscillations in these settings, allowing for detailed simulations from first principles. However, there are still many open questions that may benefit from machine learning (ML). In this research, we apply data-driven and ML methods to analyzing and controlling the nonlinear dynamics of bubble oscillations. In this context, the Rayleigh-Plesset equation (RPE) is a central object of study [1]. It exhibits richly-structured chaotic solutions when describing an acoustically-driven bubble for certain parameter values [2]. Nonspherical shape modes - which are important for enhancing ultrasound imaging and promoting drug delivery - can be overlaid as perturbations to the basic spherical mode [3, 4, 5], leading to a dynamical system of much higher dimension. Recently, experimental studies of bubble shape modes evolving in acoustic fields have captured large amounts of high-quality time series data [6, 7, 8, 9]. We are therefore interested in discovering reduced-order models of microbubble dynamics from raw experimental data and comparing these to data-driven analyses of first-principle, physics-based models. Additionally, we want to apply our data-driven model to develop a framework for nonlinear control [10] of both individual bubbles and bubbly flows using acoustic forcing. To this end, we have developed a deep neural network (DNN) to forecast time series previously generated numerically from the RPE. We intend to train this on experimental time series and use it to predict the dynamic response of bubbles to changes in acoustic forcing. 
We are also exploring the Singular Value Decomposition (SVD) of Hankel matrices built from these time series to identify a Koopman embedding of the RPE when acoustically-driven. This Koopman embedding provides a coordinate system wherein the nonlinear dynamics of bubble oscillations becomes linear, allowing the application of tools from classical control theory.
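The Hankel/SVD step can be sketched on a toy signal, with a damped cosine standing in for a measured bubble-radius time series (the signal, delay count, and energy threshold below are illustrative assumptions):

```python
import numpy as np

# Toy time series: a damped cosine standing in for a bubble-radius signal.
t = np.linspace(0.0, 10.0, 1000)
r = np.exp(-0.1 * t) * np.cos(3.0 * t)

# Hankel matrix of time-delayed copies of the scalar signal.
n_delays = 100
H = np.column_stack([r[i:i + len(r) - n_delays] for i in range(n_delays)])

# The singular value spectrum indicates how many delay coordinates the
# dynamics occupy; a single damped oscillator needs exactly two.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 0.999)) + 1
```

For this rank-2 toy signal, two delay coordinates capture essentially all of the energy; on experimental data, the singular value decay instead suggests how many Koopman coordinates to retain.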
References
[1] Christopher Earls Brennen. Cavitation and Bubble Dynamics. Cambridge University Press, Cambridge, 2013.
[2] Werner Lauterborn and Engelbert Suchla. Bifurcation Superstructure in a Model of Acoustic Turbulence. Physical Review Letters, 53(24):2304-2307, December 1984. Publisher: American Physical Society.
[3] Michael Calvisi, Olgert Lindau, John Blake, and Andrew Szeri. Shape Stability and Violent Collapse of Microbubbles in Acoustic Traveling Waves. Physics of Fluids, 19, April 2007.
[4] M. S. Plesset. On the Stability of Fluid Flows with Spherical Symmetry. Journal of Applied Physics, 25(1):96-98, January 1954. Publisher: American Institute of Physics.
[5] Matthieu Guedra and Claude Inserra. Bubble shape oscillations of finite amplitude. Journal of Fluid Mechanics, 857:681-703, 2018. Edition: 2018/10/25 Publisher: Cambridge University Press.
[6] Sarah Cleve, Matthieu Guedra, Claude Inserra, Cyril Mauger, and Philippe Blanc-Benon. Surface modes with controlled axisymmetry triggered by bubble coalescence in a high-amplitude acoustic field. Physical Review E, 98, September 2018.
[7] M. Guedra, C. Inserra, B. Gilles, and C. Mauger. Periodic onset of bubble shape instabilities and their influence on the spherical mode. In 2016 IEEE International Ultrasonics Symposium (IUS), pages 1-4, September 2016. Journal Abbreviation: 2016 IEEE International Ultrasonics Symposium (IUS).
[8] Matthieu Guedra, Sarah Cleve, Cyril Mauger, Philippe Blanc-Benon, and Claude Inserra. Dynamics of nonspherical microbubble oscillations above instability threshold. Physical Review E, 96(6):063104, December 2017. Publisher: American Physical Society.
[9] Matthieu Guedra, Sarah Cleve, Cyril Mauger, Claude Inserra, and Philippe Blanc-Benon. Time-resolved dynamics of micrometer-sized bubbles undergoing shape oscillations. The Journal of the Acoustical Society of America, 141:3736-3736, May 2017.
[10] Joshua Proctor, Steven Brunton, and J. Kutz. Generalizing Koopman Theory to Allow for Inputs and Control. SIAM Journal on Applied Dynamical Systems, 17, February 2016.
Harmful algal blooms (HABs) are a growing public health concern both nationally and worldwide. Last year there were 25 major sites of HABs in the state of Utah alone. These blooms are caused in part by excess nutrients (nitrogen and phosphorus) discharged from wastewater treatment plants (WWTPs). To combat the growing prevalence of HABs, the state of Utah is imposing new nitrogen and phosphorus effluent standards for WWTPs. Utah State University is working in collaboration with Central Valley Water Reclamation Facility (CVWRF), the largest municipal WWTP in the state of Utah, treating 60 million gallons per day, and WesTech Engineering, Inc. to develop a novel biological process to help WWTPs meet these new standards. This process is the rotating algae biofilm reactor (RABR), which removes nutrients from wastewater by producing algae biomass that can be used in bioproduct production. The RABR consists of disks rotating through a growth substrate (wastewater) to produce an attached-growth biofilm and remove nutrients from the substrate. This biofilm can be mechanically harvested and converted into value-added bioproducts including biofuels, bioplastics, animal feed, and fertilizers. Extensive research has been conducted on the RABR at laboratory and pilot scales, but in preparation for scale-up and industrial applications, a mathematical model describing the system must be developed. Due to high concentrations of nitrogen and phosphorus in the growth substrate and high summertime light intensity, the system is often light inhibited. An analytical model describing light-limited algae growth has been adapted from work performed by Bara and Bonneford. This model will be augmented using sparse identification of nonlinear dynamics (SINDy), a data-driven approach that allows important growth terms to be identified, applied to data previously collected from the RABR at laboratory and pilot scales along with data currently being collected.
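As a hedged illustration of the SINDy step, the snippet below runs sequentially thresholded least squares on synthetic logistic-growth data (a generic stand-in for a biomass growth curve; the library and threshold are illustrative choices, not the RABR model):

```python
import numpy as np

# Synthetic growth data from logistic dynamics x' = x - x^2 (closed-form
# solution), standing in for a biomass growth curve.
dt = 0.001
t = np.arange(0.0, 5.0, dt)
x = 0.1 * np.exp(t) / (1.0 + 0.1 * (np.exp(t) - 1.0))
dx = np.gradient(x, dt)

# Candidate library of growth terms: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequentially thresholded least squares, the core of SINDy.
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1           # drop negligible coefficients
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
# xi recovers the two active terms, x and -x^2.
```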
We are developing a pneumatic Hybrid-Fluidic Elastomer Actuator (H-FEA) that combines an additively manufactured internal structure with a silicone elastomer. In many soft robotic applications, there is a need to sense the shape of the robot and its collisions with the environment. To address these needs, we are developing an analytical model of the nonlinear kinematics of the H-FEA using internal energy-based models that combine both the linear and nonlinear components of the H-FEA. Using the analytical model, we can determine the shape of the actuator given the internal pressure. To extend this model and detect external perturbations in obstructed environments, we propose to use a probabilistic learning model, trained on a mapping from the input volume to the probability of a perturbation or collision at the state given by the analytical model.
Tendon-Driven Continuum Manipulators (TD-CMs) have gained increasing popularity in various minimally invasive surgical robotic applications. However, the adverse effects of tendon-sheath friction along the transmission path may result in significant non-uniform cable tension and, subsequently, motion losses that affect the deformation behavior of a TD-CM. Most current approaches to modeling friction have been developed based on either simplifying assumptions (e.g., constant-curvature deformation behavior or point-load friction forces) or experimentally tuned lumped models that do not extend to the generic deformation behavior of a TD-CM. We propose a physics-based approach for modeling the deformation behavior of a TD-CM that extends the typical geometrically exact model based on Cosserat rod theory to include the effect of a Curvature-Dependent Distributed Friction Force (CDDF) between the tendon and sheath.
The Dynamic Mode Decomposition (DMD) algorithm was first introduced in the fluid mechanics community for analyzing the behavior of nonlinear systems. DMD processes empirical data and produces approximations of eigenvalues and eigenvectors (“DMD modes”) of the linear Koopman operator that represents the nonlinear dynamics. In fluid dynamics, this approach has been used both to analyze constituent flow patterns in complex flows and to design control and sensing strategies. In this work, we focus on predicting the transition to buffeting of a 2D airfoil in a transonic regime. Buffeting is a vibration that occurs as the angle of attack increases and the interactions between the shock and flow separation induce limit-cycle oscillations. We demonstrate that this bifurcation can be predicted by tracking the eigenvalue with the greatest real part across a range of values of the angle of attack $\alpha$. We evaluate the performance of our approach on a synthetic Hopf-bifurcation flow and on pseudo-time simulations of a standard 2D airfoil. The next stage of this research is to carry out the analysis for time-resolved simulations of the same airfoil.
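The eigenvalue-tracking idea can be sketched with exact DMD on toy snapshot data; a two-dimensional damped oscillation stands in for flow snapshots at one parameter value (all values below are illustrative):

```python
import numpy as np

# Snapshots of a damped oscillation: a stand-in for flow snapshots at one
# value of the parameter alpha.
dt = 0.01
decay, freq = -0.2, 2.0
rot = np.array([[np.cos(freq * dt), -np.sin(freq * dt)],
                [np.sin(freq * dt),  np.cos(freq * dt)]])
A_true = np.exp(decay * dt) * rot            # exact discrete-time dynamics

X = np.empty((2, 500))
X[:, 0] = [1.0, 0.0]
for k in range(499):
    X[:, k + 1] = A_true @ X[:, k]

# Exact DMD: fit X2 ~ A X1 and examine the spectrum of the fitted A.
X1, X2 = X[:, :-1], X[:, 1:]
A_dmd = X2 @ np.linalg.pinv(X1)
mu = np.linalg.eigvals(A_dmd)                # discrete-time eigenvalues
lam = np.log(mu) / dt                        # continuous-time eigenvalues

growth = np.max(lam.real)                    # > 0 would signal instability
```

Repeating this fit over a sweep of $\alpha$ and watching `growth` cross zero is the bifurcation test described above.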
Within each animal cell is a complex infrastructure of microtubules and motor proteins that translate energy from ATP cycles into a complex fluid flow. Although this process is vital for intracellular transport of nutrients, a quantitative mathematical model for this system remains elusive. Recent experimental work has produced high-resolution video of this system and made possible attempts to derive a model directly from data. In this poster, I will discuss the application of a data-driven model discovery method called “PDE-Find” to this complex system. I will describe the accuracy and robustness of PDE-Find for the simplified task of reconstructing a proposed model from simulation data and discuss the corresponding challenges. I will also propose methodologies for overcoming those challenges and future steps to utilize the experimental data.
The mass of a nucleus is one of its most fundamental quantities. It dictates the stability of the nucleus, the types of decays and nuclear reactions it can undergo, and much more. Yet after decades of experimental effort, the masses of thousands of exotic isotopes remain unmeasured: they cannot be produced in the laboratory, so we have to rely on theoretical models. However, more than a dozen different physics-based models predict very different values for extrapolated nuclear masses because of the different assumptions, missing physics, etc., in each of them. We use a data-driven approach to predict the masses of these exotic isotopes by modeling the residuals, i.e., the differences between experimental and theoretically predicted masses, thereby accounting for the missing pieces in the theoretical models. In particular, we use Bayesian Gaussian process regression, which also provides credibility intervals on our predictions and helps with uncertainty quantification. We further use Bayesian model averaging to combine the predictive power of different models and to account for model-selection uncertainty.
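A minimal Gaussian process regression on synthetic residuals shows how the credibility intervals widen as one extrapolates away from measured isotopes (the RBF kernel, hyperparameters, and data below are illustrative assumptions, not the actual mass-model setup):

```python
import numpy as np

# Minimal GP regression with an RBF kernel on toy "residuals"
# (theory-minus-experiment discrepancies) along a 1D proxy coordinate.
def rbf(a, b, length=5.0, amp=1.0):
    return amp * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

rng = np.random.default_rng(1)
n_train = np.arange(0.0, 40.0, 2.0)              # "measured" isotopes
resid = np.sin(n_train / 5.0) + 0.05 * rng.normal(size=n_train.size)

noise = 0.05**2
K = rbf(n_train, n_train) + noise * np.eye(n_train.size)
alpha = np.linalg.solve(K, resid)

n_test = np.array([41.0, 45.0])                  # extrapolation targets
k_star = rbf(n_test, n_train)
mean = k_star @ alpha                            # predictive mean
var = rbf(n_test, n_test).diagonal() - np.einsum(
    "ij,jk,ik->i", k_star, np.linalg.inv(K), k_star)
ci = 1.96 * np.sqrt(var)                         # 95% credibility interval
```

The predictive variance grows with distance from the training set, which is exactly the uncertainty-quantification behavior the abstract relies on.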
Cognitive impairment is one of the most prominent symptoms of age-related diseases such as Alzheimer’s disease or Lewy body disease. Therefore, it is not surprising that cognitive impairment is one of the variables usually measured in longitudinal studies of Alzheimer’s disease. However, if we look naively at the progression of cognitive impairment in a patient, we cannot obtain enough information about the progression of Alzheimer’s disease. The reason for this mismatch is that cognitive impairment is an overlapping symptom caused by multiple chronic diseases and modulated by intrinsic and extrinsic variables.
Each age-related chronic disease, such as Alzheimer’s disease, is characterized by a set of biological processes that can be measured by biomarkers. In recent years, multiple machine learning models have been proposed to predict cognitive impairment given the measurements of biomarkers of different chronic diseases. Nevertheless, measuring biomarkers for a large cohort in a longitudinal study is more complicated and more expensive than measuring cognitive impairment. This results in sparser biomarker measurements for each patient. Therefore, there is a need for a model that reconstructs the biomarker progression of different chronic diseases from sparse measurements of biomarkers and the progression of cognitive measurements.
Our project seeks a simple, interpretable model that can reconstruct the progression of different chronic diseases by leveraging the mechanistic knowledge available in the literature.
We present a new neural-network architecture, called the Cholesky-factored symmetric positive definite neural network (SPD-NN), for modeling constitutive relations in computational mechanics. Instead of directly predicting the stress of the material, the SPD-NN trains a neural network to predict the Cholesky factor of the tangent stiffness matrix, based on which the stress is calculated in the incremental form. As a result of this special structure, SPD-NN weakly imposes convexity on the strain energy function, satisfies time consistency for path-dependent materials, and therefore improves numerical stability, especially when the SPD-NN is used in finite element simulations. Depending on the types of available data, we propose two training methods, namely direct training for strain and stress pairs and indirect training for loads and displacement pairs. We demonstrate the effectiveness of SPD-NN on hyperelastic, elasto-plastic, and multiscale fiber-reinforced plate problems from solid mechanics. The generality and robustness of SPD-NN make it a promising tool for a wide range of constitutive modeling applications.
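The structural idea, predicting a Cholesky factor so that the assembled tangent stiffness is SPD by construction, can be sketched as follows (the random vector stands in for raw network outputs, and `spd_from_raw` is a hypothetical helper, not the paper's code):

```python
import numpy as np

# Whatever the network emits, assembling H = L @ L.T + eps*I from a
# predicted lower-triangular factor L yields an SPD matrix.
def spd_from_raw(raw, n, eps=1e-8):
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = raw                  # fill the lower triangle
    return L @ L.T + eps * np.eye(n)

rng = np.random.default_rng(0)
n = 3
raw = rng.normal(size=n * (n + 1) // 2)          # stands in for NN outputs
H = spd_from_raw(raw, n)

eigs = np.linalg.eigvalsh(H)                     # all strictly positive
```

Positive definiteness holds for any raw output, which is what lets the SPD-NN weakly impose convexity on the strain energy regardless of training error.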
The ability of sparse symbolic machine learning techniques to discover governing equations from data [1], [2] has opened up many opportunities in fluid mechanics. The equations solved in fluid mechanics are conservation of mass, momentum, and energy, together with closure models. Closure models arise from averaging the conservation equations: averaging introduces additional terms, which require additional equations, termed closure models, to solve. It is in discovering the equations governing the closure models that sparse symbolic machine learning is most useful. Closure models are not based upon strict physical laws but on experimental data and engineering judgement, which makes them ideal candidates for machine learning techniques. Moreover, sparse symbolic machine learning has an advantage over previous neural network approaches [3] in that its output can be easily integrated into existing computer codes, with an understanding of how it will extrapolate to untrained conditions. There has already been considerable attention to using sparse symbolic learning for turbulence closure models [4]–[6]; similar approaches are possible for two-phase flow.
Two-phase flows are typically modeled using the two-fluid model [7], [8], in which each phase is modeled as a continuum with its own set of conservation equations (mass, momentum, and energy), and the phases are coupled by interfacial transfer terms. The interfacial transfer terms require closure models. The current state of the art for nuclear reactor system codes (RELAP, TRACE, CATHARE) is for the interfacial transfer terms to be correlated in terms of the flow regime. In two-phase flow, the flow regime describes the topology of the flow, i.e., whether the gas forms bubbles inside the liquid (bubbly flow) or whether the liquid is confined to the walls and to droplets inside the gas core (annular flow). The flow regime transitions themselves are also empirically correlated. It has been noted that the interfacial transfer closures can be written in terms of $a_i$, the interfacial area per unit volume, as (interfacial transfer) = $a_i \times$ (driving force) [9]. Moreover, the interfacial area changes dramatically with the flow regime, so if one could write an equation for the interfacial area, the empirical flow regime correlations could be dispensed with. Various derivations of an interfacial area equation have been performed [10]–[12], the simplest of which [10] is:
$ \frac{\partial a_i}{\partial t} + \nabla \cdot \left( a_i v_i \right) = \sum_{j=1}^{4} \phi_j + \phi_{ph} $
In this equation, $\phi_j$ represents the interfacial area rate of change due to coalescence or breakup, and $\phi_{ph}$ represents the rate of change due to phase change. Attempts to validate this equation against high fidelity experimental data using the current state of the art for models of $\phi$ have noted that there are substantial issues once the flow regime moves beyond bubbly flow [13].
Our aim is to use sparse machine learning to derive the governing equation for the rate of interfacial area change. There are additional challenges when attempting to learn multiphase, as opposed to single-phase, flow closure models. The rate of interfacial area change must be learned from time-resolved planar measurements, which are the state of the art for experimental gas-liquid measurement techniques [14]. This is because multiphase DNS (direct numerical simulation) and LES (large eddy simulation) methods are not developed enough to apply machine learning techniques directly to numerical data, as was done to learn single-phase turbulence closures [4]–[6]. However, the benefit of an accurate interfacial area transport equation is the potential to dramatically improve nuclear reactor system codes and thereby nuclear reactor safety.
[1] M. Schmidt and H. Lipson, “Distilling free-form natural laws from experimental data,” Science, vol. 324, no. 5923, pp. 81–85, 2009, doi: 10.1126/science.1165893.
[2] S. L. Brunton, J. L. Proctor, J. N. Kutz, and W. Bialek, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proc. Natl. Acad. Sci. U. S. A., vol. 113, no. 15, pp. 3932–3937, 2016, doi: 10.1073/pnas.1517384113.
[3] K. Duraisamy, G. Iaccarino, and H. Xiao, “Turbulence Modeling in the Age of Data,” Annu. Rev. Fluid Mech., vol. 51, no. 1, pp. 357–377, 2019, doi: 10.1146/annurev-fluid-010518-040547.
[4] S. Beetham and J. Capecelatro, “Formulating turbulence closures using sparse regression with embedded form invariance,” pp. 1–34, 2020, [Online]. Available: http://arxiv.org/abs/2003.12884.
[5] M. Schmelzer, R. P. Dwight, and P. Cinnella, “Machine Learning of Algebraic Stress Models using Deterministic Symbolic Regression,” 2019, [Online]. Available: http://arxiv.org/abs/1905.07510.
[6] M. Schmelzer, R. Dwight, and P. Cinnella, “Data-driven deterministic symbolic regression of nonlinear stress-strain relation for RANS turbulence modelling,” 2018 Fluid Dyn. Conf., 2018, doi: 10.2514/6.2018-2900.
[7] J. E. Drew and S. L. Passman, Theory of Multicomponent Fluids, vol. 59. 1998.
[8] M. Ishii and T. Hibiki, Thermo-Fluid Dynamics of Two-Phase Flow. 2006.
[9] G. Kocamustafaogullari and M. Ishii, “Foundation of the interfacial area transport equation and its closure relations,” Int. J. Heat Mass Transf., vol. 38, no. 3, pp. 481–493, 1995, doi: 10.1016/0017-9310(94)00183-V.
[10] X. Y. Fu and M. Ishii, “Two-group interfacial area transport in vertical air-water flow - I. Mechanistic model,” Nucl. Eng. Des., vol. 219, no. 2, pp. 143–168, 2003, doi: 10.1016/S0029-5493(02)00285-6.
[11] C. Morel, N. Goreaud, and J. M. Delhaye, “The local volumetric interfacial area transport equation: Derivation and physical significance,” Int. J. Multiph. Flow, vol. 25, no. 6–7, pp. 1099–1128, 1999, doi: 10.1016/S0301-9322(99)00040-3.
[12] D. A. Drew, “Evolution of geometric statistics,” SIAM J. Appl. Math., vol. 50, no. 3, pp. 649–666, 1990, doi: 10.1137/0150038.
[13] A. J. Dave, A. Manera, M. Beyer, D. Lucas, and M. Bernard, “Evaluation of two-group interfacial area transport equation model for vertical small diameter pipes against high-resolution experimental data,” Chem. Eng. Sci., vol. 162, pp. 175–191, 2017, doi: 10.1016/j.ces.2017.01.001.
[14] H. M. Prasser, M. Misawa, and I. Tiseanu, “Comparison between wire-mesh sensor and ultra-fast X-ray tomograph for an air-water flow in a vertical pipe,” Flow Meas. Instrum., vol. 16, no. 2–3, pp. 73–83, 2005, doi: 10.1016/j.flowmeasinst.2005.02.003.
In the context of multi-material lightweight assemblies, structural joints such as adhesives and bolts should be taken into account in FE models for a reliable representation of reality. The goal of this research work is to identify the parameters of the joint models by exploiting the potential of Virtual Sensing techniques.
Parameter identification can be achieved via the minimization of the error between model results and experimental results. In the current research work, a parametric reduced model and a set of measurements are combined in a stochastic estimator, such as an Extended Kalman filter, which tracks the dynamic states and parameters of the assembly under investigation.
In the next steps, Machine Learning approaches will be investigated, both for benchmarking against the current methods and with a view to integrating the two. Machine Learning will be used to define new surrogate models able to mimic the relation between the physics-inspired model parameters (to be identified later) and product performance. According to this scheme, the physics-inspired models will be used to produce a set of training data for the Machine Learning algorithm, and the resulting surrogate model will be used in the above-mentioned parameter identification schemes.
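As a hedged sketch of the estimator described above, the snippet below runs a joint state/parameter extended Kalman filter on a scalar decay model (the model, noise levels, and filter tuning are illustrative assumptions, not the joint or assembly models of this work):

```python
import numpy as np

# Joint state/parameter EKF on the scalar decay model x' = -k*x.
# Augmented state z = [x, k]; only x is measured.
rng = np.random.default_rng(0)
dt, k_true, steps = 0.01, 2.0, 1000

x, meas = 1.0, []
for _ in range(steps):                    # simulate noisy measurements
    x += dt * (-k_true * x)
    meas.append(x + 0.001 * rng.normal())

z = np.array([1.0, 0.5])                  # initial guess (k deliberately wrong)
P = np.diag([0.1, 1.0])                   # state covariance
Q = np.diag([1e-8, 1e-8])                 # process noise
R = 0.001**2                              # measurement noise
H = np.array([[1.0, 0.0]])                # observation matrix

for y in meas:
    # Predict: Euler step of the model plus linearized covariance update.
    F = np.array([[1.0 - dt * z[1], -dt * z[0]],
                  [0.0, 1.0]])
    z = np.array([z[0] + dt * (-z[1] * z[0]), z[1]])
    P = F @ P @ F.T + Q
    # Update with the measurement.
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

k_est = z[1]                              # converges toward k_true = 2.0
```

Treating the unknown parameter as an extra state with trivial dynamics, as done here, is the standard augmentation that lets the filter track states and parameters simultaneously.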
The need to solve discrete ill-posed problems arises in many areas of science and engineering. Solutions of these problems, if they exist, are very sensitive to perturbations in the available data. Regularization replaces the original problem by a nearby regularized problem whose solution is less sensitive to the error in the data. The regularized problem contains a fidelity term and a regularization term. Recently, the use of a $p$-norm to measure the fidelity term and a $q$-norm to measure the regularization term has received considerable attention. The balance between these terms is determined by a regularization parameter. In many applications, such as image restoration, the desired solution is known to live in a convex set, such as the nonnegative orthant. It is natural to require the computed solution of the regularized problem to satisfy the same constraint(s). This paper shows that this procedure induces a regularization method and describes a modulus-based iterative method for computing a constrained approximate solution of a smoothed version of the regularized problem. Convergence of the iterative method is shown, and numerical examples that illustrate the performance of the proposed method are presented.
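As a simpler stand-in for the modulus-based scheme (not the method of the paper), a projected Landweber iteration illustrates how a nonnegativity constraint can be enforced inside an iterative regularization method for the $p = q = 2$ case:

```python
import numpy as np

# Projected Landweber iteration for min ||A x - b||^2 + lam*||x||^2
# subject to x >= 0 (the p = q = 2 case with a nonnegativity constraint).
rng = np.random.default_rng(0)
m, n = 30, 20
A = rng.normal(size=(m, n))
x_true = np.maximum(rng.normal(size=n), 0.0)     # nonnegative ground truth
b = A @ x_true + 0.01 * rng.normal(size=m)

lam = 1e-3
step = 1.0 / (np.linalg.norm(A, 2)**2 + lam)     # 1/L for this gradient
x = np.zeros(n)
for _ in range(5000):
    grad = A.T @ (A @ x - b) + lam * x
    x = np.maximum(x - step * grad, 0.0)         # project onto x >= 0

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Every iterate satisfies the constraint by construction, which is the behavior the constrained regularization method above guarantees in a more general $p$, $q$ setting.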
Recent years have seen a massive explosion of datasets across all areas of science, engineering, technology, medicine, and the social sciences. The central questions are: How do we optimally learn from data through the lens of models? And how do we do so taking into account uncertainty in both data and models? These questions can be mathematically framed as Bayesian inverse problems. While powerful and sophisticated approaches have been developed to tackle these problems, such methods are often challenging to implement and typically require first and second order derivatives that are not always available in existing computational models. In this paper, we present an extensible software framework MUQ-hIPPYlib that overcomes this hurdle by providing unprecedented access to state-of-the-art algorithms for deterministic and Bayesian inverse problems. MUQ provides a spectrum of powerful Bayesian inversion models and algorithms, but expects forward models to come equipped with gradients/Hessians to permit large-scale solution. hIPPYlib implements powerful large-scale gradient/Hessian-based solvers in an environment that can automatically generate needed derivatives, but it lacks full Bayesian capabilities. By integrating these two libraries, we created a robust, scalable, and efficient software framework that realizes the benefits of each to tackle complex large-scale Bayesian inverse problems across a broad spectrum of scientific and engineering areas.
Due to the notable potentials of additive manufacturing (AM), the interest in AM has risen significantly across several industries during the past decade. One of the key factors governing the mechanical properties of an additively-manufactured part is the solidification microstructure. However, the spatial and temporal resolution required for the simulation of the solidification process is several orders of magnitude smaller than the dimensions of the final part imposing infeasibly high computational expenses on the simulations. Model order reduction can potentially help reduce this computational burden and allow for the development of microstructure-aware models at part scale. We have developed a projection-based model reduction for a one-dimensional solidification model consisting of the phase-field equation for the order parameter coupled with the heat equation. The inherent nonlinearity of the full model is accounted for by lifting transformations to expose a polynomial structure where the operators of the ROM for the lifted model are learned non-intrusively using the operator inference method (OpInf). Owing to the non-intrusive nature of OpInf, the lifted form need not be discretized and solved, and its ROM operators are learned from snapshots of the original full model.
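Lifting and non-intrusive operator learning can be sketched on a scalar toy problem: the cubic ODE $\dot{x} = x - x^3$ becomes quadratic in the lifted variables $(x, w)$ with $w = x^2$, and the quadratic operators are then fit by least squares on snapshot data (an illustrative stand-in for the phase-field/heat system; plain unregularized operator inference):

```python
import numpy as np

# Full model: the cubic ODE x' = x - x^3, solved by forward Euler.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
x = np.empty_like(t)
x[0] = 0.1
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (x[k] - x[k]**3)

# Lifting: with w = x^2 the dynamics are quadratic,
#   x' = x - x*w,   w' = 2*w - 2*w^2.
w = x**2
Z = np.column_stack([x, w])
dZ = np.gradient(Z, dt, axis=0)

# Operator inference: regress dZ onto linear and quadratic monomials of
# the lifted state. Note that x*x duplicates w exactly, so this least
# squares problem is rank-deficient; we therefore check the fit of the
# learned model rather than individual operator entries.
quad = np.column_stack([x * x, x * w, w * w])
D = np.column_stack([Z, quad])
O = np.linalg.lstsq(D, dZ, rcond=None)[0]        # learned operators

rel = np.linalg.norm(D @ O - dZ) / np.linalg.norm(dZ)
```

Only snapshots of the original model are needed, which mirrors the non-intrusive nature of OpInf emphasized in the abstract.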
Intense lasers have the ability to accelerate ions to high energies over very short distances, but the beam quality generated through these methods is not yet ready for many applications. We developed a framework using evolutionary algorithms to automatically run thousands of one-dimensional (1D) particle-in-cell simulations to optimize the conversion from laser energy to ion energy. The “optimal” 1D target found with this approach also outperformed conventional targets in more-realistic fully-three-dimensional (3D) simulations. We plan to extend this approach to develop synthetic datasets and use machine learning techniques to help control ion beam properties and to better understand the complex relationship between computationally-inexpensive reduced-dimensionality (1D/2D) simulations with more realistic, but computationally-expensive 3D simulations and experiments.
Recently, the advent of deep learning has spurred interest in the development of physics-informed neural networks (PINNs) for efficiently solving partial differential equations (PDEs), particularly in a parametric setting. Among the different classes of deep neural networks, the convolutional neural network (CNN) has attracted increasing attention in the scientific machine learning community, since the parameter-sharing feature of CNNs enables efficient learning for problems with large-scale spatiotemporal fields. However, one of the biggest challenges is that CNNs can only handle regular geometries in an image-like format (i.e., rectangular domains with uniform grids). In this paper, we propose a novel physics-constrained CNN learning architecture, aiming to learn solutions of parametric PDEs on irregular domains without any labeled data. To leverage powerful classic CNN backbones, an elliptic coordinate mapping is introduced to enable coordinate transforms between the irregular physical domain and a regular reference domain. The proposed method has been assessed by solving a number of PDEs on irregular domains, including heat equations and steady Navier-Stokes equations with parameterized boundary conditions and varying geometries. Moreover, the proposed method has been compared against the state-of-the-art PINN with a fully-connected neural network (FC-NN) formulation. The numerical results demonstrate the effectiveness of the proposed approach, which exhibits notable superiority over the FC-NN-based PINN in terms of efficiency and accuracy.
Phase field models, in particular, the Allen-Cahn type and Cahn-Hilliard type equations, have been widely used to investigate interfacial dynamic problems. Designing accurate, efficient, and stable numerical algorithms for solving the phase field models has been an active field for decades. We focus on using the deep neural network to design an automatic numerical solver for the Allen-Cahn and Cahn-Hilliard equations by proposing an improved physics-informed neural network (PINN). Though the PINN has been embraced to investigate many differential equation problems, we find that a direct application of the PINN to phase-field equations will not provide accurate solutions. Thus, we propose various techniques that add to the approximation power of the PINN. As a major contribution of this paper, we propose to embrace the adaptive idea in both space and time and introduce various sampling strategies, such that we are able to improve the efficiency and accuracy of the PINN in solving phase field equations. In addition, the improved PINN has no restriction on the explicit form of the PDEs, making it applicable to a wider class of PDE problems and shedding light on numerical approximations of other PDEs in general.
System identification from noisy data is challenging in many science and engineering fields. In the current work, we present an approach to system identification based on sparse Bayesian learning. The key idea is to determine the sparse relevant weights from a constructed library by learning from noisy data. A sparsity-promoting prior is used to regularize the learning process. Furthermore, to identify a parsimonious system, sequential threshold training is incorporated into the sparse Bayesian learning; this is especially helpful when the training data are very noisy. Finally, we extend our approach to learn parametric systems by using group sparsity. Several explicit and implicit ODE/PDE systems are used to demonstrate the effectiveness of this method.
We present a weak formulation and discretization of the system discovery problem from noisy measurement data. This method of learning differential equations from data replaces point-wise derivative approximations with local integration and improves on the standard SINDy algorithm by orders of magnitude. Linear transformations associated with local integration are used to construct covariance matrices which enforce discovery of parsimonious best-fit models by accurately scaling the error in the residuals during sequentially-thresholded generalized least squares. In the absence of noise, this so-called Weak SINDy framework (WSINDy) is capable of recovering the correct nonlinearities from synthetic data with error in the recovered coefficients falling below the tolerance of the data simulation scheme. As demonstrated by adding white noise directly to the state variables, WSINDy also naturally accounts for measurement noise, with errors in the recovered coefficients scaling proportionally to the signal-to-noise ratio, while significantly reducing the required number of data points and the size of linear systems involved. Altogether, WSINDy combines the ease of implementation of the SINDy algorithm with the natural noise-reduction of integration to arrive at a more robust and user-friendly method of sparse recovery that correctly identifies systems in both small-noise and large-noise regimes. Examples include nonlinear ODEs (Van der Pol Oscillator, Lorenz system) and PDEs (Allen-Cahn, Kuramoto-Sivashinsky, Reaction-Diffusion systems) with sharp transitions and/or chaotic behavior.
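The sequentially thresholded least-squares regression at the core of SINDy (which WSINDy also uses, with the weak-form linear system in place of pointwise derivative approximations) can be sketched on a toy scalar ODE; the library and synthetic data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: x' = -2 x + 0.5 x^3, derivatives observed with small noise.
x = np.linspace(-2, 2, 200)
dx = -2 * x + 0.5 * x ** 3 + 0.01 * rng.standard_normal(x.size)

# Library of candidate terms: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

def stls(Theta, dx, threshold=0.1, iters=10):
    """Sequentially thresholded least squares (the core of SINDy)."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold   # prune small coefficients
        xi[small] = 0.0
        big = ~small
        if big.any():                    # refit on the surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

xi = stls(Theta, dx)  # expect roughly [0, -2, 0, 0.5]
```

The weak formulation replaces `dx` and `Theta` by test-function-integrated quantities, which is what makes the recovery robust to much larger noise levels.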
Stochastic Calculus
- Stochastic Integral and SDEs
- Itô’s Formula
- The Generator of an SDE
- Examples
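A minimal Euler–Maruyama discretization of an SDE illustrates the material above; the Ornstein–Uhlenbeck process is chosen as the example because its stationary distribution is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ornstein-Uhlenbeck process: dX = -theta * X dt + sigma dW,
# with stationary distribution N(0, sigma^2 / (2 * theta)).
theta, sigma = 1.0, 0.5
dt, n_steps = 1e-2, 100_000

dW = np.sqrt(dt) * rng.standard_normal(n_steps - 1)  # Brownian increments
x = np.empty(n_steps)
x[0] = 1.0
for k in range(n_steps - 1):
    # Euler-Maruyama step: drift plus diffusion increment.
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * dW[k]

# Discard a burn-in period, then compare with the stationary variance.
stationary_var = sigma ** 2 / (2 * theta)  # = 0.125
empirical_var = x[n_steps // 4:].var()
```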
Introduction to DMD/Koopman for ROMs and data-driven modeling
- DMD integration into Galerkin ROMs
- Koopman/DMD models for ROMs
(E)DMD for Stochastic Dynamics
- Basic EDMD
- Variational Principle for Reversible Dynamics
- Generator EDMD
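The basic EDMD step outlined above can be sketched as follows: lift snapshot pairs through a dictionary of observables, solve a least-squares problem for the matrix approximation of the Koopman operator, and read off its spectrum. The toy linear system is an assumption chosen so that the recovered eigenvalues are known exactly.

```python
import numpy as np

# Snapshot pairs from a linear map x_{k+1} = A x_k. For a linear system,
# a dictionary containing the constant and the state itself spans a
# Koopman-invariant subspace, so EDMD recovers the true spectrum.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
rng = np.random.default_rng(3)
X = rng.standard_normal((2, 500))
Y = A @ X

def dictionary(X):
    # Observables: [1, x1, x2].
    return np.vstack([np.ones(X.shape[1]), X])

PsiX, PsiY = dictionary(X), dictionary(Y)

# EDMD: K = PsiY @ pinv(PsiX) approximates the Koopman operator
# restricted to the span of the dictionary.
K = PsiY @ np.linalg.pinv(PsiX)
eigvals = np.sort(np.linalg.eigvals(K).real)  # expect [0.8, 0.9, 1.0]
```

For nonlinear dynamics the same code applies with a richer dictionary (polynomials, radial basis functions), at the price of projection error when the dictionary span is not Koopman-invariant.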
Introduction to neural networks for DMD & Koopman approximations
- NN for learning coordinate transformations
- Koopman reductions for linear ROMs
Metabolism plays a key role in a multitude of different biological processes ranging from food production and biofuel production to human health. Predicting the metabolism of a living organism, however, can be a challenging task. Genome-scale models (GEMs) can provide this predictive power by accounting for all metabolic reactions in an organism's genome. So far, GEMs have been used to model metabolism through optimization approaches, but these approaches show limitations. We propose a new approach based on a combination of Markov Chain Monte Carlo and Bayesian inference that provides all metabolic states compatible with the available experimental data. We discuss efficient sampling techniques which can leverage high performance computing to efficiently handle the associated computational burden. These techniques are based on Hamiltonian Monte Carlo methods that leverage artificial neural networks for efficient gradient calculation. The corresponding numerical results for case studies related to predictive modeling of metabolism are presented and analyzed. This technique represents a first step towards modeling microbial communities in the future.
Interpolatory methods offer a powerful framework for generating reduced‑order models for non‑parametric or parametric systems with time‑varying inputs. Choosing the interpolation points adaptively remains an area of active interest. A greedy framework has been introduced in [1, 2] to choose interpolation points automatically using a posteriori error estimators. Nevertheless, when the parameter range is large or if the parameter space dimension is larger than two, the greedy algorithm may take considerable time, since the training set needs to include a considerable number of parameters.
In this work, we introduce an adaptive training technique by learning an efficient a posteriori error estimator over the parameter domain. A fast learning process is created by interpolating the error estimator using radial basis functions over a fine parameter training set, representing the whole parameter domain. The error estimator is evaluated only on a coarse training set consisting of only a few parameter samples. The algorithm is an extension of the work in [3] to interpolatory model order reduction in the frequency domain. Possibilities exist to use other sophisticated machine‑learning techniques, such as artificial neural networks, to learn the error estimator based on data at a few parameter samples. However, we do not pursue this in the present work. Selected numerical examples demonstrate the efficiency of the proposed approach.
References
[1] Feng, L., Antoulas, A.C., Benner, P.: Some a posteriori error bounds for reduced‑order modelling of (non‑)parametrized linear systems. ESAIM: Math. Model. Numer. Anal. 51(6), 2127–2158 (2017).
[2] Feng, L., Benner, P.: A new error estimator for reduced‑order modeling of linear parametric systems. IEEE Trans. Microw. Theory Techn. 67(12), 4848–4859 (2019).
[3] Chellappa, S., Feng, L., Benner, P.: An adaptive sampling approach for the reduced basis method. e‑prints 1910.00298, arXiv (2019). URL https://arxiv.org/abs/1910.00298. Math.NA.
[4] Chellappa, S., Feng, L., de la Rubia, V., Benner, P.: Adaptive interpolatory MOR by learning the error estimator in the parameter domain. e‑prints 2003.02569, arXiv (2020). URL https://arxiv.org/abs/2003.02569. Math.NA.
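The radial-basis interpolation step described above can be sketched as follows; the scalar `error_estimator` is a hypothetical stand-in for the a posteriori error estimator, which in practice is expensive and therefore evaluated only on the coarse training set.

```python
import numpy as np

def error_estimator(mu):
    # Hypothetical stand-in for an a posteriori error estimator evaluated
    # at parameter mu (normally an expensive computation involving the ROM).
    return np.exp(-5 * (mu - 0.4) ** 2) + 0.1 * mu

# Coarse training set: the estimator is actually evaluated only here.
mu_coarse = np.linspace(0, 1, 9)
f_coarse = error_estimator(mu_coarse)

# Gaussian RBF interpolation: solve Phi w = f for the weights.
eps = 5.0  # shape parameter
Phi = np.exp(-(eps * (mu_coarse[:, None] - mu_coarse[None, :])) ** 2)
w = np.linalg.solve(Phi, f_coarse)

def rbf_surrogate(mu):
    # Cheap surrogate, evaluable over the whole (fine) parameter domain.
    phi = np.exp(-(eps * (np.atleast_1d(mu)[:, None] - mu_coarse[None, :])) ** 2)
    return phi @ w

mu_fine = np.linspace(0, 1, 101)
surrogate_vals = rbf_surrogate(mu_fine)
```

In the adaptive algorithm, the surrogate is then maximized over the fine set to pick the next parameter sample for the greedy loop.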
Artificial neural network for bifurcating phenomena modelled by nonlinear parametrized PDEs
The aim of this work is to show the applicability of Reduced Basis (RB) model reduction and Artificial Neural Networks (ANN) to parametrized Partial Differential Equations (PDEs) in nonlinear systems undergoing bifurcations.
Bifurcation analysis, i.e., following the different bifurcating branches due to the non-uniqueness of the solution, as well as determining the bifurcation points themselves, are complex computational tasks. Reduced Order Models (ROM) and Machine Learning (ML) techniques can potentially reduce the computational burden by several orders of magnitude.
Models describing bifurcating phenomena arise in several fields with interesting applications, from continuum to quantum mechanics, passing through fluid dynamics [4,5,6].
Following the approach in [1, 2], we analyzed different bifurcating test cases where both physical and geometrical parameters were considered. In particular, we studied the Navier-Stokes equations for a viscous, steady and incompressible flow in a planar straight channel with a narrow inlet.
We reconstructed the branching solutions and explored a new empirical strategy in order to employ the RB and ANN for an efficient detection of the critical points.
All the simulations were performed within the open source software FEniCS and RBniCS [7] for the ROM, while we chose PyTorch to construct the neural network.
References
[1] M. Guo and J. S. Hesthaven. Data-driven reduced order modeling for time-dependent problems. Computer methods in applied mechanics and engineering, 345:75–99, 2019.
[2] J. S. Hesthaven and S. Ubbiali. Non-intrusive reduced order modeling of nonlinear problems using neural networks. Journal of Computational Physics, 363:55–78, 2018.
[3] F. Pichi, F. Ballarin, J. S. Hesthaven, and G. Rozza. Artificial neural network for bifurcating phenomena modelled by nonlinear parametrized PDEs. In preparation, 2020.
[4] F. Pichi, A. Quaini, and G. Rozza. A reduced order technique to study bifurcating phenomena: application to the Gross-Pitaevskii equation. ArXiv preprint https://arxiv.org/abs/1912.06089, 2019.
[5] F. Pichi and G. Rozza. Reduced basis approaches for parametrized bifurcation problems held by non-linear Von Kármán equations. Journal of Scientific Computing, 339:667–672, 2019.
[6] M. Pintore, F. Pichi, M. Hess, G. Rozza, and C. Canuto. Efficient computation of bifurcation diagrams with a deflated approach to reduced basis spectral element method. ArXiv preprint arXiv:1907.07082, 2019.
[7] RBniCS. http://mathlab.sissa.it/rbnics.
In the context of industrial applications of machine learning, object detection is a challenging problem, as can be seen in [1]. A particular application within Electrolux Professional, a leading company in the field of professional appliances, is the recognition and localization of different types of objects.
A possible approach to object detection problems is represented by Artificial Neural Networks (ANN), and in particular by Convolutional Neural Networks (CNN). In order to solve this problem, we need to handle two different tasks: classification and localization. The particular architecture of existing CNNs is useful for extracting the low-level features of the objects (i.e. edges, lines, …), but it is not enough to also cope with the problem of finding their position in a picture. Therefore, some extra layers must be added on top of a chosen CNN in order to detect the high-level features, such as the position.
We have decided to study mainly two state-of-the-art meta-architectures: Faster Region Based Convolutional Neural Network (Faster R-CNN) [3] and Single Shot Detector (SSD) [2], because the first is very accurate, whereas the second is very fast. Since our algorithm has to be embedded in a professional appliance, the time needed to detect a new object has to be as short as possible (ideally real-time). Hence, in this work we make a comparison between the two architectures in terms of speed and accuracy, proposing at the same time a new strategy for the construction of the training batches using an unsupervised approach.
References
[1] Goodfellow, I., Bengio Y. and Courville A., 2016, Deep Learning. MIT Press.
[2] Liu W., Anguelov D., Erhan D., Szegedy C., Reed S., Fu C.-Y. and C. Berg A., 2016, SSD: Single Shot MultiBox Detector.
[3] Ren S., He K., Girshick R., and Sun J., 2015, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems.
Identifying dynamical systems from measured data is an important step towards accurate modeling and control. Model order reduction (MOR) constitutes a class of methods that can be used to replace large, complex models of dynamical processes with simpler, smaller models. The reduced-order models (ROMs) can then be used for further tasks such as control, design, and simulation. One typical approach for projection-based model reduction for both linear and non-linear dynamical systems is by employing interpolation. Projection-based methods require access to the internal dynamics of the system, which is not always available. The aim here is to compute ROMs without having access to the internal dynamics, by means of a realization-independent method. The proposed methodology falls into the broad category of data-driven approaches.
The method under consideration, which will be referred to as the Loewner framework (LF), was originally introduced by the third author. Based on data, LF identifies state-space models in a direct way. In the original setup, the framework relies on compressing the full data set to extract dominant features and, at the same time, to eliminate the inherent redundancies. In the broader class of nonlinear control systems, the LF has already been extended to certain classes with a special structure, such as quadratic or bilinear systems. As an application of the aforementioned method, we consider the well-studied Lorenz attractor, in comparison with other model-learning techniques.
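A minimal sketch of the Loewner matrix construction at the heart of the framework, using a toy order-2 transfer function (an assumption for illustration) whose McMillan degree is revealed by the numerical rank:

```python
import numpy as np

def H(s):
    # Transfer function of an (assumed unknown) order-2 system:
    # two stable poles at s = -1 and s = -2.
    return 1.0 / (s + 1.0) + 1.0 / (s + 2.0)

# Sample along the imaginary axis and split into left/right point sets.
pts = 1j * np.linspace(0.1, 10.0, 12)
left, right = pts[0::2], pts[1::2]   # mu_i and lambda_j, disjoint

# Loewner matrix: L_ij = (H(mu_i) - H(lambda_j)) / (mu_i - lambda_j).
L = (H(left)[:, None] - H(right)[None, :]) / (left[:, None] - right[None, :])

# The numerical rank of L equals the order of the underlying system (2);
# a rank-revealing SVD also provides the projection for the ROM.
svals = np.linalg.svd(L, compute_uv=False)
order = int(np.sum(svals / svals[0] > 1e-8))
```

In the full framework, the shifted Loewner matrix and the left/right data vectors complete a realization (E, A, B, C) of the identified state-space model.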
Computer simulations of natural and physical systems are subject to various sources of uncertainty, necessitating uncertainty quantification and sensitivity analysis methods in the development of mathematical models. As the complexity of mathematical models grows, non-intrusive methods attract attention for the identification and characterisation of uncertainties in model outputs. In this setting, Global Sensitivity Analysis (GSA) enables a holistic approach to apportioning output uncertainty to uncertain model inputs. The advantage of GSA over earlier local sensitivity analysis methods is the computation of sensitivity indices for wider classes of mathematical models, considering nonlinear statistical and structural dependencies among inputs and outputs [1]. However, GSA requires the estimation of conditional variances based on Monte Carlo simulations, which might be computationally prohibitive for physical models of high complexity. Addressing this issue, metamodel-based GSA methods have been developed that utilise data-driven models as surrogate response surfaces to accelerate GSA [2]. The authors aim to incorporate recent advances in machine learning for system identification and model reduction, with implications for the computational efficiency of GSA.
References:
1. Iooss, B. and Lemaître, P. (2014) ‘A review on global sensitivity analysis methods’, Uncertainty management in Simulation-Optimization of Complex Systems: Algorithms and Applications, 30, p. 23. doi: 10.1007/978-1-4899-7547-8_5.
2. Gratiet, L. Le, Marelli, S. and Sudret, B. (2016) ‘Metamodel-Based Sensitivity Analysis: Polynomial Chaos Expansions and Gaussian Processes’, in Ghanem, R., Higdon, D., and Owhadi, H. (eds) Handbook of Uncertainty Quantification. Cham: Springer International Publishing, pp. 1–37. doi: 10.1007/978-3-319-11259-6_38-1.
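A minimal pick-freeze Monte Carlo estimator of first-order Sobol' indices illustrates the variance-based GSA described above; the additive toy model is an assumption with analytically known indices (S1 = 0.2, S2 = 0.8), and in metamodel-based GSA the function `f` would be replaced by a cheap surrogate.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    # Toy additive model with U(0,1) inputs:
    # Var = 5/12, first-order indices S1 = 0.2, S2 = 0.8 analytically.
    return x[:, 0] + 2.0 * x[:, 1]

n, d = 200_000, 2
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = f(A), f(B)
var_f = fA.var()

S = np.empty(d)
for i in range(d):
    # "Pick-freeze": copy column i from A into B, keep the rest resampled.
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    # Covariance-based estimator of the first-order index.
    S[i] = np.mean(fA * (f(ABi) - fB)) / var_f
```

Each additional input costs one extra batch of model evaluations, which is exactly where a fast surrogate pays off.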
The scope of this contribution is to present some recent results on how interpolation-based data-driven methods such as the Loewner framework can handle noisy data sets. More precisely, it will be assumed that the input-output measurements used in these methods, i.e., transfer function evaluations, are corrupted by additive Gaussian noise.
The notion of "sensitivity to noise" is introduced and used to understand how the location of measurement points affects the "quality" of reduced-order models. For example, models with highly sensitive poles are deemed unreliable, since even small perturbations could cause unwanted behavior (such as instability). Moreover, we show how different data splitting techniques can influence the sensitivity values. This is a crucial step in the Loewner framework; we present some illustrative examples that include the effects of splitting the data in the "wrong" or in the "right" way.
Finally, some perspectives for the future: we would like to employ statistics and machine learning techniques in order to avoid "overfitting". More precisely, a model that has learned the noise instead of the true signal is said to be "overfitted", because it fits the given noisy dataset but fits new datasets poorly. We present some possible ways to avoid "overfitting" for the methods under consideration.
Multibody systems are the state-of-the-art tool to model complex mechanical mechanisms. However, they typically include redundant coordinates plus constraints, leading to differential algebraic equations for the dynamics which require dedicated integration schemes and control/estimation algorithms.
In my work, autoencoder neural networks are combined with the multibody physics information. In this way, the autoencoder not only performs a dimensionality reduction of the original coordinates but can also be used for model order reduction, yielding a reduced-order model in which the dynamics are expressed with ordinary differential equations and standard estimation algorithms can be used.
This makes it possible to combine the physics-informed neural network with measurements in order to estimate unknown parameters or inputs in the system, for instance with an extended Kalman filtering scheme.
In this work, we investigate the capabilities of deep neural networks for solving hyperbolic conservation laws with non-convex flux functions. The behavior of the solution of these problems depends on the underlying small scale regularization. In many applications concerning phase transition phenomena, the regularization terms consist of diffusion and dispersion which are kept in balance in the limit. This may lead to the development of both classical and nonclassical (or undercompressive) shock waves at the same time, which makes finding the solution of these problems challenging from both theoretical and numerical points of view. Here, as a first step, we consider a scalar conservation law with a cubic flux function as a toy model and investigate the capabilities of physics-informed deep learning algorithms for solving this problem.
Scanning quantum dot microscopy is a technique for imaging electrostatic surface potentials with atomic resolution. To this end, it uses a sensor molecule, the so-called quantum dot (QD), which is bonded to the tip of a frequency modulated non-contact atomic force microscope. The QD is moved in the vicinity of the surface atoms so that it experiences the surface potential. By superimposing an external potential using a tip-surface bias voltage V_b, the QD's potential is modulated to reach critical values at which the QD changes its charge state. As a consequence of these charging events, the tuning fork's oscillation frequency $f$ changes abruptly, making the charging events appear as dips in the so-called spectrum $\Delta f(V_b)$, characterized by the positions $V^{\mp}$ of their respective minimum points. These dip positions $V^{\mp}$ are used for reconstructing the surface potential image.
While scanning the sample in a raster pattern, $V^{\mp}$ change with position as a consequence of the sample topography and its electrical properties. To efficiently and reliably track $V^{\mp}$, we have employed a two-degree-of-freedom control paradigm within which a Gaussian process and an extremum seeking controller are used. The Gaussian process is employed in the feedforward part to compute a prediction of $V^{\mp}$ for the next line based on the data of previous lines. This prediction is thereafter applied pixel by pixel as initial point to the extremum seeking controller, which is used in the feedback part to correct deviations between the prediction and the true value. Obtaining an accurate prediction is hereby critical for correct operation, as the extremum seeking controller is only capable of tracking $V^{\mp}$ as long as the current value $V_b$ is within the dip. To reduce the computation time and make the controller suitable for use in practice, we have implemented and tested different approximation approaches for a sparse GP implementation.
In simulation studies, we have shown that using the proposed two-degree-of-freedom control paradigm results in shorter scan times while achieving a high image quality when compared to feedback control only. The most promising approximation approach for a sparse GP implementation regarding scan time and image quality has been found to be the fully independent training conditional approximation. We have further shown that its computational cost is sufficiently low for use in practice.
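The Gaussian process regression underlying the feedforward prediction can be sketched in its exact (non-sparse) form as follows; the training data are a hypothetical smooth dip-position profile, and the sparse approximations mentioned above would replace the dense covariance solves.

```python
import numpy as np

rng = np.random.default_rng(5)

def k(a, b, ell=0.15):
    # Squared-exponential covariance with length scale ell.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Noisy observations of a smooth "dip position" profile (toy data).
x_train = np.linspace(0, 1, 25)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.standard_normal(25)

noise = 0.05 ** 2
K = k(x_train, x_train) + noise * np.eye(25)
alpha = np.linalg.solve(K, y_train)  # precomputed once per line

def gp_predict(x_new):
    # Posterior mean and (latent) variance at the query points.
    Ks = k(np.atleast_1d(x_new), x_train)
    mean = Ks @ alpha
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

mean, var = gp_predict(np.array([0.25]))  # true value sin(pi/2) = 1
```

The exact solves cost O(n^3); sparse schemes such as FITC reduce this by conditioning on a small set of inducing points.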
The need to devise model order reduction methods is strictly related to the finite nature of the available resources, including the computational budget, the amount of memory at our disposal and the limited time, which may range from a life-time to real-time queries. Parametric studies, from optimization tasks to the design of response surfaces, suffer particularly from the curse of dimensionality, since they usually scale exponentially with the dimension of the parameter space. A key pre-processing step is therefore to reduce the dimension of the parameter space by discovering some notion of low-dimensional structure beneath.
Under mild regularity assumptions on the model function of interest, Active Subspaces have proven to be a versatile and beneficial method in engineering applications: from the shape-optimization of the hull in naval engineering to model order reduction coupled with the reduced basis method for the study of stenosis of the carotid. The procedure involved can be ascribed to gradient-based sufficient dimension reduction methods. In the context of approximation with ridge functions, it finds theoretical validation in the minimization of an upper bound on the approximation error through the application of Poincaré-type inequalities and Singular Value Decomposition (SVD).
We are going to present a possible extension that specifically addresses the linear nature of the Active Subspace, in search of a non-linear counterpart. The turning point comes from the theory of Reproducing Kernel Hilbert Spaces (RKHS), which has been fruitfully employed in machine learning to devise non-linear manifold learning algorithms such as Kernel Principal Component Analysis (KPCA). An essential feature of the method that exploits the non-linear Active Subspaces should be the flexibility to account for non-linear behaviours of the model function.
Our implementation is tested on toy problems designed to exhibit the strengths of the non-linear variant and on a benchmark with heterogeneous parameters for the study of the lift and drag coefficients of a NACA airfoil. The numerical method applied for the approximation is the Discontinuous Galerkin method. Future directions involve the development of other nonlinear extensions of the active subspaces method with deep neural networks.
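The linear Active Subspace procedure referred to above (gradient sampling followed by an eigendecomposition of the averaged gradient outer product) can be sketched on a ridge function, where the one-dimensional active subspace is known exactly; the model function here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

# Ridge function f(x) = (w . x)^2: its gradient always points along w,
# so the active subspace is exactly the one-dimensional span of w.
d = 5
w = np.array([3.0, 1.0, 0.0, 0.0, 1.0])
w /= np.linalg.norm(w)

def grad_f(x):
    return 2.0 * (w @ x) * w  # gradient of (w . x)^2

# Monte Carlo estimate of C = E[grad f grad f^T], then eigendecompose.
X = rng.uniform(-1, 1, size=(1000, d))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)
eigvals, eigvecs = np.linalg.eigh(C)  # ascending eigenvalues

# The dominant eigenvector spans the (here rank-one) active subspace.
active_dir = eigvecs[:, -1]
```

The non-linear (kernel) variant replaces the Euclidean inner products above with inner products in an RKHS feature space.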
Data-driven methods are a promising approach for optimizing traffic control systems. Today’s vehicle technology makes it possible to collect an increasing amount of data to improve the vehicles’ performance, reliability and safety. Concerning mobility infrastructure and communication technology, larger and larger datasets can be transmitted faster every year. Our goal is to use (real-time) data, communicated between cars and infrastructure, to improve traffic flow in the future and to support holistic, efficient and sustainable mobility solutions.
We therefore model different networks using a microscopic traffic simulation where Reinforcement Learning (RL) methods are used to let agents (vehicles) learn to drive more fluently through typical traffic situations. The agents obtain real-time information from other vehicles and learn to improve the traffic flow by repetitive observation and algorithmic optimization. Accordingly, we use RL to control traffic guidance systems, such as traffic lights. In [1], an illustrative example is given, where the traffic light system of the “Opel roundabout”, Kaiserslautern’s largest roundabout, is considered in a model – it has been set up and improved by Reinforcement Learning. As underlying model structures for all RL approaches, we use, e.g., linear models, radial-basis function networks and neural nets. In the future we plan to investigate the performance of other model variants, such as Gaussian Processes, and we will enhance this model-free approach with physics-based microscopic traffic models to improve the mathematical description of the underlying dynamical system.
[1] U. Baumgart. Reinforcement Learning for Traffic Control. Master’s Thesis, University of Mannheim, 2019.
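A minimal tabular Q-learning sketch illustrates the RL machinery referred to above; the corridor MDP is a generic toy example, not the roundabout model of [1].

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy corridor MDP: states 0..4, actions 0 = left, 1 = right;
# reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s_next == goal else 0.0
        # Q-learning update: bootstrap from the greedy next-state value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# Greedy policy after training: always move right toward the goal.
policy = Q.argmax(axis=1)
```

In the traffic setting, the tabular Q would be replaced by a function approximator (linear model, RBF network or neural net) over traffic-state features.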
Until now, only classical approaches to the parameter identification of gradient-enhanced damage models combined with, e.g., finite plasticity or rate-dependent phenomena have been used to characterize the damage evolution in metal forming processes. In the future, the models will be extended to simulate hot forming processes. Considering the increasingly complex material models with significant numbers of parameters, the capabilities of machine learning techniques shall be examined for this application. Later on, considering the complex boundary value problems of the different processes, model reduction will be used to decrease the computational cost of the finite element simulations while maintaining the accuracy of micro-mechanical material models to characterize the damage evolution in the processes. Therefore, a neural network will be trained with the constitutive response of the micro-mechanical material models.
Physical phenomena like chemically reacting flows are computationally expensive to simulate due to the interaction between different physics at a wide range of time and length scales. Chemically reacting flows can be described by systems of hyperbolic partial differential equations with stiff source terms. The governing equations can be simplified by assuming chemical equilibrium, and then it is possible to replace the full system with a simpler system. We investigate model adaptation for such systems. The model adaptation is carried out between the full system of equations, referred to as the complex system, and the simple system, which is obtained by projecting the complex system onto the equilibrium manifold. When numerically solving the simple system, to compute the flux a mapping from the state space of the simple system to the state space of the complex system needs to be employed. This involves solving a computationally expensive non-linear system of equations. To further reduce the computational expense of solving the simple system, the mapping employed in the simple system can be replaced by an approximate mapping, which has to be constructed by accounting for the physics behind the mapping. Such an approximate mapping can be constructed employing machine learning techniques like physics-based or constraint-aware neural networks. Model adaptation is carried out by decomposing the computational domain in space and time; the complex model is then employed where necessary and the simple system, employing the machine-learned approximate mapping, where sufficient. The domain decomposition is carried out by constructing a posteriori error estimates which take into account the discretization and modeling errors, as well as errors incurred by employing the approximate mapping.
Human mortality patterns and trajectories in closely related subpopulations are likely linked together and share similarities. It is desirable to model them simultaneously while taking their heterogeneity into account. This poster introduces two new models for joint mortality modelling and forecasting of multiple subpopulations, adapting multivariate functional principal component analysis techniques. The first model extends the classical independent functional data model to a multi-population modelling setting. The second one is a natural extension of the first model in a coherent direction. Its design primarily reflects the idea that when several sub-population groups have similar socio-economic conditions or common biological characteristics, such close connections are expected to evolve in a non-diversifying fashion. We demonstrate the proposed methods using sex-specific mortality data of Japan. Their forecast performances are then further compared with several existing models, including the independent functional data model and the product-ratio model, through a comparison with mortality data of ten developed countries. Our experimental results show that the first proposed model maintains a forecast ability comparable with the existing methods, while the second proposed model outperforms the first model as well as the current models in terms of forecast accuracy, and it enjoys several desirable properties.
The poster will give insights into my PhD research. I combine time-series prediction and heuristic optimization algorithms to cope with time-varying optimization problems. A frequent task in dynamic optimization is to track the moving optimum as accurately as possible. Originally designed for static optimization, nature-inspired algorithms suffer from premature convergence on dynamic problems. To circumvent this, different approaches have been developed; prediction is one of them. The trajectory of solutions found during the optimization process is interpreted as a representation of the optimum dynamics. Using time-series prediction techniques that are learned online, the next step of this trajectory is predicted, which in turn is employed to lead the optimizer's search in the direction of the predicted optimum. By this means, faster convergence and better tracking accuracy might be achievable.
In my thesis, I investigate different neural network architectures as prediction models and propose strategies to utilize the prediction in nature-inspired optimization algorithms (evolution strategy, particle swarm optimization). Furthermore, I suggest adapting the optimizer's operators based on the predictive uncertainty in order to prevent the optimizer from being misled by a poor prediction.
‘Virtual Acoustics’ is the field of science that deals with simulating and synthesizing sound in virtual domains. The areas of application are widespread, e.g., building design, virtual entertainment and hearing research. The problem is extremely challenging because it involves simulating time-dependent wave propagation over a broad frequency spectrum in large and complex domains – ideally under real-time constraints. In our previous work, we have developed a high-fidelity massively parallel DGFEM-based acoustics simulator and a method for exploring pre-computed simulation results in an audio-visual virtual reality experience for static scenes. However, the ultimate goal is to perform the simulations in real time, thus allowing for interactive and dynamic scenes in the VR. Our future research will be to explore whether physics-informed, data-driven surrogate modelling techniques can be applied to solve the problem under real-time constraints. We will pursue a combination of reduced basis techniques and efficient data-driven surrogate modeling. In such a setup, one leverages the high computational efficiency of the reduced basis model to create a large labeled data set, which serves to train the surrogate model based on Gaussian Process Regression or, alternatively, a feed-forward Neural Network in a simple supervised learning approach. Our hope is that the evaluation of such surrogate models is extremely efficient and that this will provide the last step to reach the required acceleration to enable real-time or near real-time performance.
Data assimilation allows one to fill the gap between numerical simulations and experimental data. Optimal control problems governed by parametrized partial differential equations are well suited for this kind of application, where one wants to steer problem solutions towards known quantities given by data collections or previous knowledge. Still, the computational effort increases when one has to deal with nonlinear and/or time-dependent governing equations.
Reduced order methods are an effective approach to solving data assimilation problems reliably and quickly. We apply the POD-Galerkin methodology in environmental marine sciences, where different parameters describe several physical configurations.
We present two numerical experiments: a boundary control for riverbed current represented by time-dependent Stokes equations, and a nonlinear time-dependent tracking problem for velocity-height solutions of shallow water equations.
Mathematical models of physical processes often depend on parameters, such as material properties or source terms, that are known only with some uncertainty. Measurement data can help estimate these parameters and thereby improve the meaningfulness of the model. As experiments can be costly, it is important to choose sensor positions carefully to obtain informative data on the unknown parameter. In this poster we consider an observability coefficient that characterizes the sensitivity of measurements to parameter changes, and show its connection to optimal experimental design criteria. We then show how the observability coefficient can be used for sensor selection.
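As an illustration of how such a sensitivity-based criterion can drive sensor selection, the following sketch greedily picks sensors to maximize the smallest singular value of the selected sensitivity block; the random sensitivity matrix and this particular greedy criterion are assumptions for demonstration, not the observability coefficient of the poster:

```python
import numpy as np

def greedy_sensor_selection(S, n_sensors):
    """Greedily pick rows of the sensitivity matrix S (sensors x parameters)
    so that the smallest singular value of the selected block -- a proxy
    for how informative the measurements are about the parameters -- is
    maximized at each step."""
    chosen = []
    remaining = list(range(S.shape[0]))
    for _ in range(n_sensors):
        best, best_val = None, -np.inf
        for i in remaining:
            sub = S[chosen + [i], :]
            val = np.linalg.svd(sub, compute_uv=False)[-1]
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(1)
S = rng.standard_normal((30, 3))   # 30 candidate sensors, 3 parameters
sensors = greedy_sensor_selection(S, 4)
```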
Deep learning approaches are widely used for many tasks and applications, spanning from object detection, to classification and control. Certifying or enforcing performance and stability guarantees for controllers based on deep learning is, however, challenging. This work considers the use of so-called non-autonomous input-output stable deep neural networks for the control of dynamical systems. We train the neural network based on an existing controller that achieves desirable nominal closed-loop system properties. Assuming that the infinite-layer network leads to a stable closed loop, we derive bounds on the finite number of layers of the neural network such that stability of the nominal closed-loop system under the deep network controller is guaranteed. We furthermore derive conservative conditions that can easily be integrated into the learning phase to enforce stability based on the small-gain theorem. The results are underlined by a simulation study considering the control of a continuously stirred tank reactor.
Dynamic Mode Decomposition (DMD) has emerged as a prominent data-driven technique to identify spatio-temporal coherent structures in dynamical systems, owing to its strong relation with the Koopman operator. For dynamical systems with inputs (external forcing) and outputs (measurements), input-output DMD (ioDMD) provides a natural extension of DMD so that the learned model approximates the input-output behavior of the underlying dynamics. Both DMD and ioDMD assume access to full-state measurements. In this work, we propose a novel methodology, called wavelet-based DMD (WDMD), that integrates wavelet decompositions with ioDMD to approximate dynamical systems from partial measurement data. Our non-intrusive approach constructs numerical models directly from trajectories of the inputs and outputs of the full model, without requiring the full-model operators. These trajectories are generated by running a simulation of the full model or by observing the response of the original dynamical system to inputs in an experimental setting. The performance of WDMD is demonstrated by modeling the input-output vibrational response of a hollow cantilever beam, using both simulated beam data and experimental measurements.
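For readers unfamiliar with DMD, a minimal "exact DMD" sketch on synthetic full-state data (no inputs, so this is plain DMD rather than the ioDMD/WDMD extensions described above) looks as follows:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit a rank-r linear map Y ≈ A X from snapshot pairs."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s    # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W               # DMD modes
    return A_tilde, eigvals, modes

# Synthetic snapshots from a known linear system x_{k+1} = A x_k.
rng = np.random.default_rng(0)
A = np.array([[0.9, -0.2], [0.1, 0.8]])
x = rng.standard_normal(2)
snaps = [x]
for _ in range(50):
    x = A @ x
    snaps.append(x)
X = np.column_stack(snaps[:-1])   # snapshots at times 0..49
Y = np.column_stack(snaps[1:])    # snapshots at times 1..50
_, eigvals, _ = dmd(X, Y, r=2)    # recovers the eigenvalues of A
```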
In my PhD work, I am combining established numerical methods with machine learning techniques to build adaptive and highly accurate numerical schemes for fluid mechanics. Currently, I am interested in how neural networks can enhance the flux reconstruction process in finite-volume schemes. Most recently, I have submitted the journal paper “A data-driven physics-informed finite-volume scheme for nonclassical undercompressive shocks” to the Journal of Computational Physics. The abstract reads as follows:
"We propose a data-driven physics-informed finite volume scheme for the approximation of small-scale dependent shocks. Nonlinear hyperbolic conservation laws with non-convex fluxes allow nonclassical shock wave solutions. In this work, we consider the cubic scalar conservation law as a representative of such systems. As standard numerical schemes fail to approximate nonclassical shocks, schemes with controlled dissipation and schemes with well-controlled dissipation have been introduced by LeFloch and Mohammadian and by Ernest and coworkers, respectively. Emphasis has been placed on matching the truncation error of the numerical scheme with physically relevant small-scale mechanisms. However, the aforementioned schemes can introduce oscillations as well as excessive dissipation around shocks. In our approach, a convolutional neural network is used for an adaptive nonlinear flux reconstruction. Based on the local flow field, the network combines local interpolation polynomials with a regularization term to form the numerical flux. This allows modifying the discretization error by nonlinear terms. Via a supervised learning task, the model is trained to predict the time evolution of exact solutions to Riemann problems using the method of lines. The model is physics-informed as it respects the underlying conservation law. Numerical experiments for the cubic scalar conservation law show that the resulting method is able to approximate nonclassical shocks very well. The adaptive reconstruction suppresses oscillations and enables sharp shock capturing. Generalization to unseen shock configurations and smooth initial value problems is robust and shows very good results."
In the aforementioned work, the machine learning part is limited to selecting local interpolation polynomials and combining them with regularization terms. This is done to guarantee a physically consistent numerical scheme. I am very interested in how to relax these restrictions on the machine learning part while maintaining physical consistency of the numerical method. Therefore, my poster will present details of the data-driven scheme for undercompressive nonclassical shocks together with possible extensions of the machine learning part.
Mathematical models for physical phenomena typically show certain structures if formulated correctly. Hamiltonian systems are an example of such structured systems. They rely on the so-called symplectic structure, which is responsible for the characteristic property of preserving the Hamiltonian function over time. In numerical mathematics, preservation of these structures yields great improvements in stability and accuracy, e.g. for numerical integration [1] or model order reduction (MOR) [2].
Our goal is to show how so-called symplectic reduced-order bases can be computed from data, which is relevant for structure-preserving MOR of Hamiltonian systems. To this end, we give a short introduction to symplecticity and Hamiltonian systems. Based thereon, we discuss symplectic basis generation techniques in comparison to the classical Proper Orthogonal Decomposition (also: Principal Component Analysis). Based on a two- and a three-dimensional linear elasticity model, we show how such techniques can be used (a) for classical data compression and reconstruction tasks and (b) for symplectic MOR.
[1] E. Hairer, G. Wanner, and C. Lubich. Geometric Numerical Integration. Springer, Berlin, Heidelberg, 2006. ISBN 978-3-540-30666-5. doi: 10.1007/3-540-30666-8.
[2] L. Peng and K. Mohseni. Symplectic Model Reduction of Hamiltonian Systems. SIAM J. Sci. Comput., 38(1):A1–A27, 2016. doi: 10.1137/140978922.
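The symplectic basis generation discussed above can be illustrated with the cotangent-lift construction, one of the techniques compared against POD; the snapshot data here is random and purely illustrative:

```python
import numpy as np

def cotangent_lift_basis(Q, P, k):
    """Symplectic reduced basis via the cotangent lift: one POD basis U of
    the stacked position/momentum snapshots, used in block-diagonal form so
    that the canonical symplectic structure is preserved."""
    U, _, _ = np.linalg.svd(np.hstack([Q, P]), full_matrices=False)
    U = U[:, :k]
    n = U.shape[0]
    A = np.block([[U, np.zeros((n, k))],
                  [np.zeros((n, k)), U]])
    return A

def canonical_J(m):
    """Canonical Poisson matrix J_{2m}."""
    I, Z = np.eye(m), np.zeros((m, m))
    return np.block([[Z, I], [-I, Z]])

rng = np.random.default_rng(0)
n, k = 20, 3
Q = rng.standard_normal((n, 15))   # position snapshots (illustrative)
P = rng.standard_normal((n, 15))   # momentum snapshots (illustrative)
A = cotangent_lift_basis(Q, P, k)
# Symplecticity check: A^T J_{2n} A = J_{2k}
lhs = A.T @ canonical_J(n) @ A
```

The check at the end is exactly the defining property of a symplectic basis; it holds here because the blocks of A share the same orthonormal POD basis U.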
The optimization of vibro-acoustic systems in terms of vibration or sound radiation requires many system evaluations for varying parameters. Often, material or geometric uncertainties have to be considered. Vibro-acoustic systems are typically large and numerically expensive to solve, so it is desirable to use an efficient parametrized surrogate model for optimization tasks. Classic reduced order modelling has already been used to create parametrized reduced models of vibro-acoustic systems (Aumann et al. 2019; van Ophem et al. 2019). In this contribution, we want to investigate the potential of using machine learning techniques, such as neural networks or regression models to create parametrized surrogate models for vibro-acoustic systems.
Swischuk et al. (2019) created parametrized surrogate models for a structural system combining reduced order modelling and machine learning methods. They used proper orthogonal decomposition (POD) for non-linear systems in the time domain to generate the POD coefficients and trained their surrogate model with a neural network and different regression methods to map parameter sets to POD coefficients. Their resulting surrogate model is used for real-time structural damage evaluation. We want to pursue a similar approach for vibro-acoustic systems in the frequency domain. Using a classic model order reduction method, we extract the dominant modes of systems with given parameter sets and use them to train a surrogate model. The model shall be trained to find the proper poles of the reduced system given an unknown set of parameters. Using the poles, a reduced order model for this parameter set can be created and evaluated efficiently. Its response can also be transformed to obtain the full system’s response and can be used for optimization tasks.
Such a surrogate model can be used, for example, to optimize the radiation characteristics of a violin, which heavily depends on the thickness of its corpus and the materials used. Another application is system identification. The surrogate model can be trained to map obstacle locations in an acoustic cavity to its response to a defined excitation. In an inverted process, the trained model can then test an actual response resulting from an obstacle against possible obstacle locations to find its actual position.
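The general workflow of mapping parameters to POD coefficients with a regression model can be sketched as follows; the parametrized responses form a synthetic low-rank toy family, not a vibro-acoustic model:

```python
import numpy as np

freqs = np.linspace(0, 1, 100)
f1, f2 = np.sin(2 * np.pi * freqs), np.cos(4 * np.pi * freqs)

# Hypothetical parametrized responses: low-rank in the parameter mu
# by construction, so the sketch stays exactly reproducible.
mus = np.linspace(0.5, 2.0, 40)
snapshots = np.column_stack([mu * f1 + mu**2 * f2 for mu in mus])

# POD: dominant modes of the snapshot set.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]
coeffs = basis.T @ snapshots            # POD coefficients per parameter

# Regression surrogate: map mu -> POD coefficients (quadratic fit).
V = np.vander(mus, 3)
W, *_ = np.linalg.lstsq(V, coeffs.T, rcond=None)

def surrogate(mu):
    """Predict the full response at an unseen parameter value."""
    return basis @ (np.vander([mu], 3) @ W).ravel()

err = np.linalg.norm(surrogate(1.23) - (1.23 * f1 + 1.23**2 * f2))
```

In the vibro-acoustic setting described above, the regression target would be the poles of the reduced system rather than POD coefficients, but the offline/online split is the same.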
References:
Aumann, Q.; Miksch, M.; Müller, G. (2019): Parametric model order reduction for acoustic metamaterials based on local thickness variations. In J. Phys.: Conf. Ser. 1264, p. 12014. DOI: 10.1088/1742-6596/1264/1/012014.
Swischuk, Renee; Mainini, Laura; Peherstorfer, Benjamin; Willcox, Karen (2019): Projection-based model reduction: Formulations for physics-based machine learning. In Computers & Fluids 179, pp. 704–717. DOI: 10.1016/j.compfluid.2018.07.021.
van Ophem, S.; Deckers, E.; Desmet, W. (2019): Parametric model order reduction without a priori sampling for low rank changes in vibro-acoustic systems. In Mechanical Systems and Signal Processing 130, pp. 597–609. DOI: 10.1016/j.ymssp.2019.05.035.
We are concerned with optimal control strategies subject to uncertain demands. In many real-world situations, taking uncertainty into account is increasingly important. Supply chain management and the energy transition are just two examples where control strategies coping with uncertainties are of high practical importance. Compensating deviations from the actual demand can be very costly and should be avoided. To address this problem, we control the inflow into the hyperbolic supply system at a given time to optimally meet an uncertain demand stream. To enhance supply reliability, we require demand satisfaction at a prescribed probability level, mathematically formulated in terms of a chance constraint. The stochastic optimal control framework has been set up in [LGK19]. The supply system is modeled by hyperbolic balance laws, and an Ornstein-Uhlenbeck process represents the uncertain demand stream.
In future work, we would like to extend the setting to include uncertainty not only in the demand but also within the model of the supply system, where parameters shall be learned from data.
Acknowledgment: The authors are grateful for the support of the German Research Foundation (DFG) within the project "Novel models and control for networked problems: from discrete event to continuous dynamics" (GO1920/4-1).
[LGK19] Lux, K., Göttlich, S., and Korn, R. ``Optimal control of electricity input given an uncertain demand.'', MMOR, 2019.
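The chance-constrained demand satisfaction can be illustrated with a small Monte Carlo sketch: simulate Ornstein-Uhlenbeck demand paths and choose a constant supply as the empirical alpha-quantile; all parameter values are illustrative assumptions, not those of [LGK19]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck demand, simulated with Euler-Maruyama.
theta, mu, sigma, dt, T = 1.0, 10.0, 2.0, 0.01, 5.0
n_steps, n_paths = int(T / dt), 2000
paths = np.empty((n_paths, n_steps))
paths[:, 0] = mu
for k in range(n_steps - 1):
    D = paths[:, k]
    paths[:, k + 1] = (D + theta * (mu - D) * dt
                       + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))

# Chance constraint: pick a constant supply u so that
# P(u >= demand) >= alpha at the final time.
alpha = 0.9
u = np.quantile(paths[:, -1], alpha)
satisfaction = np.mean(u >= paths[:, -1])
```

In the actual framework the supply is a control trajectory through a hyperbolic balance law, not a constant, but the quantile reformulation of the chance constraint is the same idea.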
The poster presents a novel approach to diagnosing rotordynamic faults such as unbalance and coupling misalignment from measured vibration. For that purpose, a large database of virtual hydropower rotors and their vibration has been calculated. The goal is to create a data-driven diagnosis system from this database that will be applicable to a variety of real hydropower rotors. In a first step, a gradient boosting regression algorithm with some preprocessing has been applied to the database and showed promising results, whose accuracy shall now be improved.
We are interested in real-time capable simulation of soil and soil-tool interaction forces. In previous work, we have successfully implemented a solution that precomputes data using the Discrete Element Method (DEM) and efficiently processes and stores it in a lookup table. In the respective online phase, the data is accessed in an efficient way [1,2].
We also perform measurements at a test pit at the soil laboratory at TUK with different kinds of soil, e.g. coarse gravel and coarse sand. We plan to use this data to include the frequency behavior of the reaction forces in order to improve the above-mentioned approach. Interesting signal processing tools that may be used here include the Fourier transform, power spectral density and others.
[1] Jahnke, J.; Steidel, S.; Burger, M. Soil Modeling with a DEM Lookup approach, PAMM, 2019
[2] Jahnke, J.; Steidel, S.; Burger, M.; Simeon, B. Efficient Particle Simulation Using a Two-Phase DEM-Lookup Approach, Proceedings of the 9th ECCOMAS on MBD, pp. 425-432, 2020
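The offline/online split of the DEM lookup approach can be sketched as follows; the closed-form "DEM force" is a hypothetical placeholder for actual particle simulations:

```python
import numpy as np

# Offline: precompute (hypothetical) soil reaction forces on a grid of
# tool depth and velocity -- standing in for expensive DEM simulations.
depths = np.linspace(0.0, 0.5, 21)
vels = np.linspace(0.0, 2.0, 21)

def dem_force(d, v):
    """Placeholder for a DEM run (illustrative closed form)."""
    return 1e3 * d**1.5 * (1.0 + 0.3 * v)

table = np.array([[dem_force(d, v) for v in vels] for d in depths])

# Online: fast bilinear interpolation in the lookup table.
def lookup(d, v):
    i = np.clip(np.searchsorted(depths, d) - 1, 0, len(depths) - 2)
    j = np.clip(np.searchsorted(vels, v) - 1, 0, len(vels) - 2)
    td = (d - depths[i]) / (depths[i + 1] - depths[i])
    tv = (v - vels[j]) / (vels[j + 1] - vels[j])
    return ((1 - td) * (1 - tv) * table[i, j]
            + td * (1 - tv) * table[i + 1, j]
            + (1 - td) * tv * table[i, j + 1]
            + td * tv * table[i + 1, j + 1])

approx = lookup(0.33, 1.1)
exact = dem_force(0.33, 1.1)
```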
My research focuses on developing and investigating computational data-driven methods to model material laws from observed data. The methodology is expected to deliver the governing mathematical model of the observed problem in the form of a set of symbolic equations that potentially enable new discoveries in data-rich fields of continuous physical problems. Artificial neural networks (ANN) have been proposed as an efficient data-driven method for constitutive modelling, accepting either synthetic solution data or experimental datasets. Sparse regression has the potential to identify relationships between field quantities directly from data in the form of symbolic expressions. Depending on the richness of the given data (function values vs. gradients, densely vs. sparsely sampled), specific techniques are required to obtain accurate numerical evaluations of spatial and temporal derivatives from sparse data representing smooth or non-smooth states, e.g. via hierarchical multi-level gradient estimation, Gaussian smoothing or gradient-capturing techniques. A machine learning prototype is implemented using the FEniCS framework coupled with a neural network trained with PyTorch.
A non-intrusive data-driven model order reduction method is introduced that learns low-dimensional dynamical models for a parametrized non-traditional shallow water equation (NTSWE). The reduced-order model is learnt by solving an appropriate least-squares optimization problem in a low-dimensional subspace. Computational challenges that particularly arise from the optimization problem, such as ill-conditioning, are discussed. The non-intrusive model order reduction framework is extended to the parametric case using the parameter dependency at the level of the partial differential equation. The efficiency of the proposed non-intrusive method for constructing reduced-order models of the NTSWE is illustrated and compared with an intrusive method, proper orthogonal decomposition with Galerkin projection. Furthermore, the predictability of both models outside the range of the training data is discussed.
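The least-squares learning of a reduced operator from projected snapshot data can be sketched in an operator-inference style; the snapshots below come from a linear discrete-time toy system constructed to be exactly low-rank, a stand-in for the NTSWE, and no full-order operators are used in the fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic full-order snapshots that live on a 5-dimensional subspace.
r_true, n, steps = 5, 50, 100
B = 0.95 * np.linalg.qr(rng.standard_normal((r_true, r_true)))[0]  # stable
V = rng.standard_normal((n, r_true))
z = rng.standard_normal(r_true)
Z = np.column_stack([np.linalg.matrix_power(B, k) @ z for k in range(steps)])
X = V @ Z                                   # full-order snapshot matrix

# Non-intrusive ROM: POD basis from the data, then fit the reduced
# operator by least squares on projected snapshot pairs.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :r_true]
Xr = Ur.T @ X
A_hat, *_ = np.linalg.lstsq(Xr[:, :-1].T, Xr[:, 1:].T, rcond=None)
A_hat = A_hat.T                             # learned reduced operator
residual = np.linalg.norm(A_hat @ Xr[:, :-1] - Xr[:, 1:])
```

Real snapshot data is not exactly low-rank, which is precisely where the ill-conditioning of the least-squares problem mentioned above comes in.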
Piezoelectric energy harvesters (PEHs) are a potential alternative to batteries in large-scale sensor networks and implanted health trackers, but their low output power and narrow working range have been a bottleneck for practical application.
To alleviate this problem, the present research will develop a data-driven reduced-order model for flow-induced PEHs based on a dataset obtained from a nonlinear and parametric electro-mechanical model. This will be a high-fidelity monolithic computational model established by the method of weighted residuals, with the corresponding numerical solutions calculated by the finite element method in FEniCS. Then a projection-based model order reduction will be implemented, and machine learning will be introduced to address challenges resulting from the nonlinearity and the multiple parameters.
Once the reduced-order model is validated, a reliable and fast method to predict the performance of flow-induced PEHs will be achieved, promising real-time optimization of the design of PEHs. It will promote the further commercialization of PEHs.
Kernel methods provide a mathematically rigorous way of learning; however, they usually lack efficiency on large amounts of data due to poor scaling in the number of data points. Furthermore, they are flat models, in the sense that they consist of only one linear combination of nonlinear functions. Another drawback is that they do not allow for end-to-end learning, since the model learning is decoupled from the data representation learning. In contrast, neural network techniques are able to make use of such large amounts of data and computational resources and combine the representation learning with the model learning.
Based on a recent representer theorem for deep kernel learning [1], we examine different setups and optimization strategies for deep kernels, including some theoretical analysis. We show that - even with simple kernel functions - the deep kernel approach leads to setups similar to neural networks but with optimizable activation functions. A combination of optimization and regularization approaches from both kernel methods and deep learning yields improved accuracy in comparison to flat kernel models. Furthermore, the proposed approach easily scales to large amounts of training data in high dimensions, which is important from the application point of view. Preliminary results on a fluid dynamics application (with a dataset of up to 17 million data points in 30 dimensions) show favorable results compared to standard deep learning methods [2].
[1] Bohn, Bastian, Christian Rieger, and Michael Griebel. "A Representer Theorem for Deep Kernel Learning." Journal of Machine Learning Research 20.64 (2019): 1-32.
[2] T. Wenzel, G. Santin, B. Haasdonk: "Deep Kernel Networks: Analysis and Comparison", Preprint, University of Stuttgart, in preparation.
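The structural idea of composing a kernel with inner layers can be sketched as follows; here the inner map is a fixed random tanh layer standing in for the optimizable inner layers of a deep kernel network, so this shows the model structure only, not the training of [1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(X, Y, ls=0.3):
    """Gaussian kernel matrix between row-wise point sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

X = rng.uniform(-2, 2, (200, 1))
y = np.sin(3 * X[:, 0])

# Flat kernel model: a single linear combination of kernel functions.
K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)

# "Deep" kernel: compose the kernel with an inner map f, i.e. use
# k(f(x), f(x')); in deep kernel learning f would itself be optimized.
W, b = rng.standard_normal((1, 8)), rng.standard_normal(8)
f = lambda Z: np.tanh(Z @ W + b)
Kd = gauss_kernel(f(X), f(X), ls=1.0)
alpha_d = np.linalg.solve(Kd + 1e-6 * np.eye(len(X)), y)

Xt = rng.uniform(-2, 2, (50, 1))
pred = gauss_kernel(Xt, X) @ alpha
pred_d = gauss_kernel(f(Xt), f(X), ls=1.0) @ alpha_d
err_flat = np.max(np.abs(pred - np.sin(3 * Xt[:, 0])))
```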
In this paper, we use a deep-learning-based approach to construct locally conservative flux fields in heterogeneous and high-contrast media in the context of flow models. In previous work, the problem was solved through a variation of the Generalized Multiscale Finite Element Method (GMsFEM), which is computationally expensive. The key ingredients of GMsFEM include multiscale basis functions and coarse-scale parameters, which are obtained by solving local problems in each coarse neighborhood. In the case of time-dependent media, these key ingredients have to be recomputed at every time step. The objective of our work is to use deep learning techniques to mimic the nonlinear relation between the permeability field and the GMsFEM discretization, and to use neural networks to repeatedly perform fast computations of the GMsFEM ingredients for a class of media. The flux values are obtained through a Ritz formulation, in which we augment the resulting linear system of the continuous Galerkin (CG) formulation in the higher-order GMsFEM approximation space. Furthermore, we postprocess the velocity field to obtain the local conservation property.
We aim to utilize machine learning methods to learn superstructures in turbulent flow to obtain a data-driven reduced model for turbulent convection. The underlying data will stem from both numerical simulations and experiments and will be used as training data for various machine learning architectures in order to predict the behavior of the underlying system and to extract hidden structures of the turbulent flow.
My PhD research concerns mathematical modelling, numerical simulations and applications to electrochemical energy storage devices, in particular Zn-air batteries (ZAB).
Zn-air battery (ZAB) concepts exhibit storage potential ranging from low-power portable consumer electronics to automotive and home applications (see [2]). During recharge, however, the regeneration of Zn is hampered by severe morphological changes leading to low cycle life. These morphological changes are related to metal growth instabilities.
The main goal of the project is to set up a research framework aimed at attacking such battery electrode problems with a Machine Learning (ML) approach, based on a training set of data resulting from numerical solutions of a reaction-diffusion PDE model that is able to capture the essential features of unstable material growth in electrochemical systems by means of so-called Turing patterns (see [1,2,3] and references therein).
In this mesoscopic model, referred to as the "DIB model", the recharge instability is controlled by the interaction between material "shape" and material "chemistry"; the source terms include the physics describing the growth process, and the parameters involved account for the battery operating conditions (chiefly electrolyte chemistry and charge rate). One of the key results of the analysis of the model [3,4,5] is the correlation of the values of the model parameters with the occurrence and type of growth instabilities. In particular, in [3] a segmentation of the parameter space into morphological classes has been proposed (see [6]).
The first application I am planning is to train the ML algorithm with a computed set of morphological maps, and to use it to classify a set of experimental maps, obtained by optical microscopy observations of electrodeposited alloys.
REFERENCES
[1] Lacitignola D, Bozzini B, Sgura I - Spatio-temporal organization in a morphochemical electrodeposition model: Hopf and Turing instabilities and their interplay, European Journal of Applied Mathematics (2015) 26(2), 143-173, dx.doi.org/10.1017/S0956792514000370
[2] Bozzini B, Mele C, D'Autilia MC, Sgura I. - Dynamics of zinc-air battery anodes: An electrochemical and optical study complemented by mathematical modelling, Metallurgia Italiana (2019) 111(7-8), 33-40
[3] Lacitignola D, Bozzini B, Frittelli M, Sgura I - Turing pattern formation on the sphere for a morphochemical reaction-diffusion model for electrodeposition, Communications in Nonlinear Science and Numerical Simulation (2017) 48, 484-508, dx.doi.org/10.1016/j.cnsns.2017.01.008
[4] Sgura I, Lawless A, Bozzini B - Parameter estimation for a morphochemical reaction-diffusion model of electrochemical pattern formation, Inverse Problems in Science and Engineering (2019) 27(5), 618-647 doi.org/10.1080/17415977.2018.1490278
[5] Sgura I, Bozzini B - XRF map identification problems based on a PDE electrodeposition model, Journal of Physics D: Applied Physics (2017) 50(15), dx.doi.org/10.1088/1361-6463/aa5a1f
[6] https://www.researchgate.net/figure/Segmentation-of-the-Turing-region-six-subregions-R-0-R-1-R-5-from-top-to-bottom_fig4_326338358
Friction brakes can exhibit high-intensity vibrations in the frequency range above 1 kHz, typically known as squeal. These vibrations are self-excited by the friction interface between brake pads and disk. Decades of research have been spent on modelling this phenomenon, but even today predictive modelling is out of reach. The root causes are considered to be, amongst others, multi-scale temporal effects, multi-physics interactions involving mechanics, thermodynamics and chemistry, unknown system parameters and emergent behaviour. Continuing recent works, we present a machine learning approach to predict the dynamic instability from multiple complex loading conditions using recurrent neural networks and a large experimental database. In order to generate new designs that are less prone to self-excited vibrations, the trained networks are exposed to model-agnostic explainers that can disaggregate the complex nonlinear relations learned during the training phase. Importance values are assigned to loading sequences and visualized by colour mappings. The validated models are virtual twins of the actual brake system and can serve as reduced-order models. Furthermore, classical analytical models are compared with and updated using the virtual twins to generate low-dimensional representations of complex dynamical systems.
Modeling and simulations are a pillar in the development of complex technical systems. However, for time-critical applications, running high-fidelity simulations is not always feasible. To mitigate this computational bottleneck, model order reduction (MOR) can be applied. For nonlinear models, linear MOR approaches are only practicable to a limited extent. Nonlinear approaches, on the contrary, often require deep interventions in the simulation code used. If such access is not possible, non-intrusive nonlinear model order reduction can be the key to success.
The goal of this work is to implement two different non-intrusive approaches that combine linear model order reduction with machine learning algorithms. Both rely on the idea of learning the dynamics in the reduced space. In the first approach, a linear ODE is supplemented with the nonlinear inner forces discovered by the learning algorithms. The second approach, in contrast, aims to learn the sequence of the reduced dynamics of a system directly.
By applying these methods to problems arising from the field of structural dynamics, accurate surrogate models are obtained. They can speed up the simulation significantly, while still providing high-quality state approximations.
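The first approach, supplementing a known linear ODE with learned nonlinear inner forces, can be sketched on a single-degree-of-freedom toy problem; the Duffing-type force and the polynomial regression are illustrative stand-ins for the structural models and learning algorithms used in the work:

```python
import numpy as np

# Reference trajectory of a nonlinear oscillator (Duffing-type inner force),
# standing in for the reduced coordinates of a structural dynamics model.
dt, steps = 0.01, 2000
q, v = 1.0, 0.0
Q, Acc = [], []
for _ in range(steps):
    a = -q - 0.5 * q ** 3        # linear stiffness plus nonlinear inner force
    Q.append(q)
    Acc.append(a)
    v += dt * a                  # semi-implicit Euler
    q += dt * v
Q, Acc = np.array(Q), np.array(Acc)

# Approach 1: keep the known linear part (a = -q + g(q)) and learn the
# nonlinear inner force g from the residual via polynomial regression.
residual = Acc + Q
basis = np.column_stack([Q, Q ** 2, Q ** 3])
coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
# coef should recover g(q) = -0.5 q^3, i.e. coef ≈ [0, 0, -0.5]
```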
Model order reduction for advection-dominated problems has long been ineffective due to the slow decay of the Kolmogorov $N$-width of such problems. Even very simple problems, such as linear transport equations of sharp gradients, already show this behavior. This difficulty can be overcome with different techniques. What we propose is to change the original solution manifold by means of a geometrical transformation that aligns the advected features of the different solutions and that leads to an Arbitrary Lagrangian Eulerian formulation.
In order to use this formulation, we need to know the so-called mesh velocity. In this context, the map is chosen according to parameter and time and can be generated with some expensive detection and optimization algorithms. To use it effectively in the online phase of the model order reduction technique, a regression map must be employed. To this end, we compare different regression maps (polynomials and deep learning maps). Results for some examples in 1D are shown.
More details can be found at https://arxiv.org/abs/2003.13735
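Why aligning the advected features helps can be seen in a small sketch: the snapshot matrix of a transported sharp profile has slowly decaying singular values, while the aligned (transformed) snapshots are numerically rank one; the periodic profile and the shift map are illustrative choices:

```python
import numpy as np

# Transported sharp profile: u(x, t) = f(x - t) on a periodic domain.
x = np.linspace(0, 1, 200, endpoint=False)
f = lambda y: np.tanh(50 * ((y % 1.0) - 0.3))
times = np.linspace(0, 0.4, 40)

snaps = np.column_stack([f(x - t) for t in times])        # advected snapshots
aligned = np.column_stack([f((x + t) - t) for t in times])  # shifted back

def numerical_rank(M, tol=1e-8):
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

rank_raw = numerical_rank(snaps)        # slow Kolmogorov N-width decay
rank_aligned = numerical_rank(aligned)  # rank one after alignment
```

In practice the transformation is not a known constant-speed shift; learning it as a regression map in parameter and time is exactly the point of the work above.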
This study aims to model transonic airfoil-gust interaction and the gust response in transonic aileron-buzz problems using high-fidelity computational fluid dynamics (CFD) and a Long Short-Term Memory (LSTM) based deep learning approach. It first explores, using CFD, the rich physics associated with these interactions, which show strong flow-field nonlinearities arising from complex shock-boundary layer interactions. In the transonic regime, most linear Reduced Order Models (ROMs) fail to reconstruct unsteady global parameters such as the lift, moment and drag coefficients, and unsteady distributed flow variables such as velocity, pressure, and skin friction coefficients on the airfoil or in the entire computational domain, due to the nonlinear shock-gust interaction. Since a deep-learning framework creates several hypersurfaces that establish a nonlinear functional relationship between the gust or structural input and the unsteady flow variables as output, an algorithm is proposed to overcome the limitations of linear ROMs. This algorithm consists of two integral steps: a dimensionality reduction, where a Discrete Empirical Interpolation Method (DEIM) based linear data compression is applied, and a training of the reduced state with an LSTM-based Recurrent Neural Network (RNN) for the reconstruction of the unsteady flow variables. The current study further modifies the loss function inside the LSTM network using the residual of the Navier-Stokes equations and proposes a physics-guided LSTM network. The present work shows the potential of this approach for predicting the transonic airfoil gust response and the aileron-buzz problem, demonstrating several orders of magnitude of computational benefit compared to high-fidelity CFD.
Physics-informed neural networks are applied to incompressible two-phase flow problems. We investigate the forward problem, where the governing equations are solved from initial and boundary conditions, as well as the inverse problem, where continuous velocity and pressure fields are inferred from data on the interface position scattered across time. We employ a volume-of-fluid approach, i.e. the auxiliary variable here is the volume fraction of the fluids within each phase. For the forward problem, we solve a two-phase Couette and Poiseuille flow. For the inverse problem, three classical test cases in two-phase modeling are investigated: a drop in a shear flow, an oscillating drop and a rising bubble. The data of the interface position over time is generated with a validated CFD solver. The inferred velocity and pressure fields are then compared to the CFD solution. An effective way to distribute the spatial training points to fit the interface, i.e. the volume fraction field, and the residual points is presented. Furthermore, we show that the weighting of the losses associated with the residuals of the partial differential equations is crucial for the training process. The benefit of using adaptive activation functions is evaluated for both the forward and the inverse problem.
When conducting measurements on existing structures, e.g. collecting the time response of a building, and trying to compute the same response with a suitable computational method, one often notices discrepancies between the measured and the modelled data. These discrepancies are due to a wide range of errors made in both the measurement and the modelling. The modelling errors can stem from uncertainty in the model parameters and, often more importantly, from the model error itself.
One can then use stochastic methods to obtain a more robust response prediction of the structure at hand (forward Uncertainty Quantification (UQ)) and use the data gathered to learn about the model parameters and the errors involved in the modelling (Inverse UQ). Often, Bayesian methods are applied to solve the inverse problem at hand. In any case, applying sampling-based approaches to UQ requires the repeated evaluation of the model and can become infeasible for computationally demanding models.
To address this issue, we introduced a novel surrogate model that is especially suitable for approximating linear structural dynamic models in the frequency domain (Schneider et al. 2020). The surrogate approximates the original model by a rational function of two polynomial chaos expansions (PCE) over the stochastic input space. The complex coefficients of the expansions are obtained by solving a regression problem in a non-intrusive manner.
One drawback of the PCE-based surrogate model is the factorial growth of the number of basis terms in the expansions with the number of input dimensions and the polynomial order, known as the curse of dimensionality. To circumvent this restriction, approaches that find sparse basis representations are often applied. One of these approaches is Sparse Bayesian Learning (Tipping, 2001). Implementing such an approach for the rational surrogate model could help in obtaining a sparse and thus efficient surrogate model for UQ in the frequency domain.
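A minimal sketch of the Sparse Bayesian Learning mechanism (in the spirit of Tipping, 2001, but simplified: the noise precision beta is assumed known, and the data are a toy regression problem, not a PCE): each basis weight gets its own precision alpha_i, and the fixed-point updates drive the precisions of irrelevant terms to large values, pruning them from the basis.

```python
import numpy as np

# Toy ARD / Sparse Bayesian Learning loop (illustrative assumptions).
rng = np.random.default_rng(1)
N, M = 100, 10
Phi = rng.normal(size=(N, M))            # candidate basis evaluations
w_true = np.zeros(M)
w_true[3] = 2.0                          # only one relevant term
t = Phi @ w_true + 0.1 * rng.normal(size=N)

beta = 1.0 / 0.1**2                      # noise precision (assumed known)
alpha = np.ones(M)                       # one ARD precision per weight

for _ in range(50):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ t        # posterior mean of the weights
    gamma = 1.0 - alpha * np.diag(Sigma) # effective degrees of freedom
    alpha = np.clip(gamma / (mu**2 + 1e-12), 1e-6, 1e8)

# large alpha_i prunes basis i; mu retains only the relevant term
```

After a few iterations the posterior mean keeps the single true term and shrinks the rest to zero, which is the sparsification effect that would trim the rational surrogate's basis.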
Further improvements to the method include extending the approach to handle vector-valued output efficiently, since at present the surrogate can only approximate scalar model output. Promising approaches include Proper Generalized Decomposition (PGD) (Chevreuil et al. 2012), among others.
References:
[1] Schneider, F., Papaioannou, I., Ehre, M., & Straub, D. (2020). Polynomial chaos based rational approximation in linear structural dynamics with parameter uncertainties. Computers & Structures, 233, 106223.
[2] Chevreuil, M., & Nouy, A. (2012). Model order reduction based on proper generalized decomposition for the propagation of uncertainties in structural dynamics. International Journal for Numerical Methods in Engineering, 89(2), 241-268.
[3] Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of machine learning research, 1(Jun), 211-244.
With the continuously increasing size of wind turbine blades, the complexity of the blade casting process and the risk of failures have also increased. The vacuum-assisted resin transfer moulding (VARTM) production process at the Siemens Gamesa Renewable Energy facility in Aalborg, Denmark, does not permit visual inspection of the process. Hence, a (possibly virtual) sensor system for process control and monitoring is highly desirable. Therefore, this poster describes a simple methodology to identify a low-dimensional stochastic grey-box spatiotemporal model of the flow-front dynamics inside the VARTM process. A numerical case study demonstrating the effectiveness of the proposed methodology is presented.
In the field of environmental modelling, especially modelling problems in the water resources sector, the acquisition of observation data is usually expensive, and/or the underlying model representations are incredibly complex. The spatially distributed models typically used for water quantity and quality prediction yield significant uncertainties even after being carefully calibrated, and they tend to have a high computational cost with long runtimes. These issues profoundly affect the performance of the models and impact the efficiencies of sensitivity and uncertainty analysis; ultimately, achieving robust decision making is very difficult for end-users who need knowledge of the behaviour of such models and the credibility of their predictions.
Machine learning can provide a means for the practical construction of surrogate models of the original response surface by learning from data, and this dramatically helps with computational efficiency and performance. Currently, several machine learning techniques such as Gaussian processes and polynomial chaos expansions are widely used for generating surrogate models, but a gap still exists in how to efficiently combine surrogate model construction with sensitivity analysis.
Sensitivity analysis relies heavily on the sampling choices and model runtimes. How can different sampling methods be designed to explore the behaviour of surrogate models more efficiently, and how can surrogate models best be constructed to assist the convergence of sensitivity analysis metrics? These are still challenging problems, and machine learning techniques will be explored as a potential solution to them.
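The link between a cheap surrogate and variance-based sensitivity analysis can be sketched with a hedged toy: once a surrogate is fast to evaluate, first-order Sobol indices can be estimated by brute-force sampling with the pick-freeze estimator. Here a hypothetical linear function stands in for a trained surrogate, chosen so the exact indices are known (S1 = 0.9, S2 = 0.1).

```python
import numpy as np

# Pick-freeze Sobol index estimation on a stand-in "surrogate".
rng = np.random.default_rng(2)

def surrogate(x):                # hypothetical trained surrogate model
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

N = 100_000
A = rng.normal(size=(N, 2))      # two independent input sample matrices
B = rng.normal(size=(N, 2))
yA = surrogate(A)
var = yA.var()

S = []
for i in range(2):               # first-order index for each input
    Bi = B.copy()
    Bi[:, i] = A[:, i]           # "freeze" input i at the A values
    S.append(((yA * surrogate(Bi)).mean() - yA.mean() ** 2) / var)
```

The two estimates converge to 9/10 and 1/10; the entire cost sits in surrogate evaluations, which is why the sampling design and surrogate accuracy questions raised above are coupled.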
Keywords: Machine Learning; Environmental Modelling; Hydrological Modelling; Sensitivity Analysis; Surrogate Model; Uncertainty
As in many engineering fields, Computational Fluid Dynamics (CFD) relies on modelling reality in a feasible way to arrive at a desired solution. A good example in fluid dynamics is turbulence, which is modelled mathematically in most simulations; in many cases, however, it is necessary to resolve turbulent eddies to capture crucial effects. If this is coupled with a flow optimization problem, the computation time becomes a limiting factor for companies. An example where machine learning solves a CFD problem of this kind is multiphase flow simulation [1], where a computationally intensive problem is solved with less processing time.
Building on the research of Leitl et al. [2], in which the flow in the turbine centre frame (TCF), the component between the high- and low-pressure turbines of an aircraft engine, was improved, we will investigate optimization methods for the first-stage low-pressure turbine, taking into account the changed flow field due to the application of drag-reducing micro-channel surfaces in the TCF.
[1] Ansari, A., Boosari, S. S. H., & Mohaghegh, S. D. (2020). Successful Implementation of Artificial Intelligence and Machine Learning in Multiphase Flow Smart Proxy Modeling: Two Case Studies of Gas-Liquid and Gas-Solid CFD Models. J Pet Environ Biotechnol, 11, 401.
[2] Leitl, P. A., Göttlich, E., Flanschger, A., Peters, A., Feichtinger, C., Marn, A., & Reschenhorfer, B. (2020). Numerical investigation of optimal riblet size for turbine center frame strut flow and the impact on the performance. AIAA Scitech 2020 Forum, 307.
Introduction to Kernel Methods for Data Driven Modeling
- Kernel-based EDMD and Generator EDMD
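As a pointer to the EDMD topic above, the plain (non-kernel) variant fits in a few lines — an illustrative toy, not the lecture material: for the linear map x_{k+1} = 0.5 x_k and the dictionary {x, x^2}, the estimated Koopman matrix has eigenvalues 0.5 and 0.25 (= 0.5^2).

```python
import numpy as np

# Minimal EDMD: Koopman matrix as a least-squares fit over a dictionary.
rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 200)          # snapshot pairs (x_k, x_{k+1})
y = 0.5 * x

def dictionary(z):                        # observables psi(z) = (z, z^2)
    return np.column_stack([z, z**2])

Px, Py = dictionary(x), dictionary(y)
K, *_ = np.linalg.lstsq(Px, Py, rcond=None)  # K = Px^+ Py
eig = np.sort(np.linalg.eigvals(K).real)     # Koopman eigenvalues
```

The kernel-based variant covered in the lecture replaces the explicit dictionary with kernel evaluations, so the same regression scales to rich observable spaces.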
Introduction to data-driven learning of physics models
- The SINDy method
- PDE-FIND
- ROMs with SINDy
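The SINDy topics above can be previewed with a hedged toy (sequential thresholded least squares; derivatives are taken analytically here to keep the example exact, whereas in practice they are estimated numerically): recover the dynamics xdot = -2 x from trajectory data and a small polynomial library.

```python
import numpy as np

# Toy SINDy: sparse regression of derivatives onto a candidate library.
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)                     # trajectory of xdot = -2 x
xdot = -2.0 * x                          # analytic derivative (assumed)

# candidate library Theta(x) = [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

xi, *_ = np.linalg.lstsq(Theta, xdot, rcond=None)
for _ in range(10):                      # sequential thresholding
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    big = ~small
    if big.any():                        # refit on the surviving terms
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], xdot, rcond=None)

# xi should be close to [0, -2, 0, 0]: only the linear term survives
```

PDE-FIND and SINDy-based ROMs reuse the same thresholded-regression loop with spatial-derivative or reduced-coordinate libraries.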
Coarse Graining for SDEs
- Effective Dynamics
- Reduced Generator
- Parameter Estimation
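The parameter-estimation item above admits a compact illustrative toy (values are assumptions for the sketch): simulate an Ornstein-Uhlenbeck process dX = -theta X dt + sigma dW with Euler-Maruyama, then recover the drift parameter by least-squares regression of the increments on the state.

```python
import numpy as np

# Drift estimation for a simulated SDE (illustrative toy).
rng = np.random.default_rng(5)
theta, sigma, dt, n = 1.5, 0.5, 1e-3, 200_000

x = np.empty(n)
x[0] = 1.0
noise = sigma * np.sqrt(dt) * rng.normal(size=n - 1)
for k in range(n - 1):                   # Euler-Maruyama time stepping
    x[k + 1] = x[k] - theta * x[k] * dt + noise[k]

# least-squares drift estimate from increments: dX ~ -theta X dt
dx = np.diff(x)
theta_hat = -np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))
```

The estimate converges to the true drift as the observation horizon grows; the same regression viewpoint underlies estimating a reduced generator from projected data.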
Multifidelity methods: using data-fit models together with traditional models for, e.g., uncertainty quantification
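The multifidelity idea above can be sketched in its simplest control-variate form (a hedged toy; the models and numbers are assumptions): estimate the mean of an "expensive" model f using a cheap, correlated low-fidelity model g whose mean is known.

```python
import numpy as np

# Control-variate estimate of E[f(X)] using a low-fidelity model.
rng = np.random.default_rng(6)
x = rng.normal(size=10_000)

f = np.exp(x)                    # high-fidelity model, E[f] = exp(0.5)
g = 1.0 + x                      # low-fidelity stand-in, known E[g] = 1

C = np.cov(f, g)
alpha = C[0, 1] / C[1, 1]                   # optimal CV coefficient
est = f.mean() - alpha * (g.mean() - 1.0)   # variance-reduced estimate
```

The correction term cancels part of the sampling noise in proportion to the squared correlation between the two fidelities, which is the mechanism multifidelity UQ methods exploit at scale.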
Learning coordinates and models
- SINDy autoencoders
- Koopman autoencoders