Digital thread-based Design of turbo Engines with embedded AI and high precision Simulation (DARWIN)

Principal Investigators:
Dr. Andreas Knüpfer
Project Manager:
Sebastian Strönisch
HPC Platform used:
CPU and GPU Clusters
Date published:
Introduction:
In the joint BMWi LuFo VI project DARWIN, the Center for Information Services and High Performance Computing (ZIH) and the Chair of Turbomachinery and Aero Engines (TFA) at TU Dresden are working in cooperation with Rolls-Royce Deutschland on the further development, application, and validation of innovative digital simulation and design methods to improve the interdisciplinary understanding of engine systems. The work includes improving the load balancing of highly parallel coupled simulation codes, measuring surface roughness and wear effects and feeding them back into simulation models, and applying machine learning (ML) methods to predict flow fields.
Body:

The joint project involves the further development, application and validation of innovative digital simulation and design methods in the following three main topics: 
1. High-fidelity simulation of the multidisciplinary behavior of aircraft engine systems using hybrid and cross-scale methods, as well as improving the performance and scalability of highly parallel simulation runs, in particular the load balancing of coupled simulations.
2. Creation of digital twins of aircraft engine components and integration of data from the development, validation and operational phases of the product lifecycle into the "digital thread" concept.
3. Application and further development of artificial intelligence and machine learning methods to accelerate design processes and improve the quality and accuracy of simulation results.
In the following, parts of the work done in the third main topic will be briefly explained.
Machine learning (ML) and artificial intelligence (AI) are often used as synonymous terms. They have been an interesting field of research within computer science for many years, but have only found widespread practical use in the last decade or so. The main reason for this is a massive increase in available computing power for the rather specialized computational operations that dominate the training of these methods. This applies in particular to so-called "deep learning", which is one of the most widespread and successful ML methods.
The necessary computing power is provided by GPU accelerator cards and so-called TPUs (tensor processing units), which are designed specifically for this purpose. Common general-purpose CPUs remain orders of magnitude slower for this kind of computation.
The essential feature of all ML methods is that solutions are not designed and programmed as algorithms but are learned automatically from training data. This is particularly successful for image analysis, object recognition, and, more generally, the classification of input vectors into classes or categories.
Most ML methods have in common that they require a very large computational effort for training, but relatively little effort for retrieving the learned knowledge, the so-called "inference". This is one of the properties that makes surrogate models based on deep neural networks (DNNs) particularly interesting. The basic idea is to generate the necessary training data for a DNN from measurements or complex numerical simulations. Generating this data and the subsequent training process require high computing power and long computing times. The resulting DNN model, however, can be executed with much less computational effort on significantly weaker and cheaper hardware that requires less electrical power.
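As an illustration, the following is a minimal sketch of such a DNN surrogate in PyTorch. The network SurrogateNet, the input and output dimensions, and the random placeholder data are assumptions made for demonstration only and do not reflect the model used in the project; the sketch merely shows the asymmetry between expensive training and cheap inference.

import torch
import torch.nn as nn

class SurrogateNet(nn.Module):
    """Small fully connected network mapping design parameters to flow quantities."""
    def __init__(self, n_inputs: int, n_outputs: int, width: int = 256):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_inputs, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, n_outputs),
        )

    def forward(self, x):
        return self.layers(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
net = SurrogateNet(n_inputs=16, n_outputs=64).to(device)

# Expensive part: training on data obtained from measurements or simulations
# (random placeholders are used here instead of real simulation results).
x_train = torch.randn(10000, 16, device=device)   # e.g. geometry/operating parameters
y_train = torch.randn(10000, 64, device=device)   # e.g. sampled flow quantities
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(net(x_train), y_train)
    loss.backward()
    optimizer.step()

# Cheap part: inference, which can later run on much weaker hardware.
with torch.no_grad():
    prediction = net(torch.randn(1, 16, device=device))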
In this way, very fast response times, sometimes even real-time responses, can be achieved, although the original numerical simulation on high-performance computers would not allow this. Nevertheless, the suitability and quality of such a surrogate model must be evaluated carefully for each application.
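A simple way to express such an evaluation is to compare surrogate predictions against held-out reference simulations, as in the following sketch. The metrics, the tolerance, and the placeholder model are illustrative assumptions; what counts as acceptable accuracy must be defined separately for each application.

import torch
import torch.nn as nn

def evaluate_surrogate(model: nn.Module, x_test, y_reference, rel_tol: float = 0.05):
    """Return global error metrics of a surrogate on unseen test cases."""
    with torch.no_grad():
        y_pred = model(x_test)
    abs_err = (y_pred - y_reference).abs()
    rel_l2 = torch.linalg.vector_norm(y_pred - y_reference) / torch.linalg.vector_norm(y_reference)
    return {
        "max_abs_error": abs_err.max().item(),
        "rel_l2_error": rel_l2.item(),
        "within_tolerance": rel_l2.item() < rel_tol,
    }

# Usage with a placeholder model and random stand-in data.
dummy_model = nn.Linear(16, 64)
metrics = evaluate_surrogate(dummy_model, torch.randn(200, 16), torch.randn(200, 64))
print(metrics)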
This work on improving the numerical flow solver with ML and AI methods aims to shorten the solver's runtime by providing better initial solutions. For this purpose, a dataset of steady-state flow solutions is created, on the basis of which a suitable model architecture is then trained. Training on academic examples takes several days on a single GPU, whereas inference takes only a few milliseconds. To represent complex geometries without compromising accuracy, graph-based algorithms are used so that the numerical mesh can be fed directly to the algorithm; a simplified sketch of such a model is given below. To apply these methods to numerical meshes of industrial scale (more than one million cells), a multi-GPU approach with a sophisticated communication strategy is required.
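The sketch below shows, in plain PyTorch, how a mesh can be consumed directly as a graph (nodes = mesh points, edges = mesh connectivity) by a stack of message-passing layers that predict an initial flow field per node. The layer design, feature sizes, and the toy mesh are illustrative assumptions and not the architecture used in the project.

import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of neighbor aggregation over the edges of the mesh graph."""
    def __init__(self, n_features: int):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * n_features, n_features), nn.ReLU())

    def forward(self, x, edge_index):
        src, dst = edge_index                       # mesh connectivity as index pairs
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])              # sum neighbor features per node
        return self.update(torch.cat([x, agg], dim=-1))

class MeshFlowPredictor(nn.Module):
    """Stack of message-passing layers predicting an initial flow field per node."""
    def __init__(self, n_in: int, n_out: int, hidden: int = 128, layers: int = 4):
        super().__init__()
        self.encoder = nn.Linear(n_in, hidden)
        self.mp = nn.ModuleList([MessagePassingLayer(hidden) for _ in range(layers)])
        self.decoder = nn.Linear(hidden, n_out)

    def forward(self, x, edge_index):
        h = self.encoder(x)
        for layer in self.mp:
            h = layer(h, edge_index)
        return self.decoder(h)

# Toy mesh: node features (e.g. coordinates and boundary flags) plus connectivity.
x = torch.randn(5000, 5)
edge_index = torch.randint(0, 5000, (2, 20000))
model = MeshFlowPredictor(n_in=5, n_out=4)          # e.g. predict p, u, v, w per node
initial_guess = model(x, edge_index)

# For industry-scale meshes, the graph would additionally be partitioned across
# several GPUs, e.g. with torch.nn.parallel.DistributedDataParallel plus an
# exchange of node features on partition boundaries (not shown here).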

Institute / Institutes:
ZIH, TU Dresden, TFA, TU Dresden, Rolls-Royce Deutschland Ltd & Co KG, DLR, University of Surrey/UK, Brandenburg University of Technology Cottbus-Senftenberg, Intelligent Light, Rutherford, NJ/USA, and more
Affiliation:
TU Dresden, BTU Cottbus-Senftenberg, University of Surrey
Image:
Digital thread-based Design of turbo Engines with embedded AI and high precision Simulation (DARWIN)