-
targetDART: Dynamisch adaptive und reaktive Verteilung von Rechenaufgaben auf heterogenen Exascale-Architekturen
Project type | BMBF Programm: „Neue Methoden und Technologien für das Exascale-Höchstleistungsrechnen“ (SCALEXA) |
Funded by | BMBF |
Begin | October 2022 |
End | September 2025 |
Leader | Univ.-Prof. Dr. Michael Bader, TUM |
Staff | David Schneller, Mario Wille |
Contact person | Univ.-Prof. Dr. Michael Bader |
Co-operation partner | Jose Gracia and Christian Siebert (HLRS Stuttgart); Christian Terboven and Adrian Schmitz (RWTH Aachen University) |
Brief description
targetDART develops reactive and adaptive dynamic load-balancing mechanisms to mitigate variations in computational load and performance on heterogeneous exascale systems. It builds on the preceding Chameleon project, which realised dynamic task offloading for MPI+OpenMP applications on CPU-only systems, and extends this load-balancing approach towards dynamic scheduling to (and between) GPUs as well. The lead applications in targetDART are based on the simulation software packages ExaHyPE and SeisSol, which feature complex algorithms with dynamic load on (dynamically) adaptive meshes.
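The reactive scheduling idea can be illustrated with a toy sketch (hypothetical data structures and names, not the actual Chameleon or targetDART API): tasks are migrated from the most loaded to the least loaded executor until the load imbalance falls below a threshold.

```python
# Illustrative sketch of reactive task-based load balancing between
# executors (CPU ranks or GPUs). All names are hypothetical.

def imbalance(loads):
    """Relative load imbalance: max load vs. mean load."""
    mean = sum(loads.values()) / len(loads)
    return max(loads.values()) / mean - 1.0

def offload(queues, costs, threshold=0.05):
    """Greedily migrate tasks from the most to the least loaded
    executor until the imbalance drops below the threshold."""
    loads = {r: sum(costs[t] for t in q) for r, q in queues.items()}
    while imbalance(loads) > threshold:
        src = max(loads, key=loads.get)
        dst = min(loads, key=loads.get)
        task = min(queues[src], key=costs.get)  # cheapest task first
        if loads[src] - costs[task] < loads[dst] + costs[task]:
            break  # migrating this task would overshoot; stop
        queues[src].remove(task)
        queues[dst].append(task)
        loads[src] -= costs[task]
        loads[dst] += costs[task]
    return queues

queues = {"cpu0": ["t1", "t2", "t3", "t4"], "gpu0": ["t5"]}
costs = {"t1": 4.0, "t2": 4.0, "t3": 1.0, "t4": 1.0, "t5": 2.0}
balanced = offload(queues, costs)
```

A real runtime makes this decision reactively, based on measured execution times rather than known task costs, and must also account for data-transfer overheads when offloading to a GPU.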
For more details, see the project website.
-
TaLPas: Task-basierte Lastverteilung und Auto-Tuning in der Partikelsimulation
Project type | BMBF Programm: Grundlagenorientierte Forschung für HPC-Software im Hoch- und Höchstleistungsrechnen |
Funded by | BMBF |
Begin | January 2017 |
End | June 2020 |
Leader | Univ.-Prof. Dr. Hans-Joachim Bungartz, TUM, Philipp Neumann, Universität Hamburg |
Staff | Univ.-Prof. Dr. Hans-Joachim Bungartz, Nikola Tchipev, M.Sc., Steffen Seckler, M.Sc. (hons) |
Contact person | Nikola Tchipev, M.Sc. |
Co-operation partner | Philipp Neumann, Universität Hamburg, Colin W. Glass, HLRS/Universität Stuttgart, Guido Reina, VISUS/Universität Stuttgart, Felix Wolf, TU Darmstadt, Martin Horsch, TU Kaiserslautern, Jadran Vrabec, Universität Paderborn |
Brief description
The main goal of TaLPas is to provide a solution for the fast and robust simulation of many, potentially dependent particle systems in a distributed environment. This is required in many applications, including, but not limited to,
- sampling in molecular dynamics: so-called “rare events”, e.g. droplet formation, require a multitude of molecular dynamics simulations to investigate the actual conditions of phase transition,
- uncertainty quantification: various simulations are performed using different parametrisations to investigate the sensitivity of the solution to the parameters,
- parameter identification: given, e.g., a set of experimental data and a molecular model, an optimal set of model parameters needs to be found to fit the model to the experiment.
For this purpose, TaLPas targets
- the development of innovative auto-tuning based particle simulation software in the form of an open-source library to leverage optimal node-level performance. This will guarantee an optimal time-to-solution for small- to mid-sized particle simulations,
- the development of a scalable task scheduler to yield an optimal distribution of potentially dependent simulation tasks on the available HPC compute resources,
- the combination of both auto-tuning based particle simulation and scalable task scheduling, augmented by an approach to resilience. This will guarantee robust, that is, fault-tolerant, sampling evaluations on peta- and future exascale platforms.
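The auto-tuning goal can be illustrated by a minimal sketch (hypothetical names, not the actual TaLPas software): candidate implementations of the same particle kernel are timed on a few sample runs, and the fastest one is selected for the remaining iterations.

```python
# Toy auto-tuning loop: select the fastest of several functionally
# equivalent kernel implementations at runtime. Names are illustrative.
import time

def auto_tune(candidates, workload, samples=3):
    """Time each candidate on a few sample runs; return the fastest."""
    timings = {}
    for name, impl in candidates.items():
        start = time.perf_counter()
        for _ in range(samples):
            impl(workload)
        timings[name] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return candidates[best], best

# two toy "kernels" computing the same pairwise interaction sum
def naive(xs):
    return sum(abs(a - b) for a in xs for b in xs)

def symmetric(xs):  # exploits symmetry: visit each pair only once
    return 2 * sum(abs(xs[i] - xs[j])
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

impl, name = auto_tune({"naive": naive, "symmetric": symmetric},
                       workload=list(range(200)))
```

In a production library, the tuned parameters would include data layouts, traversal patterns and vectorization variants, and tuning would be repeated as the particle configuration evolves.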
For more details, see the project website.
-
ATHLET-preCICE - Erweiterung von ATHLET durch die allgemeine Kopplungsschnittstelle preCICE für die Simulation von Multiphysikproblemen in der Reaktorsicherheit
Project type | PT-GRS Reaktorsicherheitsforschung im Förderbereich Transienten und Unfallabläufe |
Funded by | BMWi |
Begin | 2019 |
End | 2022 |
Leader | Dr. rer. nat. Benjamin Uekermann , Univ.-Prof. Dr. Hans-Joachim Bungartz |
Staff | Gerasimos Chourdakis, M.Sc. |
Contact person | Dr. rer. nat. Benjamin Uekermann |
Co-operation partner | Dr.-Ing. Fabian Weyermann, Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH |
Brief description
With the passive safety systems used in Generation 3+ reactors, the cooling circuit and the containment can no longer be considered separately. In building condensers, for example, physical effects of both systems are strongly coupled: thermo-hydraulics in the pipes, heat conduction in complicated three-dimensional structures (cooling fins), and a convective gas or steam flow on the outside of the condenser. Simulating the overall system is therefore a multi-physics problem, which requires coupling several simulation codes. A general, code-independent coupling can be realised very efficiently with the open-source coupling library preCICE. In this project, we want to develop a preCICE interface for AC2, to be implemented first for the module ATHLET. Since a large number of simulation programs, such as ANSYS Fluent, COMSOL, OpenFOAM, CalculiX, or Code_Aster, already provide a preCICE interface, all of these programs would immediately become available for coupled analyses with ATHLET. A further advantage of this interface is that not only two, but also three or more codes can be coupled simultaneously; only this makes the detailed simulation of the building-condenser example possible. Since similar multi-physics problems also arise for the modular reactors that many countries regard as the future of nuclear technology, the planned implementation of a preCICE interface in ATHLET is a necessary step for the future viability of ATHLET.
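The partitioned coupling pattern that preCICE enables can be sketched with a mock interface (all names are hypothetical and the real preCICE API differs; see the preCICE documentation): each participant advances its own solver and exchanges interface data with its peer until the coupling terminates.

```python
class MockCouplingInterface:
    """Stand-in for a preCICE-style coupling interface (hypothetical
    names). It mediates the interface temperature exchanged between
    two participants; here the peer simply echoes the value back."""
    def __init__(self, steps):
        self.steps = steps
    def is_coupling_ongoing(self):
        return self.steps > 0
    def advance(self, written_value):
        # a real coupling library would communicate with the peer
        # participant (and possibly apply acceleration) here
        self.steps -= 1
        return written_value

def solve_thermohydraulics(t_wall):
    """Toy fluid-solver step: the fluid temperature relaxes halfway
    towards the wall temperature, with a 300 K ambient level."""
    return 0.5 * (300.0 + t_wall)

interface = MockCouplingInterface(steps=20)
t_interface = 400.0
while interface.is_coupling_ongoing():
    t_fluid = solve_thermohydraulics(t_interface)
    t_interface = interface.advance(t_fluid)
# the exchanged value converges towards the fixed point of 300 K
```

The value of a library like preCICE is that this loop looks the same for every participating solver, so an adapter written once for ATHLET immediately enables coupling with any other preCICE-ready code.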
-
ExaHyPE-MVP – Mixed and Variable Precision for an Exascale Hyperbolic PDE Engine
Funded by | DFG |
Begin | 2021 |
End | 2024 |
Leader | Univ.-Prof. Dr. Michael Bader |
Staff | Marc Marot-Lassauzaie |
Contact person | Univ.-Prof. Dr. Michael Bader |
Brief description
The goal of ExaHyPE-MVP is to systematically explore and exploit the use of mixed and variable floating-point precision in the ExaHyPE engine for solving hyperbolic systems of partial differential equations. ExaHyPE is based on a high-order ADER-DG (discontinuous Galerkin with Arbitrary high-order DERivative time stepping) discretisation, whose implementation comprises a multitude of kernels, such as space-time predictors for the element-local solutions, Riemann solvers to deal with discontinuities between elements, or Finite-Volume-based subcell limiting.
We will extend ExaHyPE's code generation utilities to allow engine users and developers to specify the precision used for each kernel. In addition, we will extend ExaHyPE to support variable target precision, for example in regions with higher or lower accuracy demands. For both mixed and variable precision, we will explore criteria for adaptively selecting the precision, thus introducing "epsilon adaptivity" (i.e., adaptive variable precision) as an HPC-motivated counterpart to the concepts of h- or p-adaptivity.
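The idea of per-kernel precision can be sketched as follows (a toy illustration with hypothetical configuration names, not ExaHyPE's actual code generation): each kernel is assigned a working precision, and low-precision kernels round every intermediate result to binary32.

```python
import struct

def to_single(x):
    """Round a Python float (binary64) to binary32 precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

KERNEL_PRECISION = {          # hypothetical per-kernel configuration:
    "predictor": "single",    # cheap volume work in single precision,
    "riemann":   "double",    # accuracy-critical kernels in double
    "limiter":   "double",
}

def run_kernel(name, values):
    """Sum a batch of values in the precision chosen for the kernel."""
    if KERNEL_PRECISION[name] == "single":
        acc = 0.0
        for v in values:
            acc = to_single(acc + to_single(v))
        return acc
    return sum(values)

data = [1e-8] * 100000 + [1.0]
single_sum = run_kernel("predictor", data)
double_sum = run_kernel("riemann", data)
```

In a code generator this choice would select different data types and vector instructions at compile time; the adaptivity criteria investigated in the project would then switch such a per-kernel (or per-region) setting at runtime.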
Research Software Sustainability
-
preDOM – Domestication of the Coupling Library preCICE
Funded by | DFG |
Begin | 2018 |
End | 2021 |
Leader | Dr. rer. nat. Benjamin Uekermann, Univ.-Prof. Dr. Hans-Joachim Bungartz |
Staff | |
Contact person | Dr. rer. nat. Benjamin Uekermann |
Brief description
The purpose of the proposed project is to domesticate preCICE, that is, to make preCICE usable without support by the developer team. To achieve this goal, the usability and documentation of preCICE have to be improved significantly. Marketing and sustainability strategies are required to build up awareness of, and trust in, the software in the community. In addition, best practices on how to make a scientific software prototype usable for a wide academic audience can be derived and shall be applied to similar software projects.
Reference: preCICE Webpage, preCICE Source Code
-
SeisSol-CoCoReCS – SeisSol as a Community Code for Reproducible Computational Seismology
Funded by | DFG |
Begin | 2018 |
End | 2021 |
Leader | Univ.-Prof. Dr. Michael Bader, Dr. Anton Frank, (LRZ), Dr. Alice-Agnes Gabriel (LMU) |
Staff | Ravil Dorozhinskii, M.Sc., Lukas Krenz, M.Sc., Carsten Uphoff |
Contact person | Univ.-Prof. Dr. Michael Bader |
Brief description
The project is funded as part of DFG's initiative to support sustainable research software. In the CoCoReCS project, we will improve several issues that impede a wider adoption of the earthquake simulation software SeisSol. This includes improvements to the workflows for CAD and meshing, establishing better training and introductory material and the setup of an infrastructure to reproduce test cases, benchmarks and user-provided simulation scenarios.
-
Priority Program 1648 SPPEXA - Software for Exascale Computing
Coordination Project
Funded by | DFG |
Begin | 2012 |
End | 2020 |
Leader | Univ.-Prof. Dr. Hans-Joachim Bungartz |
Staff | Severin Reiz |
Contact person | Univ.-Prof. Dr. Hans-Joachim Bungartz |
Brief description
The Priority Programme (SPP) SPPEXA differs from other SPPs with respect to its genesis, its volume, its funding via DFG's Strategy Fund, the range of disciplines involved, and its clear strategic orientation towards a set of time-critical objectives. Therefore, despite its distributed structure, SPPEXA largely resembles a Collaborative Research Centre. Its successful implementation and evolution will require both additional and more intensive structural measures. The Coordination Project comprises all intended SPPEXA-wide activities, including steering and coordination, internal and international collaboration and networking, and educational activities.
Reference: Priority Program 1648 SPPEXA - Software for Exascale Computing
-
ExaFSA - Exascale Simulation of Fluid-Structure-Acoustics Interaction
Funded by | DFG |
Begin | 2012 |
End | 2019 |
Leader | Prof. Dr. Miriam Mehl |
Staff | Dr. rer. nat. Benjamin Uekermann, Benjamin Rüth |
Contact person | Prof. Dr. Miriam Mehl |
Brief description
In scientific computing, the need for ever more detailed insights and optimization leads to improved models that often combine several physical effects described by different types of equations. The complexity of the corresponding solver algorithms and implementations typically leads to coupled simulations that reuse existing software codes for the different physical phenomena (multiphysics simulations) or for different parts of the simulation pipeline, such as grid handling, matrix assembly, system solvers, and visualization. Accuracy requirements can only be met with high spatial and temporal resolution, making exascale computing a necessary technology to address runtime constraints for realistic scenarios. However, running a multicomponent simulation efficiently on massively parallel architectures is far more challenging than parallelizing a single simulation code. Open questions range from suitable load-balancing strategies and bottleneck-avoiding communication to interactive visualization for online analysis of results, the synchronization of the components, and parallel numerical coupling schemes. We intend to tackle these challenges for fluid-structure-acoustics interactions, which are extremely costly to simulate due to the large range of scales involved. Specifically, this requires innovative surface and volume coupling numerics between the different solvers, as well as sophisticated dynamic load balancing and in-situ coupling and visualization methods.
Reference: ExaFSA Webpage, preCICE Webpage, preCICE Source Code
-
EXAHD - An Exa-Scalable Two-Level Sparse Grid Approach for Higher-Dimensional Problems in Plasma Physics and Beyond
Funded by | DFG |
Begin | 2012 |
End | 2020 |
Leader | Univ.-Prof. Dr. Hans-Joachim Bungartz |
Staff | Michael Obersteiner |
Contact person | Univ.-Prof. Dr. Hans-Joachim Bungartz |
Brief description
Higher-dimensional problems (i.e., beyond four dimensions) appear in medicine, finance, and plasma physics, posing a challenge for tomorrow's HPC. As an example application, we consider turbulence simulations for plasma fusion with one of the leading codes, GENE, which promises to advance science on the way to carbon-free energy production. Higher-dimensional applications involve such a huge number of degrees of freedom that exascale computing becomes necessary, yet mere domain decomposition approaches to their parallelization are infeasible, since communication explodes with increasing dimensionality. Thus, to ensure high scalability beyond domain decomposition, a second major level of parallelism has to be provided. To this end, we propose to employ the sparse grid combination scheme, a model reduction approach for higher-dimensional problems. It computes the desired solution via a combination of smaller, anisotropic and independent simulations, and thus provides this extra level of parallelization. In its randomized, asynchronous and iterative version, it will break the communication bottleneck in exascale computing, achieving full scalability. Our two-level methodology enables novel approaches to scalability (ultra-scalable due to numerically decoupled subtasks), resilience (fault and outlier detection, and even compensation without the need of recomputing), and load balancing (high-level compensation for insufficiencies on the application level).
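The combination scheme can be sketched in 2D (a minimal illustration, not the EXAHD code): under one common convention, the solutions on anisotropic full grids with levels l1 + l2 = n are added and those with l1 + l2 = n - 1 subtracted; each component grid is small and can be computed independently of the others.

```python
# Toy 2D sparse grid combination technique: combination coefficients
# and a point-count comparison against the full grid of the same level.

def combination_coefficients(n):
    """2D combination of level n: grids with l1 + l2 = n enter with
    coefficient +1, grids with l1 + l2 = n - 1 with coefficient -1."""
    coeffs = {}
    for l1 in range(1, n):
        coeffs[(l1, n - l1)] = +1
    for l1 in range(1, n - 1):
        coeffs[(l1, n - 1 - l1)] = -1
    return coeffs

def grid_points(level):
    """Interior points of a 1D grid of the given level: 2^level - 1."""
    return 2 ** level - 1

def combined_points(n):
    """Total points over all component grids (a proxy for the work,
    which is spread over independent, small simulations)."""
    return sum(grid_points(l1) * grid_points(l2)
               for l1, l2 in combination_coefficients(n))

n = 10
full = grid_points(n) ** 2  # points of the isotropic full grid
```

For n = 10, the component grids together hold roughly ten thousand points, versus about a million for the full grid, and each component simulation runs independently, which is exactly the second level of parallelism the project exploits.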
Reference: Priority Program 1648 SPPEXA - Software for Exascale Computing
-
SFB-TRR 89: Invasive Computing
Funded by | DFG |
Begin | Mid 2010 |
End | 3rd phase in mid 2022 |
Leader | Univ.-Prof. Dr. Hans-Joachim Bungartz (D3), Univ.-Prof. Dr. Michael Bader (A4) |
Staff | Santiago Narvaez, M.Sc., Emily Mo-Hellenbrand, M.Sc., Alexander Pöppl, M.Sc., Dr. rer. nat. Tobias Neckel, Dr. rer. nat. Philipp Neumann; former staff: Dr. rer. nat. Martin Schreiber |
Contact person | Univ.-Prof. Dr. Hans-Joachim Bungartz (D3), Univ.-Prof. Dr. Michael Bader (A4) |
Brief description
In the CRC/Transregio "Invasive Computing", we investigate a novel paradigm for designing and programming future parallel computing systems, called invasive computing. Its main idea and novelty is to introduce resource-aware programming support, in the sense that a given program gets the ability to explore and dynamically spread its computations to neighbouring processors in a phase of invasion, and then to execute code portions with a high degree of parallelism in parallel, based on the available (invasible) region of a given multi-processor architecture. Afterwards, once the program terminates or if the degree of parallelism should drop again, the program may enter a retreat phase, deallocate resources and resume execution, for example, sequentially on a single processor. To support this idea of self-adaptive and resource-aware programming, not only are new programming concepts, languages, compilers and operating systems necessary, but also revolutionary architectural changes in the design of MPSoCs (Multi-Processor Systems-on-a-Chip), so as to efficiently support invasion, infection and retreat operations with concepts for dynamic processor, interconnect and memory reconfiguration.
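The three phases can be sketched with a toy claim object (a hypothetical API; the project's actual languages and runtimes differ):

```python
class Claim:
    """Toy resource claim illustrating the invade / infect / retreat
    phases of invasive computing (hypothetical API)."""
    def __init__(self, pool):
        self.pool = pool          # shared set of free processor ids
        self.resources = set()

    def invade(self, requested):
        """Invasion: claim up to `requested` free processors; the
        runtime may grant fewer than asked for."""
        while self.pool and len(self.resources) < requested:
            self.resources.add(self.pool.pop())
        return len(self.resources)

    def infect(self, work):
        """Infection: place work items on the claimed processors
        (round robin)."""
        procs = sorted(self.resources)
        return {item: procs[i % len(procs)] for i, item in enumerate(work)}

    def retreat(self):
        """Retreat: release all claimed resources back to the pool."""
        self.pool.update(self.resources)
        self.resources.clear()

pool = {0, 1, 2, 3}
claim = Claim(pool)
granted = claim.invade(2)              # may be less than requested
placement = claim.infect(["a", "b", "c"])
claim.retreat()                        # all processors free again
```

The essential point mirrored here is that the application must cope with whatever resources the invasion actually grants, which is what makes the paradigm resource-aware.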
Reference: Transregional Collaborative Research Centre 89 - Invasive Computing
-
A4: Design-Time Characterisation and Analysis of Invasive Algorithmic Patterns
Phases 2 and 3 (2014-2022): see the description of project A4 on the Invasic website
-
D3: Invasion for High Performance Computing
Phases 1, 2 and 3 (2010-2022): see the description of project D3 on the Invasic website
-
Bavarian Graduate School of Computational Engineering (BGCE)
Project type | Elite Study Program |
Funded by | Elite Network of Bavaria, TUM, FAU |
Begin | April 2005 |
End | April 2025 |
Leader | Univ.-Prof. Dr. Hans-Joachim Bungartz |
Staff | Dr. rer. nat. Tobias Neckel, Michael Rippl, M.Sc. (hons), Benjamin Rüth, M.Sc. (hons) |
Contact person | Dr. rer. nat. Tobias Neckel |
Co-operation partner | International Master's Program Computational Science and Engineering (TUM); International Master's Program Computational Mechanics (TUM) |
Brief description
The Bavarian Graduate School of Computational Engineering is an association of three Master's programs: Computational Engineering (CE) at the University of Erlangen-Nürnberg, as well as Computational Mechanics (COME) and Computational Science and Engineering (CSE), both at TUM. Funded by the Elitenetzwerk Bayern, the Bavarian Graduate School offers an Honours program for gifted and highly motivated students. The Honours program extends the regular Master's programs by several academic offerings:
- additional courses in the area of computational engineering, in particular block courses and summer academies,
- courses and seminars on "soft skills", such as communication skills, management, and leadership,
- an additional semester project closely connected to current research.
Students who complete the regular program with an above-average grade, and successfully finish the Honours program as well, earn the academic degree "Master of Science with Honours".
-
Centre of Excellence for Exascale Supercomputing in the area of the Solid Earth (ChEESE)
Project type | EU Horizon 2020, INFRAEDI-02-2018 call Centres of Excellence on HPC |
Funded by | European Union’s Horizon 2020 research and innovation programme |
Begin | November 2018 |
End | October 2021 |
Leader | Barcelona Supercomputing Centre |
Staff | Ravil Dorozhinskii, M.Sc., Lukas Krenz, M.Sc., Leonhard Rannabauer, M.Sc., Jean-Matthieu Gallard, M.Sc. |
Contact person | Univ.-Prof. Dr. Michael Bader |
Co-operation partner | 14 participating institutes, see the ChEESE website for details. |
Brief description
The ChEESE Center of Excellence will prepare flagship codes and enable services for exascale supercomputing in the area of Solid Earth (SE). ChEESE will bring together European institutions in charge of operational monitoring networks, tier-0 supercomputing centers, academia, hardware developers, and third parties from SMEs, industry and public governance. The scientific ambition is to prepare 10 flagship codes to address Exascale Computing Challenge (ECC) problems in computational seismology, magnetohydrodynamics, physical volcanology, tsunamis, and data analysis and predictive techniques for earthquake and volcano monitoring.
SCCS contributes SeisSol and ExaHyPE as flagship codes in ChEESE. See the ChEESE website for further information!
-
ENERXICO - Supercomputing and Energy for Mexico
Project type | EU Horizon 2020, call FETHPC-01-2018 International Cooperation on HPC |
Funded by | European Union’s Horizon 2020 research and innovation programme |
Begin | June 2019 |
End | June 2021 |
Leader | Barcelona Supercomputing Centre |
Staff | Dr. Anne Reinarz, Sebastian Wolf, M.Sc. |
Contact person | Univ.-Prof. Dr. Michael Bader |
Co-operation partner | 16 participating institutes, see the ENERXICO website for details. |
Brief description
ENERXICO is a collaborative research and innovation action that fosters collaboration between Europe and Mexico in supercomputing. ENERXICO will develop performance simulation tools that require exascale HPC and data-intensive algorithms for different energy sources: wind energy production, efficient combustion systems for biomass-derived fuels (biogas), and exploration geophysics for hydrocarbon reservoirs.
SCCS is mainly concerned with large-scale seismic simulations based on SeisSol and ExaHyPE. See the ENERXICO website for further information!
-
Helmholtz Gemeinschaft: MUnich School of Data Science (MUDS): Integrated Data Analysis 2.0
Project type | Research Project |
Funded by | Helmholtz Gemeinschaft |
Begin | September 2019 |
End | August 2023 |
Leader | Univ.-Prof. Dr. Hans-Joachim Bungartz, Prof. Frank Jenko (MPP) |
Staff | Dr. rer. nat. Tobias Neckel, Ravi Kislaya, M.Sc. |
Contact person | Dr. rer. nat. Tobias Neckel |
Co-operation partner | Michael Bergmann (MPP) |
Brief description
In this MUDS project, the existing approaches to Bayesian inversion in the context of fusion plasma simulations (the so-called Integrated Data Analysis) will be generalized and extended to incorporate a) stochastic information for the forward propagation of uncertainties and b) simulation results of plasma microturbulence fed back into the inversion process. In particular, the code GENE will be used.
-
Optimisation of SeisSol for Large Scale Simulations of Induced Earthquakes
Project type | KONWIHR |
Funded by | KONWIHR |
Begin | October 2021 |
End | September 2022 |
Leader | Univ.-Prof. Dr. Michael Bader |
Staff | Sebastian Wolf |
Contact person | Univ.-Prof. Dr. Michael Bader, Sebastian Wolf |
Co-operation partner | Alice-Agnes Gabriel (LMU München), Gregor Hillers (U Helsinki) |
Brief description
Induced seismicity refers to earthquakes caused by human activities, such as operating enhanced geothermal systems (EGS) for geothermal energy or oil/gas reservoirs. Induced earthquakes are potentially hazardous, as they typically occur at shallow depth and close to urban environments. However, modeling induced seismicity poses research questions on the physical mechanisms of induced earthquakes and on the complexity of the multi-physics feedback between evolving fault and fracture networks and the reservoir stimulation. More advanced material models than a simple elastic medium are required to model the response of the solid Earth in geo-reservoirs:
- To correctly capture ground shaking and seismic wave propagation, we need to consider a porous medium, in which the pores of rocks or sediments are filled by a fluid phase.
- The interplay of earthquake nucleation, propagation and arrest with poroelastic wave effects has not yet been properly studied.
- To investigate acoustic disturbances, we have to couple seismic wave propagation in the solid Earth to acoustic waves in the atmosphere.
In the proposed project, we will improve and optimise the earthquake simulation software SeisSol for extreme-scale simulations of two demonstrator scenarios that take up recent research by the groups of Alice-Agnes Gabriel (Ludwig-Maximilians-University Munich) and Gregor Hillers (University of Helsinki).
Key contributions of the project will be to optimise SeisSol in terms of node performance, scalability and overall time to solution for the required advanced seismic wave propagation models (esp. poroelastic media and coupling to acoustic media). This will include algorithmic improvements and optimisation of the required novel numerical scheme for poroelasticity.
-
ProPE-AL: Process-oriented Performance Engineering Service Infrastructure for Scientific Software at German HPC Centers - Algorithms
Project type | KONWIHR |
Funded by | KONWIHR |
Begin | October 2017 |
End | September 2020 |
Leader | Univ.-Prof. Dr. Michael Bader, Univ.-Prof. Dr. Hans-Joachim Bungartz |
Staff | Hayden Liu Weng, M.Sc. (hons) |
Contact person | Univ.-Prof. Dr. Michael Bader, Univ.-Prof. Dr. Hans-Joachim Bungartz |
Co-operation partner | Univ.-Prof. Dr. Gerhard Wellein, FAU Erlangen-Nürnberg, Univ.-Prof. Dr. Matthias Müller, RWTH Aachen, Univ.-Prof. Dr. Wolfgang Nagel, TU Dresden |
Brief description
As part of the DFG call "Performance Engineering for Scientific Software", the project partners G. Wellein (FAU Erlangen-Nürnberg), M. Müller (RWTH Aachen) and W. Nagel (TU Dresden) initiated the project "Process-oriented Performance Engineering Service Infrastructure for Scientific Software at German HPC Centers" (acronym ProPE). The project aims at establishing performance engineering (PE) as a well-defined, structured process to improve the resource efficiency of programs. This structured PE process allows for the target-oriented optimization and parallelization of application codes, guided by performance patterns and performance models. The associated KONWIHR project ProPE-Algorithms (ProPE-AL) adds a further algorithmic optimization step to this process. This extension takes into account that the best possible sustainable use of HPC resources by application codes is not only a question of the efficiency of the implementation, but also of the efficiency of the (numerical) algorithms the codes are based on.
-
International Graduate School of Science and Engineering (IGSSE): An Exascale Library for Numerically Inspired Machine Learning (ExaNIML)
Project type | International IGGSE project |
Funded by | International Graduate School of Science and Engineering |
Begin | June 2018 |
End | December 2022 |
Leader | Univ.-Prof. Dr. Hans-Joachim Bungartz |
Staff | Dr. rer. nat. Tobias Neckel, Severin Reiz |
Contact person | Severin Reiz |
Co-operation partner | The University of Texas at Austin Institute for Computational Engineering and Sciences |
Brief description
There is a significant gap between algorithms and software in data analytics and those in Computational Science and Engineering (CSE) concerning their maturity on High-Performance Computing (HPC) systems. Given that data analytics tasks account for a rapidly growing share of supercomputer usage, this gap is a serious issue. This project aims to bridge the gap for a number of important tasks arising, e.g., in a Machine Learning (ML) context: density estimation and high-dimensional approximation (for example, (semi-supervised) classification). To this end, we aim to (1) design and analyze novel algorithms that combine two powerful numerical methods, sparse grids and kernel methods, and (2) design and implement an HPC library that provides an open-source implementation of these algorithms and supports heterogeneous distributed-memory architectures. The attractiveness of sparse grids is mainly due to their high-quality accuracy guarantees and their foundation on rigorous approximation theory; their shortcoming is that they require (regular) Cartesian grids. Kernel methods do not require Cartesian grids, but, first, their approximation properties can be suboptimal in practice and, second, they require regularization whose parameters can be expensive to determine. Our main idea is to use kernel methods for manifold learning and to combine them with sparse grids to define approximations on the manifold. Such high-dimensional approximation problems find applications in model reduction, uncertainty quantification (UQ), and ML.