Master's thesis presentation. Philipp is advised by Vladyslav Fediukov and Prof. Dr. Felix Dietrich.
Philipp Kutz: Evaluating Convolutional Neural Networks in Multi-Fidelity Modeling
SCCS Colloquium
The first goal is to investigate how well Convolutional Neural Networks (CNNs) are suited for multi-fidelity (MF) modelling. The second is to analyse which architectures and approaches perform better or worse, and why the methods differ. Neural networks (NNs) can learn arbitrary discontinuous functions and handle high-dimensional data better than other regression methods; CNNs are NNs that are well suited for image processing. MF modelling is an important component of surrogate modelling. There are two main learning styles: NN-based MF models learn the low-fidelity (LF) / high-fidelity (HF) relation either implicitly or explicitly.

Terramechanical data, consisting of bi-fidelity tabular data and images, were provided by the German Aerospace Center (DLR) for the investigations. Two main architecture types were examined: the Multi-Fidelity Data-Fusion (MF-DF) type and the Transfer Learning Neural Network (TLNN) type. The MF-DF type was represented by the explicitly learning MDACNN architecture, which processes tabular data. The TLNN type was represented by the implicitly learning MFCNN-TL architecture and by the MF-TLNN architecture, which learns both implicitly and explicitly; both process images.

The investigations show that all main architecture types (MF-DF and TLNN) and all learning styles (explicit, implicit and mixed) are suitable for MF modelling, and that CNNs can be used effectively in MF models. The MF-TLNN architecture with its mixed-learning approach outperformed the other two architectures, which suggests that combining explicit and implicit learning styles in one network increases learning performance.
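To illustrate the explicit learning style described above, the following minimal sketch feeds the LF response to the HF model as an extra input feature, so the model learns the LF-to-HF relation directly. The toy 1-D fidelity pair and the least-squares fit are illustrative assumptions, standing in for the DLR terramechanical data and the MDACNN of the thesis.

```python
import numpy as np

# Toy analytic fidelity pair, for illustration only (not the DLR data).
def f_high(x):
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def f_low(x):
    # Cheap approximation: scaled HF response plus a linear trend.
    return 0.5 * f_high(x) + 10.0 * x - 10.0

x_hf = np.linspace(0.0, 1.0, 8)  # few expensive HF samples

# Explicit learning: the LF output appears as an input feature next to x,
# so the fitted model encodes the LF -> HF relation directly.
def features(x):
    return np.column_stack([x ** k for k in range(4)] + [f_low(x)])

coef, *_ = np.linalg.lstsq(features(x_hf), f_high(x_hf), rcond=None)

def predict_hf(x):
    return features(x) @ coef

x_test = np.linspace(0.0, 1.0, 101)
max_err = np.max(np.abs(predict_hf(x_test) - f_high(x_test)))
print(f"max |error| on test grid: {max_err:.2e}")
```

Because the toy HF function happens to be exactly linear in the LF response, the fit recovers it almost perfectly; the point is only that the explicit style conditions the HF model directly on LF outputs.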
In future work, more implicit and explicit learning architectures, and their combinations within the MF-TLNN architecture, need to be investigated to further optimise the performance of the TLNN architecture.
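The implicit learning style underlying the TLNN architectures can be sketched in the same spirit: pretrain a model on plentiful, cheap LF data, then fine-tune only a small head on scarce HF data. In this assumed toy setting, a frozen polynomial model plays the role of the pretrained CNN layers and a linear head plays the role of the retrained last layer; none of the functions or data come from the thesis.

```python
import numpy as np

# Toy analytic fidelity pair, for illustration only (not the DLR data).
def f_high(x):
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def f_low(x):
    return 0.5 * f_high(x) + 10.0 * x - 10.0

def basis(x, deg=10):
    return np.column_stack([x ** k for k in range(deg + 1)])

# "Pretraining": fit the model on plentiful, cheap LF data.
x_lf = np.linspace(0.0, 1.0, 50)
w_lf, *_ = np.linalg.lstsq(basis(x_lf), f_low(x_lf), rcond=None)

# "Fine-tuning": freeze the pretrained model and retrain only a small
# linear head (scale, trend, offset) on a handful of HF samples -- the
# analogue of freezing early CNN layers and retraining the final layer.
x_hf = np.linspace(0.0, 1.0, 8)
lf_pred = basis(x_hf) @ w_lf
H = np.column_stack([lf_pred, x_hf, np.ones_like(x_hf)])
head, *_ = np.linalg.lstsq(H, f_high(x_hf), rcond=None)

def predict_hf(x):
    p = basis(x) @ w_lf
    return np.column_stack([p, x, np.ones_like(x)]) @ head

x_test = np.linspace(0.0, 1.0, 101)
mf_err = np.max(np.abs(predict_hf(x_test) - f_high(x_test)))
lf_err = np.max(np.abs(f_low(x_test) - f_high(x_test)))
print(f"fine-tuned MF error: {mf_err:.2f}, raw LF error: {lf_err:.2f}")
```

The fine-tuned model is far closer to the HF truth than the raw LF model, even though the HF data alone would be too sparse to fit from scratch; this is the implicit LF-to-HF transfer that the MFCNN-TL architecture performs with CNN layers.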