PDF, English | Download (53 MB) | License: Rights reserved - Open access
Abstract
Invertible neural networks (INNs), in the setting of normalizing flows, are a type of unconditional generative likelihood model. Despite various attractive properties compared to other common types of generative model, they are rarely useful for supervised tasks or real-world applications because their outputs cannot be guided. In this work, we therefore present three new methods that extend the standard INN setting, falling under a broader category we term generative invertible models. These methods make it possible to leverage the theoretical and practical benefits of INNs for supervised problems in new ways, including real-world applications from different branches of science. The key finding is that our approaches enhance many aspects of trustworthiness compared to conventional feed-forward networks, such as uncertainty estimation and quantification, explainability, and the proper handling of outlier data.
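For readers unfamiliar with the normalizing-flow setting mentioned in the abstract, the following NumPy sketch illustrates a single affine coupling block, the standard invertible building block behind INNs, together with the exact change-of-variables log-likelihood. It is illustrative only; the function names (`subnet`, `coupling_forward`, `coupling_inverse`) and the toy parameters are assumptions, not code from the thesis or any of its three methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet(x_a, w, b):
    # Tiny fully connected subnetwork predicting a log-scale and shift
    # for the second half of the input from the first half.
    h = np.tanh(x_a @ w + b)
    d = x_a.shape[-1]
    return h[..., :d], h[..., d:]          # (log_scale, shift)

def coupling_forward(x, w, b):
    x_a, x_b = np.split(x, 2, axis=-1)
    log_s, t = subnet(x_a, w, b)
    z_b = x_b * np.exp(log_s) + t          # affine transform of one half
    log_det = log_s.sum(axis=-1)           # exact log |det Jacobian|
    return np.concatenate([x_a, z_b], axis=-1), log_det

def coupling_inverse(z, w, b):
    z_a, z_b = np.split(z, 2, axis=-1)
    log_s, t = subnet(z_a, w, b)           # same subnet, so inversion is exact
    x_b = (z_b - t) * np.exp(-log_s)
    return np.concatenate([z_a, x_b], axis=-1)

# Toy example: 4-dimensional inputs, check invertibility and likelihood.
dim = 4
w = 0.1 * rng.standard_normal((dim // 2, dim))
b = np.zeros(dim)
x = rng.standard_normal((3, dim))

z, log_det = coupling_forward(x, w, b)
assert np.allclose(x, coupling_inverse(z, w, b))

# Change-of-variables log-likelihood under a standard normal latent prior.
log_prior = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)
log_likelihood = log_prior + log_det
print(log_likelihood)
```

In a trained flow, many such blocks are stacked (with permutations between them) and the subnet parameters are learned by maximizing exactly this log-likelihood; the tractable inverse and Jacobian are what distinguish INNs from conventional feed-forward networks.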
| Document type: | Dissertation |
|---|---|
| Referee: | Köthe, apl.-Prof. Ullrich |
| Place of publication: | Heidelberg |
| Date of oral examination: | 25 October 2023 |
| Date deposited: | 07 Nov 2023 09:44 |
| Year of publication: | 2023 |
| Institutes/Facilities: | Faculty of Mathematics and Computer Science > Institute of Computer Science |
| DDC subject: | 004 Computer science |
| Controlled keywords: | Machine learning, Deep learning, Computer vision |