
Conditional Invertible Generative Models for Supervised Problems

Ardizzone, Lynton

PDF, English
Download (53 MB) | License: Rights reserved - Free Access

Citation of documents: Please do not cite the URL displayed in your browser's address bar; instead, use the DOI, URN, or the persistent URL below, as we can guarantee their long-term accessibility.

Abstract

Invertible neural networks (INNs), in the setting of normalizing flows, are a type of unconditional generative likelihood model. Despite various attractive properties compared to other common types of generative model, they are rarely useful for supervised tasks or real applications, because their outputs are unguided. In this work, we therefore present three new methods that extend the standard INN setting, falling under a broader category we term generative invertible models. These methods make it possible to leverage the theoretical and practical benefits of INNs for supervised problems in new ways, including real-world applications from different branches of science. The key finding is that our approaches enhance many aspects of trustworthiness in comparison to conventional feed-forward networks, such as uncertainty estimation and quantification, explainability, and proper handling of outlier data.
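The abstract refers to conditional invertible architectures built from invertible coupling layers. As a purely illustrative aid, the sketch below shows a minimal conditional affine coupling block in PyTorch; the class name, shapes, and subnetwork are assumptions for exposition and do not reproduce the exact architecture of the thesis.

    # Minimal sketch of a conditional affine coupling block, the generic building
    # block of conditional INNs / normalizing flows. Names and shapes are
    # illustrative assumptions, not the thesis' exact architecture.
    import torch
    import torch.nn as nn

    class ConditionalAffineCoupling(nn.Module):
        """Splits x into (x1, x2); x2 is rescaled and shifted using parameters
        predicted from x1 together with a condition c. The mapping is invertible
        and its log-determinant is cheap, enabling likelihood-based training."""

        def __init__(self, dim, cond_dim, hidden=128):
            super().__init__()
            self.d1 = dim // 2
            self.d2 = dim - self.d1
            # Subnetwork predicting log-scale and shift for the second half.
            self.net = nn.Sequential(
                nn.Linear(self.d1 + cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * self.d2),
            )

        def forward(self, x, c):
            x1, x2 = x[:, :self.d1], x[:, self.d1:]
            log_s, t = self.net(torch.cat([x1, c], dim=1)).chunk(2, dim=1)
            log_s = torch.tanh(log_s)              # keep scales bounded for stability
            z2 = x2 * torch.exp(log_s) + t
            log_det = log_s.sum(dim=1)             # contribution to log p(x | c)
            return torch.cat([x1, z2], dim=1), log_det

        def inverse(self, z, c):
            z1, z2 = z[:, :self.d1], z[:, self.d1:]
            log_s, t = self.net(torch.cat([z1, c], dim=1)).chunk(2, dim=1)
            log_s = torch.tanh(log_s)
            x2 = (z2 - t) * torch.exp(-log_s)
            return torch.cat([z1, x2], dim=1)

Stacking several such blocks (with permutations between them) and maximizing the conditional log-likelihood yields a model that can be sampled in the inverse direction given a condition c, which is the general mechanism the abstract alludes to.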

Document type: Dissertation
Supervisor: apl. Prof. Ullrich Köthe
Place of Publication: Heidelberg
Date of thesis defense: 25 October 2023
Date Deposited: 07 Nov 2023 09:44
Date: 2023
Faculties / Institutes: The Faculty of Mathematics and Computer Science > Department of Computer Science
DDC classification: 004 Data processing; computer science
Controlled Keywords: Machine learning, Deep learning, Computer vision