%0 Generic
%A Ardizzone, Lynton
%C Heidelberg
%D 2023
%F heidok:33932
%R 10.11588/heidok.00033932
%T Conditional Invertible Generative Models for Supervised Problems
%U https://archiv.ub.uni-heidelberg.de/volltextserver/33932/
%X Invertible neural networks (INNs), in the setting of normalizing flows, are a type of unconditional generative likelihood model. Despite various attractive properties compared to other common generative model types, they are rarely useful for supervised tasks or real applications due to their unguided outputs. In this work, we therefore present three new methods that extend the standard INN setting, falling under a broader category we term generative invertible models. These new methods allow leveraging the theoretical and practical benefits of INNs to solve supervised problems in new ways, including real-world applications from different branches of science. The key finding is that our approaches enhance many aspects of trustworthiness in comparison to conventional feed-forward networks, such as uncertainty estimation and quantification, explainability, and proper handling of outlier data.