TY  - GEN
TI  - Bayesian Neural Networks for Probabilistic Machine Learning
Y1  - 2021///
KW  - machine learning
KW  - bayesian neural networks
KW  - active learning
ID  - heidok30364
A1  - Haußmann, Manuel
CY  - Heidelberg
N2  - Deep learning-based models are becoming increasingly relevant for a growing number of applications. Bayesian neural networks offer a principled way to model the uncertainty in such approaches and to include prior knowledge. This work tackles how to improve the training of Bayesian neural networks (BNNs) and how to apply them in practice. We first develop a variational inference-based approach that learns them without requiring samples during training, exploiting the piecewise linear structure of the popular rectified linear unit (ReLU) activation function. We then show how a second approach, based on a central limit theorem argument, yields a reliable predictive uncertainty signal for an active learning task. Building on this active learning setup, we further develop a reinforcement learning-based approach that learns a second BNN which requests labels so as to optimally support the primary model. As a third variant, we introduce a new method for learning BNNs via a model-selection-based approach that optimizes the marginal likelihood, relying on the concept of type-II maximum likelihood, also known as empirical Bayes. Using PAC-Bayes theory to develop a regularization structure, we show how to combine it with a popular deterministic model for out-of-distribution detection, demonstrating improved results. Finally, we study how this joint combination of empirical Bayes and PAC-Bayes can be used to learn dynamical systems specified via stochastic differential equations, in a way that allows incorporating prior knowledge of the dynamics and modeling uncertainty.
UR  - https://archiv.ub.uni-heidelberg.de/volltextserver/30364/
AV  - public
ER  -