<> "The repository administrator has not yet configured an RDF license."^^ . <> . . "Comprehensive Evaluation of Machine Learning Experiments:\r\nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility"^^ . "This doctoral thesis addresses critical methodological aspects within machine learning experimentation, focusing on enhancing the evaluation and analysis of algorithm performance. The established \"train-dev-test paradigm\" commonly guides machine learning practitioners, involving nested optimization processes to optimize model parameters and meta-parameters and benchmarking against test data. However, this paradigm overlooks crucial aspects, such as algorithm variability and the intricate relationship between algorithm performance and meta-parameters. This work introduces a comprehensive framework that employs statistical techniques to bridge these gaps, advancing the methodological standards in empirical machine learning research.\r\nThe foundational premise of this thesis lies in differentiating between algorithms and classifiers, recognizing that an algorithm may yield multiple classifiers due to inherent stochasticity or design choices. Consequently, algorithm performance becomes inherently probabilistic and cannot be captured by a single metric. The contributions of this work are structured around three core themes:\r\n\r\nAlgorithm Comparison: A fundamental aim of empirical machine learning research is algorithm comparison. To this end, the thesis proposes utilizing Linear Mixed Effects Models (LMEMs) for analyzing evaluation data. LMEMs offer distinct advantages by accommodating complex data structures beyond the typical independent and identically distributed (iid) assumption. 
LMEMs thus enable a holistic analysis of algorithm instances and facilitate the construction of nuanced conditional models of expected risk, supporting algorithm comparisons based on diverse data properties.

Algorithm Performance Analysis: Contemporary evaluation practices often treat algorithms and classifiers as black boxes, hindering insight into their performance and parameter dependencies. Leveraging LMEMs, specifically through Variance Component Analysis, the thesis introduces methods from psychometrics to quantify the homogeneity (reliability) of algorithm performance and to assess the influence of meta-parameters on performance. The flexibility of LMEMs allows a granular analysis of this relationship and extends these techniques to the analysis of data annotation processes linked to algorithm performance.

Inferential Reproducibility: Building upon the preceding chapters, this section showcases a unified approach to analyzing machine learning experiments comprehensively. By leveraging the full range of generated model instances, the analysis provides a nuanced understanding of competing algorithms. The outcomes offer implementation guidelines for algorithmic modifications and consolidate incongruent findings across diverse datasets, contributing to a coherent empirical perspective on algorithmic effects.

This work underscores the significance of addressing algorithmic variability, the impact of meta-parameters, and the probabilistic nature of algorithm performance. By introducing robust statistical methodologies that facilitate extensive empirical analysis, the thesis aims to enhance the transparency, reproducibility, and interpretability of machine learning experiments. It extends beyond conventional guidelines, offering a principled approach to advance the understanding and evaluation of algorithms in the evolving landscape of machine learning and data science.

Author: Michael Hagmann
Year: 2023
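The variance-component view of reliability described above can be illustrated with a minimal sketch. The following is not the thesis's implementation but a plain-Python, one-way random-effects variance decomposition on synthetic scores (all names and numbers are hypothetical), yielding the intraclass correlation used in psychometrics as a reliability coefficient: the share of score variance attributable to the algorithm configuration rather than to run-to-run noise.

```python
# Sketch: balanced one-way random-effects variance decomposition and the
# resulting intraclass correlation (ICC), as used in psychometrics to
# quantify performance homogeneity (reliability). Synthetic data only.

def variance_components(groups):
    """ANOVA-based estimates of between- and within-group variance.

    groups: dict mapping an algorithm configuration to a list of
    evaluation scores from repeated training runs (balanced design).
    """
    k = len(groups)                       # number of configurations
    n = len(next(iter(groups.values())))  # runs per configuration
    N = k * n
    grand = sum(sum(s) for s in groups.values()) / N

    ss_between = n * sum((sum(s) / n - grand) ** 2 for s in groups.values())
    ss_within = sum((x - sum(s) / n) ** 2
                    for s in groups.values() for x in s)

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)

    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)  # truncate at 0
    return var_between, var_within


def reliability(groups):
    """ICC: proportion of variance due to the configuration itself."""
    vb, vw = variance_components(groups)
    return vb / (vb + vw)


# Three hypothetical training runs per configuration:
scores = {
    "config_A": [0.90, 0.91, 0.89],
    "config_B": [0.70, 0.72, 0.71],
    "config_C": [0.80, 0.79, 0.81],
}
icc = reliability(scores)  # close to 1: variance lies between configs
```

A high ICC indicates homogeneous (reliable) performance within each configuration; an LMEM generalizes this decomposition to crossed and nested effects such as datasets and meta-parameter settings.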
"Comprehensive Evaluation of Machine Learning Experiments:\r\nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility (PDF)"^^ . . . "michael_hagman_phd.pdf"^^ . . . "Comprehensive Evaluation of Machine Learning Experiments:\r\nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility (Other)"^^ . . . . . . "indexcodes.txt"^^ . . . "Comprehensive Evaluation of Machine Learning Experiments:\r\nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility (Other)"^^ . . . . . . "lightbox.jpg"^^ . . . "Comprehensive Evaluation of Machine Learning Experiments:\r\nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility (Other)"^^ . . . . . . "preview.jpg"^^ . . . "Comprehensive Evaluation of Machine Learning Experiments:\r\nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility (Other)"^^ . . . . . . "medium.jpg"^^ . . . "Comprehensive Evaluation of Machine Learning Experiments:\r\nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility (Other)"^^ . . . . . . "small.jpg"^^ . . "HTML Summary of #33967 \n\nComprehensive Evaluation of Machine Learning Experiments: \nAlgorithm Comparison, Algorithm Performance and Inferential Reproducibility\n\n" . "text/html" . . . "000 Allgemeines, Wissenschaft, Informatik"@de . "000 Generalities, Science"@en . . . "004 Informatik"@de . "004 Data processing Computer science"@en . . . "310 Statistik"@de . "310 General statistics"@en . .