
Large-scale medical image annotation with quality-controlled crowdsourcing

Heim, Eric

PDF, English - Main document
Download (21MB) | License: "Large-scale medical image annotation with quality-controlled crowdsourcing" by Heim, Eric is licensed under a Creative Commons Attribution-NoDerivatives 4.0 license

Citing this document: For citations, please do not use the URL shown in your browser's address bar. Use the stated DOI, URN, or persistent URL instead, whose long-term availability we guarantee.

Abstract

Accurate annotations of medical images are essential for various clinical applications. The remarkable advances in machine learning, especially deep learning based techniques, show great potential for automatic image segmentation. However, these solutions require a huge amount of accurately annotated reference data for training. Especially in the domain of medical image analysis, the availability of domain experts for reference data generation is becoming a major bottleneck for machine learning applications. In this context, crowdsourcing has gained increasing attention as a tool for low-cost and large-scale data annotation. As a method to outsource cognitive tasks to anonymous non-expert workers over the internet, it has evolved into a valuable tool for data annotation in various research fields. Major challenges in crowdsourcing remain the high variance in annotation quality as well as the lack of domain-specific knowledge of the individual workers. Current state-of-the-art methods for quality control usually induce further costs, as they rely on a redundant distribution of tasks or perform additional annotations on tasks with an already known reference outcome. The aim of this thesis is to apply common crowdsourcing techniques to large-scale medical image annotation and to create a cost-effective quality control method for crowd-sourced image annotation. The problem of large-scale medical image annotation is addressed by introducing a hybrid crowd-algorithm approach that allowed expert-level organ segmentation in CT scans. A pilot study performed on the case of liver segmentation in abdominal CT scans showed that the proposed approach is able to create organ segmentations matching the quality of those created by medical experts. Recording the behavior of individual non-expert online workers during the annotation process in clickstreams enabled the derivation of an annotation quality measure that could successfully be used to merge crowd-sourced segmentations. A comprehensive validation study performed with various object classes from publicly available data sets demonstrated that the presented quality control measure generalizes well over different object classes and clearly outperforms state-of-the-art methods in terms of costs and segmentation quality. In conclusion, the methods introduced in this thesis are an essential contribution to reducing the annotation costs and further improving the quality of crowd-sourced image segmentation.
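The abstract mentions merging crowd-sourced segmentations with a worker-level quality measure derived from clickstreams. The following is a minimal sketch of the general idea of quality-weighted mask fusion, assuming binary masks of identical size and precomputed per-worker quality scores; the function name, the weighting scheme, and the 0.5 threshold are illustrative assumptions, not the actual procedure developed in the thesis.

```python
# Hypothetical illustration only: pixel-wise weighted voting over binary
# segmentation masks, where each worker's vote is scaled by a quality score
# (e.g. one estimated from clickstream behavior). Not the thesis's algorithm.
import numpy as np

def merge_segmentations(masks, weights):
    """Fuse binary masks of identical shape by a weighted pixel-wise vote.

    masks   : list of 2D numpy arrays with values in {0, 1}
    weights : per-worker quality scores; higher means more trusted
    """
    masks = np.stack(masks).astype(float)        # shape: (workers, H, W)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize to sum to 1
    fused = np.tensordot(weights, masks, axes=1) # weighted average, shape (H, W)
    return (fused >= 0.5).astype(np.uint8)       # threshold to a binary mask

if __name__ == "__main__":
    # Toy usage: two reliable workers and one noisy worker on a 4x4 image;
    # the noisy worker's mask is down-weighted and barely affects the result.
    good = np.zeros((4, 4), dtype=np.uint8)
    good[1:3, 1:3] = 1
    noisy = np.ones((4, 4), dtype=np.uint8)
    print(merge_segmentations([good, good, noisy], weights=[0.9, 0.8, 0.2]))
```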

Document type: Dissertation
Advisor: Hesser, Prof. Dr. Jürgen
Date of thesis defense: 30 May 2018
Date deposited: 20 Jun. 2018 07:14
Year of publication: 2018
Faculties / Institutes: Fakultät für Mathematik und Informatik > Dekanat der Fakultät für Mathematik und Informatik
DDC subject group: 004 Computer science
Controlled keywords: Crowdsourcing, Computer Vision, Medical Image Analysis