<> "The repository administrator has not yet configured an RDF license."^^ . <> . . "Learning Mid-Level Representations\r\nfor Visual Recognition"^^ . "The objective of this thesis is to enhance visual recognition of objects and scenes through the development of novel mid-level representations and accompanying learning algorithms. In particular, this work focuses on category-level recognition, which remains a very challenging and largely unsolved task. One crucial component of visual recognition systems is the representation of objects and scenes. Depending on the representation, however, suitable learning strategies need to be developed that make it possible to learn new categories automatically from training data. The aim of this thesis is therefore to extend low-level representations with mid-level representations and to develop suitable learning mechanisms.\r\nA popular kind of mid-level representation is higher-order statistics such as self-similarity and co-occurrence statistics. While these descriptors satisfy the demand for higher-level object representations, they also exhibit a very large and ever-increasing dimensionality. This thesis proposes a new object representation, based on curvature self-similarity, that goes beyond the currently popular approximation of objects using straight lines. However, like all descriptors based on second-order statistics, it too exhibits a high dimensionality. Although it improves discriminability, the high dimensionality becomes a critical issue due to the curse of dimensionality and the resulting lack of generalization ability. Given only a limited amount of training data, even sophisticated learning algorithms such as the popular kernel methods are not able to suppress noisy or superfluous dimensions of such high-dimensional data. 
Consequently, there is a natural need for feature selection when using present-day informative features and, in particular, curvature self-similarity. We therefore propose an embedded feature selection method for support vector machines that reduces model complexity and improves the generalization capability of object models. The proposed curvature self-similarity representation, together with the embedded feature selection, is successfully integrated into a widely used state-of-the-art object detection framework.\r\nThe influence of higher-order statistics on category-level object recognition is further investigated by learning co-occurrences between foreground and background in order to reduce the number of false detections. While the proposed curvature self-similarity descriptor improves the model by describing the foreground in more detail, higher-order statistics are now shown to be suitable for explicitly modeling the background as well. This is of particular use for the popular chamfer matching technique, since it is prone to accidental matches in dense clutter. As clutter only interferes with the foreground model contour, we learn where to place the background contours with respect to the foreground object boundary. The co-occurrence of background contours is integrated into a max-margin framework. The suggested approach thus combines the advantage of accurately detecting object parts via chamfer matching with the robustness of max-margin learning.\r\nWhile chamfer matching is a very efficient technique for object detection, parts are only detected based on a simple distance measure. In contrast, mid-level parts and patches are explicitly trained to distinguish true positives in the foreground from false positives in the background. Because mid-level patches and parts are independent of one another, it is possible to train a large number of instance-specific part classifiers. 
This is in contrast to the currently most powerful discriminative approaches, which are typically only feasible for a small number of parts, as they model the spatial dependencies between them. Due to their number, we cannot directly train a single powerful classifier to combine all parts. Instead, parts are randomly grouped into fewer, overlapping compositions that are trained using a maximum-margin approach. In contrast to the common rationale of compositional approaches, we do not aim for semantically meaningful ensembles. Rather, we seek randomized compositions that are discriminative and generalize over all instances of a category. All compositions are combined by a non-linear decision function, which completes the hierarchy of discriminative classifiers.\r\nIn summary, this thesis improves visual recognition of objects and scenes by developing novel mid-level representations on top of different kinds of low-level representations. Furthermore, it investigates the development of suitable learning algorithms to deal with the new challenges arising from the novel object representations presented in this work."^^ . "2015" . . . . . . . "Angela"^^ . "Eigenstetter"^^ . "Angela Eigenstetter"^^ . . . . . . "Learning Mid-Level Representations\r\nfor Visual Recognition (PDF)"^^ . . . "thesis_eigenstetter_final.pdf"^^ . . 
"HTML Summary of #18988"^^ . "004 Informatik"@de . "004 Data processing Computer science"@en . .