<> "The repository administrator has not yet configured an RDF license."^^ . <> . . "Analysis of Adversarial Examples"^^ . "The rise of artificial intelligence (AI) has significantly impacted the field of computer vision\r\n(CV). In particular, deep learning (DL) has advanced the development of algorithms that\r\ncomprehend visual data. In specific tasks, DL exhibits human capabilities and is impacting\r\nour everyday lives such as virtual assistants, entertainment or web searches. Despite of the\r\nsuccess of visual algorithms, in this thesis we study the threat adversarial examples, which\r\nare input manipulation to let to misclassifcation.\r\nThe human vision system is not impaired and can classify the correct image, while for\r\na DL classifier one pixel change is enough for misclassification. This is a misalignment\r\nbetween the human and CV system. Therefore, we start this work by presenting the concept\r\nof an classification model to understand how these models can be tricked by the threat model\r\nā€“ adversarial examples.\r\nThen, we analyze the adversarial examples in the Fourier domain, because after this\r\ntransformation they can be better identified for detection. To that end, we assess different\r\nadversarial attacks on various classification models and datasets deviating from the standard\r\nbenchmarks\r\nAs a complementary approach, we developed an anti-pattern utilizing a frame-like patch\r\n(prompt) on the input image to counteract the input manipulation. Instead of merely identifying and discarding adversarial inputs, this prompt neutralizes adversarial perturbations\r\nduring testing.\r\nAs another detection method, we expanded the use of a characteristics of multi-dimensional\r\ndata ā€“ the local intrinsic dimensionality (LID) to differentiate between benign and attacked\r\nimages, improving detection rates of adversarial examples.\r\nRecent advances in diffusion models (DMs) have significantly improved the robustness\r\nof adversarial models. Although DMs are well-known for their generative abilities, it remains unclear whether adversarial examples are part of the learned distribution of the DM.\r\nTo address this gap, we propose a methodology that aims to determine whether adversarial\r\nexamples are within the distribution of the learned manifold of the DM. We present an exploration of transforming adversarial images using the DM, which can reveal the attacked\r\nimages."^^ . "2024" . . . . . . . "Peter"^^ . "Lorenz"^^ . "Peter Lorenz"^^ . . . . . . "Analysis of Adversarial Examples (PDF)"^^ . . . "main.pdf"^^ . . . "Analysis of Adversarial Examples (Other)"^^ . . . . . . "indexcodes.txt"^^ . . . "Analysis of Adversarial Examples (Other)"^^ . . . . . . "lightbox.jpg"^^ . . . "Analysis of Adversarial Examples (Other)"^^ . . . . . . "preview.jpg"^^ . . . "Analysis of Adversarial Examples (Other)"^^ . . . . . . "medium.jpg"^^ . . . "Analysis of Adversarial Examples (Other)"^^ . . . . . . "small.jpg"^^ . . "HTML Summary of #35211 \n\nAnalysis of Adversarial Examples\n\n" . "text/html" . . . "004 Informatik"@de . "004 Data processing Computer science"@en . .