eprintid: 33753
rev_number: 16
eprint_status: archive
userid: 7581
dir: disk0/00/03/37/53
datestamp: 2023-10-04 08:02:22
lastmod: 2023-10-05 14:05:27
status_changed: 2023-10-04 08:02:22
type: doctoralThesis
metadata_visibility: show
creators_name: Lopez, Federico Jorge
title: Learning Neural Graph Representations in Non-Euclidean Geometries
subjects: 004
divisions: 90500
adv_faculty: af-09
cterms_swd: geometric deep learning
cterms_swd: graph representation
cterms_swd: riemannian manifold learning
cterms_swd: non-euclidean geometry
cterms_swd: symmetric positive definite matrices
cterms_swd: graph embeddings
cterms_swd: hyperbolic geometry
cterms_swd: siegel space
abstract: The success of Deep Learning methods depends heavily on the choice of data representation. For that reason, much of the practical effort goes into Representation Learning, which seeks to design preprocessing pipelines and data transformations that support effective learning algorithms. The aim of Representation Learning is to facilitate the extraction of useful information for classifiers and other predictive models. In this regard, graphs are a convenient data structure that serves as an intermediate representation in a wide range of problems. The predominant approach to working with graphs has been to embed them in a Euclidean space, owing to the power and simplicity of this geometry. Nevertheless, data in many domains exhibit non-Euclidean features, making embeddings into Riemannian manifolds with a richer structure necessary. The choice of the metric space in which to embed the data imposes a geometric inductive bias, with a direct impact on model performance. This thesis is about learning neural graph representations in non-Euclidean geometries and showcasing their applicability to different downstream tasks. We introduce a toolkit composed of different graph metrics with the goal of characterizing the topology of the data. In this way, we can choose a suitable target embedding space aligned with the shape of the dataset. By virtue of the geometric inductive bias provided by the structure of non-Euclidean manifolds, neural models can achieve higher performance with a reduced parameter footprint. As a first step, we study graphs with hierarchical structures. We develop different techniques to derive hierarchical graphs from large label inventories. Exploiting the capacity of hyperbolic spaces to represent tree-like arrangements, we incorporate this information into an NLP model through hyperbolic graph embeddings and showcase the higher performance they enable. Second, we tackle the question of how to learn hierarchical representations suited to different downstream tasks. We introduce a model that jointly learns task-specific graph embeddings from a label inventory and performs classification in hyperbolic space. The model achieves state-of-the-art results on very fine-grained labels, with a remarkable reduction in parameter size. Next, we move to matrix manifolds to work on graphs with diverse structures and properties. We propose a general framework to implement the mathematical tools required to learn graph embeddings on symmetric spaces. These spaces are of particular interest because they have a compound geometry that simultaneously contains Euclidean as well as hyperbolic subspaces, allowing them to adapt automatically to dissimilar features in the graph.
We demonstrate a concrete implementation of the framework on Siegel spaces, showcasing their versatility across different tasks. Finally, we focus on multi-relational graphs. We devise the means to translate Euclidean and hyperbolic multi-relational graph embedding models into the space of symmetric positive definite (SPD) matrices. To do so, we develop gyrocalculus in this geometry and integrate it with the aforementioned framework.
date: 2023
id_scheme: DOI
id_number: 10.11588/heidok.00033753
ppn_swb: 1860821138
own_urn: urn:nbn:de:bsz:16-heidok-337536
date_accepted: 2022-11-25
language: eng
bibsort: LOPEZFEDERLEARNINGNE20230829
full_text_status: public
place_of_pub: Heidelberg
citation: Lopez, Federico Jorge (2023) Learning Neural Graph Representations in Non-Euclidean Geometries. [Dissertation]
document_url: https://archiv.ub.uni-heidelberg.de/volltextserver/33753/1/thesis.pdf