Conference paper, Year: 2022

Learning Topology-Preserving Data Representations

Abstract

We propose a method for learning topology-preserving data representations (dimensionality reduction). The method aims to provide topological similarity between the data manifold and its latent representation by enforcing similarity in topological features (clusters, loops, 2D voids, etc.) and their localization. The core of the method is the minimization of the Representation Topology Divergence (RTD) between the original high-dimensional data and its low-dimensional representation in latent space. RTD minimization provides closeness in topological features with strong theoretical guarantees. We develop a scheme for RTD differentiation and apply it as a loss term for the autoencoder. The proposed method "RTD-AE" better preserves the global structure and topology of the data manifold than state-of-the-art competitors, as measured by linear correlation, triplet distance ranking accuracy, and Wasserstein distance between persistence barcodes.
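As a rough illustration of how such a topological term can be combined with an autoencoder, below is a minimal PyTorch sketch of a training loop whose loss adds a topology-aware penalty to the reconstruction error. The rtd_surrogate function here is only a runnable stand-in (a simple pairwise-distance discrepancy), not the paper's actual differentiable RTD, which is computed from persistence barcodes; the architecture, loss weight, and toy data are assumptions made for the sketch.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def rtd_surrogate(x, z):
    # Stand-in for the differentiable RTD term: penalizes discrepancy between
    # normalized pairwise distances in data space and in latent space.
    # The real RTD loss compares persistence barcodes of the two point clouds.
    dx = torch.cdist(x, x)
    dz = torch.cdist(z, z)
    return ((dx / dx.max() - dz / dz.max()) ** 2).mean()

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 784)  # toy batch standing in for real data

for step in range(100):
    z, x_hat = model(x)
    # Reconstruction loss plus a weighted topology-preservation term.
    loss = nn.functional.mse_loss(x_hat, x) + 0.1 * rtd_surrogate(x, z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()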
Main file
4299_learning_topology_preserving_d.pdf (5.02 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04003214, version 1 (24-02-2023)


Cite

Ilya Trofimov, Daniil Cherniavskii, Eduard Tulchinskii, Nikita Balabin, Evgeny Burnaev, et al. Learning Topology-Preserving Data Representations. ICLR 2023 International Conference on Learning Representations, May 2023, Kigali, Rwanda. ⟨hal-04003214⟩