Session
Representation Learning
Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations
Tri Dao · Albert Gu · Matthew Eichhorn · Atri Rudra · Christopher Re
Fast linear transforms are ubiquitous in machine learning, including the discrete Fourier transform, discrete cosine transform, and other structured transformations such as convolutions. All of these transforms can be represented by dense matrix-vector multiplication, yet each has a specialized and highly efficient (subquadratic) algorithm. We ask to what extent hand-crafting these algorithms and implementations is necessary, what structural priors they encode, and how much knowledge is required to automatically learn a fast algorithm for a provided structured transform. Motivated by a characterization of fast matrix-vector multiplication as products of sparse matrices, we introduce a parameterization of divide-and-conquer methods that is capable of representing a large class of transforms. This generic formulation can automatically learn an efficient algorithm for many important transforms; for example, it recovers the $O(N \log N)$ Cooley-Tukey FFT algorithm to machine precision, for dimensions $N$ up to $1024$. Furthermore, our method can be incorporated as a lightweight replacement of generic matrices in machine learning pipelines to learn efficient and compressible transformations. On a standard task of compressing a single hidden-layer network, our method exceeds the classification accuracy of unconstrained matrices on CIFAR-10 by 3.9 points---the first time a structured approach has done so---with 4X faster inference speed and 40X fewer parameters.
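For intuition, here is a minimal numpy sketch of the butterfly idea the abstract refers to: an N x N linear map parameterized as a product of log2(N) sparse factors, each mixing pairs of coordinates at a fixed stride, so a matrix-vector product costs O(N log N). The factor layout, random initialization, and function names below are illustrative assumptions, not the authors' released code.

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)

def random_butterfly_factors(n):
    """One sparse factor per level; the level with stride s mixes entries i and i + s."""
    factors, stride = [], 1
    while stride < n:
        # each coordinate pair gets its own (learnable) 2x2 block
        factors.append(rng.standard_normal((n // 2, 2, 2)))
        stride *= 2
    return factors

def butterfly_apply(factors, x):
    """Apply the product of butterfly factors to x in O(N log N) operations."""
    y, stride = x.copy(), 1
    for blocks in factors:
        out = np.empty_like(y)
        b = 0
        for start in range(0, len(y), 2 * stride):
            for i in range(start, start + stride):
                a, c = y[i], y[i + stride]
                w = blocks[b]
                out[i] = w[0, 0] * a + w[0, 1] * c
                out[i + stride] = w[1, 0] * a + w[1, 1] * c
                b += 1
        y, stride = out, stride * 2
    return y

factors = random_butterfly_factors(N)
x = rng.standard_normal(N)
# dense equivalent of the factored map, built column by column
dense = np.stack([butterfly_apply(factors, np.eye(N)[:, j]) for j in range(N)], axis=1)
print(np.allclose(dense @ x, butterfly_apply(factors, x)))  # True: same linear map, fewer operations
```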
Distributed, Egocentric Representations of Graphs for Detecting Critical Structures
Ruo-Chun Tzeng · Shan-Hung (Brandon) Wu
We study the problem of detecting critical structures using a graph embedding model. Existing graph embedding models lack the ability to precisely detect critical structures that are specific to a task at the global scale. In this paper, we propose a novel graph embedding model, called the Ego-CNN, that detects precise critical structures efficiently. An Ego-CNN can be jointly trained with a task model and helps explain/discover knowledge for the task. We conduct extensive experiments, and the results show that Ego-CNNs (1) lead to task performance comparable to that of state-of-the-art graph embedding models, (2) work nicely with CNN visualization techniques to illustrate the detected structures, and (3) are efficient and can incorporate scale-free priors, which commonly occur in social network datasets, to further improve training efficiency.
Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities
Octavian-Eugen Ganea · Sylvain Gelly · Gary Becigneul · Aliaksei Severyn
The softmax function on top of a final linear layer is the de facto method for outputting probability distributions in neural networks. In many applications, such as language models or text generation, these models have to produce distributions over large output vocabularies. Recently, this construction has been shown to have limited representational capacity due to its connection with the rank bottleneck in matrix factorization. However, little is known about the limitations of the linear-softmax layer for quantities of practical interest such as cross entropy or mode estimation, a direction explored theoretically and empirically in this paper. As an efficient and effective solution to alleviate this issue, we propose to learn parametric monotonic functions on top of the logits. Theoretically, we show that such monotonic functions are likely to increase the rank of a matrix to its full rank. Empirically, our method improves over the traditional softmax-linear layer in both synthetic and real language model experiments with negligible time or memory overhead, while being comparable to the more computationally expensive mixture of softmax distributions.
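A minimal PyTorch sketch of the general mechanism: a learnable, strictly increasing pointwise function is applied to the logits before the softmax. The particular parameterization below (a positive combination of shifted tanh units, with names like `MonotonicLogitNonlinearity` and `n_components`) is an assumption for illustration, not necessarily the one used in the paper.

```python
import torch
import torch.nn as nn

class MonotonicLogitNonlinearity(nn.Module):
    """f(z) = sum_k w_k * tanh(s_k * (z - b_k)) with w_k, s_k > 0, hence strictly increasing."""
    def __init__(self, n_components=8):
        super().__init__()
        self.log_w = nn.Parameter(torch.zeros(n_components))          # positive weights via exp
        self.log_s = nn.Parameter(torch.zeros(n_components))          # positive slopes via exp
        self.b = nn.Parameter(torch.linspace(-3.0, 3.0, n_components))  # shift/knot locations

    def forward(self, logits):
        z = logits.unsqueeze(-1)
        return (self.log_w.exp() * torch.tanh(self.log_s.exp() * (z - self.b))).sum(-1)

vocab, hidden = 1000, 64
proj = nn.Linear(hidden, vocab)
f = MonotonicLogitNonlinearity()
h = torch.randn(32, hidden)
# an elementwise non-linearity on the logits can lift the rank limitation of plain linear-softmax
probs = torch.softmax(f(proj(h)), dim=-1)
```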
Multi-Object Representation Learning with Iterative Variational Inference
Klaus Greff · Raphael Lopez Kaufman · Rishabh Kabra · Nicholas Watters · Christopher Burgess · Daniel Zoran · Loic Matthey · Matthew Botvinick · Alexander Lerchner
Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic generalization abilities. Yet most work on representation learning focuses on feature learning without even considering multiple objects, or treats segmentation as an (often supervised) preprocessing step. Instead, we argue for the importance of learning to segment and represent objects jointly. Starting from the simple assumption that a scene is composed of entities with common features, we demonstrate that it is possible to learn to segment images into interpretable objects with disentangled representations. Our method learns - without supervision - to inpaint occluded parts, and extrapolates to objects with novel feature combinations. We also show that, because our method is based on iterative variational inference, our system is able to learn multi-modal posteriors for ambiguous inputs and extends naturally to sequential data.
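A compact sketch of iterative amortized variational inference, the general mechanism the abstract builds on: posterior parameters are refined over several steps by a network that sees the current estimate and the gradient of the ELBO. The single latent vector, GRU-based refinement, Gaussian likelihood, and all sizes are simplifying assumptions, not the authors' multi-object architecture.

```python
import torch
import torch.nn as nn

D, Z = 64, 8                                     # toy observation / latent sizes (assumed)
decoder = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, D))
refine = nn.GRUCell(4 * Z, 2 * Z)                # refinement net: (params, grads) -> update

def neg_elbo(x, mu, logvar):
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterized sample
    log_px = -((x - decoder(z)) ** 2).sum(-1)                    # unit-variance Gaussian likelihood
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(-1)     # KL(q || N(0, I))
    return (kl - log_px).mean()

x = torch.randn(16, D)                                           # toy batch of observations
lam = torch.zeros(16, 2 * Z, requires_grad=True)                 # posterior params: (mu, logvar)
h = torch.zeros(16, 2 * Z)                                       # refinement hidden state
for _ in range(5):                                               # inference iterations
    mu, logvar = lam.chunk(2, dim=-1)
    loss = neg_elbo(x, mu, logvar)
    grad, = torch.autograd.grad(loss, lam, create_graph=True)    # signal for the refinement net
    h = refine(torch.cat([lam, grad], dim=-1), h)
    lam = lam + h                                                # refine the posterior estimate
```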
Cross-Domain 3D Equivariant Image Embeddings
Carlos Esteves · Avneesh Sud · Zhengyi Luo · Kostas Daniilidis · Ameesh Makadia
Spherical convolutional networks have been introduced recently as tools to learn powerful feature representations of 3D shapes. Spherical CNNs are equivariant to 3D rotations making them ideally suited for applications where 3D data may be observed in arbitrary orientations. In this paper we learn 2D image embeddings with a similar equivariant structure: embedding the image of a 3D object should commute with rotations of the object. We introduce a cross-domain embedding from 2D images into a spherical CNN latent space. Our model is supervised only by target embeddings obtained from a spherical CNN pretrained for 3D shape classification. The trained model learns to encode images with 3D shape properties and is equivariant to 3D rotations of the observed object. We show that learning only a rich embedding for images with appropriate geometric structure is in and of itself sufficient for tackling numerous applications. Evidence from two different applications, relative pose estimation and novel view synthesis, demonstrates that equivariant embeddings are sufficient for both applications without requiring any task-specific supervised training.
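A minimal sketch of the supervision scheme described above: a 2D image encoder is trained to regress embeddings produced by a frozen, pretrained 3D (spherical) encoder. Both networks below are toy stand-ins with assumed input sizes, not the actual CNN architectures.

```python
import torch
import torch.nn as nn

EMB = 128                                                  # embedding size (assumed)
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512),
                              nn.ReLU(), nn.Linear(512, EMB))
sphere_encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 512),
                               nn.ReLU(), nn.Linear(512, EMB))
sphere_encoder.requires_grad_(False)                       # stand-in for the frozen pretrained spherical CNN

opt = torch.optim.Adam(image_encoder.parameters(), lr=1e-3)
images = torch.randn(8, 3, 64, 64)                         # rendered views of 8 objects
spheres = torch.randn(8, 32, 32)                           # spherical signals of the same 8 objects

target = sphere_encoder(spheres).detach()                  # fixed target embeddings
loss = ((image_encoder(images) - target) ** 2).mean()      # regress the 3D embedding from the 2D view
opt.zero_grad()
loss.backward()
opt.step()
```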
Loss Landscapes of Regularized Linear Autoencoders
Daniel Kunin · Jonathan Bloom · Aleksandrina Goeva · Cotton Seed
Autoencoders are a deep learning model for representation learning. When trained to minimize the distance between the data and its reconstruction, linear autoencoders (LAEs) learn the subspace spanned by the top principal directions but cannot learn the principal directions themselves. In this paper, we prove that L2-regularized LAEs are symmetric at all critical points and learn the principal directions as the left singular vectors of the decoder. We smoothly parameterize the critical manifold and relate the minima to the MAP estimate of probabilistic PCA. We illustrate these results empirically and consider implications for PCA algorithms, computational neuroscience, and the algebraic topology of learning.
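A small numpy experiment illustrating the stated result: train an L2-regularized linear autoencoder by gradient descent and compare the left singular vectors of the decoder with the sample principal directions of the data. The data generation, learning rate, and regularization strength are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))                  # random orthogonal mixing
X = (rng.standard_normal((n, d)) * np.linspace(3.0, 0.5, d)) @ Q  # anisotropic toy data
X -= X.mean(0)

E = 0.01 * rng.standard_normal((k, d))     # encoder
D = 0.01 * rng.standard_normal((d, k))     # decoder
lam, lr = 0.1, 0.05
for _ in range(8000):
    R = X @ E.T @ D.T - X                  # reconstruction residual, shape (n, d)
    gD = R.T @ (X @ E.T) / n + lam * D     # gradient of the regularized loss w.r.t. the decoder
    gE = D.T @ R.T @ X / n + lam * E       # gradient w.r.t. the encoder
    D -= lr * gD
    E -= lr * gE

U = np.linalg.svd(D, full_matrices=False)[0]    # left singular vectors of the decoder
Vt = np.linalg.svd(X, full_matrices=False)[2]   # sample principal directions of the data
print(np.round(np.abs(U.T @ Vt[:k].T), 2))      # should be close to the identity (up to sign)
```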
Hyperbolic Disk Embeddings for Directed Acyclic Graphs
Ryota Suzuki · Ryusuke Takahama · Shun Onoda
Obtaining continuous representations of structural data such as directed acyclic graphs (DAGs) has gained attention in machine learning and artificial intelligence. However, embedding complex DAGs in which both the ancestors and descendants of nodes grow exponentially is difficult. To tackle this problem, we develop Disk Embeddings, a framework for embedding DAGs into quasi-metric spaces. Existing state-of-the-art methods, Order Embeddings and Hyperbolic Entailment Cones, are instances of Disk Embeddings in Euclidean space and spheres, respectively. Furthermore, we propose a novel method, Hyperbolic Disk Embeddings, to handle the exponential growth of relations. The results of our experiments show that our Disk Embedding models outperform existing methods, especially on complex DAGs other than trees.
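A minimal sketch of the disk-embedding idea: each node carries a center and a radius, and ancestry corresponds to disk containment, checked here via the triangle-inequality condition d(center_a, center_b) <= radius_a - radius_b with the Poincaré-ball distance for a hyperbolic variant. The containment test, toy hierarchy, and parameterization are illustrative, not the paper's formulation.

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Geodesic distance between two points in the open unit (Poincare) ball."""
    nu, nv = np.sum(u * u), np.sum(v * v)
    delta = np.sum((u - v) ** 2)
    return np.arccosh(1.0 + 2.0 * delta / ((1.0 - nu) * (1.0 - nv) + eps))

def contains(center_a, radius_a, center_b, radius_b):
    """Does disk A contain disk B? Sufficient condition from the triangle inequality."""
    return poincare_dist(center_a, center_b) <= radius_a - radius_b

# a toy 3-node chain: root -> mid -> leaf, each node a (center, radius) pair
root = (np.array([0.0, 0.0]), 2.0)
mid = (np.array([0.3, 0.0]), 1.0)
leaf = (np.array([0.5, 0.0]), 0.2)
print(contains(*root, *mid), contains(*mid, *leaf), contains(*leaf, *root))  # True True False
```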
LatentGNN: Learning Efficient Non-local Relations for Visual Recognition
Songyang Zhang · Xuming He · Shipeng Yan
Capturing long-range dependencies in feature representations is crucial for many visual recognition tasks. Despite recent successes of deep convolutional networks, it remains challenging to model non-local context relations between visual features. A promising strategy is to model the feature context by a fully-connected graph neural network (GNN), which augments traditional convolutional features with an estimated non-local context representation. However, most GNN-based approaches require computing a dense graph affinity matrix and hence have difficulty scaling up to tackle complex real-world visual problems. In this work, we propose an efficient and yet flexible non-local relation representation based on a novel class of graph neural networks. Our key idea is to introduce a latent space to reduce the complexity of the graph, which allows us to use a low-rank representation for the graph affinity matrix and to achieve linear complexity in computation. Extensive experimental evaluations on three major visual recognition tasks show that our method outperforms prior work by a large margin while maintaining a low computation cost.
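A minimal numpy sketch of the low-rank non-local computation: messages are routed through a small set of latent nodes instead of a dense N x N affinity matrix, so aggregation is linear in the number of positions. The random projection stands in for the learned position-to-latent weights; sizes are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, d = 4096, 256, 16                           # positions, channels, latent nodes
X = rng.standard_normal((N, C))                   # flattened convolutional features
Psi = rng.standard_normal((N, d)) / np.sqrt(N)    # position-to-latent assignment (learned in practice)

Z = Psi.T @ X                                     # gather: latent node features, (d, C)
# optionally propagate among the d latent nodes here, e.g. Z = A_latent @ Z
context = Psi @ Z                                 # scatter back to positions, (N, C)
X_aug = X + context                               # augment local features with non-local context
# dense view never formed: context == (Psi @ Psi.T) @ X, a rank-d affinity, O(N*d*C) instead of O(N^2*C)
```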
Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness
Raphael Suter · Djordje Miladinovic · Bernhard Schölkopf · Stefan Bauer
The ability to learn disentangled representations that split underlying sources of variation in high-dimensional, unstructured data is important for data-efficient and robust use of neural networks. While various approaches aiming towards this goal have been proposed recently, a commonly accepted definition and validation procedure is missing. We provide a causal perspective on representation learning which covers disentanglement and domain shift robustness as special cases. Our causal framework allows us to introduce a new metric for the quantitative evaluation of deep latent variable models. We show how this metric can be estimated from labeled observational data and further provide an efficient estimation algorithm that scales linearly in the dataset size.
Lorentzian Distance Learning for Hyperbolic Representations
Marc Law · Renjie Liao · Jake Snell · Richard Zemel
We introduce an approach to learn representations based on the Lorentzian distance in hyperbolic geometry. Hyperbolic geometry is especially suited to hierarchically-structured datasets, which are prevalent in the real world. Current hyperbolic representation learning methods compare examples with the Poincaré distance. They try to minimize the distance between each node in a hierarchy and its descendants while maximizing its distance from other nodes. This formulation produces node representations close to the centroid of their descendants. To obtain efficient and interpretable algorithms, we exploit the fact that the centroid w.r.t. the squared Lorentzian distance can be written in closed form. We show that the Euclidean norm of such a centroid decreases as the curvature of the hyperbolic space decreases. This property makes it appropriate for representing hierarchies where parent nodes minimize the distances to their descendants and have a smaller Euclidean norm than their children. Our approach obtains state-of-the-art results in retrieval and classification tasks on different datasets.
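A short numpy sketch of the squared Lorentzian distance on the hyperboloid <x, x>_L = -beta, plus a centroid obtained by rescaling the Euclidean mean back onto the hyperboloid. The rescaled-mean expression is written from memory and should be checked against the paper; treat it as an assumption for illustration.

```python
import numpy as np

beta = 1.0

def lorentz_inner(x, y):
    """<x, y>_L = -x0*y0 + sum_i xi*yi (signature -, +, ..., +)."""
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def lift(v):
    """Map spatial coordinates v in R^d onto the hyperboloid <x, x>_L = -beta."""
    x0 = np.sqrt(beta + np.sum(v * v, axis=-1, keepdims=True))
    return np.concatenate([x0, v], axis=-1)

def sq_lorentz_dist(x, y):
    """||x - y||_L^2 = -2*beta - 2*<x, y>_L for points on the hyperboloid."""
    return -2.0 * beta - 2.0 * lorentz_inner(x, y)

pts = lift(np.random.default_rng(0).standard_normal((5, 3)))
a = pts.mean(axis=0)
centroid = np.sqrt(beta) * a / np.sqrt(np.abs(lorentz_inner(a, a)))  # rescale the mean back onto the hyperboloid
print(lorentz_inner(centroid, centroid))       # ~ -beta: the centroid lies on the hyperboloid
print(sq_lorentz_dist(pts, centroid).sum())    # the quantity the closed-form centroid is meant to minimize
```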