The problem of projecting a matrix onto the space of \emph{doubly stochastic} matrices finds several applications in machine learning. For example, in spectral clustering, it has been shown that forming the normalized Laplacian matrix from a data affinity matrix has close connections to projecting it onto the set of doubly stochastic matrices. However, the analysis of why this projection improves clustering has been limited. In this paper we present theoretical conditions on the given affinity matrix under which its doubly stochastic projection is an ideal affinity matrix (i.e., it has no false connections between clusters, and is well-connected within each cluster). In particular, we show that a necessary and sufficient condition for a projected affinity matrix to be ideal reduces to a set of conditions on the input affinity that decompose along each cluster. Further, in the \emph{subspace clustering} problem, where each cluster is defined by a linear subspace, we provide geometric conditions on the underlying subspaces which guarantee correct clustering via a continuous version of the problem. This allows us to explain theoretically the remarkable performance of a recently proposed doubly stochastic subspace clustering method.
Author Information
Tianjiao Ding (Johns Hopkins University)

I am a second-year Ph.D. student at [Johns Hopkins University](https://www.jhu.edu/), advised by [René Vidal](http://cis.jhu.edu/~rvidal/). I also work closely with [Benjamin D. Haeffele](https://www.cis.jhu.edu/~haeffele/) and [Yi Ma](http://people.eecs.berkeley.edu/~yima/). Prior to joining Hopkins, I spent two years as a research assistant at [ShanghaiTech University](https://sist.shanghaitech.edu.cn/sist_en/), advised by [Manolis C. Tsakiris](https://sites.google.com/site/manolisctsakiris/) and collaborating with [Laurent Kneip](https://laurentkneip.com/). I received my undergraduate degree in computer science with honors from [ShanghaiTech](https://sist.shanghaitech.edu.cn/sist_en/), working with [Manolis](https://sites.google.com/site/manolisctsakiris/) and [Yi](http://people.eecs.berkeley.edu/~yima/). My research interests lie in the theoretical foundations of machine learning and data science, as well as emerging applications; accordingly, I develop both rigorous mathematics and practical implementations in my work. In particular, I study subspace learning, 3D vision, and robotics.
Derek Lim (MIT)
Rene Vidal (Johns Hopkins University, USA)
Benjamin Haeffele (Johns Hopkins University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Understanding Doubly Stochastic Clustering »
  Thu. Jul 21st through Fri. Jul 22nd, Room Hall E #619
More from the Same Authors
- 2023 : Learning Structured Representations with Equivariant Contrastive Learning »
  Sharut Gupta · Joshua Robinson · Derek Lim · Soledad Villar · Stefanie Jegelka
- 2023 : Expressive Sign Equivariant Networks for Spectral Geometric Learning »
  Derek Lim · Joshua Robinson · Stefanie Jegelka · Haggai Maron
- 2023 : Positional Encodings as Group Representations: A Unified Framework »
  Derek Lim · Hannah Lawrence · Ningyuan Huang · Erik Thiede
- 2023 Workshop: HiLD: High-dimensional Learning Dynamics Workshop »
  Courtney Paquette · Zhenyu Liao · Mihai Nica · Elliot Paquette · Andrew Saxe · Rene Vidal
- 2023 Oral: Equivariant Polynomials for Graph Neural Networks »
  Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman
- 2023 Poster: On the Convergence of Gradient Flow on Multi-layer Linear Models »
  Hancheng Min · Rene Vidal · Enrique Mallada
- 2023 Poster: Equivariant Polynomials for Graph Neural Networks »
  Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman
- 2023 Poster: Learning Globally Smooth Functions on Manifolds »
  Juan Cervino · Luiz Chamon · Benjamin Haeffele · Rene Vidal · Alejandro Ribeiro
- 2023 Poster: The Ideal Continual Learner: An Agent That Never Forgets »
  Liangzu Peng · Paris Giampouras · Rene Vidal
- 2023 Poster: Graph Inductive Biases in Transformers without Message Passing »
  Liheng Ma · Chen Lin · Derek Lim · Adriana Romero Soriano · Puneet Dokania · Mark Coates · Phil Torr · Ser Nam Lim
- 2022 : Sign and Basis Invariant Networks for Spectral Graph Representation Learning »
  Derek Lim · Joshua Robinson · Lingxiao Zhao · Tess Smidt · Suvrit Sra · Haggai Maron · Stefanie Jegelka
- 2022 : The Power of Recursion in Graph Neural Networks for Counting Substructures »
  Behrooz Tahmasebi · Derek Lim · Stefanie Jegelka
- 2022 : Sign and Basis Invariant Networks for Spectral Graph Representation Learning »
  Derek Lim · Joshua Robinson
- 2022 Poster: Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees »
  Darshan Thaker · Paris Giampouras · Rene Vidal
- 2022 Spotlight: Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees »
  Darshan Thaker · Paris Giampouras · Rene Vidal
- 2021 Poster: Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach »
  Tianyu Ding · Zhihui Zhu · Rene Vidal · Daniel Robinson
- 2021 Spotlight: Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach »
  Tianyu Ding · Zhihui Zhu · Rene Vidal · Daniel Robinson
- 2021 Poster: Understanding the Dynamics of Gradient Flow in Overparameterized Linear Models »
  Salma Tarmoun · Guilherme Franca · Benjamin Haeffele · Rene Vidal
- 2021 Poster: On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks »
  Hancheng Min · Salma Tarmoun · Rene Vidal · Enrique Mallada
- 2021 Spotlight: On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks »
  Hancheng Min · Salma Tarmoun · Rene Vidal · Enrique Mallada
- 2021 Spotlight: Understanding the Dynamics of Gradient Flow in Overparameterized Linear Models »
  Salma Tarmoun · Guilherme Franca · Benjamin Haeffele · Rene Vidal
- 2021 Poster: A Nullspace Property for Subspace-Preserving Recovery »
  Mustafa D Kaba · Chong You · Daniel Robinson · Enrique Mallada · Rene Vidal
- 2021 Spotlight: A Nullspace Property for Subspace-Preserving Recovery »
  Mustafa D Kaba · Chong You · Daniel Robinson · Enrique Mallada · Rene Vidal
- 2019 Poster: Noisy Dual Principal Component Pursuit »
  Tianyu Ding · Zhihui Zhu · Tianjiao Ding · Yunchen Yang · Daniel Robinson · Manolis Tsakiris · Rene Vidal
- 2019 Oral: Noisy Dual Principal Component Pursuit »
  Tianyu Ding · Zhihui Zhu · Tianjiao Ding · Yunchen Yang · Daniel Robinson · Manolis Tsakiris · Rene Vidal