Session
Learning Theory
Refined Complexity of PCA with Outliers
Kirill Simonov · Fedor Fomin · Petr Golovach · Fahad Panolan
Principal component analysis (PCA) is one of the most fundamental procedures in exploratory data analysis and is the basic step in applications ranging from quantitative finance and bioinformatics to image analysis and neuroscience. However, it is well-documented that the applicability of PCA in many real scenarios could be constrained by an “immune deficiency” to outliers such as corrupted observations. We consider the following algorithmic question about PCA with outliers. For a set of n points in R^d, how can one identify a small subset of points, say 1% of the total, whose removal leaves the remaining points best fit by some unknown r-dimensional subspace? We provide a rigorous algorithmic analysis of the problem. We show that the problem is solvable in time n^O(d^2); in particular, for constant dimension the problem is solvable in polynomial time. We complement the algorithmic result with a lower bound: unless the Exponential Time Hypothesis fails, no algorithm running in time f(d) n^o(d), for any function f of d, can solve the problem exactly or even approximate it within a constant factor.
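Since the abstract describes the combinatorial objective rather than an implementation, the sketch below only illustrates that objective with a simple greedy heuristic (repeatedly fit an r-dimensional subspace by SVD and drop the worst-fitting point); it is not the n^O(d^2) algorithm analysed in the paper, and the function name and toy data are illustrative inventions.

```python
# A minimal greedy sketch of the PCA-with-outliers objective; a heuristic
# illustration only, not the exact algorithm from the paper.
import numpy as np

def greedy_pca_outliers(points, r, k):
    """Remove k points so that the remaining ones fit an r-dimensional subspace well."""
    pts = points.copy()
    kept = list(range(len(points)))
    for _ in range(k):
        centred = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        basis = vt[:r]                                        # top-r principal directions
        residuals = np.linalg.norm(centred - centred @ basis.T @ basis, axis=1)
        worst = int(np.argmax(residuals))                     # discard the worst-fitting point
        pts = np.delete(pts, worst, axis=0)
        kept.pop(worst)
    return pts, kept

rng = np.random.default_rng(0)
inliers = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 5))   # near a 2-d subspace
outliers = 10 * rng.standard_normal((2, 5))
data = np.vstack([inliers, outliers])
_, kept = greedy_pca_outliers(data, r=2, k=2)
print("discarded indices:", sorted(set(range(len(data))) - set(kept)))
```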
On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms
Tianyi Lin · Nhat Ho · Michael Jordan
We provide theoretical analyses for two algorithms that solve the regularized optimal transport (OT) problem between two discrete probability measures with at most $n$ atoms. We show that a greedy variant of the classical Sinkhorn algorithm, known as the \emph{Greenkhorn algorithm}, achieves a complexity bound of $\widetilde{\mathcal{O}}\left(\frac{n^2}{\varepsilon^2}\right)$, improving on the previous best bound of $\widetilde{\mathcal{O}}\left(\frac{n^2}{\varepsilon^3}\right)$. Notably, this matches the best known complexity bound for the Sinkhorn algorithm and helps explain why the Greenkhorn algorithm can outperform the Sinkhorn algorithm in practice. Our proof technique, which is based on a primal-dual formulation and a novel upper bound for the dual solution, also leads to a new class of algorithms that we refer to as \emph{adaptive primal-dual accelerated mirror descent} (APDAMD) algorithms. We prove that the complexity of these algorithms is $\widetilde{\mathcal{O}}\left(\frac{n^2\gamma^{1/2}}{\varepsilon}\right)$, where $\gamma>0$ is the inverse of the strong convexity modulus of the Bregman divergence with respect to $\left\|\cdot\right\|_\infty$. This implies that the APDAMD algorithm is faster than the Sinkhorn and Greenkhorn algorithms in terms of $\varepsilon$. Experimental results on synthetic and real datasets demonstrate the favorable performance of the Greenkhorn and APDAMD algorithms in practice.
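As a concrete reference point, here is a minimal NumPy sketch of the Greenkhorn update rule: at each step only the single row or column whose marginal is furthest from its target (in the divergence rho) is rescaled. The regularisation parameter, stopping tolerance, and the fact that the marginals are recomputed in full each step (instead of being maintained in O(n) per update) are simplifications for readability, not the paper's exact setup.

```python
# A minimal Greenkhorn sketch for entropically regularised optimal transport.
import numpy as np

def rho(a, b):
    # Divergence used to pick the greediest row/column to update.
    return b - a + a * np.log(a / b)

def greenkhorn(C, r, c, eta=0.1, iters=20000, tol=1e-8):
    K = np.exp(-C / eta)
    u, v = np.ones(len(r)), np.ones(len(c))
    for _ in range(iters):
        P = u[:, None] * K * v[None, :]
        row_gap, col_gap = rho(r, P.sum(1)), rho(c, P.sum(0))
        i, j = np.argmax(row_gap), np.argmax(col_gap)
        if max(row_gap[i], col_gap[j]) < tol:
            break
        if row_gap[i] >= col_gap[j]:
            u[i] = r[i] / (K[i] @ v)                  # rescale row i to match its marginal
        else:
            v[j] = c[j] / (K[:, j] @ u)               # rescale column j to match its marginal
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n = 20
C = rng.random((n, n))                                # cost matrix
r = c = np.ones(n) / n                                # uniform marginals
P = greenkhorn(C, r, c)
print("marginal errors:", np.abs(P.sum(1) - r).sum(), np.abs(P.sum(0) - c).sum())
```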
Passed & Spurious: Descent Algorithms and Local Minima in Spiked Matrix-Tensor Models
Stefano Sarao Mannelli · Florent Krzakala · Pierfrancesco Urbani · Lenka Zdeborova
In this work we analyse quantitatively the interplay between the loss landscape and the performance of descent algorithms in a prototypical inference problem, the spiked matrix-tensor model. We study a loss function that is the negative log-likelihood of the model. We analyse the number of local minima at a fixed distance from the signal/spike with the Kac-Rice formula, and locate the trivialization of the landscape at large signal-to-noise ratios. We evaluate analytically the performance of a gradient flow algorithm using the integro-differential PDEs developed in the physics of disordered systems for Langevin dynamics. We analyse the performance of an approximate message passing algorithm that estimates the maximum likelihood configuration via its state evolution. We conclude by comparing these results: while we observe a drastic slowdown of the gradient flow dynamics even in the region where the landscape is trivial, both algorithms are shown to perform well even in the part of parameter space where spurious local minima are present.
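To make the model concrete, below is a small projected-gradient-descent sketch on a squared-error version of the spiked matrix-tensor objective (a matrix channel plus an order-3 tensor channel, with the signal constrained to the sphere). The normalisations, signal-to-noise values, step size, and number of steps are illustrative assumptions; the paper analyses gradient flow and AMP dynamics rather than this toy loop.

```python
# A schematic projected gradient descent on a spiked matrix-tensor loss.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n, snr_mat, snr_ten, steps, lr = 80, 3.0, 3.0, 300, 0.05

x_star = rng.standard_normal(n)
x_star *= np.sqrt(n) / np.linalg.norm(x_star)            # planted spike on the sphere

noise_m = rng.standard_normal((n, n))
Y = snr_mat / n * np.outer(x_star, x_star) + (noise_m + noise_m.T) / np.sqrt(2 * n)

noise_t = rng.standard_normal((n, n, n))
noise_t = sum(noise_t.transpose(p) for p in permutations(range(3))) / 6.0   # symmetrise
T = snr_ten / n**2 * np.einsum('i,j,k->ijk', x_star, x_star, x_star) + noise_t / n

def grad(x):
    # Gradient of the squared misfit to both channels (negative log-likelihood
    # up to constants under Gaussian noise).
    rm = Y - snr_mat / n * np.outer(x, x)
    rt = T - snr_ten / n**2 * np.einsum('i,j,k->ijk', x, x, x)
    return (-2 * snr_mat / n * rm @ x
            - 3 * snr_ten / n**2 * np.einsum('ijk,j,k->i', rt, x, x))

x = rng.standard_normal(n)
x *= np.sqrt(n) / np.linalg.norm(x)                      # random initialisation on the sphere
for _ in range(steps):
    x = x - lr * grad(x)
    x *= np.sqrt(n) / np.linalg.norm(x)                  # project back onto the sphere

print("overlap with the spike |<x, x*>| / n =", abs(x @ x_star) / n)
```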
Teaching a black-box learner
Sanjoy Dasgupta · Daniel Hsu · Stefanos Poulis · Jerry Zhu
One widely-studied model of {\it teaching} calls for a teacher to provide the minimal set of labeled examples that uniquely specifies a target concept. The assumption is that the teacher knows the learner's hypothesis class, which is often not true of real-life teaching scenarios. We consider the problem of teaching a learner whose representation and hypothesis class are unknown---that is, the learner is a black box. We show that a teacher who does not interact with the learner can do no better than providing random examples. We then prove, however, that with interaction, a teacher can efficiently find a set of teaching examples that is a provably good approximation to the optimal set. As an illustration, we show how this scheme can be used to {\it shrink} training sets for any family of classifiers: that is, to find an approximately-minimal subset of training instances that yields the same classifier as the entire set.
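A hedged sketch of the interactive idea in the last sentence: repeatedly train the black box on the current teaching set and add a point on which it still disagrees with the classifier trained on the full set. The 1-d threshold learner and the rule for choosing which disagreement point to add are illustrative stand-ins, not the paper's construction.

```python
# Interactive teaching-set shrinking against a stand-in black-box learner.
import numpy as np

class ThresholdLearner:
    """Stand-in black box: predicts 1 above the midpoint of the largest 0-labelled
    and smallest 1-labelled training point it has seen."""
    def fit(self, x, y):
        lo = x[y == 0].max() if np.any(y == 0) else -np.inf
        hi = x[y == 1].min() if np.any(y == 1) else np.inf
        if np.isfinite(lo) and np.isfinite(hi):
            self.t = (lo + hi) / 2
        else:
            self.t = lo if np.isfinite(lo) else hi
        return self
    def predict(self, x):
        return (x > self.t).astype(int)

rng = np.random.default_rng(0)
x = np.sort(rng.random(500))
y = (x > 0.6).astype(int)                                # full training set
target = ThresholdLearner().fit(x, y).predict(x)         # behaviour of the full-set classifier

teach_x, teach_y = [x[0]], [y[0]]                        # seed the teaching set
while True:
    h = ThresholdLearner().fit(np.array(teach_x), np.array(teach_y))
    disagree = np.nonzero(h.predict(x) != target)[0]
    if len(disagree) == 0:
        break
    i = disagree[len(disagree) // 2]                     # interaction: add a disagreement point
    teach_x.append(x[i]); teach_y.append(y[i])

print(f"{len(teach_x)} teaching examples reproduce the full-set classifier on all 500 points")
```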
PAC Learnability of Node Functions in Networked Dynamical Systems
Abhijin Adiga · Chris J Kuhlman · Madhav Marathe · S. S. Ravi · Anil Vullikanti
We consider the PAC learnability of the local functions at the vertices of a discrete networked dynamical system, assuming that the underlying network is known. Our focus is on the learnability of threshold functions. We show that several variants of threshold functions are PAC learnable and provide tight bounds on the sample complexity. In general, when the input consists of positive and negative examples, we show that the concept class of threshold functions is not efficiently PAC learnable, unless NP = RP. Using a dynamic programming approach, we show efficient PAC learnability when the number of negative examples is small. We also present an efficient learner which is consistent with all the positive examples and at least (1-1/e) fraction of the negative examples. This algorithm is based on maximizing a submodular function under matroid constraints. By performing experiments on both synthetic and real-world networks, we study how the network structure and sample complexity influence the quality of the inferred system.
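To ground the setting, here is a small sketch of a consistent learner from positive examples only: each observed transition narrows a per-vertex interval of thresholds consistent with a synchronous threshold update. The closed-neighbourhood convention and the toy 4-cycle are assumptions made for the illustration; the paper's dynamic-programming and submodular-maximisation algorithms address the harder case with negative examples.

```python
# Learning per-vertex threshold intervals from observed transitions.
import numpy as np

def learn_thresholds(adj, transitions):
    """adj: 0/1 adjacency matrix; transitions: list of (state, next_state) pairs."""
    n = len(adj)
    lo = np.zeros(n, dtype=int)                      # lower bounds on thresholds
    hi = np.full(n, n + 1, dtype=int)                # upper bounds on thresholds
    for state, nxt in transitions:
        score = adj @ state + state                  # active vertices in the closed neighbourhood
        for v in range(n):
            if nxt[v] == 1:
                hi[v] = min(hi[v], score[v])         # firing requires t_v <= score
            else:
                lo[v] = max(lo[v], score[v] + 1)     # staying off requires t_v > score
    assert np.all(lo <= hi), "examples inconsistent with a threshold system"
    return lo, hi                                    # any t_v in [lo_v, hi_v] is consistent

# Tiny demo on a 4-cycle with true thresholds (1, 2, 2, 3).
adj = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
true_t = np.array([1, 2, 2, 3])
rng = np.random.default_rng(0)
transitions = []
for _ in range(30):
    s = rng.integers(0, 2, size=4)
    transitions.append((s, ((adj @ s + s) >= true_t).astype(int)))
lo, hi = learn_thresholds(adj, transitions)
print("feasible threshold intervals:", list(zip(lo, hi)))
```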
Online learning with kernel losses
Niladri Chatterji · Aldo Pacchiano · Peter Bartlett
We present a generalization of the adversarial linear bandits framework, where the underlying losses are kernel functions (with an associated reproducing kernel Hilbert space) rather than linear functions. We study a version of the exponential weights algorithm and bound its regret in this setting. Under conditions on the eigen-decay of the kernel we provide a sharp characterization of the regret for this algorithm. When the eigen-decay is polynomial ($\mu_j \le \mathcal{O}(j^{-\beta})$), we find that the regret is bounded by $\mathcal{R}_n \le \mathcal{O}(n^{\beta/(2\beta-1)})$, while under exponential eigen-decay ($\mu_j \le \mathcal{O}(e^{-\beta j})$) we obtain an even tighter bound of $\mathcal{R}_n \le \tilde{\mathcal{O}}(n^{1/2})$. When the eigen-decay is polynomial we show a \emph{non-matching} minimax lower bound on the regret of $\mathcal{R}_n \ge \Omega(n^{(\beta+1)/(2\beta)})$, and a lower bound of $\mathcal{R}_n \ge \Omega(n^{1/2})$ when the eigenvalues decay exponentially fast. We also study the full information setting when the underlying losses are kernel functions and present an adapted exponential weights algorithm and a conditional gradient descent algorithm.
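The following is a minimal sketch of the full-information exponential-weights variant mentioned at the end of the abstract, run over a discretised action set with an RBF kernel loss. The discretisation, learning rate, bandwidth, and the oblivious adversary sequence are illustrative choices rather than the paper's construction.

```python
# Exponential weights with a kernel loss in the full-information setting.
import numpy as np

rng = np.random.default_rng(0)
actions = np.linspace(-1, 1, 201)                    # discretised action set
T, eta, bandwidth = 500, 0.5, 0.3
weights = np.ones_like(actions)
learner_loss, action_loss = 0.0, np.zeros_like(actions)

def kernel_loss(a, y):
    # RBF kernel evaluated between the action and the adversary's point.
    return np.exp(-(a - y) ** 2 / (2 * bandwidth ** 2))

for t in range(T):
    a = rng.choice(actions, p=weights / weights.sum())   # sample from the exponential weights
    y = np.sin(0.05 * t)                                 # adversary's (oblivious) choice
    losses = kernel_loss(actions, y)                     # full information: every action's loss
    learner_loss += kernel_loss(a, y)
    action_loss += losses
    weights *= np.exp(-eta * losses)                     # multiplicative-weights update

print("regret against the best fixed action:", learner_loss - action_loss.min())
```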
Nearest Neighbor and Kernel Survival Analysis: Nonasymptotic Error Bounds and Strong Consistency Rates
George Chen
We establish the first nonasymptotic error bounds for Kaplan-Meier-based nearest neighbor and kernel survival probability estimators where feature vectors reside in metric spaces. Our bounds imply rates of strong consistency for these nonparametric estimators and, up to a log factor, match an existing lower bound for conditional CDF estimation. Our proof strategy also yields nonasymptotic guarantees for nearest neighbor and kernel variants of the Nelson-Aalen cumulative hazards estimator. We experimentally compare these methods on four datasets. We find that for the kernel survival estimator, a good choice of kernel is one learned using random survival forests.
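To ground the estimator being analysed, here is a minimal sketch of a k-nearest-neighbour Kaplan-Meier estimator: the product-limit survival curve at a query point is computed over the k training subjects closest in feature space. The simulated data and the choice k = 50 are illustrative.

```python
# k-nearest-neighbour Kaplan-Meier survival estimation.
import numpy as np

def knn_kaplan_meier(X, times, events, x_query, k, eval_times):
    """events[i] = 1 if subject i's event time is observed, 0 if censored."""
    idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]    # k nearest neighbours
    t_nb, d_nb = times[idx], events[idx]
    surv = np.ones(len(eval_times))
    for j, s in enumerate(eval_times):
        prod = 1.0
        for t in np.unique(t_nb[(d_nb == 1) & (t_nb <= s)]):     # observed event times up to s
            at_risk = np.sum(t_nb >= t)
            deaths = np.sum((t_nb == t) & (d_nb == 1))
            prod *= 1.0 - deaths / at_risk                       # product-limit factor
        surv[j] = prod
    return surv

rng = np.random.default_rng(0)
n = 500
X = rng.random((n, 2))
true_rate = 0.5 + 2.0 * X[:, 0]                      # hazard depends on the first feature
death = rng.exponential(1.0 / true_rate)
censor = rng.exponential(2.0)
times = np.minimum(death, censor)
events = (death <= censor).astype(int)
print(knn_kaplan_meier(X, times, events, np.array([0.9, 0.5]), k=50,
                       eval_times=np.array([0.1, 0.5, 1.0])))
```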
Fast Rates for a kNN Classifier Robust to Unknown Asymmetric Label Noise
Henry Reeve · Ata Kaban
We consider classification in the presence of class-dependent asymmetric label noise with unknown noise probabilities. In this setting, identifiability conditions are known, but additional assumptions were shown to be required for finite-sample rates, and only the parametric rate has been obtained so far. Assuming these identifiability conditions, together with a measure-smoothness condition on the regression function and Tsybakov’s margin condition, we obtain, up to a log factor, the minimax-optimal rates of the noise-free setting. This rate is attained by a recently proposed modification of the kNN classifier whose existing analysis assumes known noise probabilities; hence, our results provide solid theoretical backing for this empirically successful algorithm. By contrast, the standard kNN classifier is not even consistent in the setting of asymmetric label noise. A key idea in our analysis is a simple kNN-based function optimisation approach that requires far fewer assumptions than existing mode estimators and which may be of independent interest for noise-proportion estimation and other randomised optimisation problems.
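A hedged sketch of the noise-corrected kNN rule being discussed: estimate the corrupted regression function with ordinary kNN and move the decision threshold from 1/2 to (1 - rho1 + rho0)/2. The paper additionally estimates the unknown flip probabilities with a kNN-based optimisation; here they are taken as given, and the simulated distribution is an illustrative choice.

```python
# Noise-corrected kNN classification under asymmetric label noise.
import numpy as np

def knn_eta(X, y_noisy, x_query, k):
    idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
    return y_noisy[idx].mean()                       # kNN estimate of P(noisy label = 1 | x)

def corrected_knn_predict(X, y_noisy, x_query, k, rho0, rho1):
    threshold = (1.0 - rho1 + rho0) / 2.0            # image of 1/2 under the noise model
    return int(knn_eta(X, y_noisy, x_query, k) > threshold)

rng = np.random.default_rng(0)
n, k, rho0, rho1 = 4000, 60, 0.3, 0.1
X = rng.random((n, 2))
eta = X[:, 0]                                        # P(clean label = 1 | x)
y_clean = (rng.random(n) < eta).astype(int)
flip = np.where(y_clean == 1, rng.random(n) < rho1, rng.random(n) < rho0)
y_noisy = np.where(flip, 1 - y_clean, y_clean)       # class-dependent asymmetric noise

queries = rng.random((500, 2))
bayes = (queries[:, 0] > 0.5).astype(int)            # Bayes classifier for the clean problem
pred = np.array([corrected_knn_predict(X, y_noisy, q, k, rho0, rho1) for q in queries])
naive = np.array([int(knn_eta(X, y_noisy, q, k) > 0.5) for q in queries])
print("agreement with Bayes: corrected", (pred == bayes).mean(),
      " naive kNN", (naive == bayes).mean())
```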
Uniform Convergence Rate of the Kernel Density Estimator Adaptive to Intrinsic Volume Dimension
Jisu Kim · Jaehyeok Shin · Alessandro Rinaldo · Larry Wasserman
We derive concentration inequalities for the supremum norm of the difference between a kernel density estimator (KDE) and its pointwise expectation that hold uniformly over the selection of the bandwidth and under weaker conditions on the kernel and the data-generating distribution than previously used in the literature. We first propose a novel concept, called the volume dimension, to measure the intrinsic dimension of the support of a probability distribution based on the rates of decay of the probability of vanishing Euclidean balls. Our bounds depend on the volume dimension and generalize the existing bounds derived in the literature. In particular, when the data-generating distribution has a bounded Lebesgue density or is supported on a sufficiently well-behaved lower-dimensional manifold, our bound recovers the same convergence rate, depending on the intrinsic dimension of the support, as the ones known in the literature. At the same time, our results apply to more general cases, such as distributions with unbounded densities or distributions supported on a mixture of manifolds with different dimensions. Analogous bounds are derived for derivatives of the KDE of any order. Our results are generally applicable but are especially useful for problems in geometric inference and topological data analysis, including level set estimation, density-based clustering, modal clustering and mode hunting, ridge estimation, and persistent homology.
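A small Monte Carlo illustration of the quantity these bounds control: the sup-norm gap between a Gaussian KDE and its pointwise expectation, evaluated on a grid over a range of bandwidths. For N(0,1) data and a Gaussian kernel the expectation has the closed form of an N(0, 1 + h^2) density, which is what makes the comparison possible; the sample size, grid, and bandwidths are illustrative choices.

```python
# Sup-norm deviation of a Gaussian KDE from its pointwise expectation.
import numpy as np

def gaussian_kde(data, grid, h):
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.standard_normal(2000)
grid = np.linspace(-4, 4, 401)

for h in [0.05, 0.1, 0.2, 0.4]:
    kde = gaussian_kde(data, grid, h)
    s2 = 1.0 + h ** 2                                 # variance of N(0,1) convolved with N(0,h^2)
    expectation = np.exp(-grid ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    print(f"h={h:<4} sup-norm deviation from E[KDE]: {np.abs(kde - expectation).max():.4f}")
```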
Maximum Likelihood Estimation for Learning Populations of Parameters
Ramya Korlakai Vinayak · Weihao Kong · Gregory Valiant · Sham Kakade
Consider a setting with $N$ independent individuals, each with an unknown parameter $p_i \in [0, 1]$ drawn from some unknown distribution $P^\star$. After observing the outcomes of $t$ independent Bernoulli trials per individual, i.e., $X_i \sim \text{Binomial}(t, p_i)$, our objective is to accurately estimate $P^\star$ in the sparse regime, namely when $t \ll N$. This problem arises in numerous domains, including the social sciences, psychology, health care, and biology, where the size of the population under study is usually large yet the number of observations per individual is often limited. Our main result shows that, in this sparse regime where $t \ll N$, the maximum likelihood estimator (MLE) is both statistically minimax optimal and efficiently computable. Precisely, for sufficiently large $N$, the MLE achieves the information-theoretically optimal error bound of $\mathcal{O}(\frac{1}{t})$ for $t < c\log{N}$, with respect to the earth mover's distance between the estimated and true distributions. More generally, in an exponentially large interval of $t$ beyond $c \log{N}$, the MLE achieves the minimax error bound of $\mathcal{O}(\frac{1}{\sqrt{t\log N}})$. In contrast, regardless of how large $N$ is, the naive "plug-in" estimator for this problem only achieves the sub-optimal error of $\Theta(\frac{1}{\sqrt{t}})$. Empirically, we also demonstrate that the MLE performs well on both synthetic and real datasets.
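As a concrete companion to the result, here is a hedged sketch of the MLE computed over a fixed grid on [0, 1]: the counts are treated as a binomial mixture over the grid, the mixture weights are fit by EM, and the result is compared with the naive plug-in estimate X_i / t in earth mover's distance. The grid resolution, number of EM iterations, and the Beta(2, 5) population are illustrative choices; the paper's MLE is taken over all distributions rather than a grid.

```python
# Grid-based maximum likelihood estimation of a population of binomial parameters.
import numpy as np
from scipy.stats import binom, wasserstein_distance

rng = np.random.default_rng(0)
N, t = 10000, 5
p_true = rng.beta(2, 5, size=N)                      # unknown population of parameters
X = rng.binomial(t, p_true)                          # t Bernoulli trials per individual

grid = np.linspace(0.0, 1.0, 101)                    # discretised support for the estimate
w = np.ones(len(grid)) / len(grid)                   # initial mixture weights
L = binom.pmf(X[:, None], t, grid[None, :])          # N x grid matrix of binomial likelihoods
for _ in range(200):
    R = L * w                                        # E-step: unnormalised responsibilities
    R /= R.sum(axis=1, keepdims=True)
    w = R.mean(axis=0)                               # M-step: reweight the grid

# Earth mover's distance to the sampled parameters, grid MLE vs. naive plug-in X_i / t.
print("EMD, grid MLE:", wasserstein_distance(grid, p_true, w, np.ones(N)))
print("EMD, plug-in: ", wasserstein_distance(X / t, p_true))
```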