

Session

Miscellaneous Aspects of Machine Learning/Reinforcement Learning

Room 318 - 320

Moderator: Haozhu Wang


Wed 20 July 13:30 - 13:35 PDT

Spotlight
Gradient Descent on Neurons and its Link to Approximate Second-order Optimization

Frederik Benzing

Second-order optimizers are thought to hold the potential to speed up neural network training, but due to the enormous size of the curvature matrix, they typically require approximations to be computationally tractable. The most successful family of approximations are Kronecker-factored, block-diagonal curvature estimates (KFAC). Here, we combine tools from prior work to evaluate exact second-order updates with careful ablations to establish a surprising result: due to its approximations, KFAC is not closely related to second-order updates, and in particular, it significantly outperforms true second-order updates. This challenges widely held beliefs and immediately raises the question of why KFAC performs so well. Towards answering this question, we present evidence strongly suggesting that KFAC approximates a first-order algorithm, which performs gradient descent on neurons rather than weights. Finally, we show that this optimizer often improves over KFAC in terms of computational cost and data efficiency.
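
To make the "gradient descent on neurons" idea concrete, here is a minimal sketch for a single linear layer in a PyTorch-style setting; the function name and the damped least-squares formulation are our own illustration of the idea, not the authors' released optimizer.

```python
import torch

def neuron_gd_step(W, X, dL_dZ, lr=0.1, damping=1e-3):
    """Hypothetical 'gradient descent on neurons' step for a linear layer
    Z = X @ W.T. Rather than stepping W along dL/dW, pick the weight change
    whose effect on the pre-activations Z equals a plain gradient step on
    the neurons themselves, via a damped least-squares solve.
    X: (N, d_in) layer inputs; dL_dZ: (N, d_out) gradients w.r.t. outputs."""
    d_in = X.shape[1]
    target_dZ = -lr * dL_dZ                        # desired neuron update, (N, d_out)
    A = X.T @ X + damping * torch.eye(d_in)        # damped input Gram matrix
    # argmin_dW ||X @ dW.T - target_dZ||^2 + damping * ||dW||^2
    dW = torch.linalg.solve(A, X.T @ target_dZ).T  # (d_out, d_in)
    return W + dW
```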

Wed 20 July 13:35 - 13:40 PDT

Spotlight
A Tree-based Model Averaging Approach for Personalized Treatment Effect Estimation from Heterogeneous Data Sources

Xiaoqing (Ellen) Tan · Chung-Chou H. Chang · Ling Zhou · Lu Tang

Accurately estimating personalized treatment effects within a study site (e.g., a hospital) has been challenging due to limited sample size. Furthermore, privacy considerations and lack of resources prevent a site from leveraging subject-level data from other sites. We propose a tree-based model averaging approach to improve the estimation accuracy of conditional average treatment effects (CATE) at a target site by leveraging models derived from other, potentially heterogeneous, sites without sharing subject-level data. To the best of our knowledge, there is no established model averaging approach for distributed data that focuses on improving the estimation of treatment effects. Specifically, under distributed data networks, our framework provides an interpretable tree-based ensemble of CATE estimators that joins models across study sites, while actively modeling the heterogeneity in data sources through site partitioning. The performance of this approach is demonstrated by a real-world study of the causal effects of oxygen therapy on hospital survival rate and backed up by comprehensive simulation results.
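
As a rough illustration of covariate-dependent model averaging, the toy sketch below partitions the target site's covariate space with a shallow tree and weights site models by their within-leaf error against a pseudo-outcome. All names (`tree_weighted_cate`, `pseudo_tau`) are our own stand-ins, and the paper's actual ensemble construction differs in detail.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_weighted_cate(site_models, X, pseudo_tau, x_new, max_depth=2):
    """Toy covariate-dependent averaging of site-specific CATE models.
    `pseudo_tau` is assumed to be an unbiased target-site proxy for the
    CATE (e.g., a doubly robust pseudo-outcome)."""
    # Partition the target site's covariate space with a shallow tree.
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, pseudo_tau)
    leaf = tree.apply(X)                           # leaf id per training point
    leaf_new = tree.apply(x_new.reshape(1, -1))[0]
    mask = leaf == leaf_new                        # points sharing x_new's leaf
    # Within the leaf, weight each site model by its inverse squared error.
    errs = np.array([np.mean((m.predict(X[mask]) - pseudo_tau[mask]) ** 2)
                     for m in site_models])
    w = 1.0 / (errs + 1e-8)
    w /= w.sum()
    # Covariate-dependent ensemble estimate at the new point.
    return sum(wk * m.predict(x_new.reshape(1, -1))[0]
               for wk, m in zip(w, site_models))
```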

Wed 20 July 13:40 - 13:45 PDT

Spotlight
Efficient Online ML API Selection for Multi-Label Classification Tasks

Lingjiao Chen · Matei Zaharia · James Zou

Multi-label classification tasks such as OCR and multi-object recognition are a major focus of the growing machine-learning-as-a-service industry. While many multi-label APIs are available, it is challenging for users to decide which API to use for their own data and budget, due to the heterogeneity in their prices and performance. Recent work has shown how to efficiently select and combine single-label APIs to optimize performance and cost; however, its computational cost is exponential in the number of labels, making it unsuitable for settings like OCR. In this work, we propose FrugalMCT, a principled framework that adaptively selects the APIs to use for different data in an online fashion while respecting the user's budget. It allows combining ML APIs' predictions for any single data point, and selects the best combination based on an accuracy estimator. We run systematic experiments using ML APIs from Google, Microsoft, Amazon, IBM, Tencent, and other providers for tasks including multi-label image classification, scene text recognition, and named entity recognition. Across these tasks, FrugalMCT can achieve over 90% cost reduction while matching the accuracy of the best single API, or up to 8% better accuracy while matching the best API's cost.
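
A stripped-down view of per-item, budget-aware API selection might look like the sketch below. The `estimate_acc` callable and the naive per-item budget split are our assumptions, not FrugalMCT's actual accuracy estimator or allocation rule.

```python
def select_api(apis, estimate_acc, x, remaining_budget, items_left):
    """Toy online selection of one API per data item under a budget.
    apis: list of (name, price); estimate_acc(name, x): predicted accuracy
    of an API on item x (a stand-in for a learned accuracy estimator)."""
    per_item_budget = remaining_budget / max(items_left, 1)
    affordable = [(n, p) for n, p in apis if p <= per_item_budget]
    if not affordable:
        affordable = [min(apis, key=lambda a: a[1])]  # fall back to cheapest
    # Among affordable APIs, pick the one with the best estimated accuracy.
    return max(affordable, key=lambda a: estimate_acc(a[0], x))
```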

Wed 20 July 13:45 - 13:50 PDT

Spotlight
Entropic Causal Inference: Graph Identifiability

Spencer Compton · Kristjan Greenewald · Dmitriy Katz · Murat Kocaoglu

Entropic causal inference is a recent framework for learning the causal graph between two variables from observational data by finding the information-theoretically simplest structural explanation of the data, i.e., the model with smallest entropy. In our work, we first extend the causal graph identifiability result in the two-variable setting under relaxed assumptions. We then show the first identifiability result using the entropic approach for learning causal graphs with more than two nodes. Our approach utilizes the property that ancestrality between a source node and its descendants can be determined using bivariate entropic tests. We provide a sound sequential peeling algorithm for general graphs that relies on this property. We also propose a heuristic algorithm for small graphs that shows strong empirical performance. We rigorously evaluate the performance of our algorithms on synthetic data generated from a variety of models, observing improvement over prior work. Finally, we test our algorithms on real-world datasets.
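
The bivariate entropic test rests on estimating the entropy of the simplest exogenous variable consistent with the observed conditionals. A greedy minimum-entropy-coupling heuristic in the style of the entropic-causality literature can be sketched as follows (our simplified reading, not the paper's exact procedure).

```python
import numpy as np

def greedy_coupling_entropy(conditionals):
    """Greedy minimum-entropy-coupling heuristic: approximates the entropy
    of the simplest exogenous variable E that could generate the given
    conditional distributions P(Y | X = x), each a 1-D probability vector."""
    ps = [np.sort(np.asarray(p, dtype=float))[::-1].copy()
          for p in conditionals]
    masses = []
    while max(p[0] for p in ps) > 1e-12:
        e = min(p[0] for p in ps)   # mass every conditional can still supply
        masses.append(e)
        for p in ps:
            p[0] -= e               # charge it to each row's largest entry
            p[::-1].sort()          # restore descending order in place
    m = np.array(masses)
    return float(-(m * np.log2(m)).sum())
```

A direction test can then compare H(X) plus the coupling entropy of the conditionals P(Y|X=x) against the analogous quantity for the reverse direction, preferring the direction with the smaller total.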

Wed 20 July 13:50 - 13:55 PDT

Spotlight
Architecture Agnostic Federated Learning for Neural Networks

Disha Makhija · Xing Han · Nhat Ho · Joydeep Ghosh

With growing concerns regarding data privacy and a rapid increase in data volume, Federated Learning (FL) has become an important learning paradigm. However, jointly learning a deep neural network model in an FL setting proves to be a non-trivial task because of the complexities associated with neural networks, such as varied architectures across clients, permutation invariance of the neurons, and the presence of non-linear transformations in each layer. This work introduces a novel framework, Federated Heterogeneous Neural Networks (FedHeNN), that allows each client to build a personalised model without enforcing a common architecture across clients. This allows each client to optimize with respect to local data and compute constraints, while still benefiting from the learnings of other (potentially more powerful) clients. The key idea of FedHeNN is to use instance-level representations obtained from peer clients to guide the simultaneous training on each client. Extensive experimental results demonstrate that the FedHeNN framework is capable of learning better-performing models on clients in both homogeneous and heterogeneous architecture settings across clients.
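
One natural way to realize "instance-level representations guide training" is a similarity-based proximal term. The sketch below uses linear CKA as that similarity; this is our choice of stand-in, not a claim about FedHeNN's exact loss.

```python
import torch

def linear_cka(H1, H2):
    """Linear CKA between two (n, d) representation matrices: an
    architecture-agnostic similarity that only needs both clients to embed
    the same probe batch, regardless of their network architectures."""
    H1 = H1 - H1.mean(0, keepdim=True)
    H2 = H2 - H2.mean(0, keepdim=True)
    num = (H1.T @ H2).norm() ** 2
    den = (H1.T @ H1).norm() * (H2.T @ H2).norm()
    return num / (den + 1e-12)

def local_loss(task_loss, local_repr, peer_repr, lam=0.1):
    # Pull this client's representations of a shared probe batch toward
    # a peer's representations while fitting local data.
    return task_loss + lam * (1.0 - linear_cka(local_repr, peer_repr))
```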

Wed 20 July 13:55 - 14:00 PDT

Spotlight
Conformal Prediction Sets with Limited False Positives

Adam Fisch · Tal Schuster · Tommi Jaakkola · Regina Barzilay

We develop a new approach to multi-label conformal prediction in which we aim to output a precise set of promising prediction candidates with a bounded number of incorrect answers. Standard conformal prediction provides the ability to adapt to model uncertainty by constructing a calibrated candidate set in place of a single prediction, with guarantees that the set contains the correct answer with high probability. In order to obey this coverage property, however, conformal sets can become inundated with noisy candidates---which can render them unhelpful in practice. This is particularly relevant to practical applications where there is a limited budget, and the cost (monetary or otherwise) associated with false positives is non-negligible. We propose to trade coverage for a notion of precision by enforcing that the presence of incorrect candidates in the predicted conformal sets (i.e., the total number of false positives) is bounded according to a user-specified tolerance. Subject to this constraint, our algorithm then optimizes for a generalized notion of set coverage (i.e., the true positive rate) that allows for any number of true answers for a given query (including zero). We demonstrate the effectiveness of this approach across a number of classification tasks in natural language processing, computer vision, and computational chemistry.
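
A simplified split-conformal calibration of the false-positive budget could look like the sketch below. This is a toy version under our own simplifications; the paper's procedure comes with formal guarantees that this sketch does not provide.

```python
import numpy as np

def calibrate_fp_threshold(scores, labels, k=1.0):
    """Pick the lowest score threshold whose calibration-set average number
    of false positives stays within budget k.
    scores: (n, m) candidate scores; labels: (n, m) 0/1 ground truth."""
    thresholds = np.unique(scores)[::-1]   # sweep from strict to permissive
    best = thresholds[0]
    for t in thresholds:
        picked = scores >= t               # predicted sets at this level
        fp = (picked & (labels == 0)).sum(axis=1)
        if fp.mean() <= k:
            best = t                       # lower thresholds admit more...
        else:
            break                          # ...until the FP budget is hit
    return best
```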

Wed 20 July 14:00 - 14:05 PDT

Spotlight
Scalable Computation of Causal Bounds

Madhumitha Shridharan · Garud Iyengar

We consider the problem of computing bounds for causal inference problems with unobserved confounders, where identifiability does not hold. Existing non-parametric approaches for computing such bounds use linear programming (LP) formulations that quickly become intractable for existing solvers because the size of the LP grows exponentially in the number of edges in the underlying causal graph. We show that this LP can be significantly pruned by carefully considering the structure of the causal query, allowing us to compute bounds for significantly larger causal inference problems than is possible using existing techniques. This pruning procedure also allows us to compute the bounds in closed form for a special class of causal graphs and queries, which includes a well-studied family of problems where multiple confounded treatments influence an outcome. We also propose an efficient greedy heuristic that produces very high quality bounds and scales to problems that are several orders of magnitude larger than those for which the pruned LP can be solved.
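
For reference, the classic LP formulation that this work prunes can be written generically: a distribution q over latent response-function types must reproduce the observed distribution, and the causal query is linear in q. A minimal SciPy sketch, with the constraint matrix A and query vector c assumed to be given:

```python
import numpy as np
from scipy.optimize import linprog

def causal_query_bounds(A, p, c):
    """Bounds on a linear causal query via the classic (unpruned) LP.
    q: distribution over latent response-function types; A @ q = p matches
    the observed distribution (which also forces q to sum to one when the
    rows of A cover all observed events); c @ q is the causal query."""
    n = A.shape[1]
    lo = linprog(c, A_eq=A, b_eq=p, bounds=[(0, 1)] * n)
    hi = linprog(-np.asarray(c), A_eq=A, b_eq=p, bounds=[(0, 1)] * n)
    return lo.fun, -hi.fun   # (lower bound, upper bound)
```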

Wed 20 July 14:05 - 14:25 PDT

Oral
LIDL: Local Intrinsic Dimension Estimation Using Approximate Likelihood

Piotr Tempczyk · Rafał Michaluk · Łukasz Garncarek · Przemysław Spurek · Jacek Tabor · Adam Golinski

Most existing methods for estimating the local intrinsic dimension of a data distribution do not scale well to high-dimensional data. Many of them rely on a non-parametric nearest-neighbours approach which suffers from the curse of dimensionality. We attempt to address that challenge by proposing a novel approach to the problem: Local Intrinsic Dimension estimation using approximate Likelihood (LIDL). Our method relies on an arbitrary density estimation method as its subroutine, and hence tries to sidestep the dimensionality challenge by making use of the recent progress in parametric neural methods for likelihood estimation. We carefully investigate the empirical properties of the proposed method, compare them with our theoretical predictions, and show that LIDL yields competitive results on the standard benchmarks for this problem and that it scales to thousands of dimensions. What is more, we anticipate that this approach will improve further with the continuing advances in the density estimation literature.
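
The core of LIDL is one line of algebra: for a small noise scale delta, log p_delta(x) ≈ const + (d(x) − D) · log(delta), so the local intrinsic dimension d(x) is the ambient dimension D plus a regression slope. A minimal sketch, assuming some `log_density(x, delta)` routine backed by any density estimator of the delta-smoothed data:

```python
import numpy as np

def lidl_estimate(log_density, x, D, deltas=(0.01, 0.02, 0.04, 0.08)):
    """Local intrinsic dimension at x from log-likelihoods of the data
    perturbed at several noise scales. `log_density(x, delta)` is assumed
    to return the estimated log density of the delta-smoothed data at x."""
    log_d = np.log(deltas)
    log_p = np.array([log_density(x, d) for d in deltas])
    slope = np.polyfit(log_d, log_p, 1)[0]   # slope of log p vs. log delta
    return D + slope
```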

Wed 20 July 14:25 - 14:30 PDT

Spotlight
Learning Pseudometric-based Action Representations for Offline Reinforcement Learning

Pengjie Gu · Mengchen Zhao · Chen Chen · Dong Li · Jianye Hao · Bo An

Offline reinforcement learning is a promising approach for practical applications since it does not require interactions with real-world environments. However, existing offline RL methods only work well in environments with continuous or small discrete action spaces. In environments with large and discrete action spaces, such as recommender systems and dialogue systems, the performance of existing methods decreases drastically because they suffer from inaccurate value estimation for a large proportion of out-of-distribution (o.o.d.) actions. While recent works have demonstrated that online RL benefits from incorporating semantic information in action representations, they unfortunately fail to learn reasonable relative distances between action representations, which is key for offline RL to reduce the influence of o.o.d. actions. This paper proposes an action representation learning framework for offline RL based on a pseudometric, which measures both the behavioral relation and the data-distributional relation between actions. We provide theoretical analysis of the continuity of the expected Q-values and of the offline policy improvement using the learned action representations. Experimental results show that our methods significantly improve the performance of two typical offline RL methods in environments with large and discrete action spaces.
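
The general recipe, learning action embeddings whose Euclidean distances match a behavioral pseudometric, can be sketched as a generic metric-embedding loss. The target distances `d_target` stand in for the paper's pseudometric, which we do not reproduce here.

```python
import torch

def embedding_loss(emb, d_target, pairs):
    """Fit action embeddings so Euclidean distances match target distances.
    emb: (num_actions, k) learnable parameter; d_target[(i, j)]: the
    behavioral/data-distributional distance between actions i and j;
    pairs: iterable of sampled action index pairs."""
    loss = 0.0
    for i, j in pairs:
        d_hat = torch.norm(emb[i] - emb[j])          # embedding distance
        loss = loss + (d_hat - d_target[(i, j)]) ** 2  # match the pseudometric
    return loss / len(pairs)
```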

Wed 20 July 14:30 - 14:35 PDT

Spotlight
A Statistical Manifold Framework for Point Cloud Data

Yonghyeon Lee · Seungyeon Kim · Jinwon Choi · Frank Chongwoo Park

Many problems in machine learning involve data sets in which each data point is a point cloud in $\mathbb{R}^D$. A growing number of applications require a means of measuring not only distances between point clouds, but also angles, volumes, derivatives, and other more advanced concepts. To formulate and quantify these concepts in a coordinate-invariant way, we develop a Riemannian geometric framework for point cloud data. By interpreting each point in a point cloud as a sample drawn from some given underlying probability density, the space of point cloud data can be given the structure of a statistical manifold -- each point on this manifold represents a point cloud -- with the Fisher information metric acting as a natural Riemannian metric. Two autoencoder applications of our framework are presented: (i) smoothly deforming one 3D object into another via interpolation between the two corresponding point clouds; (ii) learning an optimal set of latent space coordinates for point cloud data that best preserves angles and distances, and thus produces a more discriminative representation space. Experiments with large-scale standard benchmark point cloud data show greatly improved classification accuracy vis-à-vis existing methods. Code is available at https://github.com/seungyeon-k/SMF-public.
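
For readers unfamiliar with the construction, the Fisher information metric on a parametric family $p(x;\theta)$ is the standard object below; this is textbook material rather than a detail specific to the paper.

```latex
% Fisher information metric on a parametric family p(x; \theta),
% and the inner product it induces on tangent vectors u, v:
g_{ij}(\theta) = \mathbb{E}_{x \sim p(\cdot;\theta)}\!\left[
    \frac{\partial \log p(x;\theta)}{\partial \theta^{i}}\,
    \frac{\partial \log p(x;\theta)}{\partial \theta^{j}} \right],
\qquad
\langle u, v \rangle_{\theta} = \sum_{i,j} g_{ij}(\theta)\, u^{i} v^{j}.
```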

Wed 20 July 14:35 - 14:40 PDT

Spotlight
HyperImpute: Generalized Iterative Imputation with Automatic Model Selection

Daniel Jarrett · Bogdan Cebere · Tennison Liu · Alicia Curth · Mihaela van der Schaar

Consider the problem of imputing missing values in a dataset. On the one hand, conventional approaches using iterative imputation benefit from the simplicity and customizability of learning conditional distributions directly, but suffer from the practical requirement of appropriate model specification for each and every variable. On the other hand, recent methods using deep generative modeling benefit from the capacity and efficiency of learning with neural network function approximators, but are often difficult to optimize and rely on stronger data assumptions. In this work, we study an approach that marries the advantages of both: we propose HyperImpute, a generalized iterative imputation framework for adaptively and automatically configuring column-wise models and their hyperparameters. Practically, we provide a concrete implementation with out-of-the-box learners, optimizers, simulators, and extensible interfaces. Empirically, we investigate this framework via comprehensive experiments and sensitivities on a variety of public datasets, and demonstrate its ability to generate accurate imputations relative to a strong suite of benchmarks. Contrary to recent work, we believe our findings constitute a strong defense of the iterative imputation paradigm.
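
A bare-bones version of iterative imputation with per-column model selection, far simpler than HyperImpute itself (the candidate set and CV-based selection rule are our illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def iterative_impute(X, n_iters=5):
    """Iterative column-wise imputation, choosing a model per column by CV.
    X: float array with np.nan marking missing entries."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # initial mean fill
    candidates = [LinearRegression(), RandomForestRegressor(n_estimators=50)]
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            # Select the best candidate model for this column by CV score.
            model = max(candidates, key=lambda m: cross_val_score(
                m, others[obs], X[obs, j], cv=3).mean())
            model.fit(others[obs], X[obs, j])
            X[miss[:, j], j] = model.predict(others[miss[:, j]])
    return X
```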

Wed 20 July 14:40 - 14:45 PDT

Spotlight
A Natural Actor-Critic Framework for Zero-Sum Markov Games

Ahmet Alacaoglu · Luca Viano · Niao He · Volkan Cevher

We introduce algorithms based on natural actor-critic and analyze their sample complexity for solving two-player zero-sum Markov games in the tabular case. Our results improve the best-known sample complexities of policy gradient/actor-critic methods for convergence to Nash equilibrium in the multi-agent setting. We use the error propagation scheme in approximate dynamic programming, recent advances for global convergence of policy gradient methods, temporal difference learning, and techniques from stochastic primal-dual optimization. Our algorithms feature two stages, requiring agents to agree on an etiquette before starting their interactions, which is feasible for instance in self-play. However, the agents only have access to the joint reward and the joint next state, not to each other's actions or policies. Our complexity results match the best-known results for global convergence of policy gradient algorithms for single-agent RL. We provide numerical verification of our methods for a two-player bandit environment and a two-player game, Alesia. We observe improved empirical performance as compared to the recently proposed optimistic gradient descent-ascent variant for Markov games.
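
As background on the actor component: with a tabular softmax policy, one natural policy gradient step reduces to multiplicative weights, which is the kind of update a natural actor-critic builds on. This is a generic sketch of that building block, not the paper's two-stage algorithm.

```python
import numpy as np

def npg_softmax_step(pi, Q, eta=0.1):
    """One natural policy gradient step for a tabular softmax policy,
    which reduces to multiplicative weights:
        pi(a|s) <- pi(a|s) * exp(eta * Q(s, a)), then renormalize.
    pi: (S, A) current policy; Q: (S, A) critic estimates."""
    new_pi = pi * np.exp(eta * Q)
    return new_pi / new_pi.sum(axis=1, keepdims=True)
```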

Wed 20 July 14:45 - 14:50 PDT

Spotlight
Distributionally Robust $Q$-Learning

Zijian Liu · Jerry Bai · Jose Blanchet · Perry Dong · Wei Xu · Zhengqing Zhou · Zhengyuan Zhou

Reinforcement learning (RL) has demonstrated remarkable achievements in simulated environments. However, carrying this success to real environments requires the important attribute of robustness, which the existing RL algorithms often lack, as they assume that the future deployment environment is the same as the training environment (i.e., the simulator) in which the policy is learned. This assumption often does not hold due to the discrepancy between the simulator and the real environment and, as a result, renders the learned policy fragile when deployed. In this paper, we propose a novel distributionally robust $Q$-learning algorithm that learns the best policy under the worst distributional perturbation of the environment. Our algorithm first transforms the infinite-dimensional learning problem (since the environment MDP perturbation lies in an infinite-dimensional space) into a finite-dimensional dual problem and subsequently uses a multi-level Monte-Carlo scheme to approximate the dual value using samples from the simulator. Despite the complexity, we show that the resulting distributionally robust $Q$-learning algorithm asymptotically converges to the optimal worst-case policy, thus making it robust to future environment changes. Simulation results further demonstrate its strong empirical robustness.
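
For orientation, with a KL-ball uncertainty set of radius $\rho$ around the simulator distribution $P_0$ (one standard choice in distributionally robust optimization; the paper's exact uncertainty set and estimator may differ), the infinite-dimensional robust expectation admits a finite-dimensional scalar dual:

```latex
% Worst-case expected value over a KL ball, and its one-dimensional dual:
\inf_{P \,:\, D_{\mathrm{KL}}(P \,\|\, P_0) \le \rho} \mathbb{E}_{P}[V]
\;=\;
\sup_{\beta \ge 0}
\left\{ -\beta \log \mathbb{E}_{P_0}\!\left[ e^{-V/\beta} \right] - \beta \rho \right\}.
```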

Wed 20 July 14:50 - 14:55 PDT

Spotlight
Sparsity in Partially Controllable Linear Systems

Yonathan Efroni · Sham Kakade · Akshay Krishnamurthy · Cyril Zhang

A fundamental concept in control theory is that of controllability, where any system state can be reached through an appropriate choice of control inputs. Indeed, a large body of classical and modern approaches are designed for controllable linear dynamical systems. However, in practice, we often encounter systems in which a large set of state variables evolve exogenously and independently of the control inputs; such systems are only partially controllable. The focus of this work is on a large class of partially controllable linear dynamical systems, specified by an underlying sparsity pattern. Our main results establish structural conditions and finite-sample guarantees for learning to control such systems. In particular, our structural results characterize those state variables which are irrelevant for optimal control, an analysis which departs from classical control techniques. Our algorithmic results adapt techniques from high-dimensional statistics, specifically soft-thresholding and semiparametric least-squares, to exploit the underlying sparsity pattern in order to obtain finite-sample guarantees that significantly improve over those based on certainty equivalence. We also corroborate these theoretical improvements over certainty-equivalent control through a simulation study.
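
Soft-thresholding, one of the two borrowed tools, is simple enough to state in full:

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator used in sparse (Lasso-style) estimation:
    shrinks every entry toward zero by lam and zeroes out small entries."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```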

Wed 20 July 14:55 - 15:00 PDT

Spotlight
Saute RL: Almost Surely Safe Reinforcement Learning Using State Augmentation

Aivar Sootla · Alexander I Cowen-Rivers · Taher Jafferjee · Ziyan Wang · David Mguni · Jun Wang · Haitham Bou Ammar

Satisfying safety constraints almost surely (or with probability one) can be critical for the deployment of Reinforcement Learning (RL) in real-life applications. For example, plane landing and take-off should ideally occur with probability one. We address the problem by introducing Safety Augmented (Saute) Markov Decision Processes (MDPs), where the safety constraints are eliminated by augmenting them into the state space and reshaping the objective. We show that the Saute MDP satisfies the Bellman equation and moves us closer to solving Safe RL with constraints satisfied almost surely. We argue that the Saute MDP allows the Safe RL problem to be viewed from a different perspective, enabling new features. For instance, our approach has a plug-and-play nature, i.e., any RL algorithm can be "Sauteed". Additionally, state augmentation allows for policy generalization across safety constraints. We finally show that Saute RL algorithms can outperform their state-of-the-art counterparts when constraint satisfaction is of high importance.
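
A minimal sketch of the state-augmentation idea, assuming a hypothetical Gym-style environment that returns a tuple-like state and reports a per-step safety cost in its `info` dict; the wrapper name and interface are ours, not the paper's code.

```python
class SautedEnv:
    """Toy state-augmentation wrapper in the Saute spirit: track the
    remaining (normalized) safety budget z, append it to the state, and
    reshape the reward once the budget is exhausted."""

    def __init__(self, env, budget, penalty=-1.0):
        self.env, self.budget, self.penalty = env, budget, penalty

    def reset(self):
        self.z = 1.0                          # fraction of budget remaining
        return (*self.env.reset(), self.z)    # augmented initial state

    def step(self, action):
        s, r, done, info = self.env.step(action)
        self.z -= info["cost"] / self.budget  # spend budget on incurred cost
        if self.z <= 0.0:                     # constraint violated ...
            r = self.penalty                  # ... so reshape the objective
        return (*s, self.z), r, done, info
```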