

Poster

A Trichotomy for List Transductive Online Learning

Steve Hanneke · Amirreza Shaeiri

West Exhibition Hall B2-B3 #W-909
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract: List learning is an important topic in both theoretical and empirical machine learning research, playing a key role in the recent breakthrough result of Brukhim et al. (2022) on the characterization of multiclass PAC learnability, as well as in addressing label ambiguity in computer vision classification tasks, among other applications. In this paper, we study the problem of list transductive online learning. In this framework, the learner outputs a list of multiple labels for each instance rather than a single label, as in traditional multiclass classification. In the realizable setting, we demonstrate a trichotomy of possible rates for the minimax number of mistakes. In particular, if the learner plays for $T \in \mathbb{N}$ rounds, its minimax number of mistakes can only be of order $\Theta(T)$, $\Theta(\log T)$, or $\Theta(1)$. This resolves an open question raised by Hanneke et al. (2024). In the agnostic setting, on the other hand, we characterize learnability by constructively proving an $\widetilde{\mathcal{O}}(\sqrt{T})$ upper bound on the minimax expected regret. Along the way, we also answer another open question posed by Moran et al. (2023). To establish these results, we introduce two new combinatorial complexity dimensions, the Level-constrained $(L+1)$-Littlestone dimension and the Level-constrained $(L+1)$-Branching dimension, where $L \in \mathbb{N}$ is the list size. Finally, we conclude by raising an open question about eliminating the list-size factor, which appears to be a crucial step, as this factor has consistently appeared in previous works on this subject.
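
In the realizable setting, the trichotomy can be summarized as follows; the symbol $M^{*}(T)$ for the minimax number of mistakes over $T$ rounds is an assumed shorthand for this summary, not necessarily the paper's own notation:

$$ M^{*}(T) \;\in\; \bigl\{\, \Theta(T),\ \Theta(\log T),\ \Theta(1) \,\bigr\}. $$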

Lay Summary:

List learning is an important topic in both theoretical and empirical machine learning research, playing a key role in the recent breakthrough result of Brukhim et al. (2022) on the characterization of multiclass PAC learnability, as well as in addressing label ambiguity in computer vision classification tasks, among other applications. In this paper, we theoretically study the problem of list transductive online learning. In this framework, the sequence of instances is known to the learner before the game begins, and the learner outputs a list of multiple labels for each instance rather than a single label, as in traditional multiclass classification. We prove our results by introducing two new combinatorial complexity dimensions, thereby resolving open questions raised in prior work. Finally, we conclude by raising an open question about eliminating the list-size factor, which appears to be a crucial step, as this factor has consistently appeared in previous works on this subject.
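
For concreteness, here is a minimal sketch of the list transductive online protocol described above. The function names (`run_protocol`, `frequent_labels`) and the toy most-frequent-labels learner are illustrative assumptions only; they are not the authors' algorithm and make no claim to achieve the bounds stated in the abstract.

```python
from collections import Counter

def run_protocol(instances, true_labels, predict, list_size):
    """Play T rounds of list transductive online learning.

    The whole instance sequence is revealed to the learner up front
    (transductive setting); at each round the learner outputs a list of
    `list_size` candidate labels and incurs a mistake if the true label
    is not in that list.
    """
    mistakes = 0
    history = []  # (instance, label) pairs revealed in earlier rounds
    for t, x_t in enumerate(instances):
        predicted = predict(instances, history, x_t, list_size)
        y_t = true_labels[t]  # adversary reveals the true label
        if y_t not in predicted:
            mistakes += 1
        history.append((x_t, y_t))
    return mistakes

def frequent_labels(instances, history, x_t, list_size):
    """Toy learner (hypothetical): predict the most frequent labels seen so far."""
    counts = Counter(y for _, y in history)
    top = [y for y, _ in counts.most_common(list_size)]
    return top if top else list(range(list_size))

# Example run: T = 8 rounds with list size L = 2.
xs = list(range(8))
ys = [0, 1, 0, 0, 2, 0, 1, 0]
print(run_protocol(xs, ys, frequent_labels, list_size=2))
```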
