Workshop
Adaptive and Multitask Learning: Algorithms & Systems
Maruan Al-Shedivat · Anthony Platanios · Otilia Stretcu · Jacob Andreas · Ameet Talwalkar · Rich Caruana · Tom Mitchell · Eric Xing

Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ Seaside Ballroom
Event URL: https://www.amtl-workshop.org/

Driven by progress in deep learning, the machine learning community can now tackle increasingly complex problems—ranging from multi-modal reasoning to dexterous robotic manipulation—all of which typically involve solving nontrivial combinations of tasks. Designing adaptive models and algorithms that can efficiently learn, master, and combine multiple tasks is thus the next frontier. The AMTL workshop aims to bring together machine learning researchers from areas ranging from theory to applications and systems to explore and discuss:

* advantages, disadvantages, and applicability of different approaches to learning in multitask settings,
* formal or intuitive connections between methods developed for different problems, which can help map the landscape of multitask learning techniques and inspire transfer of techniques between research lines,
* fundamental challenges and open questions that the community needs to tackle for the field to move forward.


Sat 8:30 a.m. - 8:40 a.m.
Opening Remarks
Sat 8:40 a.m. - 9:10 a.m.
Building and Structuring Training Sets for Multi-Task Learning (Alex Ratner) (Invited Talk)
Alexander J Ratner
Sat 9:10 a.m. - 9:40 a.m.
Meta-Learning: Challenges and Frontiers (Chelsea Finn) (Invited Talk)
Chelsea Finn
Sat 9:40 a.m. - 9:55 a.m.
Learning Exploration Policies for Model-Agnostic Meta-Reinforcement Learning (Contributed Talk)

Meta-reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings. Existing approaches add auxiliary objectives to promote exploration by the pre-update policy; however, this makes adaptation in a few gradient steps difficult, as the pre-update (exploration) and post-update (exploitation) policies are quite different. Instead, we propose to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We show that using self-supervised or supervised learning objectives for adaptation stabilizes the training process, and we demonstrate the superior performance of our model compared to prior work in this domain.
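The core idea above is that exploration and exploitation need not share one policy: a separate exploration policy collects samples, and the exploitation policy adapts to each task in a few gradient steps. A toy sketch of that decoupling, with a made-up bandit-style task, scalar policies, and illustrative hyperparameters (none of which are the paper's actual setup):

```python
import numpy as np

# Toy sketch of the two-policy idea: a separate exploration policy gathers
# samples for each task, and the exploitation policy adapts with a few
# gradient steps on those samples. Task and hyperparameters are assumptions.

rng = np.random.default_rng(0)

def sample_task():
    """A task is a hidden 1-D target; reward(a) = -(a - target)^2."""
    return rng.uniform(-2.0, 2.0)

def adapt(theta, actions, rewards, lr=0.2, steps=5):
    """Post-update (exploitation) policy: estimate the target from the
    exploration samples, then take a few gradient steps toward it."""
    t_hat = actions[np.argmax(rewards)]              # best exploration sample
    for _ in range(steps):
        theta = theta + lr * 2.0 * (t_hat - theta)   # grad of -(theta - t_hat)^2
    return theta

explore_std = 1.5   # exploration policy: broad Gaussian over actions,
theta0 = 0.0        # decoupled from the pre-update exploitation init

post_rewards = []
for _ in range(200):
    target = sample_task()
    actions = rng.normal(0.0, explore_std, size=8)   # exploration rollouts
    rewards = -(actions - target) ** 2
    theta = adapt(theta0, actions, rewards)          # fast per-task adaptation
    post_rewards.append(-(theta - target) ** 2)

# Average post-adaptation reward (0 is optimal; never adapting averages about -1.3).
print(round(float(np.mean(post_rewards)), 3))
```

Because the exploration policy is trained separately, it can stay broad even while the adapted (post-update) policy commits to each task.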

Sat 9:55 a.m. - 10:10 a.m.
Lifelong Learning via Online Leverage Score Sampling (Contributed Talk)

In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for lifelong learning, effectively utilizing previously acquired skills. The key challenge is thus to transfer and generalize the knowledge learned from one task to other tasks while avoiding interference from previous knowledge and improving overall performance. In this paper, within the continual learning paradigm, we introduce a method that continuously forgets the less useful data samples across different tasks. The method uses statistical leverage scores to measure the importance of the data samples in every task and adopts the frequent-directions approach to enable a lifelong learning property, effectively maintaining a constant training size across all tasks. We first provide some mathematical intuition for the method and then demonstrate its effectiveness with experiments on variants of the MNIST and CIFAR-100 datasets.
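The two ingredients named in this abstract are both standard linear-algebra tools and can be sketched concretely: leverage scores rank samples by importance, and a frequent-directions sketch keeps the memory footprint constant as data streams in. The matrix sizes and the keep-50 budget below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Sketch of the two ingredients: statistical leverage scores to rank samples,
# and a frequent-directions sketch for constant memory across tasks.

def leverage_scores(X):
    """Statistical leverage score of each row: squared row norms of U
    from the thin SVD of X."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)

def frequent_directions(X, ell):
    """Streaming frequent-directions sketch: an ell x d matrix B such that
    X^T X - B^T B has spectral norm at most 2 * ||X||_F^2 / ell."""
    _, d = X.shape
    B = np.zeros((ell, d))
    for row in X:
        B[-1] = row                              # last row is kept zero
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        shrunk = np.sqrt(np.maximum(s ** 2 - s[ell // 2] ** 2, 0.0))
        B = shrunk[:, None] * Vt                 # shrink; trailing rows -> 0
    return B

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                   # stand-in for one task's data

# Constant-size training set: keep only the 50 highest-leverage samples.
keep = np.argsort(leverage_scores(X))[-50:]
coreset = X[keep]

B = frequent_directions(X, ell=8)
err = np.linalg.norm(X.T @ X - B.T @ B, 2)
print(coreset.shape, B.shape)                    # sizes stay fixed as data grows
```

Both the coreset and the sketch have fixed size regardless of how many samples arrive, which is what makes the constant-training-size property possible.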

Sat 10:10 a.m. - 10:25 a.m.
Tricks of the Trade 1 (Rich Caruana) (Lightning Talk)
Rich Caruana
Sat 10:25 a.m. - 11:00 a.m.
Coffee Break
Sat 11:00 a.m. - 12:00 p.m.
Poster Session

Accepted papers: https://www.amtl-workshop.org/accepted-papers


TuckER: Tensor Factorization for Knowledge Graph Completion. Authors: Ivana Balazevic, Carl Allen, Timothy Hospedales

Learning Cancer Outcomes from Heterogeneous Genomic Data Sources: An Adversarial Multi-task Learning Approach. Authors: Safoora Yousefi, Amirreza Shaban, Mohamed Amgad, Lee Cooper

Continual adaptation for efficient machine communication. Authors: Robert Hawkins, Minae Kwon, Dorsa Sadigh, Noah Goodman

Every Sample a Task: Pushing the Limits of Heterogeneous Models with Personalized Regression. Authors: Ben Lengerich, Bryon Aragam, Eric Xing

Data Enrichment: Multi-task Learning in High Dimension with Theoretical Guarantees. Authors: Amir Asiaee, Samet Oymak, Kevin R. Coombes, Arindam Banerjee

A Functional Extension of Multi-Output Learning. Authors: Alex Lambert, Romain Brault, Zoltan Szabo, Florence d'Alche-Buc

Interpretable Robust Recommender Systems with Side Information. Authors: Wenyu Chen, Zhechao Huang, Jason Cheuk Nam Liang, Zihao Xu

Personalized Student Stress Prediction with Deep Multi-Task Network. Authors: Abhinav Shaw, Natcha Simsiri, Iman Dezbani, Madalina Fiterau, Tauhidur Rahman

SuperTML: Domain Transfer from Computer Vision to Structured Tabular Data through Two-Dimensional Word Embedding. Authors: Baohua Sun, Lin Yang, Wenhan Zhang, Michael Lin, Patrick Dong, Charles Young, Jason Dong

Goal-conditioned Imitation Learning. Authors: Yiming Ding, Carlos Florensa, Mariano Phielipp, Pieter Abbeel

Tasks Without Borders: A New Approach to Online Multi-Task Learning. Authors: Alexander Zimin, Christoph H. Lampert

The Role of Embedding-complexity in Domain-invariant Representations. Authors: Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka

Lifelong Learning via Online Leverage Score Sampling. Authors: Dan Teng, Sakyasingha Dasgupta

Connections Between Optimization in Machine Learning and Adaptive Control. Authors: Joseph E. Gaudio, Travis E. Gibson, Anuradha M. Annaswamy, Michael A. Bolender, Eugene Lavretsky

Meta-Reinforcement Learning for Adaptive Autonomous Driving. Authors: Yesmina Jaafra, Jean Luc Laurent, Aline Deruyver, Mohamed Saber Naceur

PAGANDA: An Adaptive Task-Independent Automatic Data Augmentation. Authors: Boli Fang, Miao Jiang, Jerry Shen

Improving Relevance Prediction with Transfer Learning in Large-scale Retrieval Systems. Authors: Ruoxi Wang, Zhe Zhao, Xinyang Yi, Ji Yang, Derek Zhiyuan Cheng, Lichan Hong, Steve Tjoa, Jieqi Kang, Evan Ettinger, Ed Chi

Federated Optimization for Heterogeneous Networks. Authors: Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

Learning Exploration Policies for Model-Agnostic Meta-Reinforcement Learning. Authors: Swaminathan Gurumurthy, Sumit Kumar, Katia Sycara

A Meta Understanding of Meta-Learning. Authors: Wei-Lun Chao, Han-Jia Ye, De-Chuan Zhan, Mark Campbell, Kilian Q. Weinberger

Multi-Task Learning via Task Multi-Clustering. Authors: Andy Yan, Xin Wang, Ion Stoica, Joseph Gonzalez, Roy Fox

Prototypical Bregman Networks. Authors: Kubra Cilingir, Brian Kulis

Differentiable Hebbian Plasticity for Continual Learning. Authors: Vithursan Thangarasa, Thomas Miconi, Graham W. Taylor

Active Multitask Learning with Committees. Authors: Jingxi Xu, Da Tang, Tony Jebara

Progressive Memory Banks for Incremental Domain Adaptation. Authors: Nabiha Asghar, Lili Mou, Kira A. Selby, Kevin D. Pantasdo, Pascal Poupart, Xin Jiang

Sub-policy Adaptation for Hierarchical Reinforcement Learning. Authors: Alexander Li, Carlos Florensa, Pieter Abbeel

Learning to learn to communicate. Authors: Ryan Lowe, Abhinav Gupta, Jakob Foerster, Douwe Kiela, Joelle Pineau

Ivana Balazevic, Minae Kwon, Benjamin Lengerich, Amir Asiaee, Alex Lambert, Wenyu Chen, Yiming Ding, Carlos Florensa, Joseph E Gaudio, Yesmina Jaafra, Boli Fang, Ruoxi Wang, Tian Li, Swaminathan Gurumurthy, Andy Yan, Kubra Cilingir, Vithursan Thangarasa, Alex Li, Ryan Lowe
Sat 12:00 p.m. - 1:45 p.m.
Lunch Break
Sat 1:45 p.m. - 2:15 p.m.
ARUBA: Efficient and Adaptive Meta-Learning with Provable Guarantees (Ameet Talwalkar) (Invited Talk)
Ameet Talwalkar
Sat 2:15 p.m. - 2:45 p.m.
Efficient Lifelong Learning Algorithms: Regret Bounds and Statistical Guarantees (Massimiliano Pontil) (Invited Talk)
Sat 2:45 p.m. - 3:00 p.m.
Tricks of the Trade 2 (Rich Caruana) (Lightning Talk)
Rich Caruana
Sat 3:00 p.m. - 3:30 p.m.
Coffee Break
Sat 3:30 p.m. - 4:00 p.m.
Multi-Task Learning in the Wilderness (Andrej Karpathy) (Invited Talk)
Andrej Karpathy
Sat 4:00 p.m. - 4:30 p.m.
Recent Trends in Personalization: A Netflix Perspective (Justin Basilico) (Invited Talk)
Justin Basilico
Sat 4:30 p.m. - 4:45 p.m.
Improving Relevance Prediction with Transfer Learning in Large-scale Retrieval Systems (Contributed Talk)

Machine-learned large-scale retrieval systems require a large amount of training data representing query-item relevance. However, collecting users' explicit feedback is costly. In this paper, we propose to leverage user logs and implicit feedback as auxiliary objectives to improve relevance modeling in retrieval systems. Specifically, we adopt a two-tower neural network architecture to model query-item relevance given both collaborative and content information. By introducing auxiliary tasks trained with much richer implicit user feedback data, we improve the quality and resolution of the learned representations of queries and items. Applying these learned representations in an industrial retrieval system has delivered significant improvements.

Ruoxi Wang
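The two-tower setup in this abstract is easy to sketch: a query tower and an item tower map features into a shared embedding space, relevance is a dot product, and an auxiliary implicit-feedback loss shares the towers with the primary relevance loss. The layer sizes, synthetic labels, and mixing weight below are illustrative assumptions, not the production system:

```python
import numpy as np

# Minimal two-tower sketch: shared towers, dot-product relevance, and a
# multi-task loss mixing scarce explicit labels with rich implicit feedback.

rng = np.random.default_rng(2)
d_q, d_i, d_emb = 16, 12, 8

W_q = rng.normal(scale=0.1, size=(d_q, d_emb))   # query tower (one linear layer)
W_i = rng.normal(scale=0.1, size=(d_i, d_emb))   # item tower

def embed(X, W):
    Z = X @ W
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-norm embeddings

def scores(Q, I):
    return embed(Q, W_q) @ embed(I, W_i).T               # cosine relevance

Q = rng.normal(size=(4, d_q))    # batch of query feature vectors
I = rng.normal(size=(6, d_i))    # batch of candidate item features

S = scores(Q, I)                 # (4, 6) query-item relevance matrix

def bce(logits, y):
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

rel_labels = rng.integers(0, 2, size=S.shape)     # explicit feedback (scarce)
click_labels = rng.integers(0, 2, size=S.shape)   # implicit feedback (rich)

aux_weight = 0.3                  # assumed fixed mixing weight for the aux task
loss = bce(S, rel_labels) + aux_weight * bce(S, click_labels)
print(S.shape, round(float(loss), 3))
```

Because both tasks backpropagate through the same towers, the richer implicit-feedback signal shapes the query and item embeddings used for retrieval; at serving time, the item embeddings can be indexed for nearest-neighbor lookup.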
Sat 4:45 p.m. - 5:00 p.m.
Continual adaptation for efficient machine communication (Contributed Talk)

To communicate with new partners in new contexts, humans rapidly form new linguistic conventions. Recent language models trained with deep neural networks are able to comprehend and produce the existing conventions present in their training data, but are not able to flexibly and interactively adapt those conventions on the fly as humans do. We introduce a repeated reference task as a benchmark for models of adaptation in communication and propose a regularized continual learning framework that allows an artificial agent initialized with a generic language model to understand its partner more accurately and efficiently over time. We evaluate this framework through simulations on COCO and in real-time reference game experiments with human partners.

Minae Kwon
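One common way to realize "regularized continual learning" of the kind this abstract describes is to fine-tune on each round of partner feedback with a penalty pulling the adapted weights back toward the generic initialization, so the agent tracks its partner's conventions without forgetting general ability. A minimal sketch of that mechanism with a linear model and synthetic data, all of which are illustrative assumptions rather than the paper's actual model:

```python
import numpy as np

# Sketch: adapt a "generic" model to one partner via SGD with an L2 penalty
# anchored at the generic weights (a simple regularized continual learner).

rng = np.random.default_rng(3)
d = 5
w_generic = rng.normal(size=d)          # pretrained "generic" parameters
w = w_generic.copy()

# This partner's idiosyncratic convention, unknown to the agent.
w_partner = w_generic + np.array([2.0, 0.0, 0.0, 0.0, 0.0])

lam, lr = 0.5, 0.1
for _ in range(100):                    # repeated reference rounds
    x = rng.normal(size=d)              # one observed referring expression
    y = x @ w_partner                   # partner's interpretation (supervision)
    grad = (w @ x - y) * x + lam * (w - w_generic)   # squared loss + L2-to-init
    w -= lr * grad

# Adapted weights move toward the partner but stay anchored near the init.
drift = float(np.linalg.norm(w - w_generic))
print(round(drift, 2))
```

The regularizer trades off adaptation speed against retention: with `lam = 0`, the agent would drift all the way to this partner's convention; larger `lam` keeps it closer to the generic model.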
Sat 5:00 p.m. - 5:30 p.m.
Toward Robust AI Systems for Understanding and Reasoning Over Multimodal Data (Hannaneh Hajishirzi) (Invited Talk)
Sat 5:30 p.m. - 5:40 p.m.
Closing Remarks

Author Information

Maruan Al-Shedivat (Carnegie Mellon University)
Anthony Platanios (Carnegie Mellon University)
Otilia Stretcu (Carnegie Mellon University)
Jacob Andreas (UC Berkeley)
Ameet Talwalkar (Carnegie Mellon University)
Rich Caruana (Microsoft)
Tom Mitchell (Carnegie Mellon University)

Tom M. Mitchell is the Founders University Professor and Interim Dean of the School of Computer Science at Carnegie Mellon University. Mitchell has worked in Machine Learning for many years, and co-founded the ICML conference (with Jaime Carbonell and Ryszard Michalski). Recently, he directed the Never-Ending Language Learning (NELL) project, which operated continuously for over eight years, providing a case study for how to architect never-ending learning systems. Mitchell is a member of the U.S. National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a Fellow and Past President of the Association for the Advancement of Artificial Intelligence (AAAI).

Eric Xing (Petuum Inc. and CMU)
