Workshop
Human in the Loop Learning (HILL)
Xin Wang · Xin Wang · Fisher Yu · Shanghang Zhang · Joseph Gonzalez · Yangqing Jia · Sarah Bird · Kush Varshney · Been Kim · Adrian Weller

Fri Jun 14th 08:30 AM -- 06:00 PM @ 103
Event URL: https://sites.google.com/view/hill2019/home

This workshop is a joint effort between the 4th ICML Workshop on Human Interpretability in Machine Learning (WHI) and the ICML 2019 Workshop on Interactive Data Analysis Systems (IDAS). The two workshops have joined forces this year to run Human in the Loop Learning (HILL) in conjunction with ICML 2019!

The workshop will bring together researchers and practitioners who study interpretable and interactive learning systems, with applications in large-scale data processing, data annotation, data visualization, human-assisted data integration, and systems and tools for interpreting machine learning models, as well as algorithm design for active learning, online learning, and interpretable machine learning. The target audience includes anyone interested in solving problems with machines while keeping a human as an integral part of the process. The workshop serves as a platform where researchers can discuss approaches that bridge the gap between humans and machines and get the best of both worlds.

We welcome high-quality submissions in the broad area of human in the loop learning. A few (non-exhaustive) topics of interest include:

Systems for online and interactive learning algorithms,
Active/interactive machine learning algorithm design,
Systems for collecting, preparing, and managing machine learning data,
Model understanding tools (verification, diagnosis, debugging, visualization, introspection, etc.),
Design, testing, and assessment of interactive systems for data analytics,
Psychology of human concept learning,
Generalized additive models, sparsity, and rule learning,
Interpretable unsupervised models (clustering, topic models, etc.),
Interpretation of black-box models (including deep neural networks),
Interpretability in reinforcement learning.

08:25 AM Opening Remarks
08:30 AM Invited Talk: James Philbin
09:00 AM Invited Talk: Sanja Fidler
09:30 AM Invited Talk: Bryan Catanzaro
10:00 AM IDAS Poster Session & Coffee Break
11:30 AM Invited Talk: Yisong Yue
12:00 PM Invited Talk: Vittorio Ferrari
12:30 PM Lunch Break
02:00 PM Interpretability Contributed Talks
03:00 PM Coffee Break
03:30 PM Interpretability Invited Discussion: California's Senate Bill 10 (SB 10) on Pretrial Release and Detention, with Solon Barocas and Peter Eckersley
04:45 PM Human in the Loop Learning Panel Discussion

Author Information

Xin Wang
Xin Wang (UC Berkeley)
Fisher Yu (University of California, Berkeley)
Shanghang Zhang (Petuum Inc.)
Joseph Gonzalez (University of California, Berkeley)
Yangqing Jia (Facebook)
Sarah Bird (Facebook AI Research)

Sarah Bird leads strategic projects to accelerate the adoption and impact of AI research in products at Facebook. Her current work focuses on AI ethics and developing AI responsibly. She is one of the co-creators of [ONNX](http://onnx.ai/), an open standard for deep learning models, and a leader in the [PyTorch 1.0](https://pytorch.org/) project. Prior to joining Facebook, she was an AI systems researcher at Microsoft Research NYC and a technical advisor to Microsoft's Data Group. She is one of the researchers behind [Microsoft's Decision Service](https://azure.microsoft.com/en-us/services/cognitive-services/custom-decision-service/), one of the first general-purpose reinforcement-learning-style cloud systems to be publicly released. She has a Ph.D. in computer science from UC Berkeley, where she was advised by Dave Patterson, Krste Asanovic, and Burton Smith.

Kush Varshney (IBM Research AI)
Been Kim (Google)
Adrian Weller (University of Cambridge, Alan Turing Institute)

Adrian Weller is a Senior Research Fellow in the Machine Learning Group at the University of Cambridge, a Faculty Fellow at the Alan Turing Institute for data science, and an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI). He is very interested in all aspects of artificial intelligence, its commercial applications, and how it may be used to benefit society. At the CFI, he leads the project on Trust and Transparency. Previously, Adrian held senior roles in finance. He received a PhD in computer science from Columbia University and an undergraduate degree in mathematics from Trinity College, Cambridge.
