Workshop
Human In the Loop Learning (HILL)
Xin Wang · Fisher Yu · Shanghang Zhang · Joseph Gonzalez · Yangqing Jia · Sarah Bird · Kush Varshney · Been Kim · Adrian Weller

Fri Jun 14 08:30 AM -- 06:00 PM (PDT) @ 103
Event URL: https://sites.google.com/view/hill2019/home

This workshop is a joint effort between the 4th ICML Workshop on Human Interpretability in Machine Learning (WHI) and the ICML 2019 Workshop on Interactive Data Analysis System (IDAS). We have joined forces this year to run Human in the Loop Learning (HILL) in conjunction with ICML 2019!

The workshop will bring together researchers and practitioners who study interpretable and interactive learning systems with applications in large-scale data processing, data annotation, data visualization, human-assisted data integration, and systems and tools for interpreting machine learning models, as well as algorithm design for active learning, online learning, and interpretable machine learning. The target audience for the workshop includes people who are interested in using machines to solve problems while keeping a human as an integral part of the process. This workshop serves as a platform where researchers can discuss approaches that bridge the gap between humans and machines and get the best of both worlds.

We welcome high-quality submissions in the broad area of human in the loop learning. A few (non-exhaustive) topics of interest include:

Systems for online and interactive learning algorithms,
Active/Interactive machine learning algorithm design,
Systems for collecting, preparing, and managing machine learning data,
Model understanding tools (verification, diagnosis, debugging, visualization, introspection, etc.),
Design, testing and assessment of interactive systems for data analytics,
Psychology of human concept learning,
Generalized additive models, sparsity and rule learning,
Interpretable unsupervised models (clustering, topic models, etc.),
Interpretation of black-box models (including deep neural networks),
Interpretability in reinforcement learning.

Author Information

Xin Wang (UC Berkeley)
Fisher Yu (University of California, Berkeley)
Shanghang Zhang (Petuum Inc.)
Joseph Gonzalez (University of California, Berkeley)
Yangqing Jia (Facebook)
Sarah Bird (Facebook AI Research)

Sarah Bird leads strategic projects to accelerate the adoption and impact of AI research in products at Facebook. Her current work focuses on AI ethics and developing AI responsibly. She is one of the co-creators of [ONNX](http://onnx.ai/), an open standard for deep learning models, and a leader in the [PyTorch 1.0](https://pytorch.org/) project. Prior to joining Facebook, she was an AI systems researcher at Microsoft Research NYC and a technical advisor to Microsoft’s Data Group. She is one of the researchers behind [Microsoft’s Decision Service](https://azure.microsoft.com/en-us/services/cognitive-services/custom-decision-service/), one of the first general-purpose reinforcement-learning-style cloud systems publicly released. She has a Ph.D. in computer science from UC Berkeley, where she was advised by Dave Patterson, Krste Asanovic, and Burton Smith.

Kush Varshney (IBM Research AI)
Been Kim (Google)
Adrian Weller (University of Cambridge, Alan Turing Institute)

Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, and is a Turing AI Fellow leading work on trustworthy Machine Learning (ML). He is a Principal Research Fellow in ML at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he is Programme Director for Trust and Society. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. Previously, Adrian held senior roles in finance. He received a PhD in computer science from Columbia University, and an undergraduate degree in mathematics from Trinity College, Cambridge.
