Training machine learning models in a centralized fashion often faces significant challenges in real-world use cases: training data is distributed across sites, creating and maintaining a central data repository requires substantial computational resources, and regulatory guidelines (GDPR, HIPAA) restrict the sharing of sensitive data. Federated learning (FL) is a new paradigm in machine learning that mitigates these challenges by training a global model over distributed data, without the need for data sharing. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes FL a relevant and timely topic that the scientific community needs to become familiar with and adopt.

Despite the advantages of federated learning, and its successful application in certain industry cases, the field is still in its infancy: it faces new challenges imposed by limited visibility into the training data, potential lack of trust among participants training a single model, potential privacy inference, and, in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical challenges, and discuss potential solutions. This will lead to an overall advancement of FL and its impact on the community.
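To make the setup concrete, the sketch below shows a minimal federated averaging (FedAvg) round in Python, assuming a simple linear model and clients represented as (data, labels) pairs; the function names and hyperparameters are illustrative, not code from the workshop or any specific FL framework.

    import numpy as np

    def local_update(weights, data, labels, lr=0.1, epochs=1):
        # Hypothetical client step: a few epochs of gradient descent on a
        # least-squares objective, using only this client's local data.
        w = weights.copy()
        for _ in range(epochs):
            preds = data @ w
            grad = data.T @ (preds - labels) / len(labels)
            w -= lr * grad
        return w

    def federated_averaging(global_w, clients, rounds=10):
        # Each round, clients train locally and return only model weights;
        # raw data never leaves a client. The server averages the weights,
        # weighting each client by its dataset size (FedAvg).
        for _ in range(rounds):
            local_ws, sizes = [], []
            for data, labels in clients:
                local_ws.append(local_update(global_w, data, labels))
                sizes.append(len(labels))
            total = sum(sizes)
            global_w = sum(w * (n / total) for w, n in zip(local_ws, sizes))
        return global_w

Only the aggregated weights ever reach the server, which is the property that lets FL sidestep centralized data collection.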
For a detailed workshop schedule, please visit: http://federated-learning.org/fl-icml-2020/
Workshop date: July 18, 2020 (Saturday)
Starting at 9:00 a.m. US Eastern Daylight Time (https://time.is/EDT)
Sat 5:45 a.m. - 6:00 a.m.
Arrival (Presenters should connect and test the system)
Presenters of Keynote Session 1 and Technical Talks Session 1, please connect to the main Zoom room of this workshop to make sure that everything works well.
Sat 6:00 a.m. - 6:10 a.m.
Opening remarks (Talk)
Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu
Sat 6:10 a.m. - 6:35 a.m.
Keynote Session 1: Balancing Efficiency and Security in Federated Learning, by Qiang Yang (WeBank) (Talk)
Abstract: Federated learning systems need to balance the efficiency and security of machine learning algorithms while maintaining model accuracy. In this talk we discuss this trade-off in two settings. The first is when two collaborating organisations wish to transfer knowledge from one to the other via a federated learning framework. We present a federated transfer learning algorithm that improves both security and performance while preserving privacy. The second is when one exploits differential privacy in a federated learning framework to ensure efficiency, which may cause security degradation. To solve this problem, we employ a dual-headed network architecture that guarantees training data privacy by applying secret gradient perturbations to the original gradients, while maintaining high performance of the globally shared model. We find that the combination of secret-public networks provides a preferable alternative to DP-based mechanisms in federated learning applications.
Biography: Qiang Yang is Chief Artificial Intelligence Officer of WeBank and Chair Professor in the CSE Department of the Hong Kong University of Science and Technology. He is the Conference Chair of AAAI-21, President of the Hong Kong Society of Artificial Intelligence and Robotics (HKSAIR), and a former President of IJCAI (2017-2019). He is a fellow of AAAI, ACM, IEEE, and AAAS. His research interests include transfer learning and federated learning. He is the founding Editor-in-Chief of two journals: IEEE Transactions on Big Data and ACM Transactions on Intelligent Systems and Technology.
Qiang Yang
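As a generic illustration of the gradient perturbation idea mentioned in the abstract, the sketch below clips a client gradient and adds Gaussian noise before it is shared; this is standard DP-style noising in Python, not the dual-headed secret-public architecture presented in the talk, and all parameter names are illustrative.

    import numpy as np

    def perturb_gradient(grad, clip_norm=1.0, noise_std=0.5, rng=None):
        # Bound the gradient's norm (its sensitivity), then add Gaussian noise
        # before sending it to the server. Larger noise_std strengthens privacy
        # but hurts the accuracy of the aggregated model, which is the
        # efficiency/security trade-off the talk discusses.
        rng = rng or np.random.default_rng()
        norm = np.linalg.norm(grad)
        clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
        return clipped + rng.normal(0.0, noise_std * clip_norm, size=grad.shape)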
Sat 6:35 a.m. - 7:25 a.m.
Technical Talks Session 1 (Talk)
Ishika Singh · Laura Rieger · Rasmus Høegh · Hanlin Lu · Wonyong Jeong
Sat 7:25 a.m. - 7:40 a.m.
Break (Presenters should connect and test the system)
Presenters of Keynote Session 2 and Lightning Talks Session 1, please connect to the main Zoom room of this workshop to make sure that everything works well.
Sat 7:40 a.m. - 8:05 a.m.
Keynote Session 2: Federated Learning in Enterprise Settings, by Rania Khalaf (IBM Research) (Talk)
Abstract: Federated learning in consumer scenarios has garnered a lot of interest. However, its application in large enterprises brings additional needs and guarantees to bear. In this talk, I will highlight key drivers for federated learning in enterprises, illustrate representative use cases, and summarize the requirements for a platform that can support it. I will then present the newly released IBM Federated Learning framework (git, white paper) and show how it can be used and extended by researchers. Finally, I will highlight recent advances in federated learning and privacy from IBM Research.
Biography: Rania Khalaf is the Director of AI Platforms and Runtimes at IBM Research, where she leads teams pushing the envelope in AI platforms to make creating AI models and applications easy, fast, and safe for data scientists and developers. Her multi-disciplinary teams tackle key problems at the intersection of core AI, distributed systems, human-computer interaction, and cloud computing. Prior to this role, Rania was Director of Cloud Platform, Programming Models and Runtimes. Rania serves as a judge for the MIT Solve AI for Humanity Prize, on the Leadership Challenge Group for MIT Solve's Learning for Girls and Women Challenge, and on the Advisory Board of the Hariri Institute for Computing at Boston University. She has received several Outstanding Technical Innovation awards for major impact to the field of computer science and was a finalist for the 2019 MassTLC CTO of the Year award.
Rania Khalaf
Sat 8:05 a.m. - 8:35 a.m.
Lightning Talks Session 1 (Talk)
Zhaohui Yang · Angel Navia-Vázquez · Kun Li · Hajime Ono · Yang Liu · Yuejiao Sun · Shahab Asoodeh · Chihoon Hwang · Romuald Menuet
Sat 8:35 a.m. - 9:05 a.m.
Poster Session 1 (Poster)
Poster session with presenters of Lightning Talks Session 1. Individual Zoom links will be provided separately.
Sat 9:05 a.m. - 10:20 a.m.
Lunch (Presenters should connect and test the system 15 minutes before the next session starts)
Presenters of Keynote Session 3 and Technical Talks Session 2, please connect to the main Zoom room of this workshop at 1:05 pm EDT to make sure that everything works well.
Sat 10:20 a.m. - 10:45 a.m.
Keynote Session 3: Federated Learning Applications in Alexa, by Shiv Vitaladevuni (Amazon Alexa) (Talk)
Abstract: Alexa is a virtual assistant AI technology launched by Amazon in 2014. One of its key enabling technologies is the wakeword, which allows users to interact with Alexa devices hands-free via voice. We present some of the unique ML challenges posed by wakeword detection, and how federated learning can be used to address them. We also present some considerations when bringing federated learning to consumer-grade, embedded applications.
Biography: Shiv Vitaladevuni is a Senior Manager in Machine Learning at Amazon Alexa, focusing on R&D for the Alexa family of devices such as Echo, Dot, FireTV, etc. At Amazon, Shiv leads a team of scientists and engineers inventing embedded speech and ML products used by millions of Alexa customers across all Alexa devices around the globe. His team conducts research in areas such as federated ML, large-scale semi-/unsupervised learning, user diversity and fairness in ML, speaker adaptation and personalization, memory-efficient deep learning models, etc. Prior to Amazon, Shiv worked on video and text document analysis at Raytheon BBN Technologies, and on bio-medical image analysis at Howard Hughes Medical Institute.
Shiv Vitaladevuni
Sat 10:45 a.m. - 12:10 p.m.
Technical Talks Session 2 (Talk)
Jinhyun So · Chong Liu · Honglin Yuan · Krishna Pillutla · Leighton P Barnes · Ashkan Yousefpour · Swanand Kadhe
Sat 12:10 p.m. - 12:25 p.m.
Break (Presenters should connect and test the system)
Presenters of Keynote Session 4 and Lightning Talks Session 2, please connect to the main Zoom room of this workshop to make sure that everything works well.
Sat 12:25 p.m. - 12:50 p.m.
Keynote Session 4: The Shuffle Model and Federated Learning, by Ilya Mironov (Facebook) (Talk)
Abstract: The shuffle model of computation, also known as the Encode-Shuffle-Analyze (ESA) architecture, is a recently introduced and powerful approach to combining anonymization channels with differentially private distributed computations. We present general results about the amplification-by-shuffling unlocked by ESA, as well as more specialized theoretical and empirical findings. We discuss the challenges of instantiating the shuffle model in practice.
Biography: Ilya Mironov obtained his Ph.D. in cryptography from Stanford in 2003. From 2003 to 2014 he was a member of Microsoft Research Silicon Valley, where he contributed to early works on differential privacy. From 2015 to 2019 he worked at Google Brain. Since 2019 he has been part of Facebook AI, working on privacy-preserving machine learning.
Ilya Mironov
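For readers unfamiliar with the Encode-Shuffle-Analyze pipeline, here is a toy Python sketch that uses randomized response on a single private bit per client; the flip probability and the estimator are illustrative only, and no privacy accounting is shown.

    import numpy as np

    def encode(bit, p_flip=0.25, rng=None):
        # Local randomizer: each client flips its private bit with probability p_flip.
        rng = rng or np.random.default_rng()
        return bit ^ int(rng.random() < p_flip)

    def shuffle(reports, rng=None):
        # The shuffler only permutes reports, severing the link to their senders;
        # this anonymization is what enables amplification-by-shuffling.
        rng = rng or np.random.default_rng()
        return rng.permutation(reports)

    def analyze(reports, p_flip=0.25):
        # De-bias the observed mean of the randomized bits to estimate the true mean.
        return (np.mean(reports) - p_flip) / (1.0 - 2.0 * p_flip)

    # Example: estimate the fraction of clients whose private bit is 1.
    rng = np.random.default_rng(0)
    true_bits = rng.integers(0, 2, size=10000)
    reports = shuffle([encode(int(b), rng=rng) for b in true_bits], rng=rng)
    print(analyze(reports))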
Sat 12:50 p.m. - 1:15 p.m.
Lightning Talks Session 2 (Talk)
Jichan Chung · Saurav Prakash · Mikhail Khodak · Ravi Rahman · Vaikkunth Mugunthan · Xinwei Zhang · Hossein Hosseini
Sat 1:15 p.m. - 1:45 p.m.
Poster Session 2 (Poster)
Poster session with presenters of Lightning Talks Session 2. Individual Zoom links will be provided separately.
Sat 1:45 p.m. - 2:00 p.m.
Break (Presenters should connect and test the system)
Presenter of Keynote Session 5, please connect to the main Zoom room of this workshop to make sure that everything works well.
Sat 2:00 p.m. - 2:25 p.m.
Keynote Session 5: Advances and Open Problems in Federated Learning, by Brendan McMahan (Google) (Talk)
Abstract: Motivated by the explosive growth in federated learning research, 22 Google researchers and 36 academics from 24 institutions collaborated on a paper titled Advances and Open Problems in Federated Learning. In this talk, I will survey some of the main themes from the paper, particularly the defining characteristics and challenges of different FL settings. I will then briefly discuss some of the ways FL increasingly powers Google products, and also highlight several exciting FL research results from Google.
Biography: Brendan McMahan is a research scientist at Google, where he leads efforts on decentralized and privacy-preserving machine learning. His team pioneered the concept of federated learning, and continues to push the boundaries of what is possible when working with decentralized data using privacy-preserving techniques. Previously, he has worked in the fields of online learning, large-scale convex optimization, and reinforcement learning. Brendan received his Ph.D. in computer science from Carnegie Mellon University.
Brendan McMahan
Sat 2:25 p.m. - 2:35 p.m.
Closing remarks (Talk)
Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu
Author Information
Nathalie Baracaldo (IBM Research)
Olivia Choudhury (IBM Research)
Olivia Choudhury (Amazon)
Gauri Joshi (Carnegie Mellon University)
Ramesh Raskar (Massachusetts Institute of Technology)
Shiqiang Wang (IBM Research)
Han Yu (Nanyang Technological University)
More from the Same Authors
- 2021: Parallel Quasi-concave set optimization: A new frontier that scales without needing submodularity
  Praneeth Vepakomma · Ramesh Raskar
- 2021: BiG-Fed: Bilevel Optimization Enhanced Graph-Aided Federated Learning
  Pengwei Xing · Han Yu
- 2021: Industrial Booth (IBM)
  Shiqiang Wang · Nathalie Baracaldo
- 2023: Towards a Theoretical and Practical Understanding of One-Shot Federated Learning with Fisher Information
  Divyansh Jhunjhunwala · Shiqiang Wang · Gauri Joshi
- 2023: A New Theoretical Perspective on Data Heterogeneity in Federated Optimization
  Jiayi Wang · Shiqiang Wang · Rong-Rong Chen · Mingyue Ji
- 2023 Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities
  Zheng Xu · Peter Kairouz · Bo Li · Tian Li · John Nguyen · Jianyu Wang · Shiqiang Wang · Ayfer Ozgur
- 2023 Poster: The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond
  Jiin Woo · Gauri Joshi · Yuejie Chi
- 2023 Poster: On the Convergence of Federated Averaging with Cyclic Client Participation
  Yae Jee Cho · Pranay Sharma · Gauri Joshi · Zheng Xu · Satyen Kale · Tong Zhang
- 2023 Poster: LESS-VFL: Communication-Efficient Feature Selection for Vertical Federated Learning
  Timothy Castiglia · Yi Zhou · Shiqiang Wang · Swanand Kadhe · Nathalie Baracaldo · Stacy Patterson
- 2022 Poster: Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
  Timothy Castiglia · Anirban Das · Shiqiang Wang · Stacy Patterson
- 2022 Poster: Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling
  Sajad Khodadadian · Pranay Sharma · Gauri Joshi · Siva Maguluri
- 2022 Spotlight: Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
  Timothy Castiglia · Anirban Das · Shiqiang Wang · Stacy Patterson
- 2022 Oral: Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling
  Sajad Khodadadian · Pranay Sharma · Gauri Joshi · Siva Maguluri
- 2022 Poster: Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
  Pranay Sharma · Rohan Panda · Gauri Joshi · Pramod K Varshney
- 2022 Spotlight: Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
  Pranay Sharma · Rohan Panda · Gauri Joshi · Pramod K Varshney
- 2021: Parallel Quasi-concave set optimization: A new frontier that scales without needing submodularity
  Ramesh Raskar · Praneeth Vepakomma
- 2021: Closing Remarks
  Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu
- 2021: Industrial Panel
  Nathalie Baracaldo · Shiqiang Wang · Peter Kairouz · Zheng Xu · Kshitiz Malik · Tao Zhang
- 2021 Workshop: International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2021 (FL-ICML'21)
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Shiqiang Wang · Han Yu
- 2021: Opening Remarks
  Shiqiang Wang · Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Peter Richtarik · Praneeth Vepakomma · Han Yu
- 2021 Social: Mentoring Social
  Olivia Choudhury · Vaidheeswaran Archana
- 2021: Governance in FL: Providing AI Fairness and Accountability
  Nathalie Baracaldo · Ali Anwar · Annie Abay
- 2021 Expo Talk Panel: Enterprise-Strength Federated Learning: New Algorithms, New Paradigms, and a Participant-Interactive Demonstration Session
  Laura Wynter · Nathalie Baracaldo · Chaitanya Kumar · Parijat Dube · Mikhail Yurochkin · Theodoros Salonidis · Shiqiang Wang
- 2021: Adaptive Federated Learning for Communication and Computation Efficiency (2021 IEEE Leonard Prize-winning work)
  Shiqiang Wang
- 2020: Closing remarks
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu
- 2020: Opening remarks
  Nathalie Baracaldo · Olivia Choudhury · Gauri Joshi · Ramesh Raskar · Shiqiang Wang · Han Yu
- 2019 Workshop: Coding Theory For Large-scale Machine Learning
  Viveck Cadambe · Pulkit Grover · Dimitris Papailiopoulos · Gauri Joshi