Big data, deep learning, and massive computing power are reshaping AI and transforming our society. In particular, learning with deep neural networks has achieved great success across a wide variety of tasks. To shorten time to solution and reduce duplicated effort, automated model construction is of great interest for producing architecture-effective and domain-adaptive deep models. At the same time, understanding model behavior and building trust in model predictions are especially important in applications such as autonomous driving, medicine, and fintech. Research on automated and interpretable deep learning should include at least the following key components: (1) neural architecture search, (2) model construction under changing environments, such as transfer learning, and (3) understanding and interpretation of deep learning models.
In this panel we focus on timely topics in the areas above. The panel will include a comprehensive survey of state-of-the-art algorithms and systems, a detailed account of the presenters' research experience, and live demonstrations of platforms built by the Baidu AutoDL team. Through this panel, attendees will gain an understanding of how to efficiently build automated deep learning models and enhance their trustworthiness. The panel will also help speed the transfer of deep learning research into industrial products by introducing Baidu AutoDL, a tool that facilitates automated and interpretable deep learning.
Presenters: Bolei Zhou, Yi Yang, Quanshi Zhang, Dejing Dou, Haoyi Xiong, Jiahui Yu, Humphrey Shi, Linchao Zhu, Xingjian Li
Talk (Haoyi Xiong): 6:30-7:00 (Los Angeles) / 21:30-22:00 (Beijing), 12/07/2020
Panel: 7:00-7:40 (Los Angeles) / 22:00-22:40 (Beijing), 12/07/2020
Live Zoom Room: https://zoom.us/j/66673887901
Password: 157673
Author Information
Bolei Zhou (CUHK)
Yi Yang (University of Technology Sydney)
Quanshi Zhang (Shanghai Jiao Tong University)
Dejing Dou (Baidu)
Haoyi Xiong (Baidu Research)
Jiahui Yu (Google)
Humphrey Shi (University of Oregon)
Linchao Zhu (University of Technology Sydney)
Xingjian Li (Baidu)
More from the Same Authors
- 2023 Poster: Bayesian Neural Networks Avoid Encoding Complex and Perturbation-Sensitive Concepts
  Qihan Ren · Huiqi Deng · Yunuo Chen · Siyu Lou · Quanshi Zhang
- 2023 Poster: HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation
  Lu Chen · Siyu Lou · Keyan Zhang · Jin Huang · Quanshi Zhang
- 2023 Poster: Defects of Convolutional Decoder Networks in Frequency Representation
  Ling Tang · Wen Shen · Zhanpeng Zhou · YueFeng Chen · Quanshi Zhang
- 2023 Poster: Does a Neural Network Really Encode Symbolic Concepts?
  Mingjie Li · Quanshi Zhang
- 2022 Poster: Self-supervised Learning with Random-Projection Quantizer for Speech Recognition
  Chung-Cheng Chiu · James Qin · Yu Zhang · Jiahui Yu · Yonghui Wu
- 2022 Spotlight: Self-supervised Learning with Random-Projection Quantizer for Speech Recognition
  Chung-Cheng Chiu · James Qin · Yu Zhang · Jiahui Yu · Yonghui Wu
- 2022 Poster: Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs
  Jie Ren · Mingjie Li · Meng Zhou · Shih-Han Chan · Quanshi Zhang
- 2022 Poster: Accelerated Federated Learning with Decoupled Adaptive Optimization
  Jiayin Jin · Jiaxiang Ren · Yang Zhou · Lingjuan Lyu · Ji Liu · Dejing Dou
- 2022 Spotlight: Accelerated Federated Learning with Decoupled Adaptive Optimization
  Jiayin Jin · Jiaxiang Ren · Yang Zhou · Lingjuan Lyu · Ji Liu · Dejing Dou
- 2022 Spotlight: Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs
  Jie Ren · Mingjie Li · Meng Zhou · Shih-Han Chan · Quanshi Zhang
- 2022 Poster: Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding
  Haotian Ma · Hao Zhang · Fan Zhou · Yinqing Zhang · Quanshi Zhang
- 2022 Spotlight: Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding
  Haotian Ma · Hao Zhang · Fan Zhou · Yinqing Zhang · Quanshi Zhang
- 2021 Workshop: ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI
  Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu
- 2021: [12:00 - 12:02 PM UTC] Welcome
  Quanshi Zhang
- 2021 Poster: Interpreting and Disentangling Feature Components of Various Complexity from DNNs
  Jie Ren · Mingjie Li · Zexu Liu · Quanshi Zhang
- 2021 Spotlight: Interpreting and Disentangling Feature Components of Various Complexity from DNNs
  Jie Ren · Mingjie Li · Zexu Liu · Quanshi Zhang
- 2021 Expo Workshop: PaddlePaddle-based Deep Learning at Baidu
  Dejing Dou · Chenxia Li · Teng Xi · Dingfu Zhou · Tianyi Wu · Xuhong Li · Zhengjie Huang · Guocheng Niu · Ji Liu · Yaqing Wang · Xin Wang · Qianwei Cai
- 2021: Opening Remarks
  Dejing Dou
- 2020 Poster: RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr
  Xingjian Li · Haoyi Xiong · Haozhe An · Cheng-Zhong Xu · Dejing Dou
- 2020 Poster: On the Noisy Gradient Descent that Generalizes as SGD
  Jingfeng Wu · Wenqing Hu · Haoyi Xiong · Jun Huan · Vladimir Braverman · Zhanxing Zhu
- 2019 Poster: Towards a Deep and Unified Understanding of Deep Neural Models in NLP
  Chaoyu Guan · Xiting Wang · Quanshi Zhang · Runjin Chen · Di He · Xing Xie
- 2019 Oral: Towards a Deep and Unified Understanding of Deep Neural Models in NLP
  Chaoyu Guan · Xiting Wang · Quanshi Zhang · Runjin Chen · Di He · Xing Xie