Expo Workshop

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform, originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu in areas such as computer vision (CV), natural language processing (NLP), and speech. PaddlePaddle supports various neural network architectures and optimization algorithms. With PaddlePaddle, it is possible to leverage many CPUs/GPUs and machines to speed up training, achieving high throughput and performance via optimized communication. In this workshop, Baidu scientists and engineers will present a wide range of PaddlePaddle-based research and projects, spanning CV, NLP, graph learning, federated learning, few-shot learning, and quantum computing.


Schedule
Sun 5:00 p.m. - 5:10 p.m.
Opening Remarks
Dejing Dou
Sun 5:10 p.m. - 5:30 p.m.
  

PaddlePaddle has built a large-scale official model zoo with a large number of algorithms that have been practiced and polished in industry over a long period. PaddleCV, which focuses on computer vision (CV), provides multiple end-to-end development kits covering image classification, object detection, image segmentation, OCR, and other scenarios, to meet enterprises' requirements for low-cost development and rapid integration. The demonstration applications show how to use the ultra-lightweight PP-OCR and PP-YOLOv2 systems to solve OCR and object detection tasks on server, mobile, embedded, and IoT devices, respectively.

Chenxia Li
Sun 5:30 p.m. - 5:50 p.m.
  

Neural architecture search (NAS) advances the state of the art in various computer vision tasks by automating the design of deep neural networks. In this talk, we aim to address three important questions in NAS: (1) How can we measure the correlation between architectures and their performance? (2) How can we evaluate the correlation between different architectures? (3) How can we learn these correlations from a small number of samples? To this end, we first model these correlations from a Bayesian perspective. Specifically, by introducing a novel Gaussian Process based NAS (GP-NAS) method, the correlations are modeled by the kernel function and mean function. The kernel function is also learnable, enabling adaptive modeling of complex correlations in different search spaces. GP-NAS enables direct performance prediction for any architecture in different scenarios and can obtain efficient networks for different deployment platforms. GP-NAS won first place in all three tracks of the AIM 2020 Real Image Super-Resolution Challenge and in the OVIC Image Track of the 2020 Low-Power Computer Vision Challenge.
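The core idea of predicting an architecture's performance with a Gaussian process can be sketched in a few lines. The snippet below is an illustrative toy, not the actual GP-NAS implementation: architectures are encoded as made-up binary feature vectors, a handful of trained samples provide observed accuracies, and standard GP posterior inference predicts the accuracy of an unseen architecture.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """RBF kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP regressor."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train
    var = rbf_kernel(X_test, X_test).diagonal() - np.einsum(
        "ij,jk,ik->i", K_s, K_inv, K_s)
    return mean, var

# Toy "architectures": binary vectors encoding operator choices,
# with accuracies observed from a handful of trained samples.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
y = np.array([0.70, 0.75, 0.72, 0.80])
mean, var = gp_predict(X, y, np.array([[1.0, 1.0, 1.0]]))
```

In GP-NAS itself the kernel is additionally learned, but the prediction mechanism follows this posterior-inference pattern.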

Teng Xi
Sun 5:50 p.m. - 6:10 p.m.
  

Accurate 3D obstacle detection is an essential task for autonomous driving and intelligent transportation. Various approaches based on different sensors have been proposed, achieving promising results. In this talk, we will introduce our latest work on 3D object detection, including single-image-based, LiDAR-only, and multimodal-fusion-based approaches. Furthermore, our recently proposed multimodal-fusion-based approach, FusionPainting, has outperformed other state-of-the-art methods by a large margin on the nuScenes test benchmark and won the 3D object detection task in the latest nuScenes challenge at ICRA 2021. We plan to release our work in Paddle3D soon.

Dingfu Zhou
Sun 6:10 p.m. - 6:30 p.m.
  

Semantic segmentation is an essential and challenging task with high potential value in a variety of applications, e.g., human-computer interaction, augmented reality, and driverless technology. In this talk, we will introduce PaddleSeg, a development toolkit for semantic segmentation built on PaddlePaddle. It provides reliable implementations of classical methods. We then present our research work based on PaddleSeg.

Tianyi Wu
Sun 6:30 p.m. - 6:50 p.m.
  

Deep models are well known both for their excellent performance and for their black-box nature. In recent years, many interpretation tools have been proposed to explain or reveal how deep models make decisions. To exploit these tools, we first review state-of-the-art interpretation algorithms within a proposed taxonomy, and then present InterpretDL, our open-source implementation of mainstream interpretation algorithms for explaining deep models, based on PaddlePaddle.
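One of the simplest interpretation algorithms in this family is input-gradient (saliency) attribution: the gradient of the predicted class score with respect to each input feature indicates how much that feature influenced the decision. The toy below illustrates the idea on a linear classifier, where the gradient is just the class weight vector; the model and numbers are made up for illustration and are not InterpretDL's API.

```python
import numpy as np

def saliency(w, x):
    """Input-gradient saliency for a linear classifier: for the
    predicted class c, |d score_c / d x_i| scores each feature."""
    scores = w @ x
    c = scores.argmax()        # predicted class
    return np.abs(w[c])        # gradient of score_c w.r.t. x

w = np.array([[2.0, 0.0, -1.0],   # class-0 weights
              [0.0, 1.0, 0.5]])   # class-1 weights
x = np.array([1.0, 0.2, 0.0])
s = saliency(w, x)
```

For deep networks the gradient is computed by backpropagation rather than read off the weights, but the attribution principle is the same.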

Xuhong Li
Sun 6:50 p.m. - 7:00 p.m.
Break
Sun 7:00 p.m. - 7:20 p.m.
  

In this talk, we will introduce PGL, an efficient, flexible, and large-scale graph learning framework based on PaddlePaddle. One of the most important advantages of graph neural networks over other models is their ability to use node-to-node connectivity information, but coding the communication between nodes can be very cumbersome. In PGL, we adopt the message passing paradigm to make building customized graph neural networks convenient. We also provide several examples of industrial GNN deployment with a distributed trillion-scale graph engine and parameter server. Furthermore, we will present our recent studies on GNNs, which have achieved several state-of-the-art results and championships in recent graph challenges.
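The message passing paradigm mentioned above can be sketched as a send/aggregate/update loop. The snippet below is a minimal NumPy illustration of one round of mean-aggregation message passing on a tiny graph, not PGL's actual send/recv API.

```python
import numpy as np

def message_passing(edges, feats):
    """One round of message passing: each node averages its
    neighbours' features (send + reduce) and combines the result
    with its own features (update)."""
    n = feats.shape[0]
    agg = np.zeros_like(feats)
    deg = np.zeros(n)
    for src, dst in edges:          # "send" feats[src] along each edge
        agg[dst] += feats[src]
        deg[dst] += 1
    deg = np.maximum(deg, 1)        # avoid division by zero
    agg = agg / deg[:, None]        # "reduce": mean aggregation
    return 0.5 * (feats + agg)      # "update": simple combine step

edges = [(0, 1), (1, 0), (1, 2), (2, 1)]  # a 3-node path graph
feats = np.array([[1.0], [0.0], [1.0]])
out = message_passing(edges, feats)
```

In a framework like PGL, the same pattern is expressed with vectorized send/receive primitives and learnable update functions, so users only write these three steps rather than the communication code.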

Zhengjie Huang
Sun 7:20 p.m. - 7:40 p.m.
  

Existing pre-training methods focus on either single-modal or multi-modal tasks and cannot effectively transfer between the two. Human beings, however, are very good at learning from multi-source heterogeneous data to better understand physical concepts. In this talk, we introduce unified-modal learning, which aims to learn from different modalities of information simultaneously in a more general way and can boost both single-modal and multi-modal tasks. Based on PaddlePaddle, we present a unified-modal pre-training architecture named UNIMO, which achieves state-of-the-art results on several NLP and multi-modal benchmarks. We hope that unified-modal learning will provide a possible path toward Artificial General Intelligence (AGI) and can be built together with the community.

Guocheng Niu
Sun 7:40 p.m. - 8:00 p.m.
  

In recent years, data and computing resources have become distributed across end users' devices and across various regions and organizations. Because of laws and regulations, these distributed data and computing resources cannot be directly aggregated or shared among different regions or organizations for data processing or machine learning tasks. Federated learning and data federation have emerged as efficient approaches to exploit distributed data and computing resources, training machine learning models and collaboratively processing data while obeying laws and regulations and ensuring data security and privacy. In this talk, we present a functional architecture of federated learning systems, including PaddleFL. We then present our research work on federated learning systems at Baidu.
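The basic mechanism behind such systems can be illustrated with federated averaging (FedAvg): each client trains on its own private data and only model weights are sent to the server for aggregation. The sketch below is a simplified NumPy toy with synthetic data, not PaddleFL's API.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """A few steps of local gradient descent on a linear model,
    using only this client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(w, clients):
    """One FedAvg round: clients train locally, the server averages
    the returned weights; raw data never leaves a client."""
    local = [local_update(w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w)
           for X in (rng.normal(size=(32, 2)) for _ in range(3))]
w = np.zeros(2)
for _ in range(10):
    w = fed_avg(w, clients)   # converges toward true_w
```

Real systems add secure aggregation, differential privacy, or multi-party computation on top of this loop to strengthen the privacy guarantees.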

Ji Liu
Sun 8:00 p.m. - 8:20 p.m.
  

Few-shot learning (FSL) aims to bridge the gap between artificial intelligence (AI) and human learning. By incorporating prior knowledge, it can learn new tasks containing only a few examples with supervised information. Besides acting as a test bed for AI, FSL makes learning rare cases possible, such as predicting a new molecular property from a few labeled molecules in drug discovery. It also helps relieve the burden of collecting large-scale supervised data in industrial applications, where large-scale unlabeled data exists but high-quality labeled data is costly to acquire. In this talk, we will introduce PaddleFSL, a development toolkit for few-shot learning built on PaddlePaddle. It currently provides reliable implementations of popular FSL methods for classic applications such as image classification and relation extraction, and it is easy to customize for other applications. We hope PaddleFSL will help users from both academia and industry conduct FSL easily.
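A representative FSL method found in most such toolkits is nearest-prototype classification (the core of Prototypical Networks): each class is represented by the mean of its few support embeddings, and a query is assigned to the closest prototype. The snippet below is a minimal NumPy sketch of one 2-way 2-shot episode with made-up 2-d "embeddings", not PaddleFSL's implementation.

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Assign each query to the class whose prototype (mean of its
    support embeddings) is nearest in squared Euclidean distance."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(0)
                       for c in classes])
    d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d2.argmin(1)]

# A 2-way 2-shot episode with 2-d "embeddings".
support = np.array([[0.0, 0.0], [0.2, 0.1],    # class 0
                    [1.0, 1.0], [0.9, 1.1]])   # class 1
labels = np.array([0, 0, 1, 1])
query = np.array([[0.1, 0.0], [1.0, 0.9]])
pred = prototype_classify(support, labels, query)
```

In practice the embeddings come from a neural network trained across many such episodes, so the prototype comparison generalizes to classes never seen in training.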

Yaqing Wang
Sun 8:20 p.m. - 8:40 p.m.
  

Quantum Computing (QC) is believed to be the heart of next-generation computing technology. With recent exciting progress on quantum algorithms and experimental quantum computing technologies, it has become quite attractive to exploit the information-processing advantages of quantum computing to promote the development of artificial intelligence (AI), and to break through R&D bottlenecks in the quantum area with current AI technologies. In this talk, we discuss the interplay between QC and AI, and show our strategies for tackling key challenges in this new field. We then introduce the Baidu Quantum Platform (BQP), with a particular focus on Paddle Quantum, a quantum machine learning toolkit developed on Baidu's deep learning platform, PaddlePaddle. With the help of BQP and its further development, we expect to explore more possibilities of quantum AI, to build a sustainable quantum ecosystem, and finally to achieve our vision that "Everyone Can Quantum."
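A central workload in quantum machine learning is optimizing a parameterized quantum circuit by gradient descent. The toy below simulates the simplest possible case in plain NumPy (it does not use the Paddle Quantum API): a single-qubit RY rotation whose Pauli-Z expectation is minimized via the parameter-shift gradient rule.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])

def expect_z(theta):
    """<psi|Z|psi> for |psi> = RY(theta)|0>, which equals cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

# Minimise <Z> using the parameter-shift gradient rule:
# d<Z>/dtheta = (<Z>(theta + pi/2) - <Z>(theta - pi/2)) / 2
theta = 0.1
for _ in range(100):
    grad = (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2)) / 2
    theta -= 0.2 * grad
# The optimum <Z> = -1 is reached at theta = pi (state |1>).
```

Frameworks like Paddle Quantum automate this loop for many-qubit circuits and realistic Hamiltonians, with the deep learning platform supplying the optimizers and autodiff machinery.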

Xin Wang
Sun 8:40 p.m. - 8:55 p.m.
Free Discussion (Q&A)   
Sun 8:55 p.m. - 9:00 p.m.
Closing Remarks