Given the success of AdvML-inspired research, we propose a new edition of our workshop at ICML’22 (AdvML-Frontiers’22): ‘The 2nd Workshop on New Frontiers in AdvML’ (AdvML-Frontiers’23). We target a high-quality international workshop, coupled with new scientific activities, networking opportunities, and enjoyable social events. Scientifically, we aim to identify the challenges and limitations of current AdvML methods and explore constructive, forward-looking views for next-generation AdvML across the full theory/algorithm/application stack. As the sequel to AdvML-Frontiers’22, we will continue exploring the new frontiers of AdvML in theoretical understanding, scalable algorithm and system designs, and scientific development that transcends traditional disciplinary boundaries. We will also add new features and programs in 2023. First, we will expand the existing research themes, particularly in light of the popularity of large foundation models (e.g., DALL-E 2, Stable Diffusion, and ChatGPT). Example topics include AdvML for prompt learning, counteracting AI-synthesized fake images and texts, debugging ML from unified data-model perspectives, and ‘green’ AdvML for environmental sustainability. Second, we will organize a new session, AI Trust in Industry, inviting industry experts to introduce practical trends in AdvML, technological innovations, products, and societal impacts (e.g., AI’s responsibility). Third, we will host Show-and-Tell Demos in the poster session, allowing research and engineering groups from industry, academia, and government to demonstrate their innovations. Fourth, we will collaborate with ‘Black in AI’ (where Co-Organizer Dr. Sanmi Koyejo serves as president) to increase the presence and inclusion of Black researchers in the field of AdvML by creating spaces for sharing ideas and networking.
Author Information
Sijia Liu (Michigan State University & MIT-IBM Watson AI Lab)
Pin-Yu Chen (IBM Research AI)
Dongxiao Zhu (Wayne State University)
Dongxiao Zhu is currently an Associate Professor in the Department of Computer Science at Wayne State University. He received his B.S. from Shandong University (1996), his M.S. from Peking University (1999), and his Ph.D. from the University of Michigan (2006). His recent research interests are in machine learning and its applications in health informatics, natural language processing, medical imaging, and other data science domains. Dr. Zhu is the Director of the Machine Learning and Predictive Analytics (MLPA) Lab and the Director of the Computer Science Graduate Program at Wayne State University. He has published over 70 peer-reviewed publications and numerous book chapters, and he has served on several editorial boards of scientific journals. His research has been supported by NIH, NSF, and private agencies, and he has served on multiple NIH and NSF grant review panels. Dr. Zhu has advised numerous students at the undergraduate, graduate, and postdoctoral levels, and his teaching interests lie in programming languages, data structures and algorithms, machine learning, and data science.
Eric Wong (MIT)
Kathrin Grosse (University of Cagliari)
Baharan Mirzasoleiman (Stanford University)
Sanmi Koyejo (Stanford University)
More from the Same Authors
-
2021 : CrossWalk: Fairness-enhanced Node Representation Learning »
Ahmad Khajehnejad · Moein Khajehnejad · Krishna Gummadi · Adrian Weller · Baharan Mirzasoleiman -
2022 : Saliency Guided Adversarial Training for Tackling Generalization Gap with Applications to Medical Imaging Classification System »
Xin Li · Yao Qiang · Chengyin Li · Sijia Liu · Dongxiao Zhu -
2022 : Investigating Why Contrastive Learning Benefits Robustness against Label Noise »
Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman -
2023 Poster: Towards Sustainable Learning: Coresets for Data-efficient Deep Learning »
Yu Yang · Hao Kang · Baharan Mirzasoleiman -
2023 Poster: Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression »
Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman -
2023 Poster: Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning »
Yu Yang · Besmira Nushi · Hamid Palangi · Baharan Mirzasoleiman -
2023 Poster: Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach »
Prashant Khanduri · Ioannis Tsaknakis · Yihua Zhang · Jia Liu · Sijia Liu · Jiawei Zhang · Mingyi Hong -
2023 Poster: Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least »
Siddharth Joshi · Baharan Mirzasoleiman -
2023 Poster: Do Machine Learning Models Learn Statistical Rules Inferred from Data? »
Aaditya Naik · Yinjun Wu · Mayur Naik · Eric Wong -
2023 Poster: Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks »
Mohammed Nowaz Rabbani Chowdhury · Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen -
2023 Oral: Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression »
Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman -
2023 Oral: Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks »
Mohammed Nowaz Rabbani Chowdhury · Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen -
2022 : Less Data Can Be More! »
Baharan Mirzasoleiman -
2022 : Not All Poisons are Created Equal: Robust Training against Data Poisoning »
Yu Yang · Baharan Mirzasoleiman -
2022 Workshop: New Frontiers in Adversarial Machine Learning »
Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo -
2022 Poster: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training »
Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang -
2022 Poster: Adaptive Second Order Coresets for Data-efficient Machine Learning »
Omead Pooladzandi · David Davini · Baharan Mirzasoleiman -
2022 Poster: Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning »
Momin Abbas · Quan Xiao · Lisha Chen · Pin-Yu Chen · Tianyi Chen -
2022 Poster: Investigating Why Contrastive Learning Benefits Robustness against Label Noise »
Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman -
2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang -
2022 Spotlight: Investigating Why Contrastive Learning Benefits Robustness against Label Noise »
Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman -
2022 Spotlight: Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning »
Momin Abbas · Quan Xiao · Lisha Chen · Pin-Yu Chen · Tianyi Chen -
2022 Spotlight: Adaptive Second Order Coresets for Data-efficient Machine Learning »
Omead Pooladzandi · David Davini · Baharan Mirzasoleiman -
2022 Spotlight: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training »
Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang -
2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang -
2022 Poster: Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling »
Hongkang Li · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong -
2022 Poster: Not All Poisons are Created Equal: Robust Training against Data Poisoning »
Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman -
2022 Poster: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization »
Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu -
2022 Spotlight: Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling »
Hongkang Li · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong -
2022 Oral: Not All Poisons are Created Equal: Robust Training against Data Poisoning »
Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman -
2022 Spotlight: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization »
Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu -
2022 Poster: Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework »
Ching-Yun (Irene) Ko · Jeet Mohapatra · Sijia Liu · Pin-Yu Chen · Luca Daniel · Lily Weng -
2022 Spotlight: Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework »
Ching-Yun (Irene) Ko · Jeet Mohapatra · Sijia Liu · Pin-Yu Chen · Luca Daniel · Lily Weng -
2021 : Data-efficient and Robust Learning from Massive Datasets »
Baharan Mirzasoleiman -
2021 Poster: CRFL: Certifiably Robust Federated Learning against Backdoor Attacks »
Chulin Xie · Minghao Chen · Pin-Yu Chen · Bo Li -
2021 Spotlight: CRFL: Certifiably Robust Federated Learning against Backdoor Attacks »
Chulin Xie · Minghao Chen · Pin-Yu Chen · Bo Li -
2021 Poster: Fold2Seq: A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design »
yue cao · Payel Das · Vijil Chenthamarakshan · Pin-Yu Chen · Igor Melnyk · Yang Shen -
2021 Spotlight: Fold2Seq: A Joint Sequence(1D)-Fold(3D) Embedding-based Generative Model for Protein Design »
yue cao · Payel Das · Vijil Chenthamarakshan · Pin-Yu Chen · Igor Melnyk · Yang Shen -
2021 Poster: Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? »
Ning Liu · Geng Yuan · Zhengping Che · Xuan Shen · Xiaolong Ma · Qing Jin · Jian Ren · Jian Tang · Sijia Liu · Yanzhi Wang -
2021 Spotlight: Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? »
Ning Liu · Geng Yuan · Zhengping Che · Xuan Shen · Xiaolong Ma · Qing Jin · Jian Ren · Jian Tang · Sijia Liu · Yanzhi Wang -
2021 Poster: Voice2Series: Reprogramming Acoustic Models for Time Series Classification »
Huck Yang · Yun-Yun Tsai · Pin-Yu Chen -
2021 Spotlight: Voice2Series: Reprogramming Acoustic Models for Time Series Classification »
Huck Yang · Yun-Yun Tsai · Pin-Yu Chen -
2020 Poster: Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing »
Sanghamitra Dutta · Dennis Wei · Hazar Yueksel · Pin-Yu Chen · Sijia Liu · Kush Varshney -
2020 Poster: Proper Network Interpretability Helps Adversarial Robustness in Classification »
Akhilan Boopathy · Sijia Liu · Gaoyuan Zhang · Cynthia Liu · Pin-Yu Chen · Shiyu Chang · Luca Daniel -
2020 Poster: Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources »
Yun Yun Tsai · Pin-Yu Chen · Tsung-Yi Ho -
2020 Poster: Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case »
shuai zhang · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong -
2019 Poster: Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications »
Pin-Yu Chen · Lingfei Wu · Sijia Liu · Indika Rajapakse -
2019 Poster: PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach »
Tsui-Wei Weng · Pin-Yu Chen · Lam Nguyen · Mark Squillante · Akhilan Boopathy · Ivan Oseledets · Luca Daniel -
2019 Oral: Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications »
Pin-Yu Chen · Lingfei Wu · Sijia Liu · Indika Rajapakse -
2019 Oral: PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach »
Tsui-Wei Weng · Pin-Yu Chen · Lam Nguyen · Mark Squillante · Akhilan Boopathy · Ivan Oseledets · Luca Daniel