Poster | Tue 14:00 | FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization
Jung Hyun Lee · Jeonghoon Kim · Se Jung Kwon · Dongsoo Lee

Poster | Tue 17:00 | Scaling of Class-wise Training Losses for Post-hoc Calibration
Seungjin Jung · Seungmo Seo · Yonghyun Jeong · Jongwon Choi

Workshop | Robust Deep Learning via Layerwise Tilted Exponentials
Bhagyashree Puranik · Ahmad Beirami · Yao Qin · Upamanyu Madhow

Workshop | Scan and Snap: Understanding Training Dynamics and Token Composition in 1-layer Transformer
Yuandong Tian · Yiping Wang · Beidi Chen · Simon Du

Poster | Thu 13:30 | MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale Training of Physics-informed Neural Networks
Jiachen Yao · Chang Su · Zhongkai Hao · Songming Liu · Hang Su · Jun Zhu

Workshop | Guiding The Last Layer in Federated Learning with Pre-Trained Models
Gwen Legate · Nicolas Bernier · Lucas Caccia · Edouard Oyallon · Eugene Belilovsky

Workshop | Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks
Zach Robertson · Sanmi Koyejo

Workshop | Arbitrary Decisions are a Hidden Cost of Differentially Private Training
Bogdan Kulynych · Hsiang Hsu · Carmela Troncoso · Flavio Calmon

Poster | Tue 14:00 | FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks
Bingqing Song · Prashant Khanduri · Xinwei Zhang · Jinfeng Yi · Mingyi Hong

Poster | Wed 14:00 | Less is More: Task-aware Layer-wise Distillation for Language Model Compression
Chen Liang · Simiao Zuo · Qingru Zhang · Pengcheng He · Weizhu Chen · Tuo Zhao

Poster | Wed 14:00 | Machine Learning Force Fields with Data Cost Aware Training
Alexander Bukharin · Tianyi Liu · Shengjie Wang · Simiao Zuo · Weihao Gao · Wen Yan · Tuo Zhao