Oral
Monarch: Expressive Structured Matrices for Efficient and Accurate Training
Tri Dao · Beidi Chen · Nimit Sohoni · Arjun Desai · Michael Poli · Jessica Grogan · Alexander Liu · Aniruddh Rao · Atri Rudra · Christopher Re

Tue Jul 19 11:00 AM -- 11:20 AM (PDT) @ Ballroom 1 & 2

Large neural networks excel in many domains, but they are expensive to train and fine-tune. A popular approach to reduce their compute or memory requirements is to replace dense weight matrices with structured ones (e.g., sparse, low-rank, Fourier transform). These methods have not seen widespread adoption (1) in end-to-end training due to unfavorable efficiency--quality tradeoffs, and (2) in dense-to-sparse fine-tuning due to lack of tractable algorithms to approximate a given dense weight matrix. To address these issues, we propose a class of matrices (Monarch) that is hardware-efficient (they are parameterized as products of two block-diagonal matrices for better hardware utilization) and expressive (they can represent many commonly used transforms). Surprisingly, the problem of approximating a dense weight matrix with a Monarch matrix, though nonconvex, has an analytical optimal solution. These properties of Monarch matrices unlock new ways to train and fine-tune sparse and dense models. We empirically validate that Monarch can achieve favorable accuracy-efficiency tradeoffs in several end-to-end sparse training applications: speeding up ViT and GPT-2 training on ImageNet classification and Wikitext-103 language modeling by 2x with comparable model quality, and reducing the error on PDE solving and MRI reconstruction tasks by 40%. In sparse-to-dense training, with a simple technique called "reverse sparsification," Monarch matrices serve as a useful intermediate representation to speed up GPT-2 pretraining on OpenWebText by 2x without quality drop. The same technique brings 23% faster BERT pretraining than even the very optimized implementation from Nvidia that set the MLPerf 1.1 record. In dense-to-sparse fine-tuning, as a proof-of-concept, our Monarch approximation algorithm speeds up BERT fine-tuning on GLUE by 1.7x with comparable accuracy.
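For intuition, the sketch below illustrates (it is not the authors' implementation) how a Monarch-style linear layer can be applied as two block-diagonal factors interleaved with fixed permutations, realized as batched small matrix multiplies instead of one dense matmul. The n = m^2 layout, the transpose-style permutation, and all names (MonarchLinearSketch, blocks1, blocks2) are illustrative assumptions made for this example.

```python
# Minimal sketch of a Monarch-style linear layer (illustrative, not the paper's code).
# Assumes the feature dimension n is a perfect square, n = m * m.
import math
import torch
import torch.nn as nn


class MonarchLinearSketch(nn.Module):
    def __init__(self, n: int):
        super().__init__()
        m = int(math.isqrt(n))
        assert m * m == n, "this sketch assumes n is a perfect square"
        self.m = m
        # Two sets of m small (m x m) diagonal blocks, one per block-diagonal factor.
        # Parameter count: 2 * m^3 = 2 * n^1.5, versus n^2 for a dense weight.
        self.blocks1 = nn.Parameter(torch.randn(m, m, m) / math.sqrt(m))
        self.blocks2 = nn.Parameter(torch.randn(m, m, m) / math.sqrt(m))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n) viewed as (batch, m, m)
        b, m = x.shape[0], self.m
        x = x.view(b, m, m)
        # First block-diagonal factor: an independent m x m matmul per block.
        x = torch.einsum("kij,bkj->bki", self.blocks1, x)
        # Fixed permutation between the factors: swap the two "m" axes.
        x = x.transpose(1, 2)
        # Second block-diagonal factor, then undo the permutation.
        x = torch.einsum("kij,bkj->bki", self.blocks2, x)
        x = x.transpose(1, 2)
        return x.reshape(b, m * m)


if __name__ == "__main__":
    layer = MonarchLinearSketch(n=64)      # m = 8
    out = layer(torch.randn(4, 64))
    print(out.shape)                       # torch.Size([4, 64])
```

Because each factor touches only m-sized blocks, the work maps onto batched dense GEMMs, which is the hardware-efficiency argument in the abstract.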

Author Information

Tri Dao (Stanford)
Beidi Chen (Stanford University)
Nimit Sohoni (Stanford University)
Arjun Desai (Stanford University)
Michael Poli (Stanford University)
Jessica Grogan (University at Buffalo)

Ph.D. student at the University at Buffalo, interested in theoretical computer science.

Alexander Liu (University of Michigan)
Aniruddh Rao (University of Michigan)
Atri Rudra (University at Buffalo, SUNY)
Christopher Re (Stanford University)
