Poster

POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

Zeju Qiu ⋅ Lixin Liu ⋅ Adrian Weller ⋅ Han Shi ⋅ Weiyang Liu

Abstract
