

Poster

Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs

Jordan Dotzel · Yuzong Chen · Bahaa Kotb · Sushma Prasad · Gang Wu · Sheng Li · Mohamed Abdelfattah · Zhiru Zhang


Abstract:

Large language models (LLMs) have recently achieved state-of-the-art performance across various tasks, yet their large computational requirements make it difficult to meet strict latency and power constraints. Deep neural network (DNN) quantization has traditionally addressed these limitations by converting models to low-precision integer formats. Recently, however, alternative formats such as Normal Float (NF4) have been shown to consistently increase model accuracy, albeit at the cost of increased chip area. In this work, we first conduct a large-scale analysis of LLM weights and activations across 30 networks and conclude that most distributions follow a Student's t-distribution. We then derive Student Float (SF4), a new format that is theoretically optimal with respect to this distribution and improves over NF4 across modern LLMs, for example increasing LAMBADA accuracy on LLaMA2-7B by 0.76%. Using this format as a high-accuracy reference, we then propose augmenting E2M1 with two variants of supernormal support, which reassign the otherwise redundant negative-zero encoding for higher model accuracy. Finally, we explore the quality and performance frontier across datatypes, including non-traditional formats like Additive-Powers-of-Two (APoT), by evaluating their model accuracy and hardware complexity. We discover a Pareto curve composed of INT4, E2M1, and E2M1 with supernormal support, which offers a continuous tradeoff between model accuracy and chip area. For example, E2M1 with supernormal support increases the accuracy of Phi-2 by up to 2.19% with only 1.22% area overhead, enabling more LLM-based applications to run at four bits.
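
The abstract's core construction, deriving quantization levels from the quantiles of a fitted distribution (normal quantiles for NF4, Student's t quantiles for SF4), can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's actual derivation: the number of levels, tail cutoff, normalization, and degrees-of-freedom value are all placeholders chosen for clarity.

```python
import numpy as np
from scipy import stats

def quantile_codebook(dist, num_levels=16, tail=0.999):
    """Build quantization levels from the quantiles of a frozen
    scipy.stats distribution, rescaled to [-1, 1]. This mirrors the
    general quantile-based construction behind NF4-style formats;
    the exact spacing and normalization here are assumptions."""
    # Evenly spaced probabilities, clipped away from 0 and 1 so the
    # extreme quantiles stay finite.
    probs = np.linspace(1 - tail, tail, num_levels)
    levels = dist.ppf(probs)
    return levels / np.abs(levels).max()

# Normal-based codebook (NF4-like) vs. Student's t-based codebook (SF4-like).
# The degrees-of-freedom value is illustrative, not the paper's choice.
nf4_like = quantile_codebook(stats.norm())
sf4_like = quantile_codebook(stats.t(df=5))

def quantize(weights, codebook):
    """Round each (pre-scaled) weight to its nearest codebook level."""
    idx = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook[idx]

w = np.random.standard_t(df=5, size=1024)
w = w / np.abs(w).max()  # per-block absmax scaling, as in NF4-style schemes
print(quantize(w, sf4_like)[:8])
```

Because the t-distribution has heavier tails than the normal, the SF4-like codebook places its levels differently than the NF4-like one, which is the intuition behind the accuracy gains reported in the abstract.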
