Poster
Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs
Yeonhong Park · Jake Hyun · SangLyul Cho · Bonggeun Sim · Jae W. Lee
Hall C 4-9 #811
Oral presentation: Oral 2F Efficient LLMs, Tue 23 Jul 7:30 a.m. PDT – 8:30 a.m. PDT
Poster: Tue 23 Jul 4:30 a.m. PDT – 6 a.m. PDT
Abstract:
Recently, considerable effort has been directed towards compressing Large Language Models (LLMs), which showcase groundbreaking capabilities across diverse applications but entail significant deployment costs due to their large sizes. Meanwhile, much less attention has been given to mitigating the costs of deploying multiple LLMs of varying sizes, despite its practical significance. Thus, this paper introduces any-precision LLM, extending the concept of any-precision DNN to LLMs. Addressing challenges in any-precision LLM, we propose a lightweight method for any-precision quantization of LLMs, leveraging a post-training quantization framework, and develop a specialized software engine for its efficient serving. As a result, our solution significantly reduces the high costs of deploying multiple, different-sized LLMs by overlaying LLMs quantized to varying bit-widths, such as 3, 4, ..., $n$ bits, into a memory footprint comparable to a single $n$-bit LLM. All the supported LLMs with varying bit-widths demonstrate state-of-the-art model quality and inference throughput, making our solution a compelling option for deploying multiple, different-sized LLMs.
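To make the overlay idea concrete, below is a minimal, hypothetical sketch of how a single $n$-bit quantized weight tensor can serve several lower bit-widths: a $k$-bit model is recovered by keeping only the top $k$ bits of each stored code. This is a simplified uniform-quantization illustration under assumed helper names (`quantize_uniform`, `truncate_to_k_bits`, `dequantize`), not the paper's actual quantization algorithm or serving engine.

```python
import numpy as np

# Hypothetical illustration of the memory-overlay idea: store one n-bit
# quantized tensor, and derive any k-bit (k <= n) variant by dropping the
# low (n - k) bits of each code. Simplified; not the paper's method.

def quantize_uniform(weights, n_bits):
    """Uniformly quantize a float tensor to unsigned n-bit codes plus scale and offset."""
    qmax = 2 ** n_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / qmax
    codes = np.clip(np.round((weights - w_min) / scale), 0, qmax).astype(np.uint8)
    return codes, scale, w_min

def truncate_to_k_bits(codes_n, n_bits, k_bits):
    """Derive k-bit codes from the stored n-bit codes by dropping the low bits."""
    return codes_n >> (n_bits - k_bits)

def dequantize(codes_k, k_bits, scale_n, w_min, n_bits):
    """Reconstruct approximate weights from k-bit codes, reusing the n-bit scale."""
    # One k-bit step spans 2^(n - k) of the n-bit steps.
    step = scale_n * (2 ** (n_bits - k_bits))
    return codes_k.astype(np.float32) * step + w_min

# Example: a single 8-bit tensor serves 3-, 4-, and 8-bit "models".
w = np.random.randn(4, 4).astype(np.float32)
codes8, scale, zero = quantize_uniform(w, n_bits=8)
for k in (3, 4, 8):
    codes_k = truncate_to_k_bits(codes8, 8, k)
    w_hat = dequantize(codes_k, k, scale, zero, 8)
    print(k, "bits, max abs error:", np.abs(w - w_hat).max())
```

In this sketch only the 8-bit codes are kept in memory; the lower-precision variants cost no extra storage, which is the essence of fitting models at bit-widths 3 through $n$ into roughly the footprint of a single $n$-bit model.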