Poster in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models
Batch-Max: Higher LLM Throughput using Larger Batch Sizes and KV Cache Compression
Michael R. Metel · Boxing Chen · Mehdi Rezagholizadeh
Abstract:
Several works have developed eviction policies that remove key-value (KV) pairs from the KV cache for more efficient inference. The focus has been on compressing the KV cache after the input prompt has been processed, in order to speed up token generation. In settings with limited GPU memory, and when the input context is longer than the generation length, we show that by also compressing the KV cache during the input processing phase, larger batch sizes can be used, resulting in significantly higher throughput while still maintaining the original model's accuracy.
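The core idea can be illustrated with a minimal sketch (not the authors' code): if the KV cache is held to a fixed budget while the prompt is being processed, per-sequence memory is bounded by that budget rather than by the prompt length, so more sequences fit in the same GPU memory. The eviction rule, cache sizes, and model dimensions below are illustrative assumptions, not values from the paper.

```python
def evict_kv(keys, values, scores, budget):
    """Keep only the `budget` KV pairs with the highest importance scores
    (e.g. accumulated attention weights), a common score-based eviction policy.
    Kept pairs stay in their original order."""
    if len(keys) <= budget:
        return keys, values
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:budget]
    keep = sorted(top)
    return [keys[i] for i in keep], [values[i] for i in keep]

def max_batch_size(cache_mem_bytes, bytes_per_kv_pair, kv_pairs_per_seq):
    """How many sequences fit when each holds `kv_pairs_per_seq` cached pairs."""
    return cache_mem_bytes // (bytes_per_kv_pair * kv_pairs_per_seq)

# Toy numbers (assumptions): 8 GiB reserved for the KV cache; each cached
# position stores K and V for 32 heads of dimension 128 in fp16.
mem = 8 * 1024**3
per_pair = 2 * 32 * 128 * 2
uncompressed = max_batch_size(mem, per_pair, 4096)  # full 4096-token prompt
compressed = max_batch_size(mem, per_pair, 512)     # 512-entry compressed cache
# Compressing the cache 8x during input processing admits an 8x larger batch.
```

The sketch only accounts for KV-cache memory; in practice weights and activations also consume GPU memory, but the cache is the component that grows with batch size and context length.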