
Awesome KV Caching (WIP)

A curated collection of papers and resources on building efficient KV cache systems for LLM inference serving.

The template is derived from Awesome-LLM-Reasoning. This list is still a work in progress.

Pre-Train Stage, Structural Modification

  1. Long-Context Language Modeling with Parallel Context Encoding ACL 2024

    Howard Yen, Tianyu Gao, Danqi Chen [Paper] [Code], 2024.2

  2. GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints EMNLP 2023

    Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai [Paper], 2023.5

  3. Reducing Transformer Key-Value Cache Size with Cross-Layer Attention Preprint

    William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, Jonathan Ragan-Kelley [Paper], 2024.5

  4. You Only Cache Once: Decoder-Decoder Architectures for Language Models Preprint

    Yutao Sun, Li Dong, Yi Zhu, Shaohan Huang, Wenhui Wang, Shuming Ma, Quanlu Zhang, Jianyong Wang, Furu Wei [Paper] [Code], 2024.5

  5. GoldFinch: High Performance RWKV/Transformer Hybrid with Linear Pre-Fill and Extreme KV-Cache Compression Preprint

    Daniel Goldstein, Fares Obeid, Eric Alcaide, Guangyu Song, Eugene Cheah [Paper] [Code], 2024.7

  6. DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model Preprint

    DeepSeek-AI Team [Paper], 2024.5

↑ Back to Top ↑

Deploy Stage, Inference System

Lossless Method

  1. Efficient Memory Management for Large Language Model Serving with PagedAttention SOSP 2023

    Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica [Paper] [Code], 2023.10

  2. ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition ACL 2024

    Lu Ye, Ze Tao, Yong Huang, Yang Li [Paper] [Code], 2024.2

  3. FastDecode: High-Throughput GPU-Efficient LLM Serving using Heterogeneous Pipelines Preprint

    Jiaao He, Jidong Zhai [Paper], 2024.3

  4. Infinite-LLM: Efficient LLM Service for Long Context with DistAttention and Distributed KVCache Preprint

    Bin Lin, Chen Zhang, Tao Peng, Hanyu Zhao, Wencong Xiao, Minmin Sun, Anmin Liu, Zhipeng Zhang, Lanbo Li, Xiafei Qiu, Shen Li, Zhigang Ji, Tao Xie, Yong Li, Wei Lin [Paper], 2024.1

  5. Mooncake: A KVCache-centric Disaggregated Architecture for LLM Serving Preprint

    Ruoyu Qin, Zheming Li, Weiran He, Mingxing Zhang, Yongwei Wu, Weimin Zheng, Xinran Xu [Paper], 2024.6

Lossy Method

  1. InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management OSDI 2024

    Wonbeom Lee, Jungi Lee, Junghwan Seo, Jaewoong Sim [Paper], 2024.6

  2. Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention ATC 2024

    Bin Gao, Zhuomin He, Puru Sharma, Qingxuan Kang, Djordje Jevdjic, Junbo Deng, Xingkun Yang, Zhou Yu, Pengfei Zuo [Paper], 2024.3

  3. InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory Preprint

    Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Maosong Sun [Paper], 2024.2

  4. Mnemosyne: Parallelization Strategies for Efficiently Serving Multi-Million Context Length LLM Inference Requests Without Approximations Preprint

    Amey Agrawal, Junda Chen, Íñigo Goiri, Ramachandran Ramjee, Chaojie Zhang, Alexey Tumanov, Esha Choukse [Paper], 2024.9

  5. Post-Training Sparse Attention with Double Sparsity Preprint

    Shuo Yang, Ying Sheng, Joseph E. Gonzalez, Ion Stoica, Lianmin Zheng [Paper], 2024.8

↑ Back to Top ↑

Post-Train Stage

Static Eviction

  1. Longformer: The Long-Document Transformer Preprint

    Iz Beltagy, Matthew E. Peters, Arman Cohan [Paper], 2020.4

  2. Efficient Streaming Language Models with Attention Sinks ICLR 2024

    Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis [Paper], 2023.9

  3. LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models NAACL 2024

    Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, Sinong Wang [Paper], 2023.12

  4. RazorAttention: Efficient KV Cache Compression Through Retrieval Heads Preprint

    Hanlin Tang, Yang Lin, Jing Lin, Qingsen Han, Shikuan Hong, Yiwu Yao, Gongyi Wang [Paper], 2024.7

Dynamic Eviction

  1. H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models NeurIPS 2023

    Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark Barrett, Zhangyang "Atlas" Wang, Beidi Chen [Paper], 2023.4

  2. Scissorhands: Exploiting the Persistence of Importance Hypothesis for LLM KV Cache Compression at Test Time NeurIPS 2023

    Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, Anshumali Shrivastava [Paper], 2023.4

  3. PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference ACL 2024

    Dongjie Yang, XiaoDong Han, Yan Gao, Yao Hu, Shilin Zhang, Hai Zhao [Paper], 2024.2

  4. Keyformer: KV Cache Reduction through Key Tokens Selection for Efficient Generative Inference MLSys 2024

    Muhammad Adnan, Akhil Arunkumar, Gaurav Jain, Prashant Nair, Ilya Soloveychik, Purushotham Kamath [Paper], 2024.3

  5. Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ICLR 2024 Oral

    Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao [Paper], 2023.10

  6. SparQ Attention: Bandwidth-Efficient LLM Inference Preprint

    Luka Ribar, Ivan Chelombiev, Luke Hudlass-Galley, Charlie Blake, Carlo Luschi, Douglas Orr [Paper], 2023.12

  7. Finch: Prompt-guided Key-Value Cache Compression TACL 2024

    Giulio Corallo, Paolo Papotti [Paper], 2024.8

  8. A2SF: Accumulative Attention Scoring with Forgetting Factor for Token Pruning in Transformer Decoder Preprint

    Hyun-rae Jo, Dongkun Shin [Paper], 2024.7

  9. ThinK: Thinner Key Cache by Query-Driven Pruning Preprint

    Yuhui Xu, Zhanming Jie, Hanze Dong, Lei Wang, Xudong Lu, Aojun Zhou, Amrita Saha, Caiming Xiong, Doyen Sahoo [Paper], 2024.7

  10. LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference Preprint

    Qichen Fu, Minsik Cho, Thomas Merth, Sachin Mehta, Mohammad Rastegari, Mahyar Najibi [Paper], 2024.7

  11. SirLLM: Streaming Infinite Retentive LLM ACL 2024

    Yao Yao, Zuchao Li, Hai Zhao [Paper], 2024.2

  12. A Simple and Effective $L_2$ Norm-Based Strategy for KV Cache Compression ACL 2024

    Alessio Devoto, Yu Zhao, Simone Scardapane, Pasquale Minervini [Paper], 2024.6

Merging

  1. Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference ICML 2024

    Piotr Nawrot, Adrian Łańcucki, Marcin Chochowski, David Tarjan, Edoardo M. Ponti [Paper], 2024.1

  2. Effectively Compress KV Heads for LLM Preprint

    Hao Yu, Zelan Yang, Shen Li, Yong Li, Jianxin Wu [Paper], 2024.6

  3. D2O: Dynamic Discriminative Operations for Efficient Generative Inference of Large Language Models Preprint

    Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji [Paper], 2024.6

  4. CaM: Cache Merging for Memory-efficient LLMs Inference ICML 2024

    Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji [Paper], 2024.1

  5. Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks Preprint

    Zheng Wang, Boxiao Jin, Zhongzhi Yu, Minjia Zhang [Paper], 2024.7

  6. MiniCache: KV Cache Compression in Depth Dimension for Large Language Models Preprint

    Akide Liu, Jing Liu, Zizheng Pan, Yefei He, Gholamreza Haffari, Bohan Zhuang [Paper], 2024.5

  7. Anchor-based Large Language Models ACL 2024

    Jianhui Pang, Fanghua Ye, Derek Fai Wong, Xin He, Wanshun Chen, Longyue Wang [Paper], 2024.2

Quantization

  1. KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization Preprint

    Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, Amir Gholami [Paper], 2024.1

  2. No Token Left Behind: Reliable KV Cache Compression via Importance-Aware Mixed Precision Quantization Preprint

    June Yong Yang, Byeongwook Kim, Jeongin Bae, Beomseok Kwon, Gunho Park, Eunho Yang, Se Jung Kwon, Dongsoo Lee [Paper], 2024.2

  3. QAQ: Quality Adaptive Quantization for LLM KV Cache Preprint

    Shichen Dong, Wen Cheng, Jiayu Qin, Wei Wang [Paper], 2024.3

  4. GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM Preprint

    Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, Tuo Zhao [Paper], 2024.3

  5. FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU ICML 2023

    Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher Re, Ion Stoica, Ce Zhang [Paper], 2023.3

  6. WKVQuant: Quantizing Weight and Key/Value Cache for Large Language Models Gains More Preprint

    Yuxuan Yue, Zhihang Yuan, Haojie Duanmu, Sifan Zhou, Jianlong Wu, Liqiang Nie [Paper], 2024.2

  7. SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models COLM 2024

    Haojie Duanmu, Zhihang Yuan, Xiuhong Li, Jiangfei Duan, Xingcheng Zhang, Dahua Lin [Paper], 2024.5

Benchmark

Work in progress.

| Field      | Benchmarks |
| ---------- | ---------- |
| Efficiency |            |
| Retrieval  |            |
| Reasoning  |            |

↑ Back to Top ↑

Other Awesome Lists

↑ Back to Top ↑

Contributing

  • When adding a new paper or updating an existing one, consider which category the work belongs to.
  • Use the same format as the existing entries to describe the work (a sample entry template follows this list).
  • Link to the paper's abstract page (use the /abs/ URL for arXiv papers).
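For reference, an entry follows the layout below. This is only a sketch of the format; the title, venue, author names, links, and date are placeholders to be replaced with the paper's actual details, and the [Code] link is optional.

  1. Paper Title Venue or Preprint

    Author One, Author Two, Author Three [Paper] [Code], YYYY.MM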

Don't worry if you get something wrong; it will be fixed for you!

Contributors
