
*(AirLLM logo)*

AirLLM optimizes inference memory usage, allowing 70B large language models to run inference on a single 4 GB GPU without quantization, distillation, or pruning. Llama 3.1 405B can now run on 8 GB of VRAM.
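
A minimal usage sketch, assuming the `airllm` package and its `AutoModel` interface (the model name and generation parameters here are illustrative; see the repository linked below for the current API):

```python
from airllm import AutoModel  # assumes airllm is installed (e.g. pip install airllm)

# Model name is illustrative; a Hugging Face Llama-style checkpoint is assumed.
model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3.1-405B")

# AirLLM exposes the underlying tokenizer on the model object.
input_tokens = model.tokenizer(
    ["What is the capital of the United States?"],
    return_tensors="pt",
    truncation=True,
    max_length=128,
)

# Layers are streamed from disk to the GPU one at a time during generation,
# which is how a very large model fits in a few GB of VRAM (trading speed for memory).
output = model.generate(
    input_tokens["input_ids"].cuda(),
    max_new_tokens=20,
    use_cache=True,
)

print(model.tokenizer.decode(output[0]))
```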

This project has moved to: https://github.com/lyogavin/airllm