AI Inference
2 repositories
AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation:
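As a rough illustration of the quantization flow, the sketch below follows AutoAWQ's documented quick-start pattern: load an FP16 model, calibrate and quantize to 4-bit, then save the result. The model path, output directory, and quantization settings are placeholders, not recommendations.

```python
# Minimal AWQ 4-bit quantization sketch (assumes `pip install autoawq transformers`).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder FP16 checkpoint
quant_path = "mistral-7b-instruct-awq"             # placeholder output directory

# 4-bit weights, group size 128, GEMM kernels -- common AWQ settings.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration and quantize the weights to 4 bits.
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized model and tokenizer for inference later.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```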
vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
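Assuming this second entry refers to vLLM, the sketch below shows its basic offline-generation API (LLM plus SamplingParams). The model name, prompts, and sampling settings are placeholders.

```python
# Minimal offline inference sketch with vLLM (assumes `pip install vllm`).
from vllm import LLM, SamplingParams

prompts = [
    "Explain activation-aware weight quantization in one sentence.",
    "What makes an LLM serving engine high-throughput?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# Placeholder model name; an AWQ-quantized checkpoint can also be served
# by additionally passing quantization="awq".
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")

# Batched generation across all prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```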