This library enables pre-training and fine-tuning of large language models (LLMs) at scale. Our repository is a modification of the original Megatron-LM codebase by NVIDIA.
Key features added on top of Megatron-LM include:
- Support for Llama, Llama 2, Code Llama, and Falcon
- Training of large models (70B Llama 2, 65B Llama 1, 34B Code Llama, and 40B Falcon) on commodity hardware across multiple nodes
- 3-way parallelism: tensor parallel, pipeline parallel, and data parallel training (inherited from Megatron)
- Full pretraining, finetuning, and instruct-tuning support
- Support for special tokens and custom tokenizers
- Grouped-query attention (GQA) and multi-query attention (MQA)
- Rotary position embeddings (RoPE), RMS layer norm, and LIMA dropout
- RoPE scaling for longer attention contexts
- FlashAttention 2
- BF16 / FP16 training
- WandB integration
- Metrics support: easily add custom metrics to evaluate on the validation set during training
- Conversion to and from the Hugging Face Hub
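To illustrate two of the features above (RoPE and RoPE scaling), here is a minimal, dependency-free sketch of rotary position embeddings with linear position interpolation. This is not the library's implementation; the function names are illustrative, and the real code operates on GPU tensors rather than Python lists.

```python
import math

def rope_angles(head_dim, max_pos, base=10000.0, scale=1.0):
    """Rotation angles for rotary position embeddings (RoPE).

    A `scale` > 1 applies linear position interpolation: positions are
    divided by `scale`, so a model trained with context length L can be
    extended to roughly scale * L tokens without retraining the angles.
    """
    # One inverse frequency per (even, odd) feature pair.
    inv_freq = [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]
    # Angle at (position t, pair i) is (t / scale) * inv_freq[i].
    return [[(t / scale) * f for f in inv_freq] for t in range(max_pos)]

def rotate_pair(x, y, angle):
    """Rotate one (even, odd) feature pair of a query/key vector."""
    c, s = math.cos(angle), math.sin(angle)
    return x * c - y * s, x * s + y * c
```

With `scale=2.0`, position 4 receives the same angles that position 2 receives unscaled, which is exactly why interpolation preserves the model's learned attention patterns over a longer window.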
Take a look at the online documentation.
Alternatively, build the docs from source:

```sh
cd docs/
pip install -r requirements.txt
make html
```
70B Llama 2, 40B Falcon, 13B Code Llama, ... (Let us know about yours!)
If you use this software, please cite it:
```bibtex
@software{epfmgtrn,
  author = {Alejandro Hernández Cano and Matteo Pagliardini and Andreas Köpf and
            Kyle Matoba and Amirkeivan Mohtashami and Olivia Simin Fan and
            Axel Marmet and Deniz Bayazit and Igor Krawczuk and Zeming Chen and
            Francesco Salvi and Antoine Bosselut and Martin Jaggi},
  title  = {epfLLM Megatron-LM},
  year   = 2023,
  url    = {https://github.com/epfLLM/Megatron-LLM}
}
```