Releases · huggingface/peft
v0.2.0
Whisper large tuning using PEFT LoRA+INT-8 on T4 GPU in Colab notebooks
We tested PEFT on @OpenAI's Whisper Large model and got:
i) 5x larger batch sizes
ii) Less than 8GB GPU VRAM
iii) Best part? Almost no degradation in WER 🤯
Without PEFT:
- OOM on a T4 GPU ❌
- 6GB checkpoint ❌
- 13.64 WER ✅
With PEFT:
- Train on a T4 GPU ✅
- 60MB checkpoint ✅
- 14.01 WER ✅
- adding whisper large peft+int8 training example by @pacman100 in #95
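The setup behind these numbers looks roughly like the following. This is a minimal sketch, not taken from the linked notebook: the model id, target module names, and LoRA hyperparameters are illustrative and should be adapted to your own run.

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the base model in 8-bit so it fits on a 16GB T4.
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",
    load_in_8bit=True,
    device_map="auto",
)

# LoRA trains only small adapter matrices on the attention projections,
# which is why the saved adapter is tens of MB instead of the ~6GB full checkpoint.
lora_config = LoraConfig(
    r=32,                                 # illustrative rank
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of parameters train
```

After training, `model.save_pretrained(...)` stores only the adapter weights, which is where the ~60MB checkpoint comes from.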
`prepare_for_int8_training` utility
This utility preprocesses the base model so it is ready for INT8 training.
- [`core`] add `prepare_model_for_training` by @younesbelkada in #85
- [`core`] Some changes with `prepare_model_for_training` & few fixes by @younesbelkada in #105
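A minimal usage sketch of the utility. Note that the exported name differs across releases (the PRs above add it as `prepare_model_for_training`, while later releases expose `prepare_model_for_int8_training`), so treat the import below as an assumption; the model id and LoRA settings are illustrative.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training  # name may differ in this release

# Load the base model with 8-bit weights (requires bitsandbytes).
base_model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-large",
    load_in_8bit=True,
    device_map="auto",
)

# Prepare the quantized base model for training (e.g. freeze the int8 weights
# and keep numerically sensitive parts in higher precision) before adding adapters.
base_model = prepare_model_for_int8_training(base_model)

peft_model = get_peft_model(
    base_model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q", "v"], lora_dropout=0.05),
)
peft_model.print_trainable_parameters()
```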
`disable_adapter()` context manager
Temporarily disables the adapter layers so you can get the outputs of the frozen base model.
An exciting application: in RLHF, a single model copy can produce both the policy model's and the reference model's generations (see the sketch below).
- add disable adapter context manager by @pacman100 in #106
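A minimal sketch of the context manager, assuming a small GPT-2 LoRA model purely for illustration; the point is that the same object yields adapter-augmented (policy) and frozen base-model (reference) generations without keeping a second copy in memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    LoraConfig(task_type="CAUSAL_LM", r=8, target_modules=["c_attn"]),  # illustrative config
)

inputs = tokenizer("PEFT is", return_tensors="pt")

# Policy generation: LoRA adapter layers are active.
policy_ids = model.generate(**inputs, max_new_tokens=20)

# Reference generation: adapter layers are bypassed inside the context manager,
# so this is the frozen base model's output from the very same model copy.
with model.disable_adapter():
    reference_ids = model.generate(**inputs, max_new_tokens=20)
```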
What's Changed
- release v0.2.0.dev0 by @pacman100 in #69
- Update README.md by @sayakpaul in #72
- Fixed typo in Readme by @Muhtasham in #73
- Update README.md by @pacman100 in #77
- convert prompt tuning vocab to fp32 by @mayank31398 in #68
- [`core`] add `prepare_model_for_training` by @younesbelkada in #85
- [`bnb`] add flan-t5 example by @younesbelkada in #86
- making `prepare_model_for_training` flexible by @pacman100 in #90
- adding whisper large peft+int8 training example by @pacman100 in #95
- making `bnb` optional by @pacman100 in #97
- add support for regex target modules in lora by @pacman100 in #104
- [`core`] Some changes with `prepare_model_for_training` & few fixes by @younesbelkada in #105
- Fix typo by @mrm8488 in #107
- add disable adapter context manager by @pacman100 in #106
- add `EleutherAI/gpt-neox-20b` to support matrix by @pacman100 in #109
- fix merging lora weights for inference by @pacman100 in #117
- [`core`] Fix autocast issue by @younesbelkada in #121
- fixes `prepare_for_int8_training` by @pacman100 in #127
- issue#126: torch.load device issue. by @gabinguo in #134
- fix: count params when zero init'd by @zanussbaum in #140
- chore: update `pyproject.toml` by @SauravMaheshkar in #125
- support option for encoder only prompts by @mayank31398 in #150
- minor fixes to the examples by @pacman100 in #149
- Add local saving for whisper largev2 example notebook by @alvanli in #163
- fix count by @dumpmemory in #162
- Add Prefix Tuning citation by @zphang in #159
- lora fixes and adding 8bitMergedLinear lora by @pacman100 in #157
- Update README.md by @pacman100 in #164
- minor changes by @pacman100 in #165
New Contributors
- @Muhtasham made their first contribution in #73
- @mayank31398 made their first contribution in #68
- @mrm8488 made their first contribution in #107
- @gabinguo made their first contribution in #134
- @zanussbaum made their first contribution in #140
- @SauravMaheshkar made their first contribution in #125
- @alvanli made their first contribution in #163
- @dumpmemory made their first contribution in #162
- @zphang made their first contribution in #159
Significant community contributions
The following contributors have made significant changes to the library over the last release:
Full Changelog: v0.1.0...v0.2.0
v0.1.0 Initial release
Initial release of 🤗 PEFT. Check out the main README to learn more about it!