The official v1 release of ohmeow-blurr
This is a massive refactoring of the previous iterations of blurr, including namespace changes that will make it easier for us to add support for vision, audio, and other transformers in the future. If you've used any of the previous versions of blurr, or the development build we covered in part 2 of the W&B study group, please make sure you read the docs and note the namespace changes.
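To give a rough sense of what the namespace changes look like, here's a minimal sketch. The module paths below are assumptions on my part (modality-specific packages like `blurr.text.*` replacing the old top-level ones), so check the docs for the exact imports your installed version expects.

```python
# Minimal sketch of the kind of namespace change to expect.
# The paths here are assumptions -- verify against the blurr docs.

# Pre-v1 style: everything lived under top-level data/modeling packages, e.g.:
#   from blurr.data.all import *
#   from blurr.modeling.all import *

# v1 style: imports are grouped by modality so vision/audio can slot in later, e.g.:
from blurr.text.data.all import *      # text data blocks and transforms
from blurr.text.modeling.all import *  # text model wrappers and learner helpers
```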
To get up to speed with how to use this library, check out the W&B x fastai x Hugging Face study group. The docs are your friend and full of examples as well. I'll be working on updating the other examples floating around the internet as I have time.
If you have any questions, please use the hf-fastai channel in the fastai Discord or open a GitHub issue. As always, any and all PRs are welcome.