Here, under [tips and tricks](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#tips-and-tricks):

> Both finetuning and eval are 30% faster with --fp16. For that you need to install apex.
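For reference, the `--fp16` flag in those scripts leans on NVIDIA apex for mixed precision. A minimal sketch of that apex setup (the model checkpoint and `opt_level` here are illustrative assumptions, not the scripts' actual code):

```python
# Minimal apex mixed-precision setup (a sketch, not the scripts' exact code).
# Assumes apex is installed and a CUDA GPU is available.
import torch
from apex import amp
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large").cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Patch the model and optimizer for mixed precision ("O1" = conservative AMP).
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

# Inside the training loop, the loss is scaled before backward:
#   with amp.scale_loss(loss, optimizer) as scaled_loss:
#       scaled_loss.backward()
```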
But in the documentation, https://huggingface.co/transformers/master/model_doc/pegasus.html#examples:

> FP16 is not supported (help/ideas on this appreciated!).
Also in that same documentation page (https://huggingface.co/transformers/master/model_doc/pegasus.html#examples), the link

> Script to fine-tune pegasus on the XSUM dataset.

leads to a 404: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh
Hi @kingpalethe,
In general, training and eval are faster with fp16 for BART and Marian models; Pegasus and T5 currently don't work well with fp16.
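For example, half-precision generation with BART is just a matter of casting the model; a minimal sketch (model name and input text are illustrative, and a CUDA GPU is assumed):

```python
# Sketch: fp16 inference with BART, which works in half precision
# (Pegasus and T5 reportedly don't behave well when cast like this).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).half().to("cuda")

inputs = tokenizer(
    "The quick brown fox jumps over the lazy dog.",
    return_tensors="pt",
).to("cuda")
ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```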
Yes, the fine-tuning script has been moved to the examples/research_projects/seq2seq-distillation directory: https://github.com/huggingface/transformers/tree/master/examples/research_projects/seq2seq-distillation
Thanks for reporting!

Also, please note that this script is no longer maintained and is provided as-is; we only maintain the finetune_trainer.py script now.
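For completeness, a rough Trainer-based sketch in the spirit of finetune_trainer.py: the Trainer exposes mixed precision directly via `fp16=True`. The checkpoint, toy data, and hyperparameters below are placeholders, not the script's actual defaults:

```python
# Sketch of Trainer-based seq2seq fine-tuning with fp16 (placeholders throughout).
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Toy one-example dataset so the sketch is self-contained.
raw = Dataset.from_dict({"document": ["A long article to summarize."],
                         "summary": ["A summary."]})

def preprocess(batch):
    inputs = tokenizer(batch["document"], truncation=True)
    inputs["labels"] = tokenizer(text_target=batch["summary"],
                                 truncation=True)["input_ids"]
    return inputs

train_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="tmp-seq2seq",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    fp16=True,  # mixed precision; requires a CUDA GPU, drop this on CPU
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```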
This issue has been automatically marked as stale and closed because it has not had recent activity. Thank you for your contributions.

If you think this still needs to be addressed, please comment on this thread.