- We use T5-base as our base model.
- It is fine-tuned on the ParaDetox dataset from Hugging Face.
- The fine-tuned model is available to download from Hugging Face: link1 or link2
- Training in a Kaggle environment took about 5 hours and produced good results.
- This is instruction-based fine-tuning; we do not apply PEFT here.
- During data preparation, adding an instruction as a prefix to each example is good practice.
- example:
  - input: "Toxic version: i didnt vote for the liar"
  - output: "Non-toxic version: I didn't vote for him"
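The prefixing step above can be sketched as a small mapping function. This is a minimal illustration, not the repository's actual preprocessing code; the field names `toxic` and `neutral` are assumptions about the pair format, not the real ParaDetox schema.

```python
def add_instruction_prefixes(pair):
    """Wrap a (toxic, neutral) sentence pair with the instruction
    prefixes used at fine-tuning time (assumed format)."""
    return {
        "input": f"Toxic version: {pair['toxic']}",
        "output": f"Non-toxic version: {pair['neutral']}",
    }

# Matches the example pair shown above.
example = {"toxic": "i didnt vote for the liar",
           "neutral": "I didn't vote for him"}
prepared = add_instruction_prefixes(example)
```

With a `datasets`-style dataset, the same function could be applied to every row via `dataset.map(add_instruction_prefixes)`.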
Ribin-Baby/detoxify_text_t5
finetuning t5-base model for detoxifying texts.
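Once downloaded, the fine-tuned checkpoint can be used for inference with `transformers`, as in the sketch below. The model id `"Ribin-Baby/detoxify_text_t5"` is an assumption based on the repository name; substitute the actual checkpoint from the links above. The key point is that the same `"Toxic version: "` prefix used during training must be prepended at inference time.

```python
def build_prompt(text: str) -> str:
    # Same instruction prefix used during fine-tuning.
    return f"Toxic version: {text}"

def detoxify(text: str, model_name: str = "Ribin-Baby/detoxify_text_t5") -> str:
    """Generate a non-toxic paraphrase of `text`.
    The model id is an assumed placeholder, not a confirmed Hub path."""
    # Imported here so that the prompt helper stays dependency-free.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    inputs = tokenizer(build_prompt(text), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(detoxify("i didnt vote for the liar"))
```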