Finetuning facebook/wav2vec2-xls-r-2b #1458
Tirthankar-iiitb started this conversation in General (1 comment, 6 replies)
-
You may look at https://github.com/marcoyang1998/icefall/tree/finetune_hubert/egs/librispeech/ASR/finetune_hubert_transducer; it is a recipe for fine-tuning a HuBERT model. Also, if you want to deploy a wav2vec2 model with Sherpa, you may find k2-fsa/sherpa#198 useful. Doing the fine-tuning in icefall is not necessary for deployment with Sherpa, as long as you have the model in the right format (TorchScript, ONNX, etc.).
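For reference, here is a minimal sketch (mine, not from the thread) of exporting a fine-tuned Hugging Face wav2vec2 CTC model to TorchScript so that a runtime such as Sherpa can load the traced file; the checkpoint directory and output filename are placeholders:

```python
# Minimal sketch: trace a fine-tuned wav2vec2 CTC model to TorchScript.
# "my-finetuned-xlsr" and the output filename are placeholders.
import torch
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "my-finetuned-xlsr",   # hypothetical fine-tuned checkpoint directory
    torchscript=True,      # make forward() return tuples so it can be traced
)
model.eval()

# Dummy 1-second batch of 16 kHz audio, used only to drive the trace.
dummy_input = torch.randn(1, 16000)

with torch.no_grad():
    traced = torch.jit.trace(model, dummy_input)

traced.save("wav2vec2_ctc.pt")
```

Whether Sherpa accepts this particular architecture still depends on the recipe/runtime you target (see the linked sherpa issue); the point is only that the exported format, not the training framework, is what matters.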
-
The requirement is to fine-tune the XLSR-2B pre-trained model for a new language/accent (Bhojpuri) using adapters. I have ~6 hours of audio and transcriptions for the new language/accent, and I want to use k2 for this. Which recipe can I adapt as a starting point? Any pointer would be very helpful.
I can do similar fine-tuning using Hugging Face (HF), but I need to use Sherpa for inference, which I cannot do with an HF model.
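For illustration only, a rough sketch of one adapter-style option (LoRA via Hugging Face `peft`) on top of the XLS-R encoder; the vocabulary size and target module names are assumptions and should be checked against the tokenizer and checkpoint actually used:

```python
# Rough sketch: attach LoRA adapters to XLS-R for low-resource CTC fine-tuning.
# vocab_size=64 and the target module names are assumptions, not thread content.
from transformers import Wav2Vec2ForCTC
from peft import LoraConfig, get_peft_model

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-2b",
    vocab_size=64,                  # hypothetical Bhojpuri tokenizer size
    ignore_mismatched_sizes=True,   # new CTC head for the new vocabulary
)
model.freeze_feature_encoder()      # keep the convolutional front end frozen

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # encoder attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...then train with a standard CTC loop (Trainer/Accelerate) on the ~6 h set.
```

After training, the adapted weights would still need to be merged and exported into whatever format the deployment path expects (e.g. TorchScript/ONNX, as in the reply above).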