fast-beam-search support for sherpa-onnx #347
We are going to support HLG decoding in sherpa-onnx. If you want to use a lexicon and/or an n-gram LM during decoding, you may find HLG decoding interesting. Please also have a look at k2-fsa/icefall#1275
Sure, thanks. Yes, we are interested in HLG. Hoping to see the HLG code in k2-fsa/icefall#1275 merged soon and available for use.
Hi @csukuangfj,
I see in k2-fsa/icefall#1275 that HLG support has been added to icefall. When can we expect this support to be ported to sherpa-onnx, or has it already been ported? Thanks
Please see #349. We will finish it within two weeks.
Yes, you are right. It is only for CTC.
Zipformer is a kind of neural network, while CTC is a kind of loss function. They are two different things.
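To make the distinction above concrete: CTC is just a loss computed by a dynamic program over alignments, independent of whichever network (Zipformer or otherwise) produces the per-frame label probabilities. A minimal pure-Python sketch of the CTC forward pass (illustrative only; this is not sherpa-onnx or icefall code, and the function name is made up):

```python
import math

def ctc_forward_prob(probs, target, blank=0):
    """Total probability of all frame-level alignments that collapse
    to `target` under the standard CTC rules.

    probs: probs[t][v] = P(label v at frame t), any model's output
    target: list of non-blank label ids
    """
    # Extended label sequence: blanks interleaved around each label.
    ext = [blank]
    for c in target:
        ext += [c, blank]
    S, T = len(ext), len(probs)

    # alpha[s] = probability of all prefixes ending at ext[s] at frame t.
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    if S > 1:
        alpha[1] = probs[0][ext[1]]
    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]
            if s >= 1:
                a += alpha[s - 1]
            # Skip over a blank is allowed between distinct labels.
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new
    # Valid endings: final blank or final target label.
    return alpha[S - 1] + (alpha[S - 2] if S > 1 else 0.0)

# Tiny example: 2 frames, vocab {0: blank, 1: 'a'}, uniform probs.
# Paths collapsing to "a": "-a", "a-", "aa", each with prob 0.25.
probs = [[0.5, 0.5], [0.5, 0.5]]
p = ctc_forward_prob(probs, [1])   # 0.75
loss = -math.log(p)                # the CTC loss for this utterance
```

The same loss drives training regardless of the encoder architecture, which is why "Zipformer" and "CTC" answer different questions.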
The C++ part is usable now.
Thank you. |
I found that fast-beam-search decoding is currently not supported in sherpa-onnx. Is this planned for the future? If yes, when can it be expected (timeline)?
Specifically, do you plan to support fast-beam-search-with-LG? Currently, fast-beam-search-with-lg is not supported by zipformer/streaming-decode.py.
Thanks
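For reference, the LG-based decoding mentioned above is exposed (to the best of my understanding) by icefall's offline zipformer decode script rather than the streaming one; a hypothetical invocation might look like the following, where the paths and epoch/avg/scale values are placeholders, not taken from this thread:

```shell
# Hypothetical icefall invocation (offline decoding); exp-dir, epoch,
# avg, and lm-scale values are placeholders -- adapt to your setup.
./zipformer/decode.py \
  --epoch 30 \
  --avg 9 \
  --exp-dir ./zipformer/exp \
  --decoding-method fast_beam_search_nbest_LG \
  --ngram-lm-scale 0.01
```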