CIF Related #1738
Conversation
@@ -0,0 +1,100 @@
# Copyright (c) 2023 ASLP@NWPU (authors: He Wang, Fan Yu)
The embedding.py here shares almost the same code as transformer/embedding.py; I think we can reuse it.
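A minimal sketch of the suggested reuse, assuming the encoding in this PR matches the PositionalEncoding class in the current wenet/transformer/embedding.py (class name and signature are taken from the main tree, so verify them against this branch):

```python
# Hypothetical reuse sketch: import the existing positional encoding instead
# of duplicating it in the new embedding.py.  Names assume the current WeNet
# layout and should be checked against this PR's branch.
import torch
from wenet.transformer.embedding import PositionalEncoding

pos_enc = PositionalEncoding(d_model=256, dropout_rate=0.1)
x = torch.randn(4, 100, 256)   # (batch, time, feature)
x, pos_emb = pos_enc(x)        # returns the encoded input and the position embedding
```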
@@ -0,0 +1,291 @@
# Copyright (c) 2023 ASLP@NWPU (authors: He Wang, Fan Yu)
Same as embedding.py: attention.py shares almost the same code as transformer/attention.py, and transformer/attention.py takes streaming into account.
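The streaming point matters because transformer/attention.py threads a key/value cache through forward(). A sketch of what calling it looks like (the signature and the two-value return are taken from the current WeNet tree and are assumptions to verify):

```python
# Hypothetical reuse sketch: MultiHeadedAttention from
# wenet/transformer/attention.py already supports chunk-by-chunk decoding
# via its cache argument and return value.
import torch
from wenet.transformer.attention import MultiHeadedAttention

attn = MultiHeadedAttention(n_head=4, n_feat=256, dropout_rate=0.1)
q = k = v = torch.randn(1, 10, 256)
mask = torch.ones(1, 10, 10, dtype=torch.bool)
out, new_cache = attn(q, k, v, mask)  # new_cache is fed back on the next chunk
```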
import numpy as np

def make_pad_mask(lengths: torch.Tensor, length_dim: int = -1,
This function is already defined in utils/mask.py.
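For comparison, the existing helper behaves roughly like this (a sketch reconstructed from memory, not copied from utils/mask.py):

```python
import torch

def make_pad_mask(lengths: torch.Tensor, max_len: int = 0) -> torch.Tensor:
    """Sketch of utils/mask.py's make_pad_mask: True marks padded positions,
    e.g. lengths=[2, 3] -> [[False, False, True], [False, False, False]]."""
    batch_size = lengths.size(0)
    max_len = max_len if max_len > 0 else int(lengths.max().item())
    seq_range = torch.arange(0, max_len, dtype=torch.int64,
                             device=lengths.device)
    seq_range_expand = seq_range.unsqueeze(0).expand(batch_size, max_len)
    return seq_range_expand >= lengths.unsqueeze(-1)
```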
return mask

def sequence_mask(lengths, maxlen: Optional[int] = None,
This function is almost the same as subsequent_mask in utils/mask.py.
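For reference, the helper the comment points to looks roughly like this (again a sketch, not the repo's exact code):

```python
import torch

def subsequent_mask(size: int,
                    device: torch.device = torch.device("cpu")) -> torch.Tensor:
    """Sketch of utils/mask.py's subsequent_mask: a lower-triangular boolean
    mask for causal self-attention, e.g. size=3 ->
    [[T, F, F], [T, T, F], [T, T, T]]."""
    arange = torch.arange(size, device=device)
    mask = arange.expand(size, size)
    return mask <= arange.unsqueeze(-1)
```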
This PR completes the implementation of the CIF-related code, including two types of Predictor and two types of CIF-Decoder. Four baseline experiments were also conducted on the AISHELL dataset.
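For readers unfamiliar with CIF (Continuous Integrate-and-Fire), the heart of the predictor is a weight-accumulation loop over encoder frames that fires one token embedding each time the accumulated weight crosses a threshold. A toy, illustrative sketch (not this PR's implementation; the function name and the 1.0 threshold are assumptions):

```python
import torch

def cif_integrate(hidden: torch.Tensor, alpha: torch.Tensor,
                  threshold: float = 1.0) -> torch.Tensor:
    """Toy single-utterance CIF: hidden is (T, D) encoder output, alpha is
    (T,) per-frame weights in [0, 1].  Frames are integrated until the
    accumulated weight reaches `threshold`; then one embedding is fired and
    the leftover weight starts the next token."""
    fired = []
    accum = 0.0
    frame = torch.zeros(hidden.size(1))
    for h, a in zip(hidden, alpha):
        a = float(a)
        if accum + a < threshold:
            accum += a             # keep integrating this token
            frame = frame + a * h
        else:
            spend = threshold - accum      # spend just enough weight to fire
            fired.append(frame + spend * h)
            accum = a - spend              # leftover starts the next token
            frame = accum * h
    return torch.stack(fired) if fired else torch.zeros(0, hidden.size(1))

# With sum(alpha) close to 3.0, cif_integrate returns about 3 token embeddings.
```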