
Using pretrained word embeddings #49

Open
SG87 opened this issue Oct 26, 2018 · 1 comment

SG87 commented Oct 26, 2018

Is it possible to start model training (main.py) from existing pretrained word embeddings such as fastText?

raulpuric (Contributor) commented:

We have an update planned to address more advanced tokenization/data processing, but currently there's no easy way. Loading the embedding weights into the model is straightforward; the harder part is changing the preprocessing to handle tokenization that isn't ASCII-256 character-level.
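
For reference, the "load the embedding weights" half can be sketched in a few lines of PyTorch. This is a minimal sketch, assuming a word-level `vocab` dict (word → index) and a standard fastText `.vec` text file; `vocab`, the file name, and `model.encoder` are hypothetical placeholders, and as noted above the ASCII-256 character-level preprocessing would still have to be replaced by a word-level pipeline first:

```python
import torch
import torch.nn as nn

def load_fasttext_vectors(path, vocab):
    """Read a fastText .vec file ("word v1 v2 ...") and return a weight
    matrix whose row i holds the vector for the word with index i in vocab."""
    with open(path, encoding="utf-8") as f:
        _n_words, dim = (int(x) for x in f.readline().split())  # header line
        # Small random init so words missing from the .vec file still train.
        weights = torch.randn(len(vocab), dim) * 0.1
        for line in f:
            parts = line.rstrip().split(" ")
            idx = vocab.get(parts[0])
            if idx is not None:
                weights[idx] = torch.tensor([float(x) for x in parts[1:]])
    return weights

# Hypothetical usage -- `vocab` and `model.encoder` stand in for whatever
# word-level pipeline replaces the character-level preprocessing:
# weights = load_fasttext_vectors("crawl-300d-2M.vec", vocab)
# model.encoder = nn.Embedding.from_pretrained(weights, freeze=False)
```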
