OpenNMT-tf is a general-purpose sequence learning toolkit using TensorFlow 2.0. While neural machine translation is the main target task, it has been designed to more generally support:
- sequence to sequence mapping
- sequence tagging
- sequence classification
- language modeling
The project is production-oriented and comes with backward compatibility guarantees.
Models are described with code to allow training custom architectures and overriding default behavior. For example, the following instance defines a sequence to sequence model with 2 concatenated input features, a self-attentional encoder, and an attentional RNN decoder sharing its input and output embeddings:
```python
opennmt.models.SequenceToSequence(
    source_inputter=opennmt.inputters.ParallelInputter(
        [opennmt.inputters.WordEmbedder(embedding_size=256),
         opennmt.inputters.WordEmbedder(embedding_size=256)],
        reducer=opennmt.layers.ConcatReducer(axis=-1)),
    target_inputter=opennmt.inputters.WordEmbedder(embedding_size=512),
    encoder=opennmt.encoders.SelfAttentionEncoder(num_layers=6),
    decoder=opennmt.decoders.AttentionalRNNDecoder(
        num_layers=4,
        num_units=512,
        attention_mechanism_class=tfa.seq2seq.LuongAttention),
    share_embeddings=opennmt.models.EmbeddingsSharingLevel.TARGET)
```
The `opennmt` package exposes other building blocks that can be used to design (see the sketch after the list):
- multiple input features
- mixed embedding representation
- multi-source context
- cascaded or multi-column encoder
- hybrid sequence to sequence models
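For instance, here is a minimal sketch of a mixed word/character source representation feeding a cascaded encoder. The building blocks (`MixedInputter`, `CharConvEmbedder`, `SequentialEncoder`, `RNNEncoder`) come from the `opennmt` package, but the sizes and constructor arguments below are illustrative and should be checked against the API reference:

```python
import opennmt

# Mixed embedding representation: word embeddings concatenated with
# character-level convolutional embeddings (sizes chosen for illustration).
source_inputter = opennmt.inputters.MixedInputter(
    [opennmt.inputters.WordEmbedder(embedding_size=384),
     opennmt.inputters.CharConvEmbedder(embedding_size=32, num_outputs=128)],
    reducer=opennmt.layers.ConcatReducer())

# Cascaded encoder: a self-attentional encoder followed by an RNN encoder.
encoder = opennmt.encoders.SequentialEncoder(
    [opennmt.encoders.SelfAttentionEncoder(num_layers=3),
     opennmt.encoders.RNNEncoder(num_layers=2, num_units=512)])
```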
Standard models such as the Transformer are defined in a model catalog and can be used without additional configuration.
Find more information about model configuration in the documentation.
OpenNMT-tf is fully integrated into the TensorFlow 2.0 ecosystem:
- Reusable layers extending `tf.keras.layers.Layer`
- Multi-GPU training with `tf.distribute`
- Mixed precision support via a graph optimization pass
- Visualization with TensorBoard
- `tf.function` graph tracing that can be exported to a SavedModel and served with TensorFlow Serving or Python
OpenNMT-tf does not require compiling the data before training. Instead, it can directly read text files and preprocess the data as needed during training. This allows on-the-fly tokenization and data augmentation by injecting random noise.
OpenNMT-tf supports model fine-tuning workflows:
- Model weights can be transferred to new word vocabularies, e.g. to inject domain terminology before fine-tuning on in-domain data
- Contrastive learning to reduce word omission errors
Sequence to sequence models can be trained with guided alignment, and alignment information is returned as part of the translation API.
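As a sketch, guided alignment is enabled through the data and training parameters; the key names (`train_alignments`, `guided_alignment_type`, `guided_alignment_weight`) and paths below are given for illustration and should be checked against the documentation:

```python
# Illustrative configuration fragment for guided alignment training
# (key names and paths are assumptions; see the documentation for details).
config = {
    "data": {
        # Pharaoh-format alignments for the training corpus.
        "train_alignments": "/data/wmt-ende/train.align",
    },
    "params": {
        "guided_alignment_type": "ce",  # cross-entropy alignment loss
        "guided_alignment_weight": 1,
    },
}
```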
OpenNMT-tf also implements most of the techniques commonly used to train and evaluate sequence models, such as:
- automatic evaluation during training
- multiple decoding strategies: greedy search, beam search, random sampling
- N-best rescoring
- gradient accumulation
- scheduled sampling
- checkpoint averaging
- ... and more!
See the documentation to learn how to use these features.
OpenNMT-tf requires:
- Python >= 3.5
We recommend installing it with `pip`:
```bash
pip install --upgrade pip
pip install OpenNMT-tf
```
See the documentation for more information.
OpenNMT-tf comes with several command line utilities to prepare data, train, and evaluate models.
For all tasks involving model execution, OpenNMT-tf uses a single entry point: `onmt-main`. A typical OpenNMT-tf run consists of 3 elements:
- the model type
- the parameters described in a YAML file
- the run type such as `train`, `eval`, `infer`, `export`, `score`, `average_checkpoints`, or `update_vocab`

These are passed to the main script:
```bash
onmt-main --model_type <model> --config <config_file.yml> --auto_config <run_type> <run_options>
```
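For instance, a typical training run with automatic evaluation could look like the following (the model type and configuration file name are illustrative):

```bash
onmt-main --model_type Transformer --config data.yml --auto_config train --with_eval
```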
For more information and examples on how to use OpenNMT-tf, please visit our documentation.
OpenNMT-tf also exposes well-defined and stable APIs, from high-level training utilities to low-level model layers and dataset transformations.
For example, the `Runner` class can be used to train and evaluate models with a few lines of code:
```python
import opennmt

config = {
    "model_dir": "/data/wmt-ende/checkpoints/",
    "data": {
        "source_vocabulary": "/data/wmt-ende/joint-vocab.txt",
        "target_vocabulary": "/data/wmt-ende/joint-vocab.txt",
        "train_features_file": "/data/wmt-ende/train.en",
        "train_labels_file": "/data/wmt-ende/train.de",
        "eval_features_file": "/data/wmt-ende/valid.en",
        "eval_labels_file": "/data/wmt-ende/valid.de",
    }
}

model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True)
runner.train(num_devices=2, with_eval=True)
```
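Once training is done, the same runner can, for example, run inference on new data; a small sketch with an illustrative input file:

```python
# Translate a raw text file with the latest checkpoint (paths are illustrative).
runner.infer("/data/wmt-ende/test.en", predictions_file="/data/wmt-ende/test.de.out")
```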
Here is another example using OpenNMT-tf to run efficient beam search with a self-attentional decoder:
```python
import tensorflow as tf
import opennmt

# "memory" and "memory_sequence_length" are the encoder outputs and lengths;
# "target_embedding" is the target embedding variable.
decoder = opennmt.decoders.SelfAttentionDecoder(num_layers=6)
decoder.initialize(vocab_size=32000)
initial_state = decoder.initial_state(
    memory=memory,
    memory_sequence_length=memory_sequence_length)
batch_size = tf.shape(memory)[0]
start_ids = tf.fill([batch_size], opennmt.START_OF_SENTENCE_ID)
decoding_result = decoder.dynamic_decode(
    target_embedding,
    start_ids=start_ids,
    initial_state=initial_state,
    decoding_strategy=opennmt.utils.BeamSearch(4))
```
More examples using OpenNMT-tf as a library can be found online:
- The directory examples/library contains additional examples that use OpenNMT-tf as a library
- nmt-wizard-docker uses the high-level `opennmt.Runner` API to wrap OpenNMT-tf with a custom interface for training, translating, and serving
For a complete overview of the APIs, see the package documentation.