This project develops a working Speech-To-Text module for the Kabyle language using Mozilla DeepSpeech, which can be used in any audio processing pipeline. Mozilla DeepSpeech is an open-source automatic speech recognition (ASR) toolkit based on Baidu's Deep Speech research paper. The DeepSpeech project uses Google's TensorFlow to make the implementation easier.
This README is written for DeepSpeech v0.8.2. Refer to Mozilla DeepSpeech for the latest updates.
- A running setup of NVIDIA Docker
- A host directory with at least 100 GB for training and producing intermediate data
- The host directory must be writable by the `trainer` user (uid 999, the user defined in the Dockerfile)
- The Kabyle Common Voice dataset `kab.tar.gz` from https://voice.mozilla.org/kab/datasets, placed inside the `sources/` subdirectory of your host directory
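For example, the host directory could be prepared as follows (a sketch; `/data/dskabyle` is a placeholder path, and the dataset is assumed to have been downloaded already):

```
# Create the host directory and its sources/ subdirectory
mkdir -p /data/dskabyle/sources

# Place the downloaded Kabyle dataset inside sources/
mv ~/Downloads/kab.tar.gz /data/dskabyle/sources/

# Make the directory writable by the container's trainer user (uid 999)
sudo chown -R 999 /data/dskabyle
```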
```
docker build -t dskabyle .
```
Parameters for the model:

- `batch_size`: (default 96) number of elements in a batch for the training, dev and test datasets
- `n_hidden`: (default 2048) layer width to use when initializing layers
- `epochs`: (default 75) number of epochs to run training for
- `learning_rate`: (default 0.001) learning rate of the Adam optimizer
- `dropout`: (default 0.05) dropout rate applied to feedforward layers
- `lm_alpha`: (default 0.66) alpha hyperparameter of the CTC decoder: the language model weight
- `lm_beta`: (default 1.45) beta hyperparameter of the CTC decoder: the word insertion weight
- `beam_width`: (default 500) beam width used in the CTC decoder when building candidate transcriptions
- `early_stop`: (default 1) enable early stopping during training to avoid overfitting
- `duplicate_sentence_count`: (default 1) maximum number of times a sentence can appear in the Common Voice corpus
These training parameters can always be modified at runtime using Docker environment variables.
Once your host directory contains the needed dataset, mount it when running the Docker image. It will hold intermediate files, checkpoints, and the final generated model files.
```
docker run --tty --mount type=bind,src=PATH-TO-HOST-DIRECTORY,dst=/mnt dskabyle
```
To override parameters with environment variables, run the image as follows, here with a different number of epochs:
```
docker run --tty \
  --mount type=bind,src=PATH-TO-HOST-DIRECTORY,dst=/mnt \
  -e EPOCHS=50 \
  dskabyle
```
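Several parameters can presumably be combined in a single run. The sketch below assumes each parameter maps to an environment variable of the same name in uppercase; only `EPOCHS` is confirmed by the example above, so `BATCH_SIZE` and `LEARNING_RATE` are assumptions:

```
# Sketch: override several training parameters at once.
# Variable names other than EPOCHS assume the same uppercase convention.
docker run --tty \
  --mount type=bind,src=PATH-TO-HOST-DIRECTORY,dst=/mnt \
  -e EPOCHS=50 \
  -e BATCH_SIZE=64 \
  -e LEARNING_RATE=0.0001 \
  dskabyle
```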
Models and intermediate training data are written to your host directory. The `models/` subdirectory contains the generated model `output_graph.pb`. However, such a file needs to be fully loaded into memory when running inference. Therefore, additional models are generated as well: one in tflite format, to be run on mobile devices, and one in mmap-able format, to read data directly from disk.
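As a quick check, the generated model can be used for inference with the `deepspeech` command-line client. This is a sketch: the `output_graph.pbmm` file name for the mmap-able model is an assumption, no external scorer is used, and the audio is expected to be a 16 kHz mono WAV file:

```
# Sketch: transcribe a Kabyle recording with the trained model.
# Requires the DeepSpeech client, e.g. pip install deepspeech==0.8.2
deepspeech --model PATH-TO-HOST-DIRECTORY/models/output_graph.pbmm \
           --audio my_kabyle_recording.wav
```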
Intermediate training data is kept in the `checkpoints/` subdirectory. The purpose of checkpoints is to allow training to be interrupted and resumed later.
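Assuming the image's entrypoint picks up existing checkpoints from the mounted directory (an assumption about this image; DeepSpeech itself resumes from its checkpoint directory by default), an interrupted run could be resumed by re-running the same command with the same mount:

```
# Sketch: re-run with the same host directory; training should resume
# from the latest checkpoint in checkpoints/ rather than starting over.
docker run --tty \
  --mount type=bind,src=PATH-TO-HOST-DIRECTORY,dst=/mnt \
  dskabyle
```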
For further information, please consult the DeepSpeech documentation.