This project is based on docTR and leverages TensorFlow.js to serve you an end-to-end OCR running directly in your favorite web browser.
For this project, models were trained with docTR using its TensorFlow back-end, then converted from the TensorFlow SavedModel format to the TensorFlow.js (TFJS) format thanks to the tensorflowjs_converter. Just like docTR, under the hood, there are two types of modules:
- Text detection: db_mobilenet_v2 (low resolution) & db_resnet50 (high resolution) as available architectures, with post-processing performed using OpenCV.js.
- Text recognition: crnn_vgg16_bn as the available architecture.
Documentation about all the available models can be found in the docTR documentation.
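The exact commands used to convert this demo's models are not listed here, but a typical tensorflowjs_converter invocation for turning a TensorFlow SavedModel into a TFJS graph model looks like the following (the input and output paths are placeholders, not the ones used for this demo):

tensorflowjs_converter \
  --input_format=tf_saved_model \
  --output_format=tfjs_graph_model \
  ./path/to/saved_model \
  ./path/to/web_model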
The interface is divided into five sections:
- Model settings (side panel): select the architectures to use for text detection and text recognition.
- Input Image (top-left panel): upload your image by clicking in the area and selecting your file. Uploading a file automatically runs the OCR on it.
- Text localization (top-right panel): the output of the text localization module.
- Detected word boxes (bottom-left panel): visualization of the final predictions of the OCR.
- Words (bottom-right panel): the list of all detected words. Hovering over a prediction in the bottom-left panel highlights the corresponding text prediction in this section.
In order to install this project, you will need Yarn and NPM, which are package managers for Node.js. You will also need the serve package to host the production build later on; it can be installed globally with:
npm install -g serve
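If you are unsure whether these prerequisites are already installed, you can check from a terminal (standard version flags, nothing project-specific):

node --version
npm --version
yarn --version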
This demo was built using React, a JavaScript library for building user interfaces. The project has to be installed from source, which requires Git. First, clone the project repository:
git clone https://github.com/mindee/doctr-tfjs-demo.git
Then install the project's dependencies using the following command:
cd doctr-tfjs-demo
yarn install
Once all dependencies have been installed, launch the app in development mode using:
yarn start
and navigate with your web browser to the URL shown in the console.
Alternatively, for a production setup, first build the bundle and serve it:
yarn build
serve --no-clipboard -s build
then navigate to the served URL with your favorite web browser.
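By default, serve prints the URL it is listening on. If you need the production build exposed on a specific port, its --listen (-l) option can be used, for example:

serve --no-clipboard -s build -l 8080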
Lucky for you, if you prefer working with containers, we provide a minimal Docker image. You can build it as follows (it might take a few minutes depending on your setup):
DOCKER_BUILDKIT=1 docker build . -t doctr-tfjs:node12-alpine
and then run your image with:
docker run -p 8001:3000 doctr-tfjs:node12-alpine
Feel free to change the port, but by default, you should be able to access the demo at http://localhost:8001/. The -p 8001:3000 option tells Docker to map the container's internal port (3000) to port 8001 on the host.
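For example, to expose the demo on port 8080 of the host instead, only the left-hand side of the mapping changes:

docker run -p 8080:3000 doctr-tfjs:node12-alpine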
Distributed under the Apache 2.0 License. See LICENSE for more information.