Releases: GRAAL-Research/deepparse
0.9.13
0.9.12
- Bug-fix: call the `BPEmbBaseURLWrapperBugFix` class instead of `BPEmb` to fix the download URL in `download_models`.
0.9.11
- Fix Sentry version error in Docker Image.
0.9.10
- Fix and improve documentation.
- Remove fixed dependencies version.
- Fix app errors.
- Add data validation for 1) multiple consecutive whitespace and 2) newline.
- Fix some errors in tests.
- Add an argument to the `DatasetContainer` interface to use a pre-processing data cleaning function before validation.
- Hot-fix the BPEmb base URL download problem. See issue 221.
- Fix the NumPy version due to a major release with breaking changes.
- Fix the SciPy version due to breaking change with Gensim.
- Fix circular import in the API app.
- Fix the deprecated `max_request_body_size` in Sentry.
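The data validation added in this release flags multiple consecutive whitespace and newlines. A minimal sketch of such a check and a matching cleaning function, with hypothetical helper names (`has_invalid_whitespace`, `clean_address`) that are not deepparse's actual API:

```python
import re


def has_invalid_whitespace(address: str) -> bool:
    """Flag addresses containing multiple consecutive whitespace or a newline."""
    return bool(re.search(r"\s{2,}", address)) or "\n" in address


def clean_address(address: str) -> str:
    """Collapse runs of whitespace (including newlines) into single spaces."""
    return re.sub(r"\s+", " ", address).strip()
```

A cleaning function of this shape could be passed to `DatasetContainer` as the pre-processing data cleaning function described above, so invalid addresses are normalized before validation runs.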
0.9.9
- Add version to Seq2Seq and AddressParser.
- Add Deepparse as an API using FastAPI.
- Add a Dockerfile and a `docker-compose.yml` to build a Docker container for the API.
- Bug-fix the default pre-processors that were not all applied, only the last one.
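The pre-processor bug-fix above amounts to applying every pre-processor in sequence rather than only the last one. A minimal sketch of that chaining, with hypothetical function names that are not deepparse's actual API:

```python
from typing import Callable, Iterable


def strip_preprocessor(address: str) -> str:
    # Remove leading and trailing whitespace.
    return address.strip()


def lowercase_preprocessor(address: str) -> str:
    # Normalize case before parsing.
    return address.lower()


def apply_preprocessors(address: str, preprocessors: Iterable[Callable[[str], str]]) -> str:
    # Apply every pre-processor in order, feeding each one's output to the next;
    # the fixed behavior, as opposed to only applying the last one.
    for preprocessor in preprocessors:
        address = preprocessor(address)
    return address
```

For example, `apply_preprocessors("  350 Rue DES Lilas ", [strip_preprocessor, lowercase_preprocessor])` strips then lowercases the address, producing `"350 rue des lilas"`.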
0.9.8 and weights release
- Hot-Fix wheel install (See issue 196).
0.9.7
- New models release with more meta-data.
- Add a feature to use an AddressParser from a URI.
- Add a feature to upload the trained model to a URI.
- Add an example of how to parse from and upload to a URI.
- Improve error handling of `path_to_retrain_model`.
- Bug-fix a pre-processor error.
- Add verbose override and improve verbosity handling in retrain.
- Bug-fix the broken FastText installation by using `fasttext-wheel` instead of `fasttext`.
0.9.6
- Add Python 3.11.
- Add pre-processor when parsing addresses.
- Add `pin_memory=True` when using a CUDA device to increase performance, as suggested by the Torch documentation.
- Add a `torch.no_grad()` context manager in `__call__()` to increase performance.
- Reduce memory swap between CPU and GPU by instantiating Tensor directly on the GPU device.
- Improve the clarity of some warnings (i.e. category and message).
- Bug-fix macOS multiprocessing. It was impossible to use multiprocessing since we were not testing whether Torch multiprocessing was set properly. Now, we set it properly and raise a warning instead of an error.
- Drop Python 3.7 support, since newer Python versions are faster and Torch 2.0 does not support Python 3.7.
- Improve error handling with wrong checkpoint loading in `AddressParser` `retrain_path` use.
- Add `torch.compile` integration with `mode="reduce-overhead"`, as suggested in the documentation, to improve performance (Torch 1.x is still supported). It increases performance by about 1%.
0.9.5
- Fix a tags converter bug with the data processor.
0.9.4
- Improve codebase.