Support for Cross-Batch Memory and Numerous Improvements. (#260)
* [nightly] Increase version to 0.15.0.dev29

* [nightly] Increase version to 0.15.0.dev30

* [nightly] Increase version to 0.15.0.dev31

* [nightly] Increase version to 0.15.0.dev32

* [nightly] Increase version to 0.15.0.dev33

* [nightly] Increase version to 0.15.0.dev34

* [nightly] Increase version to 0.15.0.dev35

* [nightly] Increase version to 0.15.0.dev36

* [nightly] Increase version to 0.15.0.dev37

* Add tests and typing for effnet architectures.

* Add tests for confusion matrix and rename module to avoid clobbering the module name when importing.

* Add tests for neighbor viz.

* fix typing warnings in new modules.

* Update unsupervised draft to include barlow twins tests.

* Add draft of tf record kaggle notebook.

* [nightly] Increase version to 0.15.0.dev38

* [nightly] Increase version to 0.15.0.dev39

* [nightly] Increase version to 0.15.0.dev40

* [nightly] Increase version to 0.15.0.dev41

* [nightly] Increase version to 0.15.0.dev42

* Fix var mismatch in sampler after merge from master.

* Add WarmUpCosine learning rate schedule.  (#197)

* Add WarmUpCosine learning rate schedule. This is required for the Barlow Twins loss.

* Make WarmUpCosine lr schedule serializable.
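
Since this schedule shows up throughout the changelog, here is a minimal, hedged sketch of what a warmup-plus-cosine schedule computes; the parameter names are illustrative, not necessarily tfsim's exact signature.

```python
import math

import tensorflow as tf

class WarmUpCosine(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warmup followed by cosine decay (illustrative sketch)."""

    def __init__(self, max_lr, total_steps, warmup_steps):
        super().__init__()
        self.max_lr = max_lr
        self.total_steps = total_steps
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        # Cast to float so integer steps don't truncate the ratios.
        step = tf.cast(step, tf.float32)
        # Cosine decay from max_lr down to 0 at total_steps.
        cosine = 0.5 * (1.0 + tf.cos(math.pi * step / self.total_steps))
        decayed = self.max_lr * cosine
        # Linear ramp from 0 to max_lr during warmup.
        warmup = self.max_lr * step / self.warmup_steps
        return tf.where(step < self.warmup_steps, warmup, decayed)

    def get_config(self):
        # Returning the constructor args is what makes the schedule
        # serializable, as the commit above notes.
        return {
            "max_lr": self.max_lr,
            "total_steps": self.total_steps,
            "warmup_steps": self.warmup_steps,
        }
```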

* [nightly] Increase version to 0.15.0.dev43

* [nightly] Increase version to 0.15.0.dev44

* [nightly] Increase version to 0.15.0.dev45

* Add GeneralizedMeanPooling2D layer (#198)

* Add GeneralizedMeanPooling2D layer

GeM adds support for global mean pooling using the generalized mean.
This lets the pooling increase or decrease the contrast between the
feature map activations (a functional sketch follows this commit group).

Add tests for GeM and MetricEmbedding layers.

* Fix mypy errors.

* Add 1D version for GeM pooling.
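
The generalized mean behind GeM is simple to state: raise the activations to the power p, average over the spatial dims, then take the p-th root. A hedged functional sketch; the library layer has more options.

```python
import tensorflow as tf

def gem_pool_2d(x, p=3.0, eps=1e-6):
    """GeM pooling over H and W for a (batch, H, W, C) tensor (sketch).

    p == 1 recovers average pooling and p -> inf approaches max pooling,
    which is how GeM raises or lowers the contrast between activations.
    """
    x = tf.maximum(x, eps)                              # x**p needs x > 0
    pooled = tf.reduce_mean(tf.pow(x, p), axis=[1, 2])  # mean over H, W
    return tf.pow(pooled, 1.0 / p)                      # (batch, C)
```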

* [nightly] Increase version to 0.15.0.dev46

* [nightly] Increase version to 0.15.0.dev47

* Refactor GeM to reduce code duplication and add test coverage for 1D.

* Rename lr schedule module to schedules to make import cleaner.

- Fix mypy error by defining return type of GeM output_shape.

* [nightly] Increase version to 0.15.0.dev48

* [nightly] Increase version to 0.15.0.dev49

* [nightly] Increase version to 0.15.0.dev50

* Replace GlobalMeanPooling2d with GeneralizedMeanPooling2D and add None to the augmenter typing.

* Add support for scaling viz by max pixel value.

* [Bug] Cast steps to dtype in WarmUpCosine. Rename test module to schedules.

* [nightly] Increase version to 0.15.0.dev51

* Update changelog style.

* Add license to schedules module.

* Refer to Layers using the full module path rather than importing the class directly.

* Rename effnet p param to gem_p to make it clear that we are setting the power on the gem layer.

* Major updates to the contrastive model.

- Add support for indexing.
- Add test_step for tracking validation loss.
- Add forward pass method to make it easier to pass data to the training model.
- Update the predict method so that it now passes through the backbone and projector and returns the embedding output layer.
- Various other fixes.

* Major updates to the unsupervised hello world notebook.

- Add support for using effnet as the backbone.
- Clean up the projector and predictor models.
- Provide support for switching between the various self-supervised algos.
- Add example of passing validation data and match classification metrics.
- Update eval code to ensure we take the correct output from the projector.

* [nightly] Increase version to 0.15.0.dev52

* Fix mypy errors in ContrastiveModel.

- distance can be Union[Distance, str]
- embedding_output was an Optional[int], but should just default to 0 and be a simple int.

* Update example notebooks.

* [nightly] Increase version to 0.15.0.dev53

* [nightly] Increase version to 0.15.0.dev54

* [nightly] Increase version to 0.15.0.dev55

* Update supervised hello world.

* [nightly] Increase version to 0.15.0.dev56

* [nightly] Increase version to 0.15.0.dev57

* [nightly] Increase version to 0.15.0.dev58

* [nightly] Increase version to 0.15.0.dev59

* [nightly] Increase version to 0.15.0.dev60

* Added soft_nearest_neighbor_loss. Closes #103 (#203)

* [nightly] Increase version to 0.15.0.dev61

* Add ActivationStdLoggingLayer.

Used for tracking the mean std of a layer's activations during training.
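
A rough sketch of such a pass-through layer, assuming the TF 2.x `add_metric` mechanism; the real layer's details may differ.

```python
import tensorflow as tf

class ActivationStdLoggingLayer(tf.keras.layers.Layer):
    """Identity layer that logs the mean std of its inputs (sketch)."""

    def call(self, inputs):
        # Std over the feature axis, averaged across the batch, surfaced
        # as a named metric in the training logs.
        std = tf.math.reduce_std(inputs, axis=-1)
        self.add_metric(tf.reduce_mean(std), name=f"{self.name}_std")
        return inputs  # pass-through: activations are unchanged
```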

* Add area_range arg to simclr crop_and_resize.

This enables us to call simclr resize and crop with a custom upper and
lower bound.

* Basic formatting and cleanup of doc strings and args.

* Add ResNet architectures.

Add ResNet50Sim and ResNet18Sim architectures.
Add support for standard keras.applications args in EfficientNet.

* Updates to contrastive model.

- Explicitly add the sub-models' losses to the combined loss.
- Track contrastive_loss, regularization_loss, and combined_loss separately
- Add metrics property to correctly reset metrics state.

* Updates to unsupervised notebook

* Remove contrastive metrics module.

We now track the layer std using the metric logging layer.

* [nightly] Increase version to 0.15.0.dev62

* Support inner model losses.

The backbone, projector, and predictor models may have layer losses like
kernel_regularizer penalties. We now check for these and add additional loss
trackers if required. This enables us to separately monitor the
contrastive and regularization losses, if they exist.
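
Keras models expose layer losses via `model.losses`, so the pattern is roughly the following sketch; the sub-model attribute names follow the text above but this is not the exact tfsim code.

```python
import tensorflow as tf

def split_losses(contrastive_loss, backbone, projector, predictor=None):
    """Sketch: separate inner regularization losses from the main loss."""
    inner = []
    for m in (backbone, projector, predictor):
        if m is not None:
            inner.extend(m.losses)  # e.g. kernel_regularizer penalties
    regularization_loss = tf.add_n(inner) if inner else tf.constant(0.0)
    combined_loss = contrastive_loss + regularization_loss
    return combined_loss, regularization_loss
```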

* Fix mypy errors in simsiam loss.

* Various fixes to ResNet18

Now uses SimSiam kernel initialization. This seems to be critical for
proper training on the cifar10 dataset.

* Initial working version of SimSiam on cifar10

* Added SNR distance. Closes #64 (#205)

Added Signal-to-Noise Ratio distance metric as defined in
[Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](https://arxiv.org/abs/1904.02616)
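
The paper's distance is the variance of the "noise" (the difference between two embeddings) over the variance of the anchor "signal". A hedged row-wise sketch; the library version computes a full pairwise matrix.

```python
import tensorflow as tf

def snr_distance(anchor, other):
    """SNR distance between matching rows of two embedding batches."""
    noise_var = tf.math.reduce_variance(anchor - other, axis=-1)
    signal_var = tf.math.reduce_variance(anchor, axis=-1)
    return noise_var / signal_var
```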

* Small updates for SimSiam example.

Switch to using LeCun uniform for the kernel_initializers.
Stride should be 1 for the first Conv2d layer in the ResNet18 backbone.

* [nightly] Increase version to 0.15.0.dev63

* [nightly] Increase version to 0.15.0.dev64

* Updates for contrastive model saving.

* [nightly] Increase version to 0.15.0.dev65

* [nightly] Increase version to 0.15.0.dev66

* [nightly] Increase version to 0.15.0.dev67

* [nightly] Increase version to 0.15.0.dev68

* [nightly] Increase version to 0.15.0.dev69

* [nightly] Increase version to 0.15.0.dev70

* [nightly] Increase version to 0.15.0.dev71

* Update losses to use Loss reduction.

Losses previously computed the mean loss over the examples within the call() method. This may create issues when using multi-GPU training. The call() method now returns the per-example loss, and the final loss is computed using the losses.Loss reduction method.

We also updated the from_config() method to include the parent class's reduction and name args.
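
The pattern looks roughly like this sketch; the margin logic is a placeholder, not one of tfsim's losses.

```python
import tensorflow as tf

class SketchLoss(tf.keras.losses.Loss):
    """Per-example loss; the parent Loss applies the reduction."""

    def __init__(self, margin=1.0,
                 reduction=tf.keras.losses.Reduction.AUTO,
                 name="sketch_loss"):
        # Passing reduction/name through keeps them in the config.
        super().__init__(reduction=reduction, name=name)
        self.margin = margin

    def call(self, y_true, y_pred):
        # One loss value per example; the configured reduction is applied
        # afterwards, which keeps scaling correct under multi-GPU training.
        return tf.reduce_mean(tf.nn.relu(y_pred - self.margin), axis=-1)

    def get_config(self):
        config = super().get_config()  # includes reduction and name
        config.update({"margin": self.margin})
        return config
```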

* Resnet18 returns as a SimilarityModel.

We may want ResNet18 as a regular model, but we keep the output type as
SimilarityModel to avoid mixed output types.

* Fix various mypy and linter errors.

* Add support for contrastive_model save and load.

* Update unsupervised notebook with save and load.

* Update the save and load.

Add updated example and docs for save and load in the supervised hello world.

* Updates to visualization notebook.

* [nightly] Increase version to 0.15.0.dev72

* Unsupervised notebook update.

* [nightly] Increase version to 0.15.0.dev73

* [nightly] Increase version to 0.15.0.dev74

* [nightly] Increase version to 0.15.0.dev75

* [nightly] Increase version to 0.15.0.dev76

* Notes on the unsupervised notebook draft.

* [nightly] Increase version to 0.15.0.dev77

* [nightly] Increase version to 0.15.0.dev78

* [nightly] Increase version to 0.15.0.dev79

* Remove get_backbone() method and just have users access the backbone attribute directly.

* Add new diagrams and updated copy to the unsupervised notebook.

* [nightly] Increase version to 0.15.0.dev80

* [nightly] Increase version to 0.15.0.dev81

* First finished draft of unsupervised_hello_world notebook

* Updates to the README file. Add self-supervised info.

* [nightly] Increase version to 0.15.0.dev82

* [nightly] Increase version to 0.15.0.dev83

* Update README.md

* Remove augmentation arg from architectures.

Architectures previously took a callable stack of augmentation layers
that would be added after the input of the model. This could cause
issues with saving and training on TPU. Users are now expected to add
augmentation either to the data samplers / datasets or manually to the
model (see the pipeline sketch below).
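
For example, augmentation can move into the input pipeline. A hedged tf.data sketch, where the data and `augment_fn` are placeholders.

```python
import tensorflow as tf

images = tf.random.uniform((100, 32, 32, 3))  # stand-in data
labels = tf.zeros((100,), dtype=tf.int32)

def augment_fn(image, label):
    # Placeholder for whatever augmentation stack was previously passed
    # to the architecture.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    return image, label

train_ds = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .map(augment_fn, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)
```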

* Clean up example dir.

* Fix flake8 errors in architectures.

* Update API docs.

* Bump version to 0.15.0

* Bump minor version to 0.16.0.dev0

* [nightly] Increase version to 0.16.0.dev1

* [nightly] Increase version to 0.16.0.dev2

* [nightly] Increase version to 0.16.0.dev3

* Distance and losses refactor (#222)

* refactor distances call signature and add appropriate tests

* refactor metrics for new distance call signature

* make similarity losses compatible with asymmetric and non-square distance matrices

* adapt and add test

* remove newline

* [nightly] Increase version to 0.16.0.dev4

* [nightly] Increase version to 0.16.0.dev5

* [nightly] Increase version to 0.16.0.dev6

* [nightly] Increase version to 0.16.0.dev7

* [nightly] Increase version to 0.16.0.dev8

* Cross-batch memory (XBM) (#225) (an illustrative sketch follows this commit group)

* initiate XBM loss

* add todo

* add XBM tests

* WIP: XBM serialization

* XBM serialization

* class docstring

* remove todo

* improve docstring

* remove comment
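
XBM keeps a FIFO queue of embeddings and labels from recent batches and computes the wrapped loss between the current batch and the whole memory, which is why the earlier refactor for asymmetric, non-square distance matrices matters. A minimal illustrative sketch, not the library's actual wrapper:

```python
import tensorflow as tf

class CrossBatchMemory:
    """FIFO memory of recent embeddings and labels (sketch)."""

    def __init__(self, embedding_size, memory_size=4096):
        self.memory_size = memory_size
        self.embeddings = tf.zeros((0, embedding_size))
        self.labels = tf.zeros((0,), dtype=tf.int32)

    def update(self, embeddings, labels):
        # Enqueue the newest batch, then evict the oldest rows (FIFO).
        self.embeddings = tf.concat(
            [self.embeddings, embeddings], axis=0)[-self.memory_size:]
        self.labels = tf.concat(
            [self.labels, labels], axis=0)[-self.memory_size:]
        # The wrapped loss treats the current batch as queries and the
        # memory as keys, yielding a batch x memory distance matrix.
        return self.embeddings, self.labels
```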

* [nightly] Increase version to 0.16.0.dev9

* [nightly] Increase version to 0.16.0.dev10

* [nightly] Increase version to 0.16.0.dev11

* [nightly] Increase version to 0.16.0.dev12

* [nightly] Increase version to 0.16.0.dev13

* [nightly] Increase version to 0.16.0.dev14

* [nightly] Increase version to 0.16.0.dev15

* [nightly] Increase version to 0.16.0.dev16

* [nightly] Increase version to 0.16.0.dev17

* [nightly] Increase version to 0.16.0.dev18

* [nightly] Increase version to 0.16.0.dev19

* [nightly] Increase version to 0.16.0.dev20

* [nightly] Increase version to 0.16.0.dev21

* [nightly] Increase version to 0.16.0.dev22

* Augmentor for Barlow Twins (#229)

* Use list(range()) instead of comprehension as it is more pythonic.

* Create barlow.py

* Bump three in /tensorflow_similarity/visualization/projector_v2 (#228)

Bumps [three](https://github.com/mrdoob/three.js) from 0.132.2 to 0.137.0.
- [Release notes](https://github.com/mrdoob/three.js/releases)
- [Commits](https://github.com/mrdoob/three.js/commits)

---
updated-dependencies:
- dependency-name: three
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Restructure class to be like Augmenter

* Minor fixing of dead links (#230)

* Fixed dead links

* augmenter main to master

* Spelling changes Auto Augment

* MixupAndCutmix main to master

* RandAugment main to master

* RandomErasing main to master

* Update SimCLRAugmenter.md

* Update ClassificationMatch.md

* Update ClassificationMetric.md

* Update Evaluator.md

* Update MemoryEvaluator.md

* Update SimilarityModel.md

* Update BinaryAccuracy.md

* Update F1Score.md

* Update FalsePositiveRate.md

* Update NegativePredictiveValue.md

* Update Precision.md

* Update Recall.md

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* Fix minor typos (#226)

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* Update barlow.py

* Update barlow.py

* Update setup.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* revisions

* Update __init__.py

* Update __init__.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update setup.py

Co-authored-by: Owen S Vallis <ovallis@google.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Owen Vallis <owensvallis@gmail.com>
Co-authored-by: Genrry Hernandez <genrryhernandez@gmail.com>

* Fixed some bugs in augmenter. (#232)

* Create barlow.py

* Restructure class to be like Augmenter

* Update barlow.py

* Update barlow.py

* Update setup.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* revisions

* Update __init__.py

* Update __init__.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update setup.py

* fixed some bugs

* Remove seed instance variable

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* [nightly] Increase version to 0.16.0.dev23

* [nightly] Increase version to 0.16.0.dev24

* [nightly] Increase version to 0.16.0.dev25

* [nightly] Increase version to 0.16.0.dev26

* [nightly] Increase version to 0.16.0.dev27

* [nightly] Increase version to 0.16.0.dev28

* [nightly] Increase version to 0.16.0.dev29

* [nightly] Increase version to 0.16.0.dev30

* [nightly] Increase version to 0.16.0.dev31

* [nightly] Increase version to 0.16.0.dev32

* [nightly] Increase version to 0.16.0.dev33

* [nightly] Increase version to 0.16.0.dev34

* [nightly] Increase version to 0.16.0.dev35

* [nightly] Increase version to 0.16.0.dev36

* [nightly] Increase version to 0.16.0.dev37

* [nightly] Increase version to 0.16.0.dev38

* [nightly] Increase version to 0.16.0.dev39

* [nightly] Increase version to 0.16.0.dev40

* [nightly] Increase version to 0.16.0.dev41

* [nightly] Increase version to 0.16.0.dev42

* [nightly] Increase version to 0.16.0.dev43

* [nightly] Increase version to 0.16.0.dev44

* [nightly] Increase version to 0.16.0.dev45

* [nightly] Increase version to 0.16.0.dev46

* Added test coverage for augmentation functions + Barlow and SimCLR augmenters (#235)

* Create test_blur.py

* Create test_color_jitter.py

* Create test_crop.py

* Create test_flip.py

* Update test_crop.py

* Update test_color_jitter.py

* Create test_solarize.py

* Create test_augmenters.py

* Update test_flip.py

* Update test_flip.py

* Update test_flip.py

* Update blur.py

* Update blur.py

* [nightly] Increase version to 0.16.0.dev47

* Change augmenters to use augmentation_utils (#238)

* Fix corrupted JSON formatting in unsupervised notebook.

* Added features of SplitValidationLoss callback to EvalCallback (#242)

* Added features of SplitValidationLoss callback to EvalCallback

Merged SplitValidationLoss into EvalCallback

* Refactored EvalCallback using utils.unpack_results

* [nightly] Increase version to 0.16.0.dev48

* [nightly] Increase version to 0.16.0.dev49

* [nightly] Increase version to 0.16.0.dev50

* VicReg Loss - Improvement of Barlow Twins (#243) (a background sketch follows this commit group)

* VicReg Loss

* Update vicreg.py

* Update vicreg.py

* Update vicreg.py

* fix big bug

* Update vicreg.py

* Update vicreg.py

* fixes

* Update vicreg.py
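
The commits above only name the loss, so as background: VICReg combines an invariance term (MSE between the two views), a variance hinge keeping each embedding dimension's std above 1, and a covariance penalty on off-diagonal entries. A hedged sketch using the paper's default weights:

```python
import tensorflow as tf

def vicreg_loss(za, zb, inv_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg over two view embeddings of shape (batch, dim) (sketch)."""
    n = tf.cast(tf.shape(za)[0], za.dtype)
    d = tf.cast(tf.shape(za)[1], za.dtype)

    # Invariance: mean squared error between the two views.
    inv = tf.reduce_mean(tf.square(za - zb))

    # Variance: hinge keeping each dimension's std above 1.
    def variance(z):
        std = tf.sqrt(tf.math.reduce_variance(z, axis=0) + eps)
        return tf.reduce_mean(tf.nn.relu(1.0 - std))

    # Covariance: penalize off-diagonal covariance entries.
    def covariance(z):
        z = z - tf.reduce_mean(z, axis=0)
        cov = tf.matmul(z, z, transpose_a=True) / (n - 1.0)
        off_diag = cov - tf.linalg.diag(tf.linalg.diag_part(cov))
        return tf.reduce_sum(tf.square(off_diag)) / d

    return (inv_w * inv
            + var_w * (variance(za) + variance(zb))
            + cov_w * (covariance(za) + covariance(zb)))
```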

* [nightly] Increase version to 0.16.0.dev51

* [nightly] Increase version to 0.16.0.dev52

* Update tests for algebra.py

* Coverage now at 100%
* Convert tests to use tf.test.TestCase

* [nightly] Increase version to 0.16.0.dev53

* [nightly] Increase version to 0.16.0.dev54

* Fix corrupted formatting in visualization notebook.

* [bug] Fix multisim loss offsets.

The tfsim version of multisim uses distances instead of the inner
product. However, multisim requires that we "center" the pairwise
distances around 0. Here we add a new center param, which we set to 1.0
for cosine distance. We also flip the lambda (lmda) param
to add the threshold to the values instead of subtracting it. These
changes will help improve the pos and neg weighting in the log1psumexp.
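
In sketch form; the signs and names here are illustrative rather than the exact tfsim code.

```python
import tensorflow as tf

def centered_multisim_logits(pairwise_dists, alpha, beta, lmda, center=1.0):
    """Centering fix sketch: cosine distance D in [0, 2] becomes a
    similarity centered on 0 via (center - D); rearranged to the distance
    side, the lmda threshold is then added to D rather than subtracted."""
    sim = center - pairwise_dists
    pos_logits = -alpha * (sim - lmda)  # weights far-away positives
    neg_logits = beta * (sim - lmda)    # weights nearby negatives
    return pos_logits, neg_logits       # both feed log(1 + sum(exp(.)))
```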

* [nightly] Increase version to 0.16.0.dev55

* [bug] In losses.utils.logsumexp() tf.math.log(1 + x) should be
tf.math.log(tf.math.exp(-my_max) + x). This is needed to properly
account for removing the rowwise max before computing the logsumexp.
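
The identity behind the fix, with m the rowwise max: log(1 + sum(exp(x))) = m + log(exp(-m) + sum(exp(x - m))), so the implicit exp(0) term must be rescaled along with everything else. A sketch:

```python
import tensorflow as tf

def log1p_sumexp(x, axis=-1):
    """Numerically stable log(1 + sum(exp(x))) (sketch)."""
    my_max = tf.reduce_max(x, axis=axis, keepdims=True)
    summed = tf.reduce_sum(tf.exp(x - my_max), axis=axis, keepdims=True)
    # After factoring out exp(my_max), the "1 +" term becomes
    # exp(-my_max); leaving it as 1 was the bug.
    return tf.squeeze(
        my_max + tf.math.log(tf.exp(-my_max) + summed), axis=axis)
```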

* Make Augmentation Utilities More Customizable (reupload due to branch issues) (#255)

* modifications of benchmark

* test commit 123

* new changes to training

* testo changes

* works in colab... kind of

* code is neat now

* working on sampler problem

* Update barlow.py

* Update blur.py

* Update color_jitter.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Added vicreg for sync

* Update vicreg.py

* Update vicreg.py

* Update vicreg.py

* Update barlow.py

* randomresizedcrop edits

* Update barlow.py

* allow to customize loss reduction

* Update __init__.py

* Delete sampler_test.py

* Delete benchmark/supervised directory

* Update barlow.py

* added docstring on random_resized_crop

* Allow user to set normalization

* Update barlow.py

* Update barlow.py

* Update setup.py

* remove pipfile

* Delete Pipfile

* Delete Pipfile.lock

* Update cropping.py

* Update cropping.py

* Additive multiplicative changes

* Update simclr.py

* change additive, multiplicative

* Update barlow.py

* Update solarize.py

* Update barlow.py

* Update solarize.py

* Update barlow.py

* Update test_solarize.py

* Update test_solarize.py

* Update test_solarize.py

Co-authored-by: Owen Vallis <ovallis@google.com>

* Refactor test_basic to use TestCase to reduce test flakiness.

* Fix Flake8 warnings.

* Freeze all batchnorm architecture layers.

We now freeze all BN layers when loading pre-trained weights in the
effnet and resnet50 architectures. Previously, we only froze the BN
layers if trainable was partial or frozen. When trainable was full, the
BN layers would be trainable as well and this led to suboptimal training
losses.
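
The behavior is roughly this sketch (a flat walk over layers; nested sub-models would need a recursive version):

```python
import tensorflow as tf

def freeze_batchnorm(model):
    """Freeze every BatchNormalization layer after loading weights."""
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.BatchNormalization):
            # Keeps the pre-trained moving statistics intact regardless
            # of whether the rest of the backbone is trainable.
            layer.trainable = False
```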

* Improve EfficientNetSim docstring and type hints (#254)

* Fix typos in docstring

* Remove reference to image augmentation

Image augmentation was previously removed, so purge it from the comment and docstring.

* Correct input image type annotation

* Fix #251. Check for model._index before calling Indexer methods.

The Indexer is core to a number of the Similarity model methods. Add
support for checking that the index exists and returning a more informative
AttributeError if the index hasn't been created yet.
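
A hypothetical guard illustrating the idea; the real attribute and message text differ.

```python
def _check_index(self):
    # Hypothetical method on the model; raises before any Indexer call.
    if getattr(self, "_index", None) is None:
        raise AttributeError(
            "This model has no index. Build one with index(), or load a "
            "saved one, before calling lookup() or calibrate().")
```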

* Set random seeds for tfrecord samplers test.

* All augmenters use the Tensor type from tensorflow_similarity.types.

* [nightly] Increase version to 0.16.0.dev56

* Fix Tensor type error in callbacks.

Unpacking the Lookup objects converts the python types to Tensors. This
can lead to Tensor type errors. This commit adds support for taking the
expected dtype of the model Tensors where possible.

We also fix a bug where the EvalCallback was not logging the split
metric names in the history.

* Update doc strings in color_jitter.

* Update the create index AttributeError text

* [nightly] Increase version to 0.16.0.dev57

* Update Notebook examples.

* Remove unneeded tf.function and register_keras_serializable decorators.

Subclasses of tf.keras.losses.Loss will trace all child functions and we
only need to register the subclassed loss to support deserialization.
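
So a loss now only needs the registration decorator, roughly like this sketch:

```python
import tensorflow as tf

@tf.keras.utils.register_keras_serializable(package="Similarity")
class ExampleLoss(tf.keras.losses.Loss):
    # No @tf.function on helpers: Keras traces call() and whatever it
    # invokes; registration alone enables deserialization.
    def call(self, y_true, y_pred):
        return tf.abs(y_pred - tf.cast(y_true, y_pred.dtype))
```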

* Simplify MetricEmbedding layer.

* Fix mypy type error in simsiam.

Convert all constants to tf.constant.

* Simplify the MetricEmbedding layer.

Subclass layers.Dense directly. This simplifies the layer and also fixes
function tracing during model save.
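
In sketch form, assuming the layer is a Dense whose outputs are L2-normalized; consistent with the commit, though details may differ.

```python
import tensorflow as tf

class MetricEmbedding(tf.keras.layers.Dense):
    """Dense layer with unit-normalized outputs (sketch)."""

    def call(self, inputs):
        # Reuse Dense's kernel/bias computation, then project onto the
        # unit hypersphere so cosine distance behaves well downstream.
        return tf.math.l2_normalize(super().call(inputs), axis=-1)
```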

* Fix test_tfrecord_samplers tests.

* Update api documentation.

TODO: This just generated the new docs. We still need to go through and
clean up the documentation.

* Update doc string and api for MetricEmbedding layer.

* Bump to version 0.16

* Fix static type check error in memory store.

The np.savez functions expect array_like values but we were passing
List. Casting as np array should solve the issue.

* Fix effnet test for TF 2.9

* Fix TFRecordDatasetSampler so it returns the correct number of examples per batch.

Co-authored-by: Github Actions Bot <>
Co-authored-by: Abhishar Sinha <24841841+abhisharsinha@users.noreply.github.com>
Co-authored-by: Christoffer Hjort <Christoffer.Hjort1995@gmail.com>
Co-authored-by: dewball345 <abhiraamkumar@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Genrry Hernandez <genrryhernandez@gmail.com>
Co-authored-by: Emil Larsson <emla2805@users.noreply.github.com>
7 people authored May 27, 2022
1 parent 0e44e96 commit 71e02c4
Showing 161 changed files with 5,148 additions and 3,839 deletions.
6 changes: 6 additions & 0 deletions .gitignore
@@ -10,6 +10,8 @@ release.sh
 .DS_Store
 benchmark/supervised/datasets/
 benchmark/supervised/models/
+datasets/
+models/
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
@@ -142,3 +144,7 @@ dmypy.json
 
 # Pyre type checker
 .pyre/
+
+# Pipfile
+Pipfile
+Pipfile.lock
54 changes: 40 additions & 14 deletions api/README.md
@@ -11,20 +11,46 @@ TensorFlow Similarity is a TensorFlow library focused on making metric learning
 TensorFlow Similiarity, as visible in the diagram above, offers the following
 components to help research, train, evaluate and serve metric models:
 
-- **`SimilarityModel()`**: This class subclasses the `tf.keras.model` class and extends it with additional properties that are useful for metric learning. For example it adds the methods:
-1. `index()`: Enables indexing of the embedding
-2. `lookup()`: Takes samples, calls predict(), and searches for neighbors within the index.
-3. `calibrate()`: Calibrates the model's index search thresholds using a calibration metric and a test dataset.
-
-- **`MetricLoss()`**: This virtual class, that extends the `tf.keras.Loss` class, is the base class from which Metric losses are derived. This sub-classing ensures proper error checking; that is, it ensures the user is using a loss metric to train the models, performs better static analysis, and enforces additional constraints such as having a distance function that is supported by the index. Additionally, Metric losses make use of the fully tested and highly optimized pairwise distances functions provided by TensorFlow Similarity that are available under the `Distances.*` classes.
-
-- **`Samplers()`**: Samplers are meant to ensure that each batch has at least n (with n >=2) examples of each class, as losses such as TripletLoss can’t work properly if this condition is not met. TensorFlow Similarity offers an in-memory sampler for small dataset and a `tf.data.TFRecordDataset` for large scales one.
-
-- **`Indexer()`**: The Indexer and its sub-components are meant to index known embeddings alongside their metadata. The embedding metadata is stored within `Table()`, while the `Matcher()` is used to perform [fast approximate neighbor searches](https://en.wikipedia.org/wiki/Nearest_neighbor_search) that are meant to quickly retrieve the indexed elements that are the closest to the embeddings supplied in the `lookup()` and `single_lookup()` function.
-
-The default `Index()` sub-compoments run in-memory and are optimized to be used in interactive settings such as Jupyter notebooks, Colab, and metric computation during training (e.g using the `EvalCallback()` provided). Index are serialized as part of `model.save()` so you can reload them via `model.index_load()` for serving purpose or further training / evaluation.
-
-The default implementation can scale up to medium deployment (1M-10M+ points) easily, provided the computers have enough memory. For very large scale deployments you will need to sublcass the compoments to match your own architetctue. See FIXME colab to see how to deploy TensorFlow Similarity in production.
+- **`SimilarityModel()`**: This class subclasses the `tf.keras.model` class and
+extends it with additional properties that are useful for metric learning. For
+example it adds the methods: 1. `index()`: Enables indexing of the embedding
+2. `lookup()`: Takes samples, calls predict(), and searches for neighbors
+within the index. 3. `calibrate()`: Calibrates the model's index search
+thresholds using a calibration metric and a test dataset.
+
+- **`MetricLoss()`**: This virtual class, that extends the `tf.keras.Loss`
+class, is the base class from which Metric losses are derived. This
+sub-classing ensures proper error checking; that is, it ensures the user is
+using a loss metric to train the models, performs better static analysis, and
+enforces additional constraints such as having a distance function that is
+supported by the index. Additionally, Metric losses make use of the fully
+tested and highly optimized pairwise distances functions provided by
+TensorFlow Similarity that are available under the `Distances.*` classes.
+
+- **`Samplers()`**: Samplers are meant to ensure that each batch has at least n
+(with n >=2) examples of each class, as losses such as TripletLoss can’t work
+properly if this condition is not met. TensorFlow Similarity offers an
+in-memory sampler for small dataset and a `tf.data.TFRecordDataset` for large
+scales one.
+
+- **`Indexer()`**: The Indexer and its sub-components are meant to index known
+embeddings alongside their metadata. The embedding metadata is stored within
+`Table()`, while the `Matcher()` is used to perform [fast approximate neighbor
+searches](https://en.wikipedia.org/wiki/Nearest_neighbor_search) that
+are meant to quickly retrieve the indexed elements that are the closest to the
+embeddings supplied in the `lookup()` and `single_lookup()` function.
+
+The default `Index()` sub-compoments run in-memory and are optimized to be used
+in interactive settings such as Jupyter notebooks, Colab, and metric computation
+during training (e.g using the `EvalCallback()` provided). Index are serialized
+as part of `model.save()` so you can reload them via `model.index_load()` for
+serving purpose or further training / evaluation.
+
+The default implementation can scale up to medium deployment (1M-10M+ points)
+easily, provided the computers have enough memory. For very large scale
+deployments you will need to sublcass the compoments to match your own
+architetctue. See FIXME colab to see how to deploy TensorFlow Similarity in
+production.
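
As a rough usage sketch of the workflow this README section describes, where `model` and the datasets are placeholders and exact signatures may vary by version:

```python
# `model` is a trained tensorflow_similarity.models.SimilarityModel;
# x_index, y_index, and x_query are prepared arrays (placeholders).
model.index(x_index, y_index)            # embed and index known examples
neighbors = model.lookup(x_query, k=5)   # predict + nearest-neighbor search
model.calibrate(x_index, y_index)        # tune index matching thresholds
```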

## Modules

