
Added SNR distance. #205

Merged 1 commit into tensorflow:development on Dec 21, 2021
Conversation

abhisharsinha
Contributor

Added Signal-to-Noise Ratio distance metric as defined in
Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning

Added a test using an easy-to-verify loop-based implementation in test_distances.py.
It can be run with pytest tests/test_distances.py

Closes #64

Added Signal-to-Noise Ratio distance metric as defined in
[Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](https://arxiv.org/abs/1904.02616)
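The per-pair computation can be sketched in NumPy (a hedged sketch mirroring the loop-based test, not the library implementation; the function name is illustrative):

```python
import numpy as np

def snr_distance(anchor: np.ndarray, other: np.ndarray) -> float:
    """Signal-to-Noise Ratio distance between two embeddings.

    Following the paper, the distance is the variance of the residual
    (the "noise") divided by the variance of the anchor (the "signal"):
    d(a, b) = var(a - b) / var(a).
    """
    noise = np.var(anchor - other)
    signal = np.var(anchor)
    return float(noise / signal)
```

Note that the measure is asymmetric: swapping the arguments changes the denominator.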
@owenvallis
Collaborator

Thanks for implementing this!

owenvallis merged commit 681f100 into tensorflow:development on Dec 21, 2021
ebursztein added a commit that referenced this pull request Jan 11, 2022
* SSL initial working implementation

* Self supervised initial version

* version bump

* Add dev suffix to version number.

* Fix nightly pypi token var in workflow.

* [nightly] Increase version to 0.15.0.dev1

* [nightly] Increase version to 0.15.0.dev2

* vis: Introduce frontend projector (#176)

This change introduces the frontend bundle for a custom projector with very
bare functionality. The Python counterpart will follow shortly.

While the change is sizable, most of it is configuration and fixtures
containing very little logic.

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* [nightly] Increase version to 0.15.0.dev3

* * Add SimCLR loss
* Update contrastive model to sum the two example level losses before
computing the mean loss
* Add support for converting cosine similarity to angular similarity
* Update typing issues.

* Initial contrastive loss (#180)

* refactor example

* Fixed SiamSiam to SimSiam -- experiment with decoder

* implemented saving

* Set the SiamSiamLoss back to SimSiamLoss.

Was reverted during the previous merge.

* Add Batch type alias to cover all basic types accepted by tf.keras.Model.fit()
Add typing to Augmenter
Update unsupervised notebook.

* Updates to unsupervised notebook.

Co-authored-by: Owen Vallis <ovallis@google.com>

* Kaggle (#182)

* Kaggle working

* Updates and Markdown for Kaggle notebook.

* Pull updates from Kaggle notebook into kaggle_train script and add argparse for data path.

* Clean up Kaggle examples in the examples directory.

Update links in the examples README.

Co-authored-by: Elie Bursztein <github@elie.net>

* Updates to make simclr loss work as expected.

* Update unsupervised hello world to work with simsiam.

Add Encoder and Projector to get_encoder, and build predictor as
second MLP layer.

Test on the frozen global max pooling layer only.

Add cosine decay for SGD

* More updates to the unsupervised notebook.

* Updates the nightly workflow to purge the pip cache before installing
tf-sim

* Fix workflow versions to python 3.9

PY3.10 is breaking pip install packages.

* [nightly] Increase version to 0.15.0.dev4

* Fix bug in unsupervised notebook.

The no-pretrain eval was overwriting the pretrained encoder.

* [nightly] Increase version to 0.15.0.dev5

* Reduce the mean over each SimSiam loss separately before summing.

This is closer to the original paper.

Updates to the unsupervised notebook.
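The reduction described above can be sketched as follows (illustrative NumPy; the 0.5 weighting of the two views follows the SimSiam paper, and the function name is an assumption):

```python
import numpy as np

def simsiam_total_loss(loss_view1, loss_view2) -> float:
    """Reduce each view's per-example loss separately, then combine.

    This mirrors L = 0.5 * mean(D(p1, z2)) + 0.5 * mean(D(p2, z1)) from the
    SimSiam paper, rather than taking a single mean over summed losses.
    """
    return 0.5 * float(np.mean(loss_view1)) + 0.5 * float(np.mean(loss_view2))
```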

* Make SimSiam a subclass of Loss.

Update the notebook to use the cifar10 ResNet used in the Keras SimSiam
example. Also update the notebook params to confirm we get similar
outputs as the Keras example.

Add support for selecting the loss type in the simsiam loss.

Fix angular distance. Was previously returning angular similarity.

* Move loss_type cond from SimSiam.call() to the __init__.

* Fix mypy typing for simsiam loss.

* Unsupervised notebook updates.

* [nightly] Increase version to 0.15.0.dev6

* [nightly] Increase version to 0.15.0.dev7

* [nightly] Increase version to 0.15.0.dev8

* [nightly] Increase version to 0.15.0.dev9

* [nightly] Increase version to 0.15.0.dev10

* [nightly] Increase version to 0.15.0.dev11

* [nightly] Increase version to 0.15.0.dev12

* [nightly] Increase version to 0.15.0.dev13

* [nightly] Increase version to 0.15.0.dev14

* [nightly] Increase version to 0.15.0.dev15

* [nightly] Increase version to 0.15.0.dev16

* [nightly] Increase version to 0.15.0.dev17

* [nightly] Increase version to 0.15.0.dev18

* [nightly] Increase version to 0.15.0.dev19

* [nightly] Increase version to 0.15.0.dev20

* Set the direction of the False Positive Count metric to 'min' as we would like to minimize the FP rate.

* [nightly] Increase version to 0.15.0.dev21

* [nightly] Increase version to 0.15.0.dev22

* Unpacking the Lookup objects could fail if the Lookup sets were of different lengths.

* Add support for loading variable length Lookup sets.
* Impute label and distance values to make all Lookup sets the same length
* * The label is set to 0x7FFFFFFF as this is unlikely to be a class label.
* * The distance is set to math.inf
* Print a warning notifying the user that we are imputing the
values, and report the number of short Lookup sets.
* Add tests for all utils.py functions.
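The imputation described above can be sketched like this (illustrative code; the sentinel values match the commit message, while the function name and data layout are assumptions):

```python
import math

SENTINEL_LABEL = 0x7FFFFFFF  # unlikely to collide with a real class label

def pad_lookup_sets(lookup_sets):
    """Pad variable-length lookup sets of (label, distance) pairs.

    Short sets are filled with (SENTINEL_LABEL, inf) so every set has the
    same length, and the caller is warned about how many were imputed.
    """
    longest = max(len(s) for s in lookup_sets)
    n_short = sum(1 for s in lookup_sets if len(s) < longest)
    padded = [
        s + [(SENTINEL_LABEL, math.inf)] * (longest - len(s))
        for s in lookup_sets
    ]
    if n_short:
        print(f"Warning: imputed values for {n_short} short lookup set(s)")
    return padded
```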

* [nightly] Increase version to 0.15.0.dev23

* [nightly] Increase version to 0.15.0.dev24

* [nightly] Increase version to 0.15.0.dev25

* [nightly] Increase version to 0.15.0.dev26

* [nightly] Increase version to 0.15.0.dev27

* Initial version of the Barlow contrastive loss.

* Updates to the simclr and simsiam contrastive losses.

* Refactor contrastive model to subclass similarity model and add support for indexing.

* Small updates to the example notebooks.

* [nightly] Increase version to 0.15.0.dev28

* Fix divide by zero in the barlow column wise normalization and only call barlow loss once for each pair of views.
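A guarded version of that column-wise normalization might look like this (a sketch under assumed names; Barlow Twins standardizes each embedding dimension across the batch):

```python
import numpy as np

def column_normalize(z: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Standardize each column (embedding dim) of a (batch, dim) matrix.

    Clamping the std away from zero avoids the divide-by-zero that occurs
    when a dimension is constant across the batch.
    """
    mean = z.mean(axis=0, keepdims=True)
    std = z.std(axis=0, keepdims=True)
    return (z - mean) / np.maximum(std, eps)
```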

* [nightly] Increase version to 0.15.0.dev29

* [nightly] Increase version to 0.15.0.dev30

* [nightly] Increase version to 0.15.0.dev31

* [nightly] Increase version to 0.15.0.dev32

* [nightly] Increase version to 0.15.0.dev33

* [nightly] Increase version to 0.15.0.dev34

* [nightly] Increase version to 0.15.0.dev35

* [nightly] Increase version to 0.15.0.dev36

* [nightly] Increase version to 0.15.0.dev37

* Add tests and typing for effnet architectures.

* Add tests for confusion matrix and rename module to avoid clobbering the module name when importing.

* Add tests for neighbor viz.

* fix typing warnings in new modules.

* Update unsupervised draft to include barlow twins tests.

* Add draft of tf record kaggle notebook.

* [nightly] Increase version to 0.15.0.dev38

* [nightly] Increase version to 0.15.0.dev39

* [nightly] Increase version to 0.15.0.dev40

* [nightly] Increase version to 0.15.0.dev41

* [nightly] Increase version to 0.15.0.dev42

* Fix var mismatch in sampler after merge from master.

* Add WarmUpCosine learning rate schedule.  (#197)

* Add WarmUpCosine learning rate schedule. This is required for the Barlow Twins loss.

* Make WarmUpCosine lr schedule serializable.
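The schedule's math can be sketched in plain Python (illustrative; the actual schedule is a tf.keras LearningRateSchedule, and the parameter names here are assumptions):

```python
import math

def warmup_cosine(step: int, total_steps: int, warmup_steps: int,
                  max_lr: float) -> float:
    """Linear warmup to max_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        # linear ramp from 0 to max_lr over the warmup period
        return max_lr * step / warmup_steps
    # cosine decay over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```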

* [nightly] Increase version to 0.15.0.dev43

* [nightly] Increase version to 0.15.0.dev44

* [nightly] Increase version to 0.15.0.dev45

* Add GeneralizeMeanPooling2D layer (#198)

* Add GeneralizeMeanPooling2D layer

GeM adds support for global mean pooling using the generalized mean.
This enables the pooling to increase or decrease the contrast between
the feature map activations.

Add tests for GeM and MetricEmbedding layers.

* Fix mypy errors.

* Add 1D version for GeM pooling.
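The generalized mean interpolates between average and max pooling, which is how GeM raises or lowers the contrast between activations. A NumPy sketch (parameter names assumed):

```python
import numpy as np

def gem_pool(feature_map: np.ndarray, p: float = 3.0,
             eps: float = 1e-6) -> np.ndarray:
    """Generalized mean pooling over the spatial dims of an (H, W, C) map.

    p = 1 recovers average pooling; as p grows the result approaches max
    pooling, boosting the strongest activations.
    """
    x = np.clip(feature_map, eps, None)  # GeM assumes non-negative inputs
    return np.mean(x ** p, axis=(0, 1)) ** (1.0 / p)
```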

* [nightly] Increase version to 0.15.0.dev46

* [nightly] Increase version to 0.15.0.dev47

* Refactor GeM to reduce code duplication and add test coverage for 1D.

* Rename lr schedule module to schedules to make import cleaner.

- Fix mypy error by defining return type of GeM output_shape.

* [nightly] Increase version to 0.15.0.dev48

* [nightly] Increase version to 0.15.0.dev49

* [nightly] Increase version to 0.15.0.dev50

* Replace GlobalMeanPooling2d with GeneralizedMeanPooling2D and add None to augmenter typing.

* Add support for scaling viz by max pixel value.

* [Bug] Cast steps to dtype in WarmUpCosine. Rename test module to schedules.

* [nightly] Increase version to 0.15.0.dev51

* Update changelog style.

* Add license to schedules module.

* Refer to Layers using the full module path rather than importing the class directly.

* Rename effnet p param to gem_p to make it clear that we are setting the power on the gem layer.

* Major updates to the contrastive model.

- Add support for indexing.
- Add test_step for tracking validation loss.
- Add forward pass method to make it easier to pass data to the training model.
- Update the predict method so that it now passes through the backbone and projector and returns the embedding output layer.
- Various other fixes.

* Major updates to the unsupervised hello world notebook.

- Add support for using effnet as the backbone.
- Clean up the projector and predictor models.
- Provide support for switching between the various self-supervised algos.
- Add example of passing validation data and match classification metrics.
- Update eval code to ensure we take the correct output from the projector.

* [nightly] Increase version to 0.15.0.dev52

* Fix mypy errors in ContrastiveModel.

- distance can be Union[Distance, str]
- embedding_output was an Optional[int], but should just default to 0 and be a simple int.

* Update example notebooks.

* [nightly] Increase version to 0.15.0.dev53

* [nightly] Increase version to 0.15.0.dev54

* [nightly] Increase version to 0.15.0.dev55

* Update supervised hello world.

* [nightly] Increase version to 0.15.0.dev56

* [nightly] Increase version to 0.15.0.dev57

* [nightly] Increase version to 0.15.0.dev58

* [nightly] Increase version to 0.15.0.dev59

* [nightly] Increase version to 0.15.0.dev60

* Added soft_nearest_neighbor_loss. Closes #103 (#203)

* [nightly] Increase version to 0.15.0.dev61

* Add ActivationStdLoggingLayer.

Used for tracking the mean std of a layer's activations during training.

* Add area_range arg to simclr crop_and_resize.

This enables us to call simclr resize and crop with a custom upper and
lower bound.

* Basic formatting and cleanup of doc strings and args.

* Add ResNet architectures.

Add ResNet50Sim and ResNet18Sim architectures.
Add support for standard keras.application args in efficientnet.

* Updates to contrastive model.

- Explicitly add sub-model losses to the combined loss.
- Track contrastive_loss, regularization_loss, and combined_loss separately
- Add metrics property to correctly reset metrics state.

* Updates to unsupervised notebook

* Remove contrastive metrics module.

We now track the layer std using the metric logging layer.

* [nightly] Increase version to 0.15.0.dev62

* Support inner model losses.

The backbone, projector, and predictor models may have layer losses like
kernel_regularization. We now check for these and add additional loss
trackers if required. This enables us to separately monitor the
contrastive and regularization losses, if they exist.

* Fix mypy errors in simsiam loss.

* Various fixes to ResNet18

Now uses SimSiam kernel initialization. This seems to be critical for
proper training on the cifar10 dataset.

* Initial working version of SimSiam on cifar10

* Added SNR distance. Closes #64 (#205)

Added Signal-to-Noise Ratio distance metric as defined in
[Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](https://arxiv.org/abs/1904.02616)

* Small updates for SimSiam example.

Switch to using LeCun Uniform for the kernel_initializers.
Stride should be 1 for the first Conv2D layer in the ResNet18 backbone.

* [nightly] Increase version to 0.15.0.dev63

* [nightly] Increase version to 0.15.0.dev64

* Updates for contrastive model saving.

* [nightly] Increase version to 0.15.0.dev65

* [nightly] Increase version to 0.15.0.dev66

* [nightly] Increase version to 0.15.0.dev67

* [nightly] Increase version to 0.15.0.dev68

* [nightly] Increase version to 0.15.0.dev69

* [nightly] Increase version to 0.15.0.dev70

* [nightly] Increase version to 0.15.0.dev71

* Update losses to use Loss reduction.

Losses previously computed the mean loss over the examples within the call() method. This may create issues when using multi GPU training. The call() method now returns the per example loss, and the final loss is computed using the losses.Loss reduction method.

We also updated the from_config() method to include the parent class's reduction and name args.
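The split between per-example losses and the framework reduction can be sketched as follows (illustrative NumPy; in Keras the reduction is applied by losses.Loss, not by call()):

```python
import numpy as np

def call(per_example_losses: np.ndarray) -> np.ndarray:
    """call() now returns one loss value per example, unreduced."""
    return per_example_losses

def reduce_loss(per_example: np.ndarray,
                reduction: str = "sum_over_batch_size") -> float:
    """The framework-level reduction, applied after call().

    Keeping reduction out of call() lets multi-GPU training scale the
    per-replica sums correctly instead of averaging means of unequal shards.
    """
    if reduction == "sum":
        return float(np.sum(per_example))
    return float(np.sum(per_example) / per_example.shape[0])
```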

* Resnet18 returns as a SimilarityModel.

We may want ResNet18 as a regular model, but we keep the output type as
SimilarityModel to avoid mixed output types.

* Fix various mypy and linter errors.

* Add support for contrastive_model save and load.

* Update unsupervised notebook with save and load.

* Update the save and load.

Add updated example and docs for save and load in the supervised hello world.

* Updates to visualization notebook.

* [nightly] Increase version to 0.15.0.dev72

* Unsupervised notebook update.

* [nightly] Increase version to 0.15.0.dev73

* [nightly] Increase version to 0.15.0.dev74

* [nightly] Increase version to 0.15.0.dev75

* [nightly] Increase version to 0.15.0.dev76

* Notes on the unsupervised notebook draft.

* [nightly] Increase version to 0.15.0.dev77

* [nightly] Increase version to 0.15.0.dev78

* [nightly] Increase version to 0.15.0.dev79

* Remove get_backbone() method and just have users access the backbone attribute directly.

* Add new diagrams and updated copy to the unsupervised notebook.

* [nightly] Increase version to 0.15.0.dev80

* [nightly] Increase version to 0.15.0.dev81

* First finished draft of unsupervised_hello_world notebook

* Updates to the README file. Add self-supervised info.

* [nightly] Increase version to 0.15.0.dev82

* [nightly] Increase version to 0.15.0.dev83

* Update README.md

* Remove augmentation arg from architectures.

Architectures previously took a callable stack of augmentation layers
that would be added after the input of the model. This could cause
issues with saving and training on TPU. Users are now expected to add
augmentation to either the data samplers / datasets or manually add it
to the model.

* Clean up example dir.

* Fix flake8 errors in architectures.

* Update API docs.

* Bump version to 0.15.0

Co-authored-by: Elie Bursztein <github@elie.net>
Co-authored-by: Github Actions Bot <>
Co-authored-by: Stephan Lee <stephanwlee@gmail.com>
Co-authored-by: Abhishar Sinha <24841841+abhisharsinha@users.noreply.github.com>
ebursztein pushed a commit that referenced this pull request May 27, 2022

* Bump minor version to 0.16.0.dev0

* [nightly] Increase version to 0.16.0.dev1

* [nightly] Increase version to 0.16.0.dev2

* [nightly] Increase version to 0.16.0.dev3

* Distance and losses refactor (#222)

* refactor distances call signature and add appropriate tests

* refactor metrics for new distance call signature

* make similarity losses compatible with asymmetric and non-square distance matrices

* adapt and add test

* remove newline

* [nightly] Increase version to 0.16.0.dev4

* [nightly] Increase version to 0.16.0.dev5

* [nightly] Increase version to 0.16.0.dev6

* [nightly] Increase version to 0.16.0.dev7

* [nightly] Increase version to 0.16.0.dev8

* Cross-batch memory (XBM) (#225)

* initiate XBM loss

* add todo

* add XBM tests

* WIP: XBM serialization

* XBM serialization

* class docstring

* remove todo

* improve docstring

* remove comment

* [nightly] Increase version to 0.16.0.dev9

* [nightly] Increase version to 0.16.0.dev10

* [nightly] Increase version to 0.16.0.dev11

* [nightly] Increase version to 0.16.0.dev12

* [nightly] Increase version to 0.16.0.dev13

* [nightly] Increase version to 0.16.0.dev14

* [nightly] Increase version to 0.16.0.dev15

* [nightly] Increase version to 0.16.0.dev16

* [nightly] Increase version to 0.16.0.dev17

* [nightly] Increase version to 0.16.0.dev18

* [nightly] Increase version to 0.16.0.dev19

* [nightly] Increase version to 0.16.0.dev20

* [nightly] Increase version to 0.16.0.dev21

* [nightly] Increase version to 0.16.0.dev22

* Augmentor for Barlow Twins (#229)

* Use list(range()) instead of comprehension as it is more pythonic.

* Create barlow.py

* Bump three in /tensorflow_similarity/visualization/projector_v2 (#228)

Bumps [three](https://github.com/mrdoob/three.js) from 0.132.2 to 0.137.0.
- [Release notes](https://github.com/mrdoob/three.js/releases)
- [Commits](https://github.com/mrdoob/three.js/commits)

---
updated-dependencies:
- dependency-name: three
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Restructure class to be like Augmenter

* Minor fixing of dead links (#230)

* Fixed dead links

* augmenter main to master

* Spelling changes Auto Augment

* MixupAndCutmix main to master

* RandAugment main to master

* RandomErasing main to master

* Update SimCLRAugmenter.md

* Update ClassificationMatch.md

* Update ClassificationMetric.md

* Update Evaluator.md

* Update MemoryEvaluator.md

* Update SimilarityModel.md

* Update BinaryAccuracy.md

* Update F1Score.md

* Update FalsePositiveRate.md

* Update NegativePredictiveValue.md

* Update Precision.md

* Update Recall.md

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* Fix minor typos (#226)

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* Update barlow.py

* Update barlow.py

* Update setup.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* revisions

* Update __init__.py

* Update __init__.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update setup.py

Co-authored-by: Owen S Vallis <ovallis@google.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Owen Vallis <owensvallis@gmail.com>
Co-authored-by: Genrry Hernandez <genrryhernandez@gmail.com>

* Fixed some bugs in augmenter. (#232)

* Create barlow.py

* Restructure class to be like Augmenter

* Update barlow.py

* Update barlow.py

* Update setup.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* revisions

* Update __init__.py

* Update __init__.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update setup.py

* fixed some bugs

* Remove seed instance variable

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* [nightly] Increase version to 0.16.0.dev23

* [nightly] Increase version to 0.16.0.dev24

* [nightly] Increase version to 0.16.0.dev25

* [nightly] Increase version to 0.16.0.dev26

* [nightly] Increase version to 0.16.0.dev27

* [nightly] Increase version to 0.16.0.dev28

* [nightly] Increase version to 0.16.0.dev29

* [nightly] Increase version to 0.16.0.dev30

* [nightly] Increase version to 0.16.0.dev31

* [nightly] Increase version to 0.16.0.dev32

* [nightly] Increase version to 0.16.0.dev33

* [nightly] Increase version to 0.16.0.dev34

* [nightly] Increase version to 0.16.0.dev35

* [nightly] Increase version to 0.16.0.dev36

* [nightly] Increase version to 0.16.0.dev37

* [nightly] Increase version to 0.16.0.dev38

* [nightly] Increase version to 0.16.0.dev39

* [nightly] Increase version to 0.16.0.dev40

* [nightly] Increase version to 0.16.0.dev41

* [nightly] Increase version to 0.16.0.dev42

* [nightly] Increase version to 0.16.0.dev43

* [nightly] Increase version to 0.16.0.dev44

* [nightly] Increase version to 0.16.0.dev45

* [nightly] Increase version to 0.16.0.dev46

* Added test coverage for augmentation functions + barlow, simCLR augmenter  (#235)

* Create test_blur.py

* Create test_color_jitter.py

* Create test_crop.py

* Create test_flip.py

* Update test_crop.py

* Update test_color_jitter.py

* Create test_solarize.py

* Create test_augmenters.py

* Update test_flip.py

* Update test_flip.py

* Update test_flip.py

* Update blur.py

* Update blur.py

* [nightly] Increase version to 0.16.0.dev47

* Change augmenters to use augmentation_utils (#238)

* Fix corrupted JSON formatting in unsupervised notebook.

* Added features of SplitValidationLoss callback to EvalCallback (#242)

* Added features of SplitValidationLoss callback to EvalCallback

Merged SplitValidationLoss into EvalCallback

* Refactored EvalCallback using utils.unpack_results

* [nightly] Increase version to 0.16.0.dev48

* [nightly] Increase version to 0.16.0.dev49

* [nightly] Increase version to 0.16.0.dev50

* VicReg Loss - Improvement of Barlow Twins (#243)

* VicReg Loss

* Update vicreg.py

* Update vicreg.py

* Update vicreg.py

* fix big bug

* Update vicreg.py

* Update vicreg.py

* fixes

* Update vicreg.py

* [nightly] Increase version to 0.16.0.dev51

* [nightly] Increase version to 0.16.0.dev52

* Update tests for algebra.py

* Coverage now at 100%
* Convert tests to use tf.test.TestCase

* [nightly] Increase version to 0.16.0.dev53

* [nightly] Increase version to 0.16.0.dev54

* Fix corrupted formatting in visualization notebook.

* [bug] Fix multisim loss offsets.

The tfsim version of multisim uses distances instead of the inner
product. However, multisim requires that we "center" the pairwise
distances around 0. Here we add a new center param, which we set to 1.0
for cosine distance. Additionally, we also flip the lambda (lmda) param
to add the threshold to the values instead of subtracting it. These
changes will help improve the pos and neg weighting in the log1psumexp.

* [nightly] Increase version to 0.16.0.dev55

* [bug] In losses.utils.logsumexp() tf.math.log(1 + x) should be
tf.math.log(tf.math.exp(-my_max) + x). This is needed to properly
account for removing the rowwise max before computing the logsumexp.
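The corrected identity can be sketched in NumPy (the surrounding multisim code is assumed): after factoring the row-wise max out of log(1 + Σ exp(x)), the leading 1 becomes exp(-my_max), not 1.

```python
import numpy as np

def logsumexp_with_one(x: np.ndarray) -> np.ndarray:
    """Stable row-wise log(1 + sum(exp(x))).

    Identity: log(1 + sum(exp(x)))
            = my_max + log(exp(-my_max) + sum(exp(x - my_max))).
    Writing log(1 + ...) after subtracting my_max silently drops the
    exp(-my_max) term and gives the wrong value.
    """
    my_max = np.max(x, axis=1, keepdims=True)
    shifted = np.sum(np.exp(x - my_max), axis=1, keepdims=True)
    return my_max + np.log(np.exp(-my_max) + shifted)
```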

* Make Augmentation Utilities More Customizable (reuploaded due to branch issues) (#255)

* modifications of benchmark

* test commit 123

* new changes to training

* testo changes

* works in colab... kind of

* code is neat now

* working on sampler problem

* Update barlow.py

* Update blur.py

* Update color_jitter.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Added vicreg for sync

* Update vicreg.py

* Update vicreg.py

* Update vicreg.py

* Update barlow.py

* randomresizedcrop edits

* Update barlow.py

* allow to customize loss reduction

* Update __init__.py

* Delete sampler_test.py

* Delete benchmark/supervised directory

* Update barlow.py

* added docstring on random_resized_crop

* Allow user to set normalization

* Update barlow.py

* Update barlow.py

* Update setup.py

* remove pipfile

* Delete Pipfile

* Delete Pipfile.lock

* Update cropping.py

* Update cropping.py

* Additive multiplicative changes

* Update simclr.py

* change additive, multiplicative

* Update barlow.py

* Update solarize.py

* Update barlow.py

* Update solarize.py

* Update barlow.py

* Update test_solarize.py

* Update test_solarize.py

* Update test_solarize.py

Co-authored-by: Owen Vallis <ovallis@google.com>

* Refactor test_basic to use TestCase to improve flaky test results.

* Fix Flake8 warnings.

* Freeze all batchnorm architecture layers.

We now freeze all BN layers when loading pre-trained weights in the
effnet and resnet50 architectures. Previously, we only froze the BN
layers if trainable was partial or frozen. When trainable was full, the
BN layers would be trainable as well and this led to suboptimal training
losses.

* Improve EfficientNetSim docstring and type hints (#254)

* Fix typos in docstring

* Remove reference to image augmentation

Image augmentation was previously removed, so purge it from the comment and docstring.

* Correct input image type annotation

* Fix #251. Check for model._index before calling Indexer methods.

The Indexer is core to a number of the Similarity model methods. Add
support for checking if the index exists and return a more informative
AttributeError if the index hasn't been created yet.

* Set random seeds for tfrecord samplers test.

* All augmenters use the Tensor type from tensorflow_similarity.types.

* [nightly] Increase version to 0.16.0.dev56

* Fix Tensor type error in callbacks.

Unpacking the Lookup objects converts the python types to Tensors. This
can lead to Tensor type errors. This commit adds support for taking the
expected dtype of the model Tensors where possible.

We also fix a bug where the EvalCallback was not logging the split
metric names in the history.

* Update doc strings in color_jitter.

* Update the create index AttributeError text

* [nightly] Increase version to 0.16.0.dev57

* Update Notebook examples.

* Remove unneeded tf.function and register_keras_serializable decorators.

Subclasses of tf.keras.losses.Loss will trace all child functions and we
only need to register the subclassed loss to support deserialization.

* Simplify MetricEmbedding layer.

* Fix mypy type error in simsiam.

Convert all constants to tf.constant.

* Simplify the MetricEmbedding layer.

Subclass layers.Dense directly. This simplifies the layer and also fixes
function tracing during model save.

* Fix test_tfrecord_samplers tests.

* Update api documentation.

TODO: This just generated the new docs. We still need to go through and
clean up the documentation.

* Update doc string and api for MetricEmbedding layer.

* Bump to version 0.16

* Fix static type check error in memory store.

The np.savez functions expect array_like values but we were passing
List. Casting as np array should solve the issue.

* Fix effnet test for TF 2.9

* Fix TFRecordDatasetSampler now returns correct number of examples per batch.

Co-authored-by: Github Actions Bot <>
Co-authored-by: Abhishar Sinha <24841841+abhisharsinha@users.noreply.github.com>
Co-authored-by: Christoffer Hjort <Christoffer.Hjort1995@gmail.com>
Co-authored-by: dewball345 <abhiraamkumar@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Genrry Hernandez <genrryhernandez@gmail.com>
Co-authored-by: Emil Larsson <emla2805@users.noreply.github.com>
abeltheo pushed a commit to abeltheo/similarity that referenced this pull request Mar 23, 2023
Added Signal-to-Noise Ratio distance metric as defined in
[Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](https://arxiv.org/abs/1904.02616)
abeltheo pushed a commit to abeltheo/similarity that referenced this pull request Mar 23, 2023
* SSL initial working implementation

* Self supervised initial version

* version bump

* Add dev suffix to version number.

* Fix nightly pypi token var in workflow.

* [nightly] Increase version to 0.15.0.dev1

* [nightly] Increase version to 0.15.0.dev2

* vis: Introduce frontend projector (tensorflow#176)

This change introduces a frontend bundle for the custom projector that is
very bare in its functionality. A Python counterpart will follow shortly.

While the change is sizable, most of it is configuration and fixtures
that contain very little logic.

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* [nightly] Increase version to 0.15.0.dev3

* * Add SimClr loss
* Update contrastive model to sum the two example level losses before
computing the mean loss
* Add support for converting cosine similarity to angular similarity
* Update typing issues.
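For reference, cosine similarity can be mapped to angular similarity by normalizing the angle between the vectors; a minimal sketch, assuming similarity values in [-1, 1] (the function name is hypothetical):

```python
import math

def angular_similarity(cos_sim):
    # Map cosine similarity in [-1, 1] to angular similarity in [0, 1]
    # via the angle: 1 - arccos(s) / pi. Clamp guards against float
    # values slightly outside [-1, 1].
    clamped = max(-1.0, min(1.0, cos_sim))
    return 1.0 - math.acos(clamped) / math.pi
```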

* Initial contrastive loss (tensorflow#180)

* refactore example

* Fixed SiamSiam to SimSiam -- experiment with decoder

* implemented saving

* Set the SiamSiamLoss back to SimSiamLoss.

Was reverted during the previous merge.

* Add Batch type alias to cover all basic types accepted by tf.keras.Model.fit()
Add typing to Augmenter
Update unsupervised notebook.

* Updates to unsupervised notebook.

Co-authored-by: Owen Vallis <ovallis@google.com>

* Kaggle (tensorflow#182)

* Kaggle working

* Updates and Markdown for Kaggle notebook.

* Pull updates from Kaggle notebook into kaggle_train script and add argparse for data path.

* Clean up Kaggle examples in the examples directory.

Update links in the examples README.

Co-authored-by: Elie Bursztein <github@elie.net>

* Updates to make simclr loss work as expected.

* Update unsupervised hello world to work with simsiam.

Add Encoder and Projector to get_encoder, and build predictor as
second MLP layer.

Test on the frozen global max pooling layer only.

Add cosine decay for SGD

* More updates to the unsupervised notebook.

* Updates the nightly workflow to purge the pip cache before installing
tf-sim

* Fix workflow versions to python 3.9

PY3.10 is breaking pip install packages.

* [nightly] Increase version to 0.15.0.dev4

* Fix bug in unsupervised notebook.

The no-pretrain eval was overwriting the pretrained encoder.

* [nightly] Increase version to 0.15.0.dev5

* Reduce mean over each simsiam loss separately before summing.

This is closer to the original paper.

Updates to the unsupervised notebook.

* Make SimSiam a subclass of Loss.

Update Notebook to use the cifar10 resnet used in the Keras simsiam
example. Also update the notebook params to confirm we get similar
outputs as the keras example.

Add support for selecting the loss type in the simsiam loss.

Fix angular distance. Was previously returning angular similarity.

* Move loss_type cond from SimSiam.call() to the __init__.

* Fix mypy typing for simsiam loss.

* Unsupervised notebook updates.

* [nightly] Increase version to 0.15.0.dev6

* [nightly] Increase version to 0.15.0.dev7

* [nightly] Increase version to 0.15.0.dev8

* [nightly] Increase version to 0.15.0.dev9

* [nightly] Increase version to 0.15.0.dev10

* [nightly] Increase version to 0.15.0.dev11

* [nightly] Increase version to 0.15.0.dev12

* [nightly] Increase version to 0.15.0.dev13

* [nightly] Increase version to 0.15.0.dev14

* [nightly] Increase version to 0.15.0.dev15

* [nightly] Increase version to 0.15.0.dev16

* [nightly] Increase version to 0.15.0.dev17

* [nightly] Increase version to 0.15.0.dev18

* [nightly] Increase version to 0.15.0.dev19

* [nightly] Increase version to 0.15.0.dev20

* Set the direction of the False Positive Count metric to 'min' as we would like to minimize the FP rate.

* [nightly] Increase version to 0.15.0.dev21

* [nightly] Increase version to 0.15.0.dev22

* Unpacking the Lookup objects could fail if the Lookup sets were of different lengths.

* Add support for loading variable length Lookup sets.
* Impute label and distance values to make all Lookup sets the same length
* * The label is set to 0x7FFFFFFF as this is unlikely to be a class label.
* * The distance is set to math.inf
* Print a warning to notify the user that we are imputing the values and
report the number of short Lookup sets.
* Add tests for all utils.py functions.
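The imputation scheme above can be sketched as follows (the helper name is hypothetical; the sentinel label 0x7FFFFFFF and the math.inf distance follow the commit message):

```python
import math

PAD_LABEL = 0x7FFFFFFF  # unlikely to collide with a real class label

def pad_lookups(lookup_sets):
    # Pad variable-length lookup sets of (label, distance) tuples to the
    # length of the longest set, using the sentinel label and an infinite
    # distance so padded entries sort last.
    target = max(len(s) for s in lookup_sets)
    padded = []
    for s in lookup_sets:
        pad = [(PAD_LABEL, math.inf)] * (target - len(s))
        padded.append(list(s) + pad)
    return padded
```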

* [nightly] Increase version to 0.15.0.dev23

* [nightly] Increase version to 0.15.0.dev24

* [nightly] Increase version to 0.15.0.dev25

* [nightly] Increase version to 0.15.0.dev26

* [nightly] Increase version to 0.15.0.dev27

* Initial version of the Barlow contrastive loss.

* Updates to the simclr and simsiam contrastive losses.

* Refactor contrastive model to subclass similarity model and add support for indexing.

* Small updates to the example notebooks.

* [nightly] Increase version to 0.15.0.dev28

* Fix divide by zero in the barlow column wise normalization and only call barlow loss once for each pair of views.

* [nightly] Increase version to 0.15.0.dev29

* [nightly] Increase version to 0.15.0.dev30

* [nightly] Increase version to 0.15.0.dev31

* [nightly] Increase version to 0.15.0.dev32

* [nightly] Increase version to 0.15.0.dev33

* [nightly] Increase version to 0.15.0.dev34

* [nightly] Increase version to 0.15.0.dev35

* [nightly] Increase version to 0.15.0.dev36

* [nightly] Increase version to 0.15.0.dev37

* Add tests and typing for effnet architectures.

* Add tests for confusion matrix and rename module to avoid clobbering the module name when importing.

* Add tests for neighbor viz.

* fix typing warnings in new modules.

* Update unsupervised draft to include barlow twins tests.

* Add draft of tf record kaggle notebook.

* [nightly] Increase version to 0.15.0.dev38

* [nightly] Increase version to 0.15.0.dev39

* [nightly] Increase version to 0.15.0.dev40

* [nightly] Increase version to 0.15.0.dev41

* [nightly] Increase version to 0.15.0.dev42

* Fix var mismatch in sampler after merge from master.

* Add WarmUpCosine learning rate schedule.  (tensorflow#197)

* Add WarmUpCosine learning rate schedule. This is required for the Barlow Twins loss.

* Make WarmUpCosine lr schedule serializable.
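A minimal sketch of a warmup-plus-cosine-decay schedule of the kind described above (all names and defaults are illustrative assumptions, not the tfsim API):

```python
import math

def warmup_cosine(step, total_steps, warmup_steps, max_lr):
    # Linear warmup from 0 to max_lr over warmup_steps, then cosine
    # decay from max_lr down to 0 over the remaining steps.
    if step < warmup_steps:
        return max_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```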

* [nightly] Increase version to 0.15.0.dev43

* [nightly] Increase version to 0.15.0.dev44

* [nightly] Increase version to 0.15.0.dev45

* Add GeneralizeMeanPooling2D layer (tensorflow#198)

* Add GeneralizeMeanPooling2D layer

GeM adds support for global mean pooling using the generalized mean.
This enables the pooling to increase or decrease the contrast between
the feature map activations.

Add tests for GeM and MetricEmbedding layers.

* Fix mypy errors.

* Add 1D version for GeM pooling.
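The generalized mean (GeM) pooling described above can be sketched in numpy; `p=1` recovers mean pooling while large `p` approaches max pooling (the function name and eps default are illustrative):

```python
import numpy as np

def generalized_mean_pool(x, p=3.0, eps=1e-6):
    # x: (batch, height, width, channels) feature maps.
    # GeM = (mean(x^p))^(1/p) over the spatial dimensions; clipping at
    # eps keeps the fractional power well-defined.
    x = np.clip(x, eps, None)
    return np.mean(x ** p, axis=(1, 2)) ** (1.0 / p)
```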

* [nightly] Increase version to 0.15.0.dev46

* [nightly] Increase version to 0.15.0.dev47

* Refactor GeM to reduce code duplication and add test coverage for 1D.

* Rename lr schedule module to schedules to make import cleaner.

- Fix mypy error by defining return type of GeM output_shape.

* [nightly] Increase version to 0.15.0.dev48

* [nightly] Increase version to 0.15.0.dev49

* [nightly] Increase version to 0.15.0.dev50

* Replace GlobalMeanPooling2d with GeneralizedMeanPooling2D and add None to augmenter typing.

* Add support for scaling viz by max pixel value.

* [Bug] Cast steps to dtype in WarmUpCosine. Rename test module to schedules.

* [nightly] Increase version to 0.15.0.dev51

* Update changelog style.

* Add license to schedules module.

* Refer to Layers using the full module path rather than importing the class directly.

* Rename effnet p param to gem_p to make it clear that we are setting the power on the gem layer.

* Major updates to the contrastive model.

- Add support for indexing.
- Add test_step for tracking validation loss.
- Add forward pass method to make it easier to pass data to the training model.
- Update the predict method so that it now passes through the backbone and projector and returns the embedding output layer.
- Various other fixes.

* Major updates to the unsupervised hello world notebook.

- Add support for using effnet as the backbone.
- Clean up the projector and predictor models.
- Provide support for switching between the various self-supervised algos.
- Add example of passing validation data and match classification metrics.
- Update eval code to ensure we take the correct output from the projector.

* [nightly] Increase version to 0.15.0.dev52

* Fix mypy errors in ContrastiveModel.

- distance can be Union[Distance, str]
- embedding_output was an Optional[int], but should just default to 0 and be a simple int.

* Update example notebooks.

* [nightly] Increase version to 0.15.0.dev53

* [nightly] Increase version to 0.15.0.dev54

* [nightly] Increase version to 0.15.0.dev55

* Update supervised hello world.

* [nightly] Increase version to 0.15.0.dev56

* [nightly] Increase version to 0.15.0.dev57

* [nightly] Increase version to 0.15.0.dev58

* [nightly] Increase version to 0.15.0.dev59

* [nightly] Increase version to 0.15.0.dev60

* Added soft_nearest_neighbor_loss. Closes tensorflow#103 (tensorflow#203)

* [nightly] Increase version to 0.15.0.dev61

* Add ActivationStdLoggingLayer.

Used for tracking the mean std of a layer's activations during training.

* Add area_range arg to simclr crop_and_resize.

This enables us to call simclr resize and crop with a custom upper and
lower bound.

* Basic formatting and cleanup of doc strings and args.

* Add ResNet architectures.

Add ResNet50Sim and ResNet18Sim architectures.
Add support for standard keras.application args in efficientnet.

* Updates to contrastive model.

- Explicitly add sub models losses to combined loss.
- Track contrastive_loss, regularization_loss, and combined_loss separately
- Add metrics property to correctly reset metrics state.

* Updates to unsupervised notebook

* Remove contrastive metrics module.

We now track the layer std using the metric logging layer.

* [nightly] Increase version to 0.15.0.dev62

* Support inner model losses.

The backbone, projector, and predictor models may have layer losses like
kernel_regularization. We now check for these and add additional loss
trackers if required. This enables us to separately monitor the
contrastive and regularization losses, if they exist.

* Fix mypy errors in simsiam loss.

* Various fixes to ResNet18

Now uses SimSiam kernel initialization. This seems to be critical for
proper training on the cifar10 dataset.

* Initial working version of SimSiam on cifar10

* Added SNR distance. Closes tensorflow#64 (tensorflow#205)

Added Signal-to-Noise Ratio distance metric as defined in
[Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](https://arxiv.org/abs/1904.02616)
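Following the paper, the SNR distance is the variance of the difference (the "noise") divided by the variance of the anchor; a minimal sketch for a single pair (the helper name is hypothetical):

```python
import numpy as np

def snr_distance(anchor, other):
    # Signal-to-Noise Ratio distance: var(other - anchor) / var(anchor).
    # Note the measure is asymmetric in anchor and other.
    noise = other - anchor
    return np.var(noise) / np.var(anchor)
```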

* Small updates for SimSiam example.

Switch to using LeCun uniform for the kernel_initializers.
Stride should be 1 for the first Conv2d layer in the ResNet18 backbone.

* [nightly] Increase version to 0.15.0.dev63

* [nightly] Increase version to 0.15.0.dev64

* Updates for contrastive model saving.

* [nightly] Increase version to 0.15.0.dev65

* [nightly] Increase version to 0.15.0.dev66

* [nightly] Increase version to 0.15.0.dev67

* [nightly] Increase version to 0.15.0.dev68

* [nightly] Increase version to 0.15.0.dev69

* [nightly] Increase version to 0.15.0.dev70

* [nightly] Increase version to 0.15.0.dev71

* Update losses to use Loss reduction.

Losses previously computed the mean loss over the examples within the call() method. This may create issues when using multi GPU training. The call() method now returns the per example loss, and the final loss is computed using the losses.Loss reduction method.

We also updated the from_config() method to include the parent class's reduction and name args.
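The pattern described above — call() returns per-example losses while the reduction happens in the base class — can be sketched without TensorFlow (class names and config keys here are illustrative, not the Keras API):

```python
class Loss:
    """Minimal sketch of Keras-style loss reduction: subclasses implement
    call() returning per-example losses; __call__() applies the reduction."""

    def __init__(self, reduction="mean", name=None):
        self.reduction = reduction
        self.name = name

    def call(self, y_true, y_pred):
        raise NotImplementedError

    def __call__(self, y_true, y_pred):
        per_example = self.call(y_true, y_pred)
        if self.reduction == "mean":
            return sum(per_example) / len(per_example)
        return per_example

    def get_config(self):
        # Serialization includes the parent class's reduction and name args.
        return {"reduction": self.reduction, "name": self.name}


class AbsDiffLoss(Loss):
    def call(self, y_true, y_pred):
        # Return the loss per example; no mean taken here.
        return [abs(t - p) for t, p in zip(y_true, y_pred)]
```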

* Resnet18 returns as a SimilarityModel.

We may want Resnet18 as a regular model, but keeping the output type as
SimilarityModel to avoid mixed output types.

* Fix various mypy and linter errors.

* Add support for contrastive_model save and load.

* Update unsupervised notebook with save and load.

* Update the save and load.

Add updated example and docs for save and load in the supervised hello world.

* Updates to visualization notebook.

* [nightly] Increase version to 0.15.0.dev72

* Unsupervised notebook update.

* [nightly] Increase version to 0.15.0.dev73

* [nightly] Increase version to 0.15.0.dev74

* [nightly] Increase version to 0.15.0.dev75

* [nightly] Increase version to 0.15.0.dev76

* Notes on the unsupervised notebook draft.

* [nightly] Increase version to 0.15.0.dev77

* [nightly] Increase version to 0.15.0.dev78

* [nightly] Increase version to 0.15.0.dev79

* Remove get_backbone() method and just have users access the backbone attribute directly.

* Add new diagrams and updated copy to the unsupervised notebook.

* [nightly] Increase version to 0.15.0.dev80

* [nightly] Increase version to 0.15.0.dev81

* First finished draft of unsupervised_hello_world notebook

* Updates to the README file. Add self-supervised info.

* [nightly] Increase version to 0.15.0.dev82

* [nightly] Increase version to 0.15.0.dev83

* Update README.md

* Remove augmentation arg from architectures.

Architectures previously took a callable stack of augmentation layers
that would be added after the input of the model. This could cause
issues with saving and training on TPU. Users are now expected to add
augmentation to either the data samplers / datasets or manually add it
to the model.

* Clean up example dir.

* Fix flake8 errors in architectures.

* Update API docs.

* Bump version to 0.15.0

Co-authored-by: Elie Bursztein <github@elie.net>
Co-authored-by: Github Actions Bot <>
Co-authored-by: Stephan Lee <stephanwlee@gmail.com>
Co-authored-by: Abhishar Sinha <24841841+abhisharsinha@users.noreply.github.com>
abeltheo pushed a commit to abeltheo/similarity that referenced this pull request Mar 23, 2023
…#260)

* [nightly] Increase version to 0.15.0.dev29

* [nightly] Increase version to 0.15.0.dev30

* [nightly] Increase version to 0.15.0.dev31

* [nightly] Increase version to 0.15.0.dev32

* [nightly] Increase version to 0.15.0.dev33

* [nightly] Increase version to 0.15.0.dev34

* [nightly] Increase version to 0.15.0.dev35

* [nightly] Increase version to 0.15.0.dev36

* [nightly] Increase version to 0.15.0.dev37

* Add tests and typing for effnet architectures.

* Add tests for confusion matrix and rename module to avoid clobbering the module name when importing.

* Add tests for neighbor viz.

* fix typing warnings in new modules.

* Update unsupervised draft to include barlow twins tests.

* Add draft of tf record kaggle notebook.

* [nightly] Increase version to 0.15.0.dev38

* [nightly] Increase version to 0.15.0.dev39

* [nightly] Increase version to 0.15.0.dev40

* [nightly] Increase version to 0.15.0.dev41

* [nightly] Increase version to 0.15.0.dev42

* Fix var mismatch in sampler after merge from master.

* Add WarmUpCosine learning rate schedule.  (tensorflow#197)

* Add WarmUpCosine learning rate schedule. This is required for the Barlow Twins loss.

* Make WarmUpCosine lr schedule serializable.

* [nightly] Increase version to 0.15.0.dev43

* [nightly] Increase version to 0.15.0.dev44

* [nightly] Increase version to 0.15.0.dev45

* Add GeneralizeMeanPooling2D layer (tensorflow#198)

* Add GeneralizeMeanPooling2D layer

GeM adds support for global mean pooling using the generalized mean.
This enables the pooling to increase or decrease the contrast between
the feature map activations.

Add tests for GeM and MetricEmbedding layers.

* Fix mypy errors.

* Add 1D version for GeM pooling.

* [nightly] Increase version to 0.15.0.dev46

* [nightly] Increase version to 0.15.0.dev47

* Refactor GeM to reduce code duplication and add test coverage for 1D.

* Rename lr schedule module to schedules to make import cleaner.

- Fix mypy error by defining return type of GeM output_shape.

* [nightly] Increase version to 0.15.0.dev48

* [nightly] Increase version to 0.15.0.dev49

* [nightly] Increase version to 0.15.0.dev50

* Replace GlobalMeanPooling2d with GeneralizedMeanPooling2D and add None to augmenter typing.

* Add support for scaling viz by max pixel value.

* [Bug] Cast steps to dtype in WarmUpCosine. Rename test module to schedules.

* [nightly] Increase version to 0.15.0.dev51

* Update changelog style.

* Add license to schedules module.

* Refer to Layers using the full module path rather than importing the class directly.

* Rename effnet p param to gem_p to make it clear that we are setting the power on the gem layer.

* Major updates to the contrastive model.

- Add support for indexing.
- Add test_step for tracking validation loss.
- Add forward pass method to make it easier to pass data to the training model.
- Update the predict method so that it now passes through the backbone and projector and returns the embedding output layer.
- Various other fixes.

* Major updates to the unsupervised hello world notebook.

- Add support for using effnet as the backbone.
- Clean up the projector and predictor models.
- Provide support for switching between the various self-supervised algos.
- Add example of passing validation data and match classification metrics.
- Update eval code to ensure we take the correct output from the projector.

* [nightly] Increase version to 0.15.0.dev52

* Fix mypy errors in ContrastiveModel.

- distance can be Union[Distance, str]
- embedding_output was an Optional[int], but should just default to 0 and be a simple int.

* Update example notebooks.

* [nightly] Increase version to 0.15.0.dev53

* [nightly] Increase version to 0.15.0.dev54

* [nightly] Increase version to 0.15.0.dev55

* Update supervised hello world.

* [nightly] Increase version to 0.15.0.dev56

* [nightly] Increase version to 0.15.0.dev57

* [nightly] Increase version to 0.15.0.dev58

* [nightly] Increase version to 0.15.0.dev59

* [nightly] Increase version to 0.15.0.dev60

* Added soft_nearest_neighbor_loss. Closes tensorflow#103 (tensorflow#203)

* [nightly] Increase version to 0.15.0.dev61

* Add ActivationStdLoggingLayer.

Used for tracking the mean std of a layer's activations during training.

* Add area_range arg to simclr crop_and_resize.

This enables us to call simclr resize and crop with a custom upper and
lower bound.

* Basic formatting and cleanup of doc strings and args.

* Add ResNet architectures.

Add ResNet50Sim and ResNet18Sim architectures.
Add support for standard keras.application args in efficientnet.

* Updates to contrastive model.

- Explicitly add sub models losses to combined loss.
- Track contrastive_loss, regularization_loss, and combined_loss separately
- Add metrics property to correctly reset metrics state.

* Updates to unsupervised notebook

* Remove contrastive metrics module.

We now track the layer std using the metric logging layer.

* [nightly] Increase version to 0.15.0.dev62

* Support inner model losses.

The backbone, projector, and predictor models may have layer losses like
kernel_regularization. We now check for these and add additional loss
trackers if required. This enables us to separately monitor the
contrastive and regularization losses, if they exist.

* Fix mypy errors in simsiam loss.

* Various fixes to ResNet18

Now uses SimSiam kernel initialization. This seems to be critical for
proper training on the cifar10 dataset.

* Initial working version of SimSiam on cifar10

* Added SNR distance. Closes tensorflow#64 (tensorflow#205)

Added Signal-to-Noise Ratio distance metric as defined in
[Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](https://arxiv.org/abs/1904.02616)

* Small updates for SimSiam example.

Switch to using LeCun uniform for the kernel_initializers.
Stride should be 1 for the first Conv2d layer in the ResNet18 backbone.

* [nightly] Increase version to 0.15.0.dev63

* [nightly] Increase version to 0.15.0.dev64

* Updates for contrastive model saving.

* [nightly] Increase version to 0.15.0.dev65

* [nightly] Increase version to 0.15.0.dev66

* [nightly] Increase version to 0.15.0.dev67

* [nightly] Increase version to 0.15.0.dev68

* [nightly] Increase version to 0.15.0.dev69

* [nightly] Increase version to 0.15.0.dev70

* [nightly] Increase version to 0.15.0.dev71

* Update losses to use Loss reduction.

Losses previously computed the mean loss over the examples within the call() method. This may create issues when using multi GPU training. The call() method now returns the per example loss, and the final loss is computed using the losses.Loss reduction method.

We also updated the from_config() method to include the parent class's reduction and name args.

* Resnet18 returns as a SimilarityModel.

We may want Resnet18 as a regular model, but keeping the output type as
SimilarityModel to avoid mixed output types.

* Fix various mypy and linter errors.

* Add support for contrastive_model save and load.

* Update unsupervised notebook with save and load.

* Update the save and load.

Add updated example and docs for save and load in the supervised hello world.

* Updates to visualization notebook.

* [nightly] Increase version to 0.15.0.dev72

* Unsupervised notebook update.

* [nightly] Increase version to 0.15.0.dev73

* [nightly] Increase version to 0.15.0.dev74

* [nightly] Increase version to 0.15.0.dev75

* [nightly] Increase version to 0.15.0.dev76

* Notes on the unsupervised notebook draft.

* [nightly] Increase version to 0.15.0.dev77

* [nightly] Increase version to 0.15.0.dev78

* [nightly] Increase version to 0.15.0.dev79

* Remove get_backbone() method and just have users access the backbone attribute directly.

* Add new diagrams and updated copy to the unsupervised notebook.

* [nightly] Increase version to 0.15.0.dev80

* [nightly] Increase version to 0.15.0.dev81

* First finished draft of unsupervised_hello_world notebook

* Updates to the README file. Add self-supervised info.

* [nightly] Increase version to 0.15.0.dev82

* [nightly] Increase version to 0.15.0.dev83

* Update README.md

* Remove augmentation arg from architectures.

Architectures previously took a callable stack of augmentation layers
that would be added after the input of the model. This could cause
issues with saving and training on TPU. Users are now expected to add
augmentation to either the data samplers / datasets or manually add it
to the model.

* Clean up example dir.

* Fix flake8 errors in architectures.

* Update API docs.

* Bump version to 0.15.0

* Bump minor version to 0.16.0.dev0

* [nightly] Increase version to 0.16.0.dev1

* [nightly] Increase version to 0.16.0.dev2

* [nightly] Increase version to 0.16.0.dev3

* Distance and losses refactor (tensorflow#222)

* refactor distances call signature and add appropriate tests

* refactor metrics for new distance call signature

* make similarity losses compatible with asymmetric and non-square distance matrices

* adapt and add test

* remove newline

* [nightly] Increase version to 0.16.0.dev4

* [nightly] Increase version to 0.16.0.dev5

* [nightly] Increase version to 0.16.0.dev6

* [nightly] Increase version to 0.16.0.dev7

* [nightly] Increase version to 0.16.0.dev8

* Cross-batch memory (XBM) (tensorflow#225)

* initiate XBM loss

* add todo

* add XBM tests

* WIP: XBM serialization

* XBM serialization

* class docstring

* remove todo

* improve docstring

* remove comment
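Cross-batch memory (XBM) keeps a FIFO queue of embeddings from past batches so the loss can mine pairs beyond the current batch; a minimal sketch (class and parameter names are illustrative assumptions, not the tfsim API):

```python
from collections import deque

import numpy as np

class CrossBatchMemory:
    """FIFO memory of past embeddings and labels; once full, the oldest
    entries are evicted as new batches arrive."""

    def __init__(self, memory_size=1024):
        self.embeddings = deque(maxlen=memory_size)
        self.labels = deque(maxlen=memory_size)

    def update(self, embeddings, labels):
        # Append the current batch; deque's maxlen handles eviction.
        self.embeddings.extend(embeddings)
        self.labels.extend(labels)

    def get(self):
        # Return the memory contents for pair mining with the current batch.
        return np.array(self.embeddings), np.array(self.labels)
```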

* [nightly] Increase version to 0.16.0.dev9

* [nightly] Increase version to 0.16.0.dev10

* [nightly] Increase version to 0.16.0.dev11

* [nightly] Increase version to 0.16.0.dev12

* [nightly] Increase version to 0.16.0.dev13

* [nightly] Increase version to 0.16.0.dev14

* [nightly] Increase version to 0.16.0.dev15

* [nightly] Increase version to 0.16.0.dev16

* [nightly] Increase version to 0.16.0.dev17

* [nightly] Increase version to 0.16.0.dev18

* [nightly] Increase version to 0.16.0.dev19

* [nightly] Increase version to 0.16.0.dev20

* [nightly] Increase version to 0.16.0.dev21

* [nightly] Increase version to 0.16.0.dev22

* Augmentor for Barlow Twins (tensorflow#229)

* Use list(range()) instead of comprehension as it is more pythonic.

* Create barlow.py

* Bump three in /tensorflow_similarity/visualization/projector_v2 (tensorflow#228)

Bumps [three](https://github.com/mrdoob/three.js) from 0.132.2 to 0.137.0.
- [Release notes](https://github.com/mrdoob/three.js/releases)
- [Commits](https://github.com/mrdoob/three.js/commits)

---
updated-dependencies:
- dependency-name: three
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Restructure class to be like Augmenter

* Minor fixing of dead links (tensorflow#230)

* Fixed dead links

* augmenter main to master

* Spelling changes Auto Augment

* MixupAndCutmix main to master

* RandAugment main to master

* RandomErasing main to master

* Update SimCLRAugmenter.md

* Update ClassificationMatch.md

* Update ClassificationMetric.md

* Update Evaluator.md

* Update MemoryEvaluator.md

* Update SimilarityModel.md

* Update BinaryAccuracy.md

* Update F1Score.md

* Update FalsePositiveRate.md

* Update NegativePredictiveValue.md

* Update Precision.md

* Update Recall.md

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* Fix minor typos (tensorflow#226)

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* Update barlow.py

* Update barlow.py

* Update setup.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* revisions

* Update __init__.py

* Update __init__.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update setup.py

Co-authored-by: Owen S Vallis <ovallis@google.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Owen Vallis <owensvallis@gmail.com>
Co-authored-by: Genrry Hernandez <genrryhernandez@gmail.com>

* Fixed some bugs in augmenter. (tensorflow#232)

* Create barlow.py

* Restructure class to be like Augmenter

* Update barlow.py

* Update barlow.py

* Update setup.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* revisions

* Update __init__.py

* Update __init__.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Update barlow.py

* Update setup.py

* fixed some bugs

* Remove seed instance variable

Co-authored-by: Owen Vallis <owensvallis@gmail.com>

* [nightly] Increase version to 0.16.0.dev23

* [nightly] Increase version to 0.16.0.dev24

* [nightly] Increase version to 0.16.0.dev25

* [nightly] Increase version to 0.16.0.dev26

* [nightly] Increase version to 0.16.0.dev27

* [nightly] Increase version to 0.16.0.dev28

* [nightly] Increase version to 0.16.0.dev29

* [nightly] Increase version to 0.16.0.dev30

* [nightly] Increase version to 0.16.0.dev31

* [nightly] Increase version to 0.16.0.dev32

* [nightly] Increase version to 0.16.0.dev33

* [nightly] Increase version to 0.16.0.dev34

* [nightly] Increase version to 0.16.0.dev35

* [nightly] Increase version to 0.16.0.dev36

* [nightly] Increase version to 0.16.0.dev37

* [nightly] Increase version to 0.16.0.dev38

* [nightly] Increase version to 0.16.0.dev39

* [nightly] Increase version to 0.16.0.dev40

* [nightly] Increase version to 0.16.0.dev41

* [nightly] Increase version to 0.16.0.dev42

* [nightly] Increase version to 0.16.0.dev43

* [nightly] Increase version to 0.16.0.dev44

* [nightly] Increase version to 0.16.0.dev45

* [nightly] Increase version to 0.16.0.dev46

* Added test coverage for augmentation functions + barlow, simCLR augmenter  (tensorflow#235)

* Create test_blur.py

* Create test_color_jitter.py

* Create test_crop.py

* Create test_flip.py

* Update test_crop.py

* Update test_color_jitter.py

* Create test_solarize.py

* Create test_augmenters.py

* Update test_flip.py

* Update test_flip.py

* Update test_flip.py

* Update blur.py

* Update blur.py

* [nightly] Increase version to 0.16.0.dev47

* Change augmenters to use augmentation_utils (tensorflow#238)

* Fix corrupted JSON formatting in unsupervised notebook.

* Added features of SplitValidationLoss callback to EvalCallback (tensorflow#242)

* Added features of SplitValidationLoss callback to EvalCallback

Merged SplitValidationLoss into EvalCallbaclk

* Refactored EvalCallback using utils.unpack_results

* [nightly] Increase version to 0.16.0.dev48

* [nightly] Increase version to 0.16.0.dev49

* [nightly] Increase version to 0.16.0.dev50

* VicReg Loss - Improvement of Barlow Twins (tensorflow#243)

* VicReg Loss

* Update vicreg.py

* Update vicreg.py

* Update vicreg.py

* fix big bug

* Update vicreg.py

* Update vicreg.py

* fixes

* Update vicreg.py

* [nightly] Increase version to 0.16.0.dev51

* [nightly] Increase version to 0.16.0.dev52

* Update tests for algebra.py

* Coverage now at 100%
* Convert tests to use tf.testing.TestCase

* [nightly] Increase version to 0.16.0.dev53

* [nightly] Increase version to 0.16.0.dev54

* Fix corrupted formatting in visualization notebook.

* [bug] Fix multisim loss offsets.

The tfsim version of multisim uses distances instead of the inner
product. However, multisim requires that we "center" the pairwise
distances around 0. Here we add a new center param, which we set to 1.0
for cosine distance. Additionally, we also flip the lambda (lmda) param
to add the threshold to the values instead of subtracting it. These
changes will help improve the pos and neg weighting in the log1psumexp.
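The centering described above can be sketched in NumPy. This is a hypothetical helper, not the tfsim implementation; the `center` and `lmda` names follow the commit description, and the exact sign conventions should be checked against the actual loss code:

```python
import numpy as np

def center_distances(pairwise_dists, center=1.0, lmda=0.5):
    """Center pairwise distances around 0 for the multisim log1p-sum-exp.

    For cosine distance d = 1 - cos_sim, center=1.0 turns the distance
    back into a similarity (center - d = cos_sim). Per the fix, the
    threshold lmda is then added to the centered values rather than
    subtracted from them.
    """
    return (center - np.asarray(pairwise_dists)) + lmda

# A cosine distance of 0.2 (cos_sim = 0.8) with lmda = 0.5:
print(center_distances([0.2]))  # -> [1.3]
```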

* [nightly] Increase version to 0.16.0.dev55

* [bug] In losses.utils.logsumexp() tf.math.log(1 + x) should be
tf.math.log(tf.math.exp(-my_max) + x). This is needed to properly
account for removing the rowwise max before computing the logsumexp.
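The correction is the standard max-subtraction trick applied to log(1 + sum(exp(s))): once the row-wise max m is factored out, the leading 1 becomes exp(-m). A minimal NumPy sketch (hypothetical helper, not the tfsim `losses.utils.logsumexp` code):

```python
import numpy as np

def log1p_sum_exp(s, axis=-1):
    """Numerically stable log(1 + sum(exp(s))).

    Pulling out the row-wise max m gives
        log(1 + sum(exp(s))) = m + log(exp(-m) + sum(exp(s - m))),
    which is the fix described above: after removing m, the leading 1
    must become exp(-m), not stay as 1.
    """
    m = np.max(s, axis=axis, keepdims=True)
    out = m + np.log(np.exp(-m) + np.sum(np.exp(s - m), axis=axis, keepdims=True))
    return np.squeeze(out, axis=axis)

# The naive form overflows for large logits; the stable form does not:
print(log1p_sum_exp(np.array([[0.0, 1000.0]])))  # -> [1000.]
```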

* Make Augmentation Utilities More Customizable (reupload due to branch issues) (tensorflow#255)

* modifications of benchmark

* test commit 123

* new changes to training

* test changes

* works in colab... kind of

* code is neat now

* working on sampler problem

* Update barlow.py

* Update blur.py

* Update color_jitter.py

* Update color_jitter.py

* Update barlow.py

* Update barlow.py

* Added vicreg for sync

* Update vicreg.py

* Update vicreg.py

* Update vicreg.py

* Update barlow.py

* randomresizedcrop edits

* Update barlow.py

* allow customizing loss reduction

* Update __init__.py

* Delete sampler_test.py

* Delete benchmark/supervised directory

* Update barlow.py

* added docstring on random_resized_crop

* Allow user to set normalization

* Update barlow.py

* Update barlow.py

* Update setup.py

* remove pipfile

* Delete Pipfile

* Delete Pipfile.lock

* Update cropping.py

* Update cropping.py

* Additive multiplicative changes

* Update simclr.py

* change additive, multiplicative

* Update barlow.py

* Update solarize.py

* Update barlow.py

* Update solarize.py

* Update barlow.py

* Update test_solarize.py

* Update test_solarize.py

* Update test_solarize.py

Co-authored-by: Owen Vallis <ovallis@google.com>

* Refactor test_basic to use TestCase to improve flaky test results.

* Fix Flake8 warnings.

* Freeze all batchnorm architecture layers.

We now freeze all BN layers when loading pre-trained weights in the
effnet and resnet50 architectures. Previously, we only froze the BN
layers if trainable was partial or frozen. When trainable was full, the
BN layers would be trainable as well and this led to suboptimal training
losses.
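The freezing logic can be sketched without TensorFlow installed. With a real Keras model you would iterate `model.layers` and match `tf.keras.layers.BatchNormalization`; here stand-in classes are used so the sketch stays self-contained (all names below are illustrative, not the tfsim architecture code):

```python
def freeze_batchnorm_layers(layers):
    """Mark every batch-norm layer non-trainable, regardless of the
    model-level trainable setting (full, partial, or frozen)."""
    for layer in layers:
        if type(layer).__name__ == "BatchNormalization":
            layer.trainable = False

# Minimal stand-ins to illustrate usage without a TF dependency.
class BatchNormalization:
    trainable = True

class Dense:
    trainable = True

layers = [Dense(), BatchNormalization(), Dense()]
freeze_batchnorm_layers(layers)
print([layer.trainable for layer in layers])  # -> [True, False, True]
```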

* Improve EfficientNetSim docstring and type hints (tensorflow#254)

* Fix typos in docstring

* Remove reference to image augmentation

Image augmentation was previously removed, so purge it from the comment and docstring.

* Correct input image type annotation

* Fix tensorflow#251. Check for model._index before calling Indexer methods.

The Indexer is core to a number of the Similarity model methods. Add
support for checking if the index exists and return a more informative
AttributeError if the index hasn't been created yet.
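A minimal sketch of that guard, assuming hypothetical class and method names (the real `SimilarityModel` API may differ):

```python
class SimilarityModelSketch:
    """Illustrates checking model._index before using Indexer methods."""

    def __init__(self):
        self._index = None  # set once the index is created

    def _check_index(self, method_name):
        # Raise a more informative error than the bare missing-attribute
        # message the user would otherwise hit deep inside the Indexer.
        if self._index is None:
            raise AttributeError(
                f"{method_name} requires an index, but none has been "
                "created yet. Build the index before calling this method."
            )

    def lookup(self, embedding):
        self._check_index("lookup()")
        return self._index.query(embedding)

model = SimilarityModelSketch()
try:
    model.lookup([0.1, 0.2])
except AttributeError as err:
    print(err)
```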

* Set random seeds for tfrecord samplers test.

* All augmenters use the Tensor type from tensorflow_similarity.types.

* [nightly] Increase version to 0.16.0.dev56

* Fix Tensor type error in callbacks.

Unpacking the Lookup objects converts the python types to Tensors. This
can lead to Tensor type errors. This commit adds support for taking the
expected dtype of the model Tensors where possible.
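A hedged sketch of that casting; the dict-shaped lookups and field names below are assumptions for illustration, not the actual tfsim `Lookup` objects:

```python
import numpy as np

def unpack_distances(lookups, expected_dtype=np.float32):
    """Cast python-typed lookup distances to the model's expected dtype.

    Unpacking lookup results yields plain Python floats; converting them
    through np.asarray with one explicit dtype avoids mixed tensor-type
    errors when the values are fed back into TensorFlow ops.
    """
    return np.asarray([lk["distance"] for lk in lookups], dtype=expected_dtype)

dists = unpack_distances([{"distance": 0.25}, {"distance": 0.5}])
print(dists.dtype)  # -> float32
```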

We also fix a bug where the EvalCallback was not logging the split
metric names in the history.

* Update doc strings in color_jitter.

* Update the create index AttributeError text

* [nightly] Increase version to 0.16.0.dev57

* Update Notebook examples.

* Remove unneeded tf.function and register_keras_serializable decorators.

Subclasses of tf.keras.losses.Loss will trace all child functions and we
only need to register the subclassed loss to support deserialization.

* Simplify MetricEmbedding layer.

* Fix mypy type error in simsiam.

Convert all constants to tf.constant.

* Simplify the MetricEmbedding layer.

Subclass layers.Dense directly. This simplifies the layer and also fixes
function tracing during model save.
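Numerically, the layer behaves like a Dense projection followed by L2 normalization of each embedding; a NumPy sketch of that computation (an assumption about the layer's behavior — verify against the actual MetricEmbedding implementation):

```python
import numpy as np

def metric_embedding(x, kernel, bias):
    """Dense projection followed by row-wise L2 normalization.

    Subclassing layers.Dense directly means Keras traces the call like
    any Dense layer during model save; the normalization is the only
    extra step.
    """
    z = x @ kernel + bias
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

out = metric_embedding(np.ones((2, 3)), np.ones((3, 4)), np.zeros(4))
print(np.linalg.norm(out, axis=-1))  # each embedding row has unit norm
```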

* Fix test_tfrecord_samplers tests.

* Update api documentation.

TODO: This just generated the new docs. We still need to go through and
clean up the documentation.

* Update doc string and api for MetricEmbedding layer.

* Bump to version 0.16

* Fix static type check error in memory store.

The np.savez functions expect array_like values but we were passing
List. Casting as np array should solve the issue.
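The cast can be sketched as follows (hypothetical helper and field names, not the actual memory-store code):

```python
import os
import tempfile

import numpy as np

def save_store(path, embeddings, labels):
    """Persist memory-store contents with np.savez.

    np.savez expects array_like values; casting the Python lists to
    ndarrays up front satisfies both numpy and static type checkers.
    """
    np.savez(path, embeddings=np.asarray(embeddings), labels=np.asarray(labels))

with tempfile.TemporaryDirectory() as tmp:
    p = os.path.join(tmp, "store.npz")
    save_store(p, [[0.1, 0.2]], [1])
    print(np.load(p)["labels"])  # -> [1]
```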

* Fix effnet test for TF 2.9

* Fix: TFRecordDatasetSampler now returns the correct number of examples per batch.

Co-authored-by: Github Actions Bot <>
Co-authored-by: Abhishar Sinha <24841841+abhisharsinha@users.noreply.github.com>
Co-authored-by: Christoffer Hjort <Christoffer.Hjort1995@gmail.com>
Co-authored-by: dewball345 <abhiraamkumar@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Genrry Hernandez <genrryhernandez@gmail.com>
Co-authored-by: Emil Larsson <emla2805@users.noreply.github.com>