
Add support of segmentation mask in Augmix layer #1988

Merged
merged 9 commits into from
Jul 31, 2023

Conversation

@cosmo3769 (Contributor) commented Jul 28, 2023:

What does this PR do?

Related: #1992

Adds support for segmentation masks in the AugMix layer. Here is the Colab link for a demo.

Output generated by the demo: (screenshot attached)

Who can review?

@ianstenbit @jbischof

@cosmo3769 (Author) commented:

I am getting this lint error here, though it is not picked up locally:

./keras_cv/datasets/waymo/load.py:80:8: E721 do not compare types, for exact checks use `is` / `is not`, for instance checks use `isinstance()`

I am getting a similar error in #1991. How can I resolve this? 🤔
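For reference (not part of this PR's diff), E721 fires when flake8 sees an exact type comparison done with `==`; a minimal sketch of the two fixes it suggests, using a made-up class:

```python
class Box:
    pass

x = Box()

# Flagged by flake8 as E721: comparing types with `==`
flagged = type(x) == Box

# Preferred for exact-type checks: identity comparison with `is`
exact = type(x) is Box

# Preferred when subclasses should also match: isinstance()
instance_check = isinstance(x, Box)
```

Both replacements behave identically to `==` for exact types; `isinstance()` additionally accepts subclasses.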

@ianstenbit (Contributor) replied:

> I am getting this lint error here, though this error is not picked up in my local [...] How to resolve this? 🤔

I'm taking a look

@ianstenbit (Contributor) commented:

Lint error is unrelated and will be fixed by #1993

@ianstenbit (Contributor) left a review:

Thanks for the PR!

def augment_segmentation_mask(
    self, segmentation_masks, transformation=None, **kwargs
):
    chain_mixing_weights = self._sample_from_dirichlet(
@ianstenbit (Contributor) commented on this diff:

Sampling these random values differently for the image augmentation and the mask augmentation will cause the image + mask to be augmented inconsistently with one another (your demo image shows, for example, that the dog mask is rotated differently than the dog itself).

We should implement and use get_random_transformation to get a single chain_mixing_weights and a single weight_sample that are used for both images and masks.
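The suggested fix can be sketched outside KerasCV (the class name ToyAugMix and the placeholder bodies are illustrative only, not the actual layer API): sample all the randomness once in get_random_transformation and hand the same transformation dict to every augment_* method, so image and mask receive identical treatment.

```python
import numpy as np


class ToyAugMix:
    """Illustrative sketch only -- not the KerasCV implementation."""

    def __init__(self, num_chains=3, alpha=1.0, seed=0):
        self.num_chains = num_chains
        self.alpha = alpha
        self.rng = np.random.default_rng(seed)

    def get_random_transformation(self):
        # Sample once per example; the same dict is then passed to
        # every augment_* method so image and mask stay aligned.
        return {
            "chain_mixing_weights": self.rng.dirichlet(
                [self.alpha] * self.num_chains
            ),
            "weight_sample": self.rng.beta(self.alpha, self.alpha),
        }

    def augment_image(self, image, transformation):
        # Mix the augmentation chains using the shared weights
        # (the chains themselves are elided in this sketch).
        w = transformation["chain_mixing_weights"]
        return image * w.sum()  # Dirichlet weights sum to 1

    def augment_segmentation_mask(self, mask, transformation):
        # Reuses the exact same weights -- no fresh sampling here,
        # which is what keeps the mask consistent with the image.
        w = transformation["chain_mixing_weights"]
        return mask
```

Because augment_segmentation_mask never draws its own random values, the mask is guaranteed to go through the same chains as the image.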

@cosmo3769 (Author) replied:

Looking into it.

@ariG23498 (Contributor) commented:

Hey @cosmo3769, this PR should not auto-close #1992.

Could you please edit that part out?

@ianstenbit (Contributor) replied:

> Hey @cosmo3769 this PR should not auto-close #1992. Could you please edit that part out?

I've updated the description

@cosmo3769 cosmo3769 marked this pull request as draft July 31, 2023 13:33
@cosmo3769 cosmo3769 marked this pull request as ready for review July 31, 2023 14:39
@ianstenbit (Contributor) commented:

Awesome! This looks better now -- I tinkered with the demo and the behavior seems correct

@@ -28,6 +28,7 @@
def normalize(input_image, input_mask):
    input_image = tf.image.convert_image_dtype(input_image, tf.float32)
    input_image = (input_image - mean) / tf.maximum(std, backend.epsilon())
    input_image = input_image / 255
@ianstenbit (Contributor) commented:

Does this break other demos?

@ianstenbit added:

Actually this seems incorrect -- we're already scaling by mean and stddev so this shouldn't be done

@cosmo3769 (Author) replied:

No, it doesn't; I checked the other demos as well.

@ianstenbit replied:

Why make this change though? It seems like it's not a reasonable transform given the existing input scale of the image

@cosmo3769 (Author) replied:

Here, the value range of the image is greater than 255. Since we set the AugMix value range to [0, 255], the demo does not work without making this change. Is there a better alternative? 🤔
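To illustrate the mismatch (the statistics below are made up, not the demo's actual mean/std): once an image is standardized by mean and standard deviation, its values no longer sit in the [0, 255] window a layer configured with value_range=(0, 255) expects, so a rescale has to happen somewhere.

```python
import numpy as np

# Hypothetical pixel data and statistics -- not from the actual demo.
image = np.linspace(0.0, 255.0, num=100, dtype=np.float32)
mean, std = image.mean(), image.std()

standardized = (image - mean) / std
# Standardized values fall roughly in [-1.7, 1.7], far from [0, 255],
# so a layer expecting value_range=(0, 255) would misinterpret them.
print(standardized.min(), standardized.max())
```

This is why the demo's preprocessing and the layer's configured value_range have to agree on one convention.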

@ianstenbit replied:

Okay, this seems more or less reasonable. However, it does make resize_demo.py strange, as that file already manually rescales by 255.0 on line 34. Let's get rid of that rescaling there, and then this is fine.

@cosmo3769 (Author) replied:

Done.

@ianstenbit (Contributor) left a review:

Looks good except for the change with demo_utils. Thank you!

@ianstenbit (Contributor) commented:

/gcbrun

@ianstenbit ianstenbit merged commit d14c16d into keras-team:master Jul 31, 2023
9 checks passed
@cosmo3769 cosmo3769 deleted the augmix-segmentation-mask branch August 11, 2023 20:49
ghost pushed a commit to y-vectorfield/keras-cv that referenced this pull request Nov 16, 2023
* update augmix segmentation mask

* fix

* fix

* added demo

* add test

* update readme

* fix

* fix

* fix
3 participants