SemanticSegmentationTrainer: ignore_zeros -> ignore_index #644
Conversation
Is there any reason not to rename `ignore_zeros` to `ignore_index`?

We don't need to ignore other values, and old segmentation trainer checkpoints would no longer be easily loadable.
tests/conf/oscd_all.yaml (outdated)
@@ -11,7 +11,7 @@ experiment:
     in_channels: 26
     num_classes: 2
     num_filters: 1
-    ignore_zeros: True
+    ignore_zeros: False
We shouldn't `ignore_zeros` with two classes; this causes the Accuracy metric to break.
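A minimal illustration of why this breaks (a standalone sketch, not TorchGeo or torchmetrics code): with only two classes, ignoring class 0 means accuracy is computed over class-1 pixels alone, and a patch with no class-1 pixels leaves nothing to score.

```python
import numpy as np

def masked_accuracy(preds: np.ndarray, target: np.ndarray, ignore_index: int) -> float:
    """Accuracy over pixels whose target label != ignore_index."""
    keep = target != ignore_index
    if keep.sum() == 0:
        # Every pixel was ignored: the metric is undefined.
        raise ZeroDivisionError("all pixels ignored; accuracy is undefined")
    return float((preds[keep] == target[keep]).mean())

# An all-background change-detection patch: ignoring class 0 ignores everything.
target = np.zeros((4, 4), dtype=int)
preds = np.zeros((4, 4), dtype=int)
```

Calling `masked_accuracy(preds, target, ignore_index=0)` on this patch raises, which is the degenerate case the comment is pointing at.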
@@ -97,7 +97,7 @@ def model_kwargs(self) -> Dict[Any, Any]:
     "encoder_name": "resnet18",
     "encoder_weights": None,
     "in_channels": 1,
-    "num_classes": 1,
+    "num_classes": 2,
Instantiating the torchmetrics classes with a single class doesn't make sense
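A quick sketch of the underlying problem (assumed illustration, independent of the test file): a segmentation head with `num_classes=1` emits a single logit channel, so an argmax over the class dimension is always 0 and the second class can never be predicted.

```python
import numpy as np

# Shape convention: (batch, num_classes, height, width).
# With num_classes=1 there is only one channel to choose from.
logits = np.random.randn(1, 1, 4, 4)
preds = logits.argmax(axis=1)  # argmax over a size-1 axis is always index 0
```

Since every prediction is forced to class 0, per-class metrics degenerate, which is why instantiating the metrics with a single class doesn't make sense.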
torchgeo/trainers/segmentation.py (outdated)
self.ignore_zeros = None if kwargs["ignore_zeros"] else 0
if not isinstance(kwargs["ignore_zeros"], bool):
    raise ValueError("ignore_zeros must be a bool")
if kwargs["ignore_zeros"] and kwargs["loss"] == "jaccard":
I specifically warn about this because a user might expect training with jaccard to behave similarly to training with ce or focal loss; however, it doesn't.
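The asymmetry can be sketched as follows (a toy illustration with assumed helper names, not the trainer's code): a cross-entropy loss averages per-pixel terms, so ignored pixels can simply be dropped from the average, whereas a Jaccard/IoU loss is a ratio over the whole prediction-target overlap with no standard per-pixel ignore mechanism.

```python
import numpy as np

def cross_entropy(probs: np.ndarray, target: np.ndarray, ignore_index=None) -> float:
    """Mean negative log-likelihood; ignored pixels are excluded from the mean."""
    keep = np.ones_like(target, dtype=bool)
    if ignore_index is not None:
        keep = target != ignore_index
    return float(-np.log(probs[keep, target[keep]]).mean())

# Per-pixel class probabilities for 3 pixels, 2 classes.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
target = np.array([0, 1, 0])

full = cross_entropy(probs, target)                      # averages all 3 pixels
masked = cross_entropy(probs, target, ignore_index=0)    # only pixel 1 counts
```

With ignoring enabled, the masked loss depends only on the non-background pixel, while a Jaccard-style loss would still see every pixel; that mismatch is what the warning flags.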
We don't need to ignore other values yet. Presumably this feature exists in torchmetrics because some datasets use a class other than 0 for background. This will break as soon as someone adds a dataset with a different index that needs to be ignored.
I'm concerned about reproducibility of experiments, but not necessarily reproducibility with older versions of TorchGeo. We are still in alpha, and backwards-incompatible changes like this are better made sooner rather than later. I still say we should switch to `ignore_index`.

Okay, fair enough! Any thoughts on the warning?
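A hypothetical sketch of the proposed keyword (names and validation are assumptions, not the merged TorchGeo code) shows why `ignore_index` generalizes the boolean: it covers datasets whose background class is any integer, not just 0.

```python
from typing import Optional

def resolve_ignore_index(kwargs: dict) -> Optional[int]:
    """Validate and return the class index to ignore, or None to keep all classes."""
    value = kwargs.get("ignore_index")
    # Reject bools explicitly: isinstance(True, int) is True in Python.
    if value is not None and (isinstance(value, bool) or not isinstance(value, int)):
        raise ValueError("ignore_index must be an int or None")
    return value

# The old boolean maps cleanly onto the new keyword:
#   ignore_zeros: True  -> ignore_index: 0
#   ignore_zeros: False -> ignore_index: None
```

Under this mapping, old configs translate mechanically, and a future dataset whose "background" label is, say, 255 needs no new flag.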
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Squash commit message:
* Clarify the ignore_zeros parameter to the segmentation trainer
* Black
* Add test
* Added warnings
* Renaming ignore_zeros to ignore_index
* Fixed rest of config and tests
* Update conf/chesapeake_cvpr.yaml
* Update torchgeo/trainers/segmentation.py
* Update torchgeo/trainers/segmentation.py
* Changed warning match

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes #444