This repository has been archived by the owner on Dec 20, 2024. It is now read-only.

fix weight mask calculation when using the remapper and no imputer #178

Merged · 2 commits · Dec 3, 2024
Changes from all commits
CHANGELOG.md (1 addition, 0 deletions)
```diff
@@ -10,6 +10,7 @@ Keep it human-readable, your future self will thank you!
 
 ## [Unreleased](https://github.com/ecmwf/anemoi-training/compare/0.3.1...HEAD)
 ### Fixed
+- Do not update the NaN weight mask for the loss function when using the remapper without an imputer [#178](https://github.com/ecmwf/anemoi-training/pull/178)
 
 ### Added
 - Added a check for the variable sorting on pre-trained/finetuned models [#120](https://github.com/ecmwf/anemoi-training/pull/120)
```
src/anemoi/training/train/forecaster.py (3 additions, 1 deletion)
```diff
@@ -229,12 +229,14 @@ def training_weights_for_imputed_variables(
         """Update the loss weights mask for imputed variables."""
         if "loss_weights_mask" in self.loss.scalar:
             loss_weights_mask = torch.ones((1, 1), device=batch.device)
+            found_loss_mask_training = False
             # iterate over all pre-processors and check if they have a loss_mask_training attribute
             for pre_processor in self.model.pre_processors.processors.values():
                 if hasattr(pre_processor, "loss_mask_training"):
                     loss_weights_mask = loss_weights_mask * pre_processor.loss_mask_training
+                    found_loss_mask_training = True
                 # if transform_loss_mask function exists for preprocessor apply it
-                if hasattr(pre_processor, "transform_loss_mask"):
+                if hasattr(pre_processor, "transform_loss_mask") and found_loss_mask_training:
                     loss_weights_mask = pre_processor.transform_loss_mask(loss_weights_mask)
             # update scaler with loss_weights_mask retrieved from preprocessors
             self.loss.update_scalar(scalar=loss_weights_mask.cpu(), name="loss_weights_mask")
```
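The reasoning behind the change: only an imputer provides a `loss_mask_training` attribute (zero weight where the input had NaNs), while the remapper only provides a `transform_loss_mask` method to reorder an existing mask. Before this fix, running the remapper without an imputer applied `transform_loss_mask` to the default all-ones placeholder, so the loss scaler was updated with a mask that no imputer had produced. The `found_loss_mask_training` flag ensures the remap is only applied once a real mask has been collected. Below is a minimal, self-contained sketch of the guard logic; `Imputer` and `Remapper` here are hypothetical stand-ins, not the actual anemoi-models pre-processors:

```python
import torch


class Imputer:
    """Stand-in for an imputer pre-processor: provides loss_mask_training."""

    def __init__(self, nan_mask: torch.Tensor) -> None:
        # zero weight where the input had NaNs
        self.loss_mask_training = nan_mask


class Remapper:
    """Stand-in for a remapper pre-processor: provides transform_loss_mask only."""

    def __init__(self, permutation: list[int]) -> None:
        self.permutation = permutation

    def transform_loss_mask(self, mask: torch.Tensor) -> torch.Tensor:
        # reorder mask columns to match the remapped variable order
        return mask[:, self.permutation]


def build_loss_weights_mask(processors: list) -> torch.Tensor:
    loss_weights_mask = torch.ones((1, 1))
    found_loss_mask_training = False
    for pre_processor in processors:
        if hasattr(pre_processor, "loss_mask_training"):
            loss_weights_mask = loss_weights_mask * pre_processor.loss_mask_training
            found_loss_mask_training = True
        # without the found_loss_mask_training guard, the remapper would try to
        # permute the (1, 1) placeholder, which has no variable dimension to reorder
        if hasattr(pre_processor, "transform_loss_mask") and found_loss_mask_training:
            loss_weights_mask = pre_processor.transform_loss_mask(loss_weights_mask)
    return loss_weights_mask


# remapper only, no imputer: the mask stays the neutral (1, 1) placeholder
print(build_loss_weights_mask([Remapper([0])]))

# imputer plus remapper: the NaN mask is built first, then remapped
nan_mask = torch.tensor([[1.0, 0.0, 1.0]])
print(build_loss_weights_mask([Imputer(nan_mask), Remapper([2, 0, 1])]))
```

In the first call the guard leaves the placeholder untouched, matching the intended behavior of this PR: no imputer, no NaN-weight-mask update.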