This repository has been archived by the owner on Dec 20, 2024. It is now read-only.

fix: remove saving of metadata for training ckpt #190

Merged · 2 commits · Dec 6, 2024
CHANGELOG.md (1 change: 1 addition & 0 deletions)
@@ -12,6 +12,7 @@ Keep it human-readable, your future self will thank you!
 ### Fixed
 - Do not update NaN-weight-mask for loss function when using remapper and no imputer [#178](https://github.com/ecmwf/anemoi-training/pull/178)
 - Don't crash when using the profiler if certain env vars aren't set [#180](https://github.com/ecmwf/anemoi-training/pull/180)
+- Remove saving of metadata to training checkpoint [#190](https://github.com/ecmwf/anemoi-training/pull/190)

 ### Added
 - Introduce variable to configure: transfer_learning -> bool, True if loading a checkpoint in a transfer-learning setting.
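The `transfer_learning` flag noted in the Added section above gates how a checkpoint is loaded. As a rough illustration of the pattern such a flag usually enables, here is a minimal sketch; the function name and the shape-matching logic are assumptions for illustration, not anemoi-training's actual implementation:

```python
# Hypothetical sketch of transfer-learning checkpoint loading: keep only
# the weights whose names and shapes match the target model, then load
# non-strictly so resized heads or embeddings are simply skipped.
import torch


def load_for_transfer_learning(model: torch.nn.Module, ckpt_path: str) -> None:
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # Lightning checkpoints nest the weights under "state_dict".
    state_dict = checkpoint.get("state_dict", checkpoint)
    model_state = model.state_dict()
    filtered = {
        k: v
        for k, v in state_dict.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    model.load_state_dict(filtered, strict=False)
```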
src/anemoi/training/diagnostics/callbacks/checkpoint.py (3 changes: 0 additions & 3 deletions)
@@ -173,9 +173,6 @@ def _save_checkpoint(self, trainer: pl.Trainer, lightning_checkpoint_filepath: str
         if trainer.is_global_zero:
             from weakref import proxy

-            # save metadata for the training checkpoint in the same format as inference
-            save_metadata(lightning_checkpoint_filepath, metadata)
-
             # notify loggers
             for logger in trainer.loggers:
                 logger.after_save_checkpoint(proxy(self))
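For context, a minimal sketch of what the hook looks like after this change, assuming a subclass of Lightning's `ModelCheckpoint`; the class name is a hypothetical stand-in, and only the diffed lines above are from the actual source:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint


class TrainingCheckpoint(ModelCheckpoint):  # hypothetical stand-in name
    def _save_checkpoint(self, trainer: pl.Trainer, lightning_checkpoint_filepath: str) -> None:
        # Write the checkpoint file as Lightning normally would.
        trainer.save_checkpoint(lightning_checkpoint_filepath, self.save_weights_only)

        if trainer.is_global_zero:
            from weakref import proxy

            # The save_metadata(...) call removed by this PR used to sit
            # here; training checkpoints no longer carry the
            # inference-style metadata.

            # notify loggers
            for logger in trainer.loggers:
                logger.after_save_checkpoint(proxy(self))
```

The practical effect is simply that the metadata write no longer happens on the training-checkpoint path; everything else in the hook is unchanged.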