
chore: allow skip pure torch computation on benchmark camelyon #219

Merged · 2 commits merged into main from chore/allow-skip-torch-strat on Jun 20, 2024

Conversation

@ThibaultFy (Member) commented Jun 19, 2024

Related issue

part of FL-1507

Summary

Add a skip_pure_torch option to the Camelyon benchmark so the pure torch computation can be skipped.

Notes

Please check if the PR fulfills these requirements

  • If the feature has an impact on the user experience, the changelog has been updated
  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)
  • The commit message follows the conventional commit specification

Signed-off-by: ThibaultFy <thibault.fouqueray@gmail.com>
linear bot commented Jun 19, 2024

@ThibaultFy ThibaultFy marked this pull request as ready for review June 20, 2024 07:01
@ThibaultFy ThibaultFy requested a review from a team as a code owner June 20, 2024 07:01
@thbcmlowk (Contributor) left a comment

Thank you!

Comment on lines 69 to 80
if not exp_params["skip_pure_torch"]:
    torch_metrics = torch_fed_avg(
        train_folder=train_folder,
        test_folder=test_folder,
        **{k: v for k, v in exp_params.items() if k in run_keys},
        index_generator=index_generator,
        model=model,
    )

    results = {**exp_params, **{"results": {**substrafl_metrics.to_dict, **torch_metrics.to_dict}}}
    load_benchmark_summary(file=LOCAL_RESULTS_FILE, experiment_summary=results)
    assert_expected_results(substrafl_metrics=substrafl_metrics, torch_metrics=torch_metrics, exp_params=exp_params)
Contributor commented:
To avoid negation in the if statement and keep the default behavior at the first indentation level, I think I'd rather do:

Suggested change

-if not exp_params["skip_pure_torch"]:
-    torch_metrics = torch_fed_avg(
-        train_folder=train_folder,
-        test_folder=test_folder,
-        **{k: v for k, v in exp_params.items() if k in run_keys},
-        index_generator=index_generator,
-        model=model,
-    )
-    results = {**exp_params, **{"results": {**substrafl_metrics.to_dict, **torch_metrics.to_dict}}}
-    load_benchmark_summary(file=LOCAL_RESULTS_FILE, experiment_summary=results)
-    assert_expected_results(substrafl_metrics=substrafl_metrics, torch_metrics=torch_metrics, exp_params=exp_params)
+if exp_params["skip_pure_torch"]:
+    return
+torch_metrics = torch_fed_avg(
+    train_folder=train_folder,
+    test_folder=test_folder,
+    **{k: v for k, v in exp_params.items() if k in run_keys},
+    index_generator=index_generator,
+    model=model,
+)
+results = {**exp_params, **{"results": {**substrafl_metrics.to_dict, **torch_metrics.to_dict}}}
+load_benchmark_summary(file=LOCAL_RESULTS_FILE, experiment_summary=results)
+assert_expected_results(substrafl_metrics=substrafl_metrics, torch_metrics=torch_metrics, exp_params=exp_params)

But it is basically the same, so either way is fine by me.
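
For illustration, a minimal, self-contained sketch of the early-return pattern discussed above. The run_torch_baseline helper and the exp_params contents are hypothetical placeholders, not part of the benchmark code:

def run_torch_baseline() -> dict:
    # Hypothetical stand-in for the pure torch computation (torch_fed_avg).
    return {"accuracy": 0.0}


def run_benchmark(exp_params: dict) -> None:
    # Guard clause: the skip case exits early, so the default path
    # (running the torch baseline) stays at the first indentation level.
    if exp_params.get("skip_pure_torch", False):
        return

    torch_metrics = run_torch_baseline()
    print({"results": torch_metrics})


run_benchmark({"skip_pure_torch": True})   # skips the baseline, prints nothing
run_benchmark({"skip_pure_torch": False})  # runs the baseline and prints the metrics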

Contributor commented:

Thanks for updating!

Signed-off-by: ThibaultFy <thibault.fouqueray@gmail.com>
@ThibaultFy ThibaultFy merged commit 7b8e88f into main Jun 20, 2024
6 checks passed
@ThibaultFy ThibaultFy deleted the chore/allow-skip-torch-strat branch June 20, 2024 08:39