chore: allow skip pure torch computation on benchmark camelyon #219
Conversation
Signed-off-by: ThibaultFy <thibault.fouqueray@gmail.com>
Thank you!
benchmark/camelyon/benchmarks.py
if not exp_params["skip_pure_torch"]:
    torch_metrics = torch_fed_avg(
        train_folder=train_folder,
        test_folder=test_folder,
        **{k: v for k, v in exp_params.items() if k in run_keys},
        index_generator=index_generator,
        model=model,
    )

results = {**exp_params, **{"results": {**substrafl_metrics.to_dict, **torch_metrics.to_dict}}}
load_benchmark_summary(file=LOCAL_RESULTS_FILE, experiment_summary=results)
assert_expected_results(substrafl_metrics=substrafl_metrics, torch_metrics=torch_metrics, exp_params=exp_params)
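For context, a `skip_pure_torch` flag like the one above is typically wired in from the benchmark's CLI. The sketch below is hypothetical: only the `skip_pure_torch` key itself comes from the diff, while the argparse setup and the `parse_exp_params` helper are assumptions for illustration.

```python
import argparse

def parse_exp_params(argv):
    """Hypothetical helper: build a minimal exp_params dict from CLI args.

    Only the "skip_pure_torch" key is taken from the PR diff; the
    argparse wiring here is an illustrative assumption.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--skip-pure-torch",
        action="store_true",
        help="Skip the pure-torch baseline computation.",
    )
    args = parser.parse_args(argv)
    return {"skip_pure_torch": args.skip_pure_torch}

print(parse_exp_params(["--skip-pure-torch"]))  # {'skip_pure_torch': True}
print(parse_exp_params([]))  # {'skip_pure_torch': False}
```

With `action="store_true"` the flag defaults to `False`, so the baseline still runs unless the caller explicitly opts out.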
To avoid negation in the if statement and keep the default behavior at the first indentation level, I think I'd rather do:
if exp_params["skip_pure_torch"]:
    return

torch_metrics = torch_fed_avg(
    train_folder=train_folder,
    test_folder=test_folder,
    **{k: v for k, v in exp_params.items() if k in run_keys},
    index_generator=index_generator,
    model=model,
)
results = {**exp_params, **{"results": {**substrafl_metrics.to_dict, **torch_metrics.to_dict}}}
load_benchmark_summary(file=LOCAL_RESULTS_FILE, experiment_summary=results)
assert_expected_results(substrafl_metrics=substrafl_metrics, torch_metrics=torch_metrics, exp_params=exp_params)
But it is basically the same, so either way is fine by me.
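The two styles discussed above are behaviorally equivalent, which a small self-contained sketch can confirm. `run_nested` and `run_guard` are hypothetical stand-ins for the benchmark function; the string results merely mark which branch ran.

```python
def run_nested(skip: bool) -> str:
    # Style 1: wrap the work in a negated `if`.
    result = "skipped"
    if not skip:
        result = "computed"
    return result

def run_guard(skip: bool) -> str:
    # Style 2: early-return guard clause, keeping the work
    # at the first indentation level.
    if skip:
        return "skipped"
    return "computed"

# Both styles agree for every value of the flag.
for skip in (True, False):
    assert run_nested(skip) == run_guard(skip)
```

The guard-clause version keeps the default path unindented, which is the reviewer's stated preference here.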
Thanks for updating!
Signed-off-by: ThibaultFy <thibault.fouqueray@gmail.com>
Related issue
part of FL-1507