
ref: Forward CacheItemRequest::compute to async fns #631

Merged (2 commits into master from ref/async-compute on Jan 14, 2022)

Conversation

Swatinem (Member)

This refactors the code, pulling as much of the actual computation as possible out into an async fn.

This moves some refactors out of #628 and should ultimately help with tokio-rs/tracing#1831.

#skip-changelog
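For context, a minimal sketch of the pattern, using placeholder types rather than the actual symbolicator ones and assuming the trait method returns a boxed future (a trait method cannot itself be an async fn): the CacheItemRequest::compute implementation becomes a thin shim that boxes the future of an inherent async fn holding the real logic.

```rust
use std::future::Future;
use std::path::PathBuf;
use std::pin::Pin;

// Placeholder types standing in for the real symbolicator ones.
#[derive(Debug)]
pub struct ObjectError;

#[derive(Debug)]
pub enum CacheStatus {
    Positive,
    Negative,
}

/// Boxed future alias, assuming the trait needs a `Send + 'static` future.
pub type BoxedFuture<T> = Pin<Box<dyn Future<Output = T> + Send + 'static>>;

pub trait CacheItemRequest {
    /// Trait methods cannot be `async fn`s, so this returns a boxed future.
    fn compute(&self, path: PathBuf) -> BoxedFuture<Result<CacheStatus, ObjectError>>;
}

/// Placeholder request type; assumed to be cheap to clone (e.g. `Arc`s inside).
#[derive(Clone)]
pub struct FetchFileMetaRequest;

impl FetchFileMetaRequest {
    /// The actual computation, written as plain async/await code.
    async fn compute_file_meta(self, _path: PathBuf) -> Result<CacheStatus, ObjectError> {
        // ... the real work (downloading, parsing, caching) would happen here.
        Ok(CacheStatus::Negative)
    }
}

impl CacheItemRequest for FetchFileMetaRequest {
    fn compute(&self, path: PathBuf) -> BoxedFuture<Result<CacheStatus, ObjectError>> {
        // The trait method just forwards: clone `self` so the future is `'static`,
        // then box the async fn's future.
        Box::pin(self.clone().compute_file_meta(path))
    }
}
```

With the logic in a plain async fn, the body can use async/await directly instead of combinator chains.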

@Swatinem requested a review from a team on January 13, 2022 at 17:08
@flub (Contributor) left a comment:

Thanks for splitting this off into a separate, easy-to-review PR.

/// This is the actual implementation of [`CacheItemRequest::compute`] for
/// [`FetchFileMetaRequest`] but outside of the trait so it can be written as async/await
/// code.
async fn compute_file_meta(self, path: PathBuf) -> Result<CacheStatus, ObjectError> {
This version is much nicer than some of the others because the self argument means it doesn't need a really long argument list. I guess doing this for the others would be too much refactoring (probably not all the self types are cloneable)?
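To illustrate that trade-off (with invented names, not the actual signatures in this codebase): when the request type is not cheaply cloneable, the extracted async fn cannot simply consume self and instead has to take each piece of state as a separate parameter.

```rust
use std::path::PathBuf;
use std::sync::Arc;

// Invented placeholder types, for illustration only.
pub struct Cache;
pub struct DownloadService;
pub struct SourceConfig;
pub struct ObjectError;
pub enum CacheStatus {
    Negative,
}

// Without a cloneable `self` to carry the state, every dependency has to be
// threaded through explicitly -- the "really long arg list" mentioned above.
pub async fn compute_object_meta(
    _download_svc: Arc<DownloadService>,
    _cache: Arc<Cache>,
    _source: SourceConfig,
    _path: PathBuf,
) -> Result<CacheStatus, ObjectError> {
    Ok(CacheStatus::Negative)
}
```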

@Swatinem merged commit 394efe5 into master on Jan 14, 2022
@Swatinem deleted the ref/async-compute branch on January 14, 2022 at 10:18