refactor: adds specific exception handling for the download operation #208
Conversation
...data-plane-aws-s3/src/main/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSource.java
...e/data-plane-aws-s3/src/main/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSink.java
I'd suggest refactoring the data sink a little: fetch and upload are too coupled. My suggestion would be to reach something like this:

for (var part : parts) {
    try (var input = part.openStream()) {
        // fetch logic
    } catch (Exception e) {
        return fetchFailure();
    }
    try {
        // upload logic
    } catch (Exception e) {
        return uploadFailure(e);
    }
}

That will clearly distinguish between the different kinds of exceptions without catching and rethrowing them.
An additional question: is it correct that a multi-part upload should be stopped at the first failure? Maybe yes, but that should eventually be documented with a test that explains why.
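A runnable toy version of the suggested split (the `Supplier`-based parts, the `upload` helper, and the plain-string results are illustrative stand-ins, not the connector's actual `StreamResult` API):

```java
import java.util.List;
import java.util.function.Supplier;

class TransferSketch {
    // Toy stand-ins: a "part" is a Supplier that may throw on fetch,
    // and results are plain strings instead of StreamResult objects.
    static String transfer(List<Supplier<String>> parts) {
        for (var part : parts) {
            String content;
            try {
                content = part.get();      // fetch logic
            } catch (Exception e) {
                return "fetchFailure";     // error reading from the source
            }
            try {
                upload(content);           // upload logic
            } catch (Exception e) {
                return "uploadFailure";    // error writing to the sink
            }
        }
        return "success";
    }

    // Fake sink that rejects empty content.
    static void upload(String content) {
        if (content.isEmpty()) {
            throw new IllegalStateException("upload failed");
        }
    }
}
```

Each phase has its own catch block, so a failure is classified by where it happened, with no rethrowing needed.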
...ta-plane-aws-s3/src/test/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSinkTest.java
...e/data-plane-aws-s3/src/main/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSink.java
...ta-plane-aws-s3/src/test/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSinkTest.java
The coupling between `DataSource` and `DataSink` complicates the neat separation of download and upload logic. The `input` resource, required outside the download scope, must be closed in any situation. I addressed this by isolating the upload logic so that errors within that block are handled differently.
The `input` can be transformed into a list of `UploadPartRequest` or `RequestBody` objects; then it can be closed before any of the upload calls (`createMultipartUpload`, `uploadPart`, `completeMultipartUpload`) are made, so the two phases can be logically split.
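A minimal sketch of that buffering idea in plain Java (the `bufferParts` helper and `chunkSize` parameter are hypothetical; in the actual sink the chunks would become `RequestBody`/`UploadPartRequest` objects):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

class BufferSketch {
    // Reads the whole source into fixed-size chunks and closes the stream
    // before any upload call runs, so fetch and upload errors cannot mix.
    static List<byte[]> bufferParts(InputStream input, int chunkSize) throws IOException {
        var chunks = new ArrayList<byte[]>();
        try (input) {          // closed here, before createMultipartUpload et al.
            byte[] chunk;
            while ((chunk = input.readNBytes(chunkSize)).length > 0) {
                chunks.add(chunk);
            }
        }
        return chunks;
    }
}
```

The trade-off, as noted, is efficiency: every part is held in memory between the fetch and upload phases.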
Maybe that's not really efficient; as an alternative, a custom exception such as `SourceException` could be defined and thrown by the `openStream` method, so it can be caught separately from all the others in the `transferParts` method. That would avoid the nested try/catch blocks.
Please note that an exception thrown by `input.readNBytes` will be considered an `uploadFailure` (also in your code). To avoid that, the `S3Part` could return an `InputStream` decorator that catches and rethrows the `SourceException` I mentioned, also for the `readNBytes` method.
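A sketch of that decorator idea; `S3DataSourceException` matches the exception name that later appears in this PR, but the `SourceInputStream` wrapper and its messages are hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical source-side exception, unchecked so it can propagate
// out of stream reads that happen inside the upload loop.
class S3DataSourceException extends RuntimeException {
    S3DataSourceException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Decorator that re-labels read failures as source-side failures, so a
// failing input.readNBytes(...) is not mistaken for an upload failure.
class SourceInputStream extends InputStream {
    private final InputStream delegate;

    SourceInputStream(InputStream delegate) {
        this.delegate = delegate;
    }

    @Override
    public int read() {
        try {
            return delegate.read();
        } catch (IOException e) {
            throw new S3DataSourceException("Failed reading from source", e);
        }
    }

    @Override
    public byte[] readNBytes(int len) {
        try {
            return delegate.readNBytes(len);
        } catch (IOException e) {
            throw new S3DataSourceException("Failed reading from source", e);
        }
    }

    @Override
    public void close() {
        try {
            delegate.close();
        } catch (IOException e) {
            throw new S3DataSourceException("Failed closing source stream", e);
        }
    }
}
```

With this wrapper, `transferParts` can catch `S3DataSourceException` for fetch failures and a plain `Exception` for upload failures, without nesting.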
Regarding stopping multi-parts upload at the first failure, I think considering a potential retry strategy might be beneficial. Should we open a new issue to discuss and address this?
Yes, that makes sense.
some cleanup needed, good otherwise
...c/main/java/org/eclipse/edc/connector/dataplane/aws/s3/exceptions/S3DataSourceException.java
...c/main/java/org/eclipse/edc/connector/dataplane/aws/s3/exceptions/S3DataSourceException.java
...ta-plane-aws-s3/src/test/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSinkTest.java
...ta-plane-aws-s3/src/test/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSinkTest.java
...e/data-plane-aws-s3/src/main/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSink.java
...e/data-plane-aws-s3/src/main/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSink.java
Just a missing test; please fix the dependencies and then we can merge this.
...data-plane-aws-s3/src/main/java/org/eclipse/edc/connector/dataplane/aws/s3/S3DataSource.java
…into refactor/datasink_exception_handling
LGTM, could you please fix the dependency check?
What this PR changes/adds
Why it does that
Linked Issue(s)
Closes #200