Ensure callback is called once in doWhilst with S3 operations #2168

Merged
2 commits merged into development/2.6 on Nov 19, 2024

Conversation

@williamlardier (Contributor) commented Nov 14, 2024

The trigger seems to be aws/aws-sdk-js#1678, causing the callback to be called twice, which is a problem in these loops.

Issue: ZENKO-4925

The rationale for the change is in the ticket comments.
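
For context, a minimal, self-contained sketch of the failure mode and of the once() guard (assuming async v2; flakyHeadObject and the local once() below are illustrative stand-ins, not this repository's code, with arsenal's jsutil.once playing the same role as the local once()):

const async = require('async');

// Local stand-in for arsenal's jsutil.once: only the first invocation
// of the wrapped function goes through, later calls are ignored.
const once = fn => {
    let called = false;
    return (...args) => {
        if (called) {
            return undefined;
        }
        called = true;
        return fn(...args);
    };
};

// Fake headObject that misbehaves like the SDK issue: it fires its
// callback twice (a normal response, then a late socket error).
function flakyHeadObject(params, cb) {
    process.nextTick(() => cb(null, { ReplicationStatus: 'COMPLETED' }));
    setTimeout(() => cb(new Error('socket hang up')), 5);
}

let status = null;

async.doWhilst(
    callback => {
        // guard: doWhilst must see exactly one completion per iteration,
        // otherwise it throws "Callback was already called"
        const cbOnce = once(callback);
        flakyHeadObject({ Bucket: 'bucket', Key: 'key' }, (err, data) => {
            if (err) {
                return cbOnce(err);
            }
            status = data.ReplicationStatus;
            return cbOnce();
        });
    },
    () => status !== 'COMPLETED',   // async v2-style synchronous test
    err => console.log('done:', err ? err.message : status),
);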

@bert-e (Contributor) commented Nov 14, 2024

Hello williamlardier,

My role is to assist you with the merge of this
pull request. Please type @bert-e help to get information
on this process, or consult the user documentation.

Available options:
  /after_pull_request: Wait for the given pull request id to be merged before continuing with the current one.
  /bypass_author_approval: Bypass the pull request author's approval
  /bypass_build_status: Bypass the build and test status
  /bypass_commit_size: Bypass the check on the size of the changeset (TBA)
  /bypass_incompatible_branch: Bypass the check on the source branch prefix
  /bypass_jira_check: Bypass the Jira issue check
  /bypass_peer_approval: Bypass the pull request peers' approval
  /bypass_leader_approval: Bypass the pull request leaders' approval
  /approve: Instruct Bert-E that the author has approved the pull request. ✍️
  /create_pull_requests: Allow the creation of integration pull requests.
  /create_integration_branches: Allow the creation of integration branches.
  /no_octopus: Prevent Wall-E from doing any octopus merge and use multiple consecutive merges instead
  /unanimity: Change review acceptance criteria from at least one reviewer to all reviewers
  /wait: Instruct Bert-E not to run until further notice.

Available commands:
  /help: Print Bert-E's manual in the pull request.
  /status: Print Bert-E's current status in the pull request (TBA)
  /clear: Remove all comments from Bert-E from the history (TBA)
  /retry: Re-start a fresh build (TBA)
  /build: Re-start a fresh build (TBA)
  /force_reset: Delete integration branches and pull requests, and restart the merge process from the beginning.
  /reset: Try to remove integration branches unless there are commits on them which do not appear on the source branch.

Status report is not available.

williamlardier changed the base branch from development/2.10 to development/2.6 on November 14, 2024 14:49
scality deleted a comment from bert-e on Nov 14, 2024
@bert-e (Contributor) commented Nov 14, 2024

Incorrect fix version

The Fix Version/s in issue ZENKO-4925 contains:

  • 2.10.6

  • 2.6.69

  • 2.7.65

  • 2.8.45

  • 2.9.23

Considering where you are trying to merge, I ignored possible hotfix versions and I expected to find:

  • 2.10.7

  • 2.6.70

  • 2.7.66

  • 2.8.46

  • 2.9.24

Please check the Fix Version/s of ZENKO-4925, or the target
branch of this pull request.

The trigger seems to be aws/aws-sdk-js#1678, causing the callback to be called twice, which is a problem in these loops.
Issue: ZENKO-4925
@@ -596,8 +597,9 @@ class ReplicationUtility {
                 Key: key,
                 VersionId: versionId,
             }, (err, data) => {
+                const cbOnce = jsutil.once(callback);
@francoisferrand (Contributor) commented Nov 15, 2024

If I understand correctly, this would happen if/because the headObject callback is invoked twice: once for the actual response and once for a socket timeout that happens at about the same time.

→ It seems to me a simpler fix would be just:

callback => this.s3.headObject({
    Bucket: bucketName,
    Key: key,
    VersionId: versionId,
}, jsutil.once((err, data) => {
    [...]
}))

However, I am wondering whether there may be a real underlying issue: for example, the socket timeout (on the client side, i.e. in the test) being shorter than the timeout we have in Zenko... and maybe this could explain the (other) issues we have with these tests. What do you think?

Can we check the timeout (or rule out this hypothesis), and/or should we ignore retryable AWS errors (retry after a timeout)?
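
(For illustration only: a rough sketch of what ignoring retryable AWS errors could look like. The shouldKeepPolling helper and the usage around it are hypothetical, not part of this repository; aws-sdk v2 marks transient failures with err.retryable and socket timeouts with code 'TimeoutError'.)

// Illustrative helper (assumed names): decide whether a headObject error
// should abort the polling loop or simply trigger another iteration.
function shouldKeepPolling(err) {
    if (!err) {
        return false;                // success: let the caller inspect the data
    }
    return err.retryable === true || err.code === 'TimeoutError';
}

// Possible usage inside the iteratee sketched above:
// }, jsutil.once((err, data) => {
//     if (shouldKeepPolling(err)) {
//         return callback();        // poll again on the next doWhilst iteration
//     }
//     [...]
// }))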

@williamlardier (Contributor, Author) replied:

I am not sure it was a timeout, but it is worth reviewing that indeed; let me check.

@williamlardier (Contributor, Author) replied:

The current client-side timeout seems to be infinite, so I don't think we are affected by a timeout on the client side:

const scalityS3Client = new S3({
    ...
    // disable node sdk retries and timeout to prevent InvalidPart
    // and SocketHangUp errors. If retries are allowed, sdk will send
    // another request after first request has already deleted parts,
    // causing InvalidPart. Meanwhile, if request takes too long to finish,
    // sdk will create SocketHangUp error before response.
    maxRetries: 0,
    httpOptions: { timeout: 0 },
});

@williamlardier (Contributor, Author) commented:

This will be merged with the upcoming Zenko release; keeping it open till then.

bert-e closed this pull request by merging all changes into development/2.6 in 6cd4998 on Nov 19, 2024
bert-e deleted the bugfix/ZENKO-4925 branch on November 19, 2024 05:07