
Fix to prevent stopping buffering prematurely #17013

Merged: GuptaManan100 merged 10 commits into vitessio:main from premature-buffering-stop on Nov 11, 2024

Conversation

GuptaManan100 (Member) commented Oct 19, 2024

Description

I'll summarise the issue described in #16438. Please follow all the comments in the issue for more details.

The problem noticed was that in the new keyspace-events implementation of buffering, when two shards were being externally reparented around the same time, buffering would stop for both shards as soon as the first shard completed its reparent. This fails the writes on the second shard, which is still in the midst of reparenting.

I was able to find the underlying problem. The order of steps is roughly as follows:

  1. A vttablet is marked read-only (as part of an external reparent). Any write that reaches it fails with a read-only error, and vtgate starts buffering.
  2. Now let's say a PRS starts for a different shard.
  3. The keyspace event watcher sees that a PRS has started and records that this shard is non-serving.
  4. Later, the watcher sees that the PRS has completed. However, it doesn't know that the first vttablet, which belongs to a different shard than the one that just completed PRS, is actually still read-only. So it simply stops buffering!
  5. This causes the queries targeted at the first vttablet to fail on the application side.

The problem was that when we start buffering, we weren't coordinating with the keyspace event watcher. If it doesn't receive a healthcheck from the primary tablet saying it is not serving, it doesn't even know that the shard is in the midst of reparenting.
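To make this gap concrete, here is a minimal illustrative sketch of the watcher's view. The keyspaceState type and its methods below are simplified placeholders, not the actual code in go/vt/discovery/keyspace_events.go: the point is only that the watcher flips a shard to non-serving solely based on healthchecks, so a shard whose primary silently went read-only still looks serving, and the keyspace-wide check passes as soon as the other shard's PRS completes.

```go
package main

import "fmt"

// keyspaceState is a simplified stand-in for the keyspace event watcher's
// view of one keyspace: it only knows what healthchecks have told it.
type keyspaceState struct {
	// shardServing holds the last healthcheck-derived state per shard.
	shardServing map[string]bool
}

// onShardHealth updates a shard's state from a primary healthcheck.
func (ks *keyspaceState) onShardHealth(shard string, serving bool) {
	ks.shardServing[shard] = serving
}

// consistent reports whether buffering may stop: it is true only when every
// shard the watcher knows about is serving.
func (ks *keyspaceState) consistent() bool {
	for _, serving := range ks.shardServing {
		if !serving {
			return false
		}
	}
	return true
}

func main() {
	ks := &keyspaceState{shardServing: map[string]bool{"-80": true, "80-": true}}

	// Shard -80's primary is made read-only by an external reparent, but no
	// non-serving healthcheck reaches the watcher, so -80 still looks serving.

	// Shard 80- goes through PRS: the watcher is told it is non-serving...
	ks.onShardHealth("80-", false)
	fmt.Println(ks.consistent()) // false: buffering continues

	// ...and serving again once that PRS completes.
	ks.onShardHealth("80-", true)
	// The keyspace looks consistent again even though -80 is still
	// mid-reparent, which is exactly the premature stop described above.
	fmt.Println(ks.consistent()) // true
}
```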

After realising this, I tried out a change wherein we mark the shard non-serving in the keyspace event watcher's shard state when we start buffering. This solution didn't work, however, because it created a new issue: we would stop buffering even when the primary tablet being demoted sends a serving healthcheck. This is possible during the grace period of PRS, and also during external reparents.

To mitigate this problem, I've implemented a different solution in this PR. It is somewhat similar to how the previous healthcheck-based implementation handled stopping buffering.

The proposed solution takes the previous change a bit further: when we start buffering after receiving an error that tells us a reparent has started, we ask the keyspace event watcher to mark the shard serving again only after it has seen a serving primary with a higher timestamp than the one it knows of.

This change works well for all the cases we want it to, but it has one downside: if a PRS call fails midway and we end up reverting the primary demotion, we won't be able to stop buffering because we will never see a new primary. This drawback existed in the previous healthcheck-based buffering implementation as well.

However, I have tried to mitigate this somewhat. In our implementation, if we receive a non-serving healthcheck from a primary tablet after buffering has started, we drop the restriction of needing to see a primary with a higher timestamp. We only use waitForReparent to ensure that we don't stop buffering prematurely when we receive a serving healthcheck from the primary that is being demoted; once we receive a non-serving check, we know we won't receive any more serving healthchecks until the reparent finishes. Specifically, this helps when PRS fails but stops gracefully because the new candidate couldn't catch up in time, in which case we promote the previous primary back. Buffering still stops in that case, because we are no longer explicitly waiting for a new primary.
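As a rough sketch of the gating logic described above, using simplified placeholder names such as shardState, waitForReparent, and lastPrimaryTermStart rather than the real Vitess types, the two rules (ignore serving healthchecks from the old primary while waiting for a reparent; drop the wait once a non-serving healthcheck arrives) might look like this:

```go
package main

import (
	"fmt"
	"time"
)

// shardState is a simplified stand-in for the watcher's per-shard state; the
// real implementation lives in go/vt/discovery/keyspace_events.go and
// go/vt/vtgate/buffer, and differs in detail.
type shardState struct {
	serving bool
	// waitForReparent is set when buffering starts because of a
	// "reparent has started" error: the shard may only be marked serving
	// again once a primary with a newer term start time is seen.
	waitForReparent bool
	// lastPrimaryTermStart is the term start time of the last primary we
	// observed for this shard.
	lastPrimaryTermStart time.Time
}

// onPrimaryHealthCheck applies the two rules described in the text above.
func (s *shardState) onPrimaryHealthCheck(serving bool, termStart time.Time) {
	if !serving {
		// A non-serving healthcheck from the primary means no more serving
		// healthchecks will arrive until the reparent finishes, so the
		// "newer primary" restriction can safely be dropped.
		s.waitForReparent = false
		s.serving = false
		return
	}
	if s.waitForReparent && !termStart.After(s.lastPrimaryTermStart) {
		// A serving healthcheck from the primary that is being demoted
		// (e.g. during the PRS grace period): keep buffering.
		return
	}
	s.waitForReparent = false
	s.lastPrimaryTermStart = termStart
	s.serving = true // buffering for this shard can stop
}

func main() {
	base := time.Now()
	s := &shardState{serving: true, lastPrimaryTermStart: base}

	// Buffering starts after a "reparent has started" error.
	s.serving = false
	s.waitForReparent = true

	// A serving healthcheck from the old primary must not end buffering.
	s.onPrimaryHealthCheck(true, base)
	fmt.Println("after old primary healthcheck, serving:", s.serving) // false

	// A serving healthcheck from the new primary ends buffering.
	s.onPrimaryHealthCheck(true, base.Add(time.Second))
	fmt.Println("after new primary healthcheck, serving:", s.serving) // true
}
```

The underlying idea is that a strictly newer primary term start time is the only signal that reliably distinguishes the new primary from the demoted one, while a non-serving healthcheck is proof that the demotion actually happened, so continuing to wait for a newer primary would only risk buffering until timeout if the reparent is rolled back.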

Related Issue(s)

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes

vitess-bot (Contributor) commented Oct 19, 2024

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test; enhancements and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added the NeedsBackportReason, NeedsDescriptionUpdate, NeedsIssue, and NeedsWebsiteDocsUpdate labels Oct 19, 2024
@github-actions github-actions bot added this to the v22.0.0 milestone Oct 19, 2024
codecov bot commented Oct 19, 2024

Codecov Report

Attention: Patch coverage is 74.71264% with 22 lines in your changes missing coverage. Please review.

Project coverage is 67.31%. Comparing base (655f7fa) to head (a4ee765).
Report is 2 commits behind head on main.

Files with missing lines               Patch %   Missing lines
go/vt/vtgate/buffer/shard_buffer.go    40.00%    9
go/vt/discovery/keyspace_events.go     85.71%    7
go/vt/discovery/fake_healthcheck.go    66.66%    4
go/vt/vtgate/buffer/buffer.go          80.00%    2
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #17013      +/-   ##
==========================================
- Coverage   67.32%   67.31%   -0.01%     
==========================================
  Files        1569     1569              
  Lines      252548   252627      +79     
==========================================
+ Hits       170023   170067      +44     
- Misses      82525    82560      +35     


arthurschreiber (Contributor) commented Oct 19, 2024

It might be easier to reproduce this via an "external" failover, as it's simpler to get the sequencing right:

  • Set up a cluster with 2 shards (-80, 80-)
  • Switch the -80 and 80- primaries into read-only mode to start buffering.
  • Start running a DML query that's hitting shard -80 and one DML query that's hitting shard 80-. Both of these queries should get buffered and "block".
  • Perform external promotion of a replica in shard -80 to primary. Then run TabletExternallyReparented for the externally promoted replica to mark it as primary in the vitess topology.
  • This will stop buffering for both shards -80 and 80-. The query that was buffered for shard -80 will be retried and will be successful, the query that was buffered for shard 80- will fail because buffering was ended prematurely.

GuptaManan100 (Member, Author) commented:
Thanks Arthur! I was able to reproduce the problem now!

@GuptaManan100 GuptaManan100 force-pushed the premature-buffering-stop branch from d4b042c to 9aaab35 on October 25, 2024 at 10:09
@GuptaManan100 GuptaManan100 removed the NeedsWebsiteDocsUpdate, NeedsIssue, and NeedsBackportReason labels Oct 25, 2024
@GuptaManan100 GuptaManan100 changed the title from "Fix for premature buffering" to "Fix to prevent stopping buffering prematurely" Oct 25, 2024
@GuptaManan100 GuptaManan100 added the Type: Bug, Type: Enhancement, and Component: Query Serving labels and removed the NeedsDescriptionUpdate label Oct 25, 2024
arthurschreiber (Contributor) left a comment:

❤️

@vitess-bot vitess-bot mentioned this pull request Oct 28, 2024
arthurschreiber (Contributor) commented:
@GuptaManan100 Is there anything outstanding before this can land? I can try to give these changes a go in our environments to see if the issues we've run into before are all fixed.

GuptaManan100 (Member, Author) commented:
@arthurschreiber It's complete from my side; I'm just waiting for Deepthi to review it. It would be great if you could test it out, though!

@vitess-bot vitess-bot mentioned this pull request Nov 5, 2024
Resolved (outdated) review threads on go/vt/vtgate/buffer/buffer.go and go/vt/discovery/keyspace_events.go
@GuptaManan100 GuptaManan100 merged commit 0403d54 into vitessio:main Nov 11, 2024
98 checks passed
@GuptaManan100 GuptaManan100 deleted the premature-buffering-stop branch November 11, 2024 06:44
vitess-bot pushed a commit that referenced this pull request Nov 11, 2024
GuptaManan100 added a commit that referenced this pull request Nov 11, 2024
GuptaManan100 added a commit that referenced this pull request Nov 11, 2024
GuptaManan100 added a commit that referenced this pull request Nov 11, 2024
GuptaManan100 pushed a commit that referenced this pull request Nov 11, 2024
…#17205)
GuptaManan100 added a commit that referenced this pull request Nov 11, 2024
…#17202)
GuptaManan100 added a commit that referenced this pull request Nov 11, 2024
…#17204)
GuptaManan100 added a commit that referenced this pull request Nov 11, 2024
…#17203)
arthurschreiber (Contributor) commented:
@GuptaManan100 Just wanted to let you know that we've been running a backport of this (plus a few of the other buffering-related bugfixes) to v18 for the past few days, and have not encountered any issues whatsoever. 🎉

GuptaManan100 (Member, Author) commented:
Perfect! Thank you!

rvrangel pushed a commit to rvrangel/vitess that referenced this pull request Nov 21, 2024
tanjinx pushed a commit to slackhq/vitess that referenced this pull request Nov 21, 2024
…o#17013) (vitessio#17203)
tanjinx added a commit to slackhq/vitess that referenced this pull request Nov 22, 2024
…o#17013) (vitessio#17203) (#564)
Labels
Backport to: release-19.0, Backport to: release-20.0, Backport to: release-21.0, Component: Query Serving, Type: Bug, Type: Enhancement
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Bug Report: Premature buffering stop during concurrent reparenting can lead to query failures
4 participants