
CDC/etcd_worker: add rate limiter to limit EtcdWorker tick frequency #3219

Merged

7 commits merged on Nov 4, 2021

Conversation

@asddongmen (Contributor) commented Nov 1, 2021

What problem does this PR solve?

#3112
Overly frequent EtcdWorker ticks overburden etcd, and the current etcd QPS increases exponentially with the number of tables that need to be replicated.

What is changed and how it works?

Add a rate limiter to limit the EtcdWorker tick frequency.
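
As a minimal sketch of the approach (assuming golang.org/x/time/rate; the helper name, constant, and limiter parameters below are illustrative rather than the exact code in this PR):

```go
package orchestrator

import (
	"context"
	"time"

	"golang.org/x/time/rate"
)

// etcdWorkerTickRate caps how many reaction-loop iterations the EtcdWorker
// may run per second (10/s matches the setting used in the tests below).
const etcdWorkerTickRate = 10

// runWithRateLimit is a simplified stand-in for the EtcdWorker run loop:
// before reacting to the latest batch of etcd watch events, it waits on the
// limiter so that ticks never exceed etcdWorkerTickRate per second.
func runWithRateLimit(ctx context.Context, tick func(ctx context.Context) error) error {
	limiter := rate.NewLimiter(rate.Every(time.Second/etcdWorkerTickRate), 1)
	for {
		// Block until the limiter grants a token; bursts of etcd events are
		// smoothed into at most etcdWorkerTickRate ticks per second.
		if err := limiter.Wait(ctx); err != nil {
			return err
		}
		if err := tick(ctx); err != nil {
			return err
		}
	}
}
```

The idea is presumably that delaying ticks lets pending watch events accumulate and be handled in larger batches rather than being dropped, trading some replication speed for a sizable reduction in etcd QPS (see the summary and tests below).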

Check List

Tests

  • Manual test (add detailed scripts or steps below)

Summary: Limiting the tick frequency of EtcdWorker reduces etcd QPS by about 50%, but it also reduces the replication speed by about 20%.

Test Environment:

  1. 2 machines: 8-core CPU, 16 GB memory, 500 GB high-speed disk
  2. Sink type: mysql
  3. Cluster topology:

pd_servers:
  - host: 172.x.x.x
  - host: 172.x.x.x

tidb_servers:
  - host: 172.x.x.x
  - host: 172.x.x.x

tikv_servers:
  - host: 172.x.x.x
  - host: 172.x.x.x

cdc_servers:
  - host: 172.x.x.x
  - host: 172.x.x.x

monitoring_servers:
  - host: 172.x.x.x

grafana_servers:
  - host: 172.x.x.x

alertmanager_servers:
  - host: 172.x.x.x

Test 1 (EtcdWorker ticks limited to 10 times/s)

Create 16 changefeeds, each synchronizing one table, and write 500k rows of data to each table upstream.

|           | Max checkpoint lag | Average checkpoint lag at peak | Max etcd QPS | CPU usage | Total synchronization time |
|-----------|--------------------|--------------------------------|--------------|-----------|----------------------------|
| owner     | 3.3 min            | 1.1 min                        | 320          | 164%      | 4 min                      |
| processor | -                  | -                              | 160          | 82%       | -                          |

Test 2 (EtcdWorker ticks limited to 10 times/s)

Create 30 changefeeds, each synchronizing one table, and write 500k rows of data to each table upstream.

|           | Max checkpoint lag | Average checkpoint lag at peak | Max etcd QPS | CPU usage | Total synchronization time |
|-----------|--------------------|--------------------------------|--------------|-----------|----------------------------|
| owner     | 6.8 min            | 5 min                          | 600          | 189%      | 10 min                     |
| processor | -                  | -                              | 300          | 95%       | -                          |

Test 3 (EtcdWorker ticks without limit)

Create 16 changefeeds, each synchronizing one table, and write 500k rows of data to each table upstream.

|           | Max checkpoint lag | Average checkpoint lag at peak | Max etcd QPS | CPU usage | Total synchronization time |
|-----------|--------------------|--------------------------------|--------------|-----------|----------------------------|
| owner     | 2.74 min           | 50 s                           | 630          | 193%      | 3 min                      |
| processor | -                  | -                              | 260          | 92%       | -                          |

Test 4 (EtcdWorker ticks without limit)

Create 30 changefeeds, each synchronizing one table, and write 500k rows of data to each table upstream.

|           | Max checkpoint lag | Average checkpoint lag at peak | Max etcd QPS | CPU usage | Total synchronization time |
|-----------|--------------------|--------------------------------|--------------|-----------|----------------------------|
| owner     | 6.9 min            | 3 min                          | 1180         | 199%      | 8 min                      |
| processor | -                  | -                              | 600          | 100%      | -                          |

Test 5 (Incremental scan, EtcdWorker ticks limited to 10 times/s)

  1. Create 30 changefeeds and pause them.
  2. Create 30 tables upstream and write 500k rows of data into each table.
  3. After the upstream writes finish, resume the 30 changefeeds.

|           | Max checkpoint lag | Average checkpoint lag at peak | Max etcd QPS | CPU usage | Total synchronization time |
|-----------|--------------------|--------------------------------|--------------|-----------|----------------------------|
| owner     | 7 min              | 4 min                          | 600          | 400%      | 9 min                      |
| processor | -                  | -                              | 300          | 200%      | -                          |

Test 6 (Incremental scan, EtcdWorker ticks limited to 10 times/s)

  1. Create 30 changefeeds and pause them.
  2. Create 30 tables upstream and write 500k rows of data into each table.
  3. After the upstream writes finish, resume the 30 changefeeds.

The owner went down immediately when the changefeeds were resumed.

|           | Max checkpoint lag | Average checkpoint lag at peak | Max etcd QPS | CPU usage | Total synchronization time |
|-----------|--------------------|--------------------------------|--------------|-----------|----------------------------|
| owner     | 10 min             | 7 min                          | 871          | 400%      | 10 min                     |
| processor | -                  | -                              | 658          | 200%      | -                          |

Side effects

  • Exported variable changed
  • Possible performance regression

Related changes

  • Need to cherry-pick to the release branch

Release note

Please add a release note.
If you don't think this PR needs a release note then fill it with `None`.

@ti-chi-bot (Member) commented Nov 1, 2021

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • liuzix
  • overvenus

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by submitting an approval review.
Reviewer can cancel approval by submitting a request changes review.

@ti-chi-bot ti-chi-bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Nov 1, 2021
@asddongmen asddongmen added the component/replica-model Replication model component. label Nov 1, 2021
@asddongmen asddongmen changed the title cdc/etcd_worker: add rate limiter to limit EtcdWorker ticks frequency cdc/etcd_worker: add rate limiter to limit EtcdWorker tick frequency Nov 1, 2021
@ti-chi-bot ti-chi-bot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Nov 1, 2021
@asddongmen asddongmen changed the title cdc/etcd_worker: add rate limiter to limit EtcdWorker tick frequency CDC/etcd_worker: add rate limiter to limit EtcdWorker tick frequency Nov 1, 2021
@asddongmen (Contributor, Author)

/run-all-tests

@maxshuang (Contributor) commented Nov 1, 2021

> the current etcd QPS increases exponentially with the number of tables that need to be replicated.

@asddongmen could you explain why?

Is this PR a workaround or a solution?

@maxshuang (Contributor)

I still can't get the point of your tests. "Write 500k rows of data to each table upstream" means a heavy continuous flow from upstream, so it seems reasonable to increase the EtcdWorker ticks to speed things up.

Am I right? Or would merging some resolvedTs or checkpointTs reports be a better solution, instead of just limiting the rate?

@liuzix (Contributor) commented Nov 1, 2021

> I still can't get the point of your tests. "Write 500k rows of data to each table upstream" means a heavy continuous flow from upstream, so it seems reasonable to increase the EtcdWorker ticks to speed things up.
>
> Am I right? Or would merging some resolvedTs or checkpointTs reports be a better solution, instead of just limiting the rate?

From what I understand, the point of this PR is to address the problem that, within a given period, the number of ticks in one node must be greater than or equal to the number of updates made by all nodes. At a very low level, this behavior is not optimal.

If by "merge some resolvedTs or checkpointTs report" you mean batching updates for different changefeeds, I think that would be a different optimization. The behavior of the business logic is orthogonal to what we are trying to fix here. We can investigate your proposal and address it in another PR.

@ti-chi-bot ti-chi-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Nov 2, 2021
@ti-chi-bot ti-chi-bot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Nov 3, 2021
@liuzix (Contributor) commented Nov 3, 2021

/run-integration-tests

@ti-chi-bot ti-chi-bot added the status/LGT1 Indicates that a PR has LGTM 1. label Nov 3, 2021
Review threads on pkg/orchestrator/etcd_worker.go: outdated, resolved.
@codecov-commenter

Codecov Report

Merging #3219 (9adb87d) into master (3c3b915) will decrease coverage by 0.3558%.
The diff coverage is 51.4877%.

@@               Coverage Diff                @@
##             master      #3219        +/-   ##
================================================
- Coverage   57.2251%   56.8692%   -0.3559%     
================================================
  Files           163        211        +48     
  Lines         19453      22768      +3315     
================================================
+ Hits          11132      12948      +1816     
- Misses         7261       8493      +1232     
- Partials       1060       1327       +267     

@ti-chi-bot ti-chi-bot added status/LGT2 Indicates that a PR has LGTM 2. and removed status/LGT1 Indicates that a PR has LGTM 1. labels Nov 3, 2021
@overvenus (Member)

/merge

@ti-chi-bot (Member)

This pull request has been accepted and is ready to merge.

Commit hash: ea892d8

@ti-chi-bot ti-chi-bot added the status/can-merge Indicates a PR has been approved by a committer. label Nov 3, 2021
@liuzix (Contributor) commented Nov 3, 2021

/run-all-tests

@amyangfei (Contributor)

/merge

@asddongmen (Contributor, Author)

/run-dm-integration-tests

@ti-chi-bot (Member)

In response to a cherrypick label: new pull request created: #3267.

@ti-chi-bot (Member)

In response to a cherrypick label: new pull request created: #3268.

@ti-chi-bot (Member)

In response to a cherrypick label: new pull request created: #3269.

@ti-chi-bot (Member)

In response to a cherrypick label: new pull request created: #3270.

@ti-chi-bot (Member)

In response to a cherrypick label: new pull request created: #3271.

@Rustin170506 (Member)

@asddongmen You don't seem to have filled out the release note correctly; please fill in the release note in the next fix.

Labels

  • component/replica-model Replication model component.
  • needs-cherry-pick-release-4.0 Should cherry pick this PR to release-4.0 branch.
  • needs-cherry-pick-release-5.0 Should cherry pick this PR to release-5.0 branch.
  • needs-cherry-pick-release-5.1 Should cherry pick this PR to release-5.1 branch.
  • needs-cherry-pick-release-5.2 Should cherry pick this PR to release-5.2 branch.
  • needs-cherry-pick-release-5.3 Should cherry pick this PR to release-5.3 branch.
  • release-note Denotes a PR that will be considered when it comes time to generate release notes.
  • size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
  • status/can-merge Indicates a PR has been approved by a committer.
  • status/LGT2 Indicates that a PR has LGTM 2.