What problem does this PR solve?
Currently, DDLs are sent to the TiDB cluster sequentially. If we are the DDL owner, that is fine: we can execute each DDL immediately and return very fast.
But usually we are not the owner (and probably cannot be). Then things get bad: we must block, waiting for our DDL job to be pushed to the queue and executed by the owner, before we can send the next DDL. Yet during that waiting time we could already be pushing more DDLs into the DDL job queue.
This PR makes GoCreateTables send create table jobs into the DDL queue concurrently.

What is changed and how it works?
We change GoCreateTables and make it use the following strategy to create tables: build a session pool (dbPool) and use this DB pool to execute the DDLs concurrently. A hedged sketch of this strategy is shown below.
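For concreteness, here is a minimal sketch of the concurrent strategy, assuming a hypothetical Session interface and a plain slice of CREATE TABLE statements. Apart from dbPool, the names are illustrative and do not reflect BR's real API; the actual GoCreateTables differs in detail.

```go
// Package restore is used here only to make the sketch compile; all names
// except dbPool are illustrative, not BR's real API.
package restore

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// Session stands in for a TiDB session that can submit one DDL job.
type Session interface {
	CreateTable(ctx context.Context, ddl string) error
}

// createTablesConcurrently drains the CREATE TABLE statements through a
// fixed-size session pool, so several DDL jobs can sit in the TiDB DDL job
// queue at the same time instead of being submitted one by one.
func createTablesConcurrently(ctx context.Context, dbPool []Session, ddls []string) error {
	jobs := make(chan string)
	eg, ectx := errgroup.WithContext(ctx)

	// One worker per pooled session; each worker keeps sending DDLs without
	// waiting for the other workers' jobs to be executed by the DDL owner.
	for _, se := range dbPool {
		se := se
		eg.Go(func() error {
			for ddl := range jobs {
				if err := se.CreateTable(ectx, ddl); err != nil {
					return err
				}
			}
			return nil
		})
	}

	// Feed the queue, stopping early if any worker has failed.
sendLoop:
	for _, ddl := range ddls {
		select {
		case jobs <- ddl:
		case <-ectx.Done():
			break sendLoop
		}
	}
	close(jobs)
	return eg.Wait()
}
```

Using a bounded pool rather than one goroutine per table keeps the number of in-flight DDL jobs under control, which is the concern raised under More Things below.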
Check List
Tests
- Integration test (br_300_small_tables)

We tested it locally with a workload of 300 tables, 100 records per table.
With different concurrency levels, the results on my machine were:
Release Note
More Things
The DDL concurrency currently follows cfg.concurrency, which may sometimes be too big and cause many transaction conflicts. Since the execution time of a DDL does not depend much on the environment, maybe a fixed value (like 16 or 32) would be better?
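If a fixed ceiling is preferred, one possible option (not part of this PR; maxDDLConcurrency and ddlConcurrency are hypothetical names) is to clamp the configured value when sizing the DDL session pool:

```go
// maxDDLConcurrency is a hypothetical fixed ceiling for the DDL session pool,
// applied regardless of how large cfg.concurrency is configured.
const maxDDLConcurrency = 16

// ddlConcurrency clamps the configured concurrency into [1, maxDDLConcurrency].
func ddlConcurrency(cfgConcurrency int) int {
	if cfgConcurrency < 1 {
		return 1
	}
	if cfgConcurrency > maxDDLConcurrency {
		return maxDDLConcurrency
	}
	return cfgConcurrency
}
```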