
backup: support split big region into small backup files (#9283) #9448

Merged
merged 9 commits into tikv:release-4.0 on Jan 28, 2021

Conversation

ti-srebot
Contributor

cherry-pick #9283 to release-4.0

What problem does this PR solve?

Issue Number: close #9144

Problem Summary: BR reads all the data of a region and feeds it into an SST writer, which buffers everything in memory. If a region is huge, TiKV may crash with OOM because it keeps all of that region's data in memory.

What is changed and how it works?

What's Changed: Record the size of the written txn entries. When it reaches `region_max_size`, save the data cached in RocksDB to an SST file and then switch to the next file.
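
Below is a minimal sketch of that bookkeeping, not TiKV's actual backup writer: the `SstSplitWriter` name, the plain-file output, and the chunk naming are all illustrative assumptions. The real code saves SST files through the engine rather than plain files, but the idea is the same: count what has been written and start a new file once the limit is reached.

```rust
// Minimal sketch (not TiKV's real backup writer): track how much has been
// written and rotate to a new output file once the configured limit is hit.
use std::fs::File;
use std::io::{self, Write};
use std::path::PathBuf;

struct SstSplitWriter {
    dir: PathBuf,
    max_size: u64,      // analogous to backup.sst-max-size
    written: u64,       // bytes written into the current chunk
    file_seq: u32,
    current: Option<File>,
}

impl SstSplitWriter {
    fn new(dir: PathBuf, max_size: u64) -> Self {
        Self { dir, max_size, written: 0, file_seq: 0, current: None }
    }

    /// Write one txn entry; start a new file when the size limit is reached.
    fn write_entry(&mut self, key: &[u8], value: &[u8]) -> io::Result<()> {
        if self.current.is_none() || self.written >= self.max_size {
            self.rotate()?;
        }
        let file = self.current.as_mut().expect("rotate() opened a file");
        file.write_all(key)?;
        file.write_all(value)?;
        self.written += (key.len() + value.len()) as u64;
        Ok(())
    }

    /// Flush the finished chunk (like saving one SST) and open the next one.
    fn rotate(&mut self) -> io::Result<()> {
        if let Some(done) = self.current.take() {
            done.sync_all()?;
        }
        self.file_seq += 1;
        let path = self.dir.join(format!("chunk_{}.sst", self.file_seq));
        self.current = Some(File::create(path)?);
        self.written = 0;
        Ok(())
    }

    /// Flush and close the last chunk.
    fn finish(&mut self) -> io::Result<()> {
        if let Some(done) = self.current.take() {
            done.sync_all()?;
        }
        Ok(())
    }
}

fn main() -> io::Result<()> {
    // 15 MiB limit, mirroring the manual test below.
    let mut writer = SstSplitWriter::new(std::env::temp_dir(), 15 * 1024 * 1024);
    for i in 0..100_000u32 {
        let key = format!("t_{:08}", i);
        writer.write_entry(key.as_bytes(), &[0u8; 256])?;
    }
    writer.finish()
}
```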

Related changes

  • Need to cherry-pick to the release branch

Check List

Tests

  • Unit test
  • Integration test
  • Manual test (add detailed scripts or steps below)
  1. Set sst-max-size to 15MiB.
mysql> select * from CLUSTER_CONFIG where `TYPE`="tikv";
+------+-----------------+---------------------------------------------------------------+------------------------------------------------------+
| TYPE | INSTANCE        | KEY                                                           | VALUE                                                |
+------+-----------------+---------------------------------------------------------------+------------------------------------------------------+
| tikv | 127.0.0.1:20160 | backup.batch-size                                             | 8                                                    |
| tikv | 127.0.0.1:20160 | backup.num-threads                                            | 9                                                    |
| tikv | 127.0.0.1:20160 | backup.sst-max-size                                           | 15MiB                                                |
...
  2. Back up around 100 MB of data (without compaction) successfully.
$ ./br backup full -s ./backup --pd http://127.0.0.1:2379 
Full backup <--------------------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Checksum <-----------------------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
[2020/12/31 14:39:12.534 +08:00] [INFO] [collector.go:60] ["Full backup Success summary: total backup ranges: 2, total success: 2, total failed: 0, total take(Full backup time): 4.273097395s, total take(real time): 8.133315406s, total kv: 8000000, total size(MB): 361.27, avg speed(MB/s): 84.55"] ["backup checksum"=901.754111ms] ["backup fast checksum"=6.09384ms] ["backup total regions"=10] [BackupTS=421893700168974340] [Size=48023090]
  3. The big region is split into several files (see the directory-listing sketch after this list):
-rw-r--r-- 1 * * 1.5M Dec 31 14:39 1_60_28_74219326eeb0a4ae3a0f5190f7784132bb0e44791391547ef66862aaeb668579_1609396745730_write.sst
-rw-r--r-- 1 * * 1.2M Dec 31 14:39 1_60_28_b7a5509d9912c66a21589d614cfc8828acd4051a7eeea3f24f5a7b337b5a389e_1609396746062_write.sst
-rw-r--r-- 1 * * 1.5M Dec 31 14:39 1_60_28_cdcc2ce1c18a30a2b779b574f64de9f0e3be81c2d8720d5af0a9ef9633f8fbb7_1609396745429_write.sst
-rw-r--r-- 1 * * 2.4M Dec 31 14:39 1_62_28_4259e616a6e7b70c33ee64af60230f3e4160af9ac7aac723f033cddf6681826a_1609396747038_write.sst
-rw-r--r-- 1 * * 2.4M Dec 31 14:39 1_62_28_5d0de44b65fb805e45c93278661edd39792308c8ce90855b54118c4959ec9f16_1609396746731_write.sst
-rw-r--r-- 1 * * 2.4M Dec 31 14:39 1_62_28_ef7ab4b5471b088ee909870e316d926f31f4f6ec771754690eac61af76e8782c_1609396747374_write.sst
-rw-r--r-- 1 * * 1.5M Dec 31 14:39 1_64_29_74211aae8215fe9cde8bd7ceb8494afdcc18e5c6a8c5830292a577a9859d38e1_1609396746671_write.sst
-rw-r--r-- 1 * * 1.2M Dec 31 14:39 1_64_29_81e152c98742938c1662241fac1c841319029e800da6881d799a16723cb42888_1609396747010_write.sst
-rw-r--r-- 1 * * 1.5M Dec 31 14:39 1_64_29_ce0dde9826aee9e5ccac0a516f18b9871d3897effd559ff7450b8e56ac449bbd_1609396746349_write.sst
-rw-r--r-- 1 * *   78 Dec 31 14:39 backup.lock
-rw-r--r-- 1 * * 229K Dec 31 14:39 backupmeta
  4. Restore the backed-up data. It works successfully and passes the manual check.
./br restore full -s ./backup --pd http://127.0.0.1:2379
Full restore <-------------------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
[2020/12/31 14:42:49.983 +08:00] [INFO] [collector.go:60] ["Full restore Success summary: total restore files: 27, total success: 27, total failed: 0, total take(Full restore time): 5.063048828s, total take(real time): 7.84620924s, total kv: 8000000, total size(MB): 361.27, avg speed(MB/s): 71.36"] ["split region"=26.217737ms] ["restore checksum"=4.10792638s] ["restore ranges"=26] [Size=48023090]
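
For the manual check above, a small illustrative helper (not part of BR; it only assumes the `-s ./backup` directory used in the commands above) can list the produced `_write.sst` files and their sizes:

```rust
// Illustrative helper only (not part of BR): list every *_write.sst file in a
// backup directory with its size, plus the total, to eyeball how the big
// region was split.
use std::fs;
use std::io;
use std::path::Path;

fn list_backup_ssts(dir: &Path) -> io::Result<()> {
    let mut total: u64 = 0;
    let mut count: u32 = 0;
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let name = entry.file_name().to_string_lossy().into_owned();
        if name.ends_with("_write.sst") {
            let size = entry.metadata()?.len();
            println!("{:>12} bytes  {}", size, name);
            total += size;
            count += 1;
        }
    }
    println!("{} SST files, {} bytes in total", count, total);
    Ok(())
}

fn main() -> io::Result<()> {
    // "./backup" matches the -s/--storage path used in the steps above.
    list_backup_ssts(Path::new("./backup"))
}
```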

Release note

  • Fix the issue that TiKV may OOM when backing up a huge region.

Signed-off-by: ti-srebot <ti-srebot@pingcap.com>
@ti-chi-bot ti-chi-bot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Jan 6, 2021
Signed-off-by: Chunzhu Li <lichunzhu@stu.xjtu.edu.cn>
Signed-off-by: Chunzhu Li <lichunzhu@stu.xjtu.edu.cn>
Signed-off-by: Chunzhu Li <lichunzhu@stu.xjtu.edu.cn>
@kennytm
Contributor

kennytm commented Jan 7, 2021

/lgtm

@ti-chi-bot
Member

@kennytm: /lgtm is only allowed for the reviewers in list.

In response to this:

/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/ti-community-prow repository.

@kennytm
Contributor

kennytm commented Jan 7, 2021

/lgtm

@ti-chi-bot ti-chi-bot added the status/LGT1 Indicates that a PR has LGTM 1. label Jan 7, 2021
@lonng lonng requested review from overvenus and kennytm and removed request for lonng January 7, 2021 13:02
@kennytm kennytm removed their request for review January 8, 2021 01:07
@Rustin170506
Member

/label do-not-merge/cherry-pick-not-approved

@Rustin170506
Member

Rustin170506 commented Jan 8, 2021

This cherry pick PR is for a release branch and has not yet been approved by release team.
Adding the do-not-merge/cherry-pick-not-approved label.

To merge this cherry pick, it must first be approved (/lgtm + /merge) by the collaborators.

AFTER it has been approved by collaborators, please send an email to the QA team requesting approval, and the QA team will help you merge the PR.

@lichunzhu
Contributor

@overvenus PTAL

@jebter
Collaborator

jebter commented Jan 25, 2021

/label cherry-pick-approved

@ti-chi-bot ti-chi-bot added cherry-pick-approved Cherry pick PR approved by release team. and removed do-not-merge/cherry-pick-not-approved labels Jan 25, 2021
@overvenus
Member

/lgtm

@ti-chi-bot
Member

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • overvenus

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by writing /lgtm in a comment.
Reviewer can cancel approval by writing /lgtm cancel in a comment.

@ti-chi-bot ti-chi-bot added status/LGT2 Indicates that a PR has LGTM 2. and removed status/LGT1 Indicates that a PR has LGTM 1. labels Jan 25, 2021
@overvenus overvenus added the component/backup-restore Component: backup, import, external_storage label Jan 26, 2021
@lichunzhu
Contributor

lichunzhu commented Jan 28, 2021

Tested backing up one big region (9 GB) on TiKV v4.0.10 and on a TiKV built from this PR (called "new TiKV" below):

All backup and restore jobs pass the checksum.

TiKV configurations

server_configs:
  tikv:
    backup.sst-max-size: "144MB"
    coprocessor.region-max-size: "14400MB"
    coprocessor.region-split-size: "9600MB"
    coprocessor.region-max-keys: 144000000
    coprocessor.region-split-keys: 96000000

Region info:

mysql> show table t regions;
+-----------+-----------+---------+-----------+-----------------+---------------+------------+---------------+------------+----------------------+------------------+
| REGION_ID | START_KEY | END_KEY | LEADER_ID | LEADER_STORE_ID | PEERS         | SCATTERING | WRITTEN_BYTES | READ_BYTES | APPROXIMATE_SIZE(MB) | APPROXIMATE_KEYS |
+-----------+-----------+---------+-----------+-----------------+---------------+------------+---------------+------------+----------------------+------------------+
|       288 | t_52_     |         |       290 |               4 | 289, 290, 291 |          0 |             0 |          0 |                 9259 |        120000000 |
+-----------+-----------+---------+-----------+-----------------+---------------+------------+---------------+------------+----------------------+------------------+
1 row in set (0.01 sec)
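
As a back-of-the-envelope check only (a toy calculation, not how TiKV plans the backup), dividing the approximate region size by `backup.sst-max-size` gives a rough idea of how many backup files this region could produce; the actual count in the listings below differs because the approximate region size and the compressed on-disk SST sizes are not measured the same way:

```rust
// Toy estimate only (not TiKV's logic): roughly how many backup files a
// single region might be split into, given its approximate size and
// backup.sst-max-size.
fn estimated_backup_files(region_size_mb: u64, sst_max_size_mb: u64) -> u64 {
    // Integer ceiling division.
    (region_size_mb + sst_max_size_mb - 1) / sst_max_size_mb
}

fn main() {
    // APPROXIMATE_SIZE(MB) reported above is 9259; backup.sst-max-size is 144 MB.
    println!(
        "expect the region to be split into roughly {} files",
        estimated_backup_files(9259, 144)
    );
}
```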

Backup memory usage of v4.0.10 TiKV:

The max memory usage was around 1.4 GiB.

Backup memory usage of new TiKV:

The max memory usage was around 350 MiB.

Restore memory usage of new TiKV:

The max memory usage was around 572 MiB.

Comparison of the backup files:

New TiKV:

$ du -sh ./backup4/*
17M     ./backup4/4_288_171_035d8443af15b3bf29ca4ae6910af50b10a9ba6dac9071c9c6d9932dea1f6326_1611801463341_write.sst
17M     ./backup4/4_288_171_03f70d93d5d9d2f5d2718fbcc83641479b7f3f45e011702a39cb774bae1b3652_1611801447347_write.sst
17M     ./backup4/4_288_171_0823587195b2d7930d747fcdc05117a4ab9ae14bb6192b8df777bdb1d6d33c7d_1611801457020_write.sst
22M     ./backup4/4_288_171_12ecf1c4e3764909ffb5d16a95eee17523ee5071109fb7d59c02eb6a02ff9002_1611801452752_write.sst
17M     ./backup4/4_288_171_22f8525218ea26880e474b7e35b0e6417238a1b18b97b732b91d210e28145315_1611801475975_write.sst
17M     ./backup4/4_288_171_2334a18b4455213018fb96ec3c92e951dcc4b10106e91896d882a5ee0d2dbea4_1611801431331_write.sst
17M     ./backup4/4_288_171_235b6e0428e4f5f777fef276c1ecdf9c2f605ddfbee3fce68725b1ea75569a0f_1611801495038_write.sst
22M     ./backup4/4_288_171_2d1273b63ea96a1480887395bd79eeb4b96654d1bb7d8780a954c43399cb8807_1611801463311_write.sst
17M     ./backup4/4_288_171_324a1888766a99ca99e74665df9281d5c550f7dd023de84ac03a920525dd1fef_1611801479195_write.sst
22M     ./backup4/4_288_171_3d337c0353924acb99bca61784fcf0307532f3a67104203d79f0fcf9572b34ca_1611801484641_write.sst
17M     ./backup4/4_288_171_45d3b46d695df1697a55f466720ba9c9b6209a7ead538d1253f6546edb69cc72_1611801444179_write.sst
17M     ./backup4/4_288_171_495d1378812a221d4e2010ff3b752824f01d24ffff527b8f2bb4ab10b6f93243_1611801488741_write.sst
23M     ./backup4/4_288_171_4d4bf84bd30a3ae10b645ad9354c2c26cbc5ff2a95b338b93b820db95359617b_1611801431309_write.sst
23M     ./backup4/4_288_171_55886dca4e4c8a0442e2901c310049cf20da42003e140597b5f4b1803ef4cc1f_1611801442211_write.sst
17M     ./backup4/4_288_171_5eb09a8e2155850557a7a4e0db3b3aa6b00f17c3d3d64c57e8c746b63d16ff1f_1611801482332_write.sst
22M     ./backup4/4_288_171_6103bf93c045227d79e1dda9c6d56eb532f95ac7f65bb2307a6ce87c5fbc534e_1611801474002_write.sst
23M     ./backup4/4_288_171_7674e4ecc286c346c8eaf20272f056895e426170e1b345eed8ccf5968de9b054_1611801434915_write.sst
17M     ./backup4/4_288_171_7f45450d5d1d045c4e2a5c2fedc48e6e7dedf9d0aeea625cf771496c5fff7f84_1611801453781_write.sst
22M     ./backup4/4_288_171_82d2064801073ba8a4503768788eec8173296b7a895d9df99d66f1bfc6914bd0_1611801445733_write.sst
17M     ./backup4/4_288_171_83fc8f3a78e8fd5a8aad6966a43bc3b2deb33f43ee547df9bba0e7e4d96d4806_1611801472835_write.sst
17M     ./backup4/4_288_171_8514fa1ee70a7498e52ed7934fa49f3c558d45efaa6463b74acd764b8f8c271f_1611801466478_write.sst
22M     ./backup4/4_288_171_87eb8c3c1f033683a5489cd908b9234feb3ee722567b90ffa2163dfe7425bb4a_1611801470421_write.sst
22M     ./backup4/4_288_171_8d03e0426a72c77e81031e075e28636b5e53410cb4f8e4f7214b39eb5dc3a6d5_1611801456197_write.sst
17M     ./backup4/4_288_171_8f40d6078d4e235a7841777c4e7b7bdd8f3da8d254c90b5d885facf21003d967_1611801450599_write.sst
17M     ./backup4/4_288_171_9a66d88021606eaa44fc321483dce3ee0f7fb5440a5186f6f0c2934ba74924bd_1611801434462_write.sst
22M     ./backup4/4_288_171_9be3e25f9babfaf2e1b09aa91c006dbc1d3920335c5c29bc3336081e9ce1117e_1611801491808_write.sst
17M     ./backup4/4_288_171_9c8c89c4ae8c1f7a8860fda9a554e7de7aa8a9ecb364314e6ab990e53fe2a64c_1611801491866_write.sst
17M     ./backup4/4_288_171_a4562c5f72254bb0ac149f9a971c2518f10a3b13d78f9b8901b7f9ca097ae23d_1611801437809_write.sst
17M     ./backup4/4_288_171_b355a34e1b78086ead381756d15ed5efcfa3d183557e3bc08236dcded2e98f8b_1611801440985_write.sst
22M     ./backup4/4_288_171_ba2da8b47206b710b1e48bea33cde5c26977f37e5c19497a4462495c37b3f8e5_1611801477467_write.sst
22M     ./backup4/4_288_171_ba8e90a48bb305b18245762dbc05c14114591ad63592d2088313ccd404e2aabc_1611801488354_write.sst
18M     ./backup4/4_288_171_baba2ae33a5f1862271a9d1c6316ed8e0fdf5a2e60d5883bd05a582342db4db8_1611801498871_write.sst
22M     ./backup4/4_288_171_c27783a685d1609d1961fa6985d164ca657fc7ee42668cee6da7a573c81ffb35_1611801495315_write.sst
22M     ./backup4/4_288_171_c478ca2dfc776a24ea24d90537415d9536636a268e1e58ab0e4336b4f15745aa_1611801481040_write.sst
23M     ./backup4/4_288_171_d152185af3bf9e34dd26ac875fdfdcca555de9396b67c1b01e9d3d28085724b9_1611801438590_write.sst
22M     ./backup4/4_288_171_da91b93a9365c999753b25c2081b602f939ff5a34c103a1bdad1b4e76c89e198_1611801459725_write.sst
14M     ./backup4/4_288_171_e10c4b9764350bd59d8ddbc866c2875a922e2ef6ecfe8a4320566c30136b66bf_1611801498246_write.sst
22M     ./backup4/4_288_171_e27b93cec8fe377817da31cc526467023d7919b81d608c9b14f489d1430d45fb_1611801466816_write.sst
17M     ./backup4/4_288_171_e62a86a85ecf1145cf229175334a822db3c02e52c01a26bb0b4c3645b5318cba_1611801460163_write.sst
17M     ./backup4/4_288_171_e82148c04fe121c58e3aaf36ccf9da869b0ec0172f20e327259b644e57a5bd2f_1611801469635_write.sst
22M     ./backup4/4_288_171_ef4f8d1c81c6451c6fcac56e774cd7286e86aa961a932b99b3177b1d8ccc1218_1611801449252_write.sst
17M     ./backup4/4_288_171_f4edbac57e5e5a5bb25c25cae69c9a37d26949db536747d0492ab850951c9588_1611801485485_write.sst
4.0K    ./backup4/backup.lock
264K    ./backup4/backupmeta

TiKV v4.0.10:

$ du -sh ./backup5/*
352M    ./backup5/5_288_171_2334a18b4455213018fb96ec3c92e951dcc4b10106e91896d882a5ee0d2dbea4_1611803180771_write.sst
437M    ./backup5/5_288_171_4d4bf84bd30a3ae10b645ad9354c2c26cbc5ff2a95b338b93b820db95359617b_1611803180769_write.sst
4.0K    ./backup5/backup.lock
252K    ./backup5/backupmeta

@NingLin-P
Member

/merge

@ti-chi-bot
Member

@NingLin-P: It seems you want to merge this PR, I will help you trigger all the tests:

/run-all-tests

You only need to trigger /merge once, and if the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes.

If you have any questions about the PR merge process, please refer to pr process.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot
Member

@NingLin-P: /merge is only allowed for the committers in list.

In response to this:

/merge


@overvenus
Member

/lgtm

@lichunzhu
Contributor

/run-all-tests

@overvenus
Member

/merge

@ti-chi-bot
Member

@overvenus: It seems you want to merge this PR, I will help you trigger all the tests:

/run-all-tests

You only need to trigger /merge once, and if the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes.

If you have any questions about the PR merge process, please refer to pr process.


@ti-chi-bot
Member

This pull request has been accepted and is ready to merge.

Commit hash: 828c71b

@ti-chi-bot ti-chi-bot added the status/can-merge Indicates a PR has been approved by a committer. label Jan 28, 2021
@ti-chi-bot
Member

@ti-srebot: Your PR was out of date, so I have automatically updated it for you.

At the same time I will also trigger all tests for you:

/run-all-tests


@ti-chi-bot ti-chi-bot merged commit c57c7ba into tikv:release-4.0 Jan 28, 2021
@lichunzhu lichunzhu deleted the release-4.0-1bb82f0a2003 branch January 28, 2021 07:44
gengliqi pushed a commit to gengliqi/tikv that referenced this pull request Feb 20, 2021