
br: incremental restore does not handle CREATE INDEX (ADD INDEX) correctly, causing data inconsistency #54426

Closed
kennytm opened this issue Jul 3, 2024 · 3 comments · Fixed by #54430
Labels
affects-6.5 affects-7.1 affects-7.5 affects-8.1 component/br impact/inconsistency severity/major type/bug

Comments

@kennytm
Contributor

kennytm commented Jul 3, 2024

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

use test;

-- 1. prepare the full backup

drop table if exists t;
create table t (pk bigint primary key, val int not null);
insert into t values (1, 11), (2, 22), (3, 33), (4, 44);
backup table t to 'local:///tmp/tidb_54426_full_backup';
-- ^ write down the reported BackupTS; it is needed in step 2.
admin check table t;
select * from t;
/*
+----+-----+
| pk | val |
+----+-----+
|  1 |  11 |
|  2 |  22 |
|  3 |  33 |
|  4 |  44 |
+----+-----+
*/
-- 2. prepare the incr backup

create index index_val on t (val);
update t set val = 66 - val;
backup table t to 'local:///tmp/tidb_54426_incr_backup' last_backup = «fill_in_the_BackupTS_from_above_here»;
admin check table t;
select * from t use index (index_val);
/*
+----+-----+
| pk | val |
+----+-----+
|  4 |  22 |
|  3 |  33 |
|  2 |  44 |
|  1 |  55 |
+----+-----+
*/
-- 3. clean up
drop table t;
-- 4. perform the full restore
restore schema * from 'local:///tmp/tidb_54426_full_backup';
admin check table t;
select * from t;
/*
+----+-----+
| pk | val |
+----+-----+
|  1 |  11 |
|  2 |  22 |
|  3 |  33 |
|  4 |  44 |
+----+-----+
*/
-- 5. perform the incr restore
restore schema * from 'local:///tmp/tidb_54426_incr_backup';
admin check table t;
select * from t use index (index_val);

2. What did you expect to see? (Required)

In step 5, admin check table should pass and select * should return the same output as in step 2.

3. What did you see instead (Required)

mysql> admin check table t;
ERROR 8223 (HY000): data inconsistency in table: t, index: index_val, handle: 1, index-values:"handle: 1, values: [KindInt64 11]" != record-values:"handle: 1, values: [KindInt64 55]"

mysql> select * from t use index (index_val);
+----+-----+
| pk | val |
+----+-----+
|  1 |  11 |
|  2 |  22 |
|  4 |  22 |
|  3 |  33 |
|  2 |  44 |
|  4 |  44 |
|  1 |  55 |
+----+-----+
7 rows in set (0.01 sec)

4. What is your TiDB version? (Required)

Release Version: v8.1.0
Edition: Community
Git Commit Hash: 945d07c5d5c7a1ae212f6013adfb187f2de24b23
Git Branch: HEAD
UTC Build Time: 2024-05-21 03:52:40
GoVersion: go1.21.10
Race Enabled: false
Check Table Before Drop: false
Store: tikv
@kennytm kennytm added the type/bug, component/br, and affects-8.1 labels Jul 3, 2024
@kennytm
Contributor Author

kennytm commented Jul 3, 2024

If we use tidb-ctl mvcc to inspect all keys after step 5, we find MVCC records like this:

+------------------------+--------------------+--------------------+---------+
| key                    | startTS            | commitTS           | value   |
+------------------------+--------------------+--------------------+---------+
| row - pk: 1            | 450893690177585154 | 450893690177585155 | val: 11 |
| row - pk: 2            | 450893690177585154 | 450893690177585155 | val: 22 |
| row - pk: 3            | 450893690177585154 | 450893690177585155 | val: 33 |
| row - pk: 4            | 450893690177585154 | 450893690177585155 | val: 44 |
| row - pk: 1            | 450893773434519588 | 450893773434519588 | val: 55 |
| row - pk: 2            | 450893773434519588 | 450893773434519588 | val: 44 |
| row - pk: 4            | 450893773434519588 | 450893773434519588 | val: 22 |
| index - val: 11, pk: 1 | 450893773434519588 | 450893773434519588 | DELETED |
| index - val: 22, pk: 2 | 450893773434519588 | 450893773434519588 | DELETED |
| index - val: 22, pk: 4 | 450893773434519588 | 450893773434519588 | null    |
| index - val: 33, pk: 3 | 450893773434519588 | 450893773434519588 | null    |
| index - val: 44, pk: 2 | 450893773434519588 | 450893773434519588 | null    |
| index - val: 44, pk: 4 | 450893773434519588 | 450893773434519588 | DELETED |
| index - val: 55, pk: 1 | 450893773434519588 | 450893773434519588 | null    |
| index - val: 11, pk: 1 | 450893773749092360 | 450893773749092360 | null    |
| index - val: 22, pk: 2 | 450893773749092360 | 450893773749092360 | null    |
| index - val: 33, pk: 3 | 450893773749092360 | 450893773749092360 | null    |
| index - val: 44, pk: 4 | 450893773749092360 | 450893773749092360 | null    |
+------------------------+--------------------+--------------------+---------+

Here,

  • 450893690177585154/155 are the start/commit timestamps of the initial INSERT transaction; the full restore did not perform any RewriteTS operation on them.
  • 450893773434519588 is the Cluster TS reported by the incremental restore.
  • 450893773749092360 is the ingestion TS of the CREATE INDEX DDL performed during the incremental restore.
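
As a sanity check on the ordering: a TiDB TSO packs the physical millisecond timestamp above an 18-bit logical counter, so a short Go snippet (not part of br) can decode the three timestamps above and confirm that the CREATE INDEX ingestion TS is indeed newer than the Cluster TS used for RewriteTS:

package main

import (
	"fmt"
	"time"
)

// physicalTime extracts the physical part of a TiDB TSO; the low 18 bits are
// the logical counter.
func physicalTime(tso uint64) time.Time {
	return time.UnixMilli(int64(tso >> 18))
}

func main() {
	for _, ts := range []uint64{
		450893690177585154, // initial INSERT
		450893773434519588, // Cluster TS of the incremental restore (RewriteTS target)
		450893773749092360, // CREATE INDEX ingestion during the incremental restore
	} {
		fmt.Println(ts, physicalTime(ts).UTC())
	}
}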

The bug is that the Cluster TS obtained by the incremental restore at

restoreTS, err := restore.GetTSWithRetry(ctx, mgr.GetPDClient())

is fetched too early, before the DDLs are executed at

err = client.ExecDDLs(ctx, ddlJobs)

so the RewriteTS operation fails to overwrite the index keys generated by CREATE INDEX.

The restoreTS should be fetched again after executing the DDLs.

(Alternatively, we could modify the DDL execution so that it never touches any t-prefixed keys, i.e. skip backfilling; that would be much more efficient and would remove the need for RewriteTS.)
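
A minimal sketch of the proposed fix (fetching the restore TS again after the DDLs), with the restore flow reduced to a toy interface; RestoreClient, GetTS and ImportData are stand-ins for the real br plumbing, not its actual API:

package restoresketch

import "context"

// RestoreClient is a hypothetical stand-in for the parts of the br restore
// client involved here (restore.GetTSWithRetry / client.ExecDDLs / the file
// import step); it is not the real br API.
type RestoreClient interface {
	GetTS(ctx context.Context) (uint64, error)
	ExecDDLs(ctx context.Context, ddlJobs []string) error
	ImportData(ctx context.Context, rewriteTS uint64) error
}

// incrementalRestore sketches the proposed ordering: the TS used for
// RewriteTS is fetched again *after* the DDLs have run, so it is newer than
// every index KV written by the CREATE INDEX backfill and can overwrite them.
// (GC-safepoint handling is omitted; see the discussion below.)
func incrementalRestore(ctx context.Context, c RestoreClient, ddlJobs []string) error {
	if err := c.ExecDDLs(ctx, ddlJobs); err != nil {
		return err
	}
	restoreTS, err := c.GetTS(ctx) // fetched after DDL execution, not before
	if err != nil {
		return err
	}
	return c.ImportData(ctx, restoreTS)
}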

@lance6716
Contributor

I'm not sure what the timeline of these actions is. Should ExecDDLs be invoked after the RewriteTS operation, so that ADD INDEX sees the newest data and generates the correct index KVs?

@kennytm
Contributor Author

kennytm commented Jul 4, 2024

@lance6716 the current situation is:

  1. get "RestoreTS" (let's say the value is 51)
  2. set up a GCSafePoint on RestoreTS=51
  3. perform ExecDDL (tso = 52...60)
  4. import data with NewTimestamp set to RestoreTS=51 ❌

The correct action should be:

  1. get "MinRestoreTS" = 51
  2. setup a GCSafePoint on MinRestoreTS=51
  3. perform ExecDDL (tso = 52...60)
  4. get a new "DataRestoreTS" = 61 🆕
  5. import data with NewTimestamp set to DataRestoreTS=61

Alternatively, ensure the DDLs won't generate any t-prefixed KVs, so no overwriting needs to happen:

  1. get "RestoreTS" = 51
  2. set up a GCSafePoint on RestoreTS=51
  3. perform ExecDDL with backfilling set to do nothing 🆕 (tso = 52...60)
  4. import data with NewTimestamp set to RestoreTS=51 or without rewrite TS at all ⭕️

But if we do want to keep rewriteTS, it should use 61 rather than 51 to maintain the global chronological order.

If we disable rewriteTS, it would also be better to find a way to force ExecDDL to commit the DDLJobs on the "backup time axis" rather than the "restore time axis", so that @@tidb_snapshot can still work.
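
A similar compile-only sketch of the no-backfill alternative; ExecDDLsSkipBackfill and ImportDataWithoutRewrite are hypothetical names used only to illustrate the flow and do not exist in br:

package restoresketch

import "context"

// DDLClient models the alternative proposal; none of these method names exist
// in br, they only mirror the numbered steps above.
type DDLClient interface {
	GetTS(ctx context.Context) (uint64, error)
	SetGCSafePoint(ctx context.Context, ts uint64) error
	ExecDDLsSkipBackfill(ctx context.Context, ddlJobs []string) error
	ImportDataWithoutRewrite(ctx context.Context) error
}

// alternativeRestore sketches the variant where the DDLs write no t-prefixed
// KVs: with no backfilled index entries to shadow the imported data, the
// import no longer needs any RewriteTS.
func alternativeRestore(ctx context.Context, c DDLClient, ddlJobs []string) error {
	restoreTS, err := c.GetTS(ctx) // RestoreTS = 51 in the example above
	if err != nil {
		return err
	}
	if err := c.SetGCSafePoint(ctx, restoreTS); err != nil {
		return err
	}
	if err := c.ExecDDLsSkipBackfill(ctx, ddlJobs); err != nil { // tso = 52...60
		return err
	}
	return c.ImportDataWithoutRewrite(ctx)
}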

@jebter jebter added the impact/inconsistency label Jul 5, 2024
@ti-chi-bot ti-chi-bot bot added the may-affects-5.4 and may-affects-6.1 labels Jul 8, 2024
@3pointer 3pointer removed the may-affects-5.4 and may-affects-6.1 labels Jul 8, 2024
@ti-chi-bot ti-chi-bot bot closed this as completed in 08147e7 Jul 8, 2024