PITR: Run PITR for multiple times could lead to tiflash crash #52628
PITR restores logs into the cluster without preserving the order of the default CF and the write CF (to speed up the restore). TiFlash, however, relies on the invariant that when it applies a write-CF key, the corresponding default-CF key already exists; otherwise it cannot decode the key-value pairs into column data correctly. When TiFlash sees a write-CF record without its corresponding default-CF record, it panics.
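The invariant described above can be sketched as a small simulation. This is illustrative pseudocode, not actual TiFlash or BR source: `apply_writes`, `DecodeError`, and the data shapes are hypothetical names chosen for the example, standing in for TiFlash's real decoding path.

```python
# Hypothetical simulation of the write-CF / default-CF invariant that
# TiFlash relies on. Names and structures are illustrative only.

class DecodeError(Exception):
    """Stands in for the panic TiFlash raises on an undecodable row."""


def apply_writes(default_cf, write_cf_entries):
    """Apply write-CF records. Each record must either carry an inline
    short value or find its value in the default CF by (key, start_ts)."""
    rows = {}
    for key, start_ts, short_value in write_cf_entries:
        if short_value is not None:
            rows[key] = short_value  # value inlined in the write CF
        elif (key, start_ts) in default_cf:
            rows[key] = default_cf[(key, start_ts)]
        else:
            # The real system panics here: the write-CF record arrived
            # before its default-CF counterpart.
            raise DecodeError(f"default CF entry missing for {key!r}@{start_ts}")
    return rows


# Normal replication order: default CF arrives first, so decoding works.
ok = apply_writes({("k1", 10): b"v1"}, [("k1", 10, None)])

# PITR may replay the write CF first; the lookup fails and TiFlash panics.
crashed = False
try:
    apply_writes({}, [("k1", 10, None)])
except DecodeError:
    crashed = True
```

This mirrors why reordering the two CFs is safe for TiKV (which keeps both CFs as raw key-value data) but not for TiFlash, which must materialize rows into columns at apply time.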
/component br
It's a compatibility issue and we don't have a solution to resolve it; we have to document the limitation.
@JaySon-Huang can TiFlash lift the restriction instead? |
@BornChanger During step 4 (running PITR restore point again), TiFlash cannot tell whether the error comes from a corrupted RaftLog that was accepted in violation of the transaction model or from a RaftLog replayed by PITR, so TiFlash cannot lift the restriction only for PITR.
I guess we need further discussion to decide whether to bring this to the release branches. For now, just fix this in master.
/found customer |
/remove-found customer |
Bug Report
Please answer these questions before submitting your issue. Thanks!
1. Minimal reproduce step (Required)
```shell
br restore point
br restore point
```
2. What did you expect to see? (Required)
Restore success and all instances run normally
3. What did you see instead (Required)
When running step 4, TiFlash instances crash with a backtrace like the following:
4. What is your TiDB version? (Required)
v7.5.1