Try to recover automatically from the background error about sst corrupt #1667
Conversation
@caipengbo Thanks for your solution. Some users did suffer from this issue, even though it's a low-probability one. The test cases look good to me, since it's hard to mock this without hijacking RocksDB.
I'm not sure it can be used. It seems that MyRocks didn't apply this rule: https://github.com/facebook/mysql-5.6/blob/fb-mysql-8.0.28/storage/rocksdb/event_listener.cc#L116. TiKV has replicas, and it can transfer the leader to a replica, so I think there could be something different. I'll take a look into RocksDB later. Does anyone else use this pattern to recover from corruption?
Yes, MyRocks will abort. I've looked into MyRocks and other systems before, and only found TiKV doing a special check.
In the past, we had a NIC failure in production, which caused this error to become more frequent. So we added this mechanism: we monitor the background error when it appears, and furthermore, we archive the error file so that we can locate the problem. Since then, there have been a few similar errors, all of which have been recovered safely. @mapleFU
So the problem here is that we have background network IO, which might cause errors much more frequently than disk IO errors?
IMHO, it will not affect the correctness of the data. The possible effect is that users tend to overlook background errors they encounter. HDYT @git-hulk @PragmaTwice |
It should be fine since this only impacts the IO error scenario, and the instance would also be broken if it got those errors.
LGTM
When an SST file corrupts, which is an unrecoverable error for RocksDB, RocksDB goes into read-only mode (https://github.com/facebook/rocksdb/wiki/Background-Error-Handling). Only restarting RocksDB can recover it from the error.

When does SST file corruption occur? The error message looks like this:

```
1. Corruption: Corrupt or unsupported format_version: 1005 in /tmp/kvrocks/data/db/000038.sst
2. Corruption: Bad table magic number: expected 9863518390377041911, found 9863518390377041912 in /tmp/kvrocks_db/data/db/000038.sst
3. Corruption: block checksum mismatch: stored = 3308200672, computed = 51173877, type = 4 in /tmp/kvrocks_db/data/db/000038.sst offset 0 size 15715
```
The cause of the error is usually a hardware issue, or a problem with the network or cloud disk (when using a cloud disk).
The most common place we see this error is when a file generated by `Compaction` or `Flush` is being applied to the `Version`. In this case, the result of the compaction is not actually applied, so we can ignore the error and avoid restarting RocksDB.
TiKV introduced this check when SST file corruption occurs; you can refer to:
- tikv/tikv#10578
- tikv/tikv#10961
Let's try it on Kvrocks:
1. Extract the SST file from the background error message
2. Determine whether it is a live file
3. If not, ignore the error and force recovery from the background error
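The steps above can be sketched as follows. This is a minimal illustration, not the actual Kvrocks implementation: `ExtractSSTFile` and `CanAutoRecover` are hypothetical helper names, and in real code the set of live files would come from `DB::GetLiveFilesMetaData()` inside an `EventListener::OnBackgroundError` callback.

```cpp
#include <regex>
#include <set>
#include <string>

// Hypothetical helper: pull the SST file path out of a RocksDB
// background-error message such as
//   "Corruption: Bad table magic number: ... in /data/db/000038.sst"
// Returns an empty string if no SST path is found.
std::string ExtractSSTFile(const std::string &error_msg) {
  std::regex sst_re(R"((\S+\.sst))");
  std::smatch m;
  if (std::regex_search(error_msg, m, sst_re)) return m[1];
  return "";
}

// Hypothetical recovery decision: if the corrupt file is NOT among the
// live SST files, the compaction/flush result was never applied to the
// Version, so it is safe to ignore the background error and resume.
bool CanAutoRecover(const std::string &error_msg,
                    const std::set<std::string> &live_files) {
  std::string file = ExtractSSTFile(error_msg);
  if (file.empty()) return false;      // cannot identify the file: stay safe
  return live_files.count(file) == 0;  // not a live file: safe to recover
}
```

If the file cannot be identified, or it is still referenced by the current `Version`, the sketch keeps the conservative behavior and leaves the error in place.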
Regarding the RocksDB error message: before RocksDB v7.10.2, the message was imperfect and we could only recover from a limited number of errors. Thanks to facebook/rocksdb#11009, the error message is enriched and we can recover from more scenarios.