fix: frequent flush cause minio rate limit #28625
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: xiaofan-luan. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@xiaofan-luan Please associate the related PR of master to the body of your Pull Request. (eg. "pr: #")
Invalid PR Title Format Detected. Your PR submission does not adhere to our required standards. To ensure clarity and consistency, please meet the following criteria:
Required Title Structure:
Where Example:
Please review and update your PR to comply with these guidelines.
@xiaofan-luan E2e jenkins job failed, comment
/run-cpu-e2e
Actually, there is logic to avoid syncing duplicated segments:
@bigsheeper This check happens when executing tasks, which is too late. We're generating far too many sync tasks, logging them and merging them, but not executing them, which is very confusing. It's better to ignore segments that are already syncing while generating sync tasks; that's more efficient.
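The reviewer's suggestion, skipping segments that already have an in-flight sync when generating tasks rather than when executing them, could be sketched like this. The type and method names (`syncTaskGenerator`, `tryGenerate`, `done`) are illustrative, not the actual Milvus API:

```go
package main

import (
	"fmt"
	"sync"
)

// syncTaskGenerator tracks segments with an in-flight sync so the
// periodic flush policy does not generate duplicate tasks for them.
type syncTaskGenerator struct {
	mu      sync.Mutex
	syncing map[int64]struct{} // segment IDs currently being synced
}

func newSyncTaskGenerator() *syncTaskGenerator {
	return &syncTaskGenerator{syncing: make(map[int64]struct{})}
}

// tryGenerate reports whether a sync task should be created for the
// segment, i.e. no sync for it is already in flight; if so, it marks
// the segment as syncing.
func (g *syncTaskGenerator) tryGenerate(segmentID int64) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if _, ok := g.syncing[segmentID]; ok {
		return false // already syncing: skip, avoiding a duplicate task
	}
	g.syncing[segmentID] = struct{}{}
	return true
}

// done marks the segment's sync as finished so later flushes can sync it again.
func (g *syncTaskGenerator) done(segmentID int64) {
	g.mu.Lock()
	defer g.mu.Unlock()
	delete(g.syncing, segmentID)
}

func main() {
	g := newSyncTaskGenerator()
	fmt.Println(g.tryGenerate(1)) // first sync for segment 1 is allowed
	fmt.Println(g.tryGenerate(1)) // duplicate is suppressed
	g.done(1)
	fmt.Println(g.tryGenerate(1)) // allowed again once the sync finished
}
```

Deduplicating at generation time keeps the task queue (and the logs) proportional to the number of distinct segments, instead of to the flush-trigger frequency.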
internal/datanode/flush_manager.go
Outdated
```diff
@@ -683,6 +683,7 @@ func (t *flushBufferInsertTask) flushInsertData() error {
 	err := group.Wait()
 	metrics.DataNodeSave2StorageLatency.WithLabelValues(fmt.Sprint(paramtable.GetNodeID()), metrics.InsertLabel).Observe(float64(tr.ElapseSpan().Milliseconds()))
 	if err == nil {
+		log.Warn("failed to flush insert data", zap.Error(err))
```
I don't think this is the right place to log "failed to flush"; it's inside the `err == nil` branch above.
internal/datanode/flush_manager.go
Outdated
```diff
@@ -707,6 +708,7 @@ func (t *flushBufferDeleteTask) flushDeleteData() error {
 	metrics.DataNodeSave2StorageLatency.WithLabelValues(fmt.Sprint(paramtable.GetNodeID()), metrics.DeleteLabel).Observe(float64(tr.ElapseSpan().Milliseconds()))
 	if err == nil {
 		for _, d := range t.data {
+			log.Warn("failed to flush delete data", zap.Error(err))
```
ditto
Force-pushed from 0a889da to fe57543 (compare)
Force-pushed from fe57543 to 80ce331 (compare)
Force-pushed from 80ce331 to e3a0c05 (compare)
Force-pushed from 0341083 to a1d93ff (compare)
Force-pushed from a1d93ff to a498eb0 (compare)
Force-pushed from a498eb0 to fdc48d3 (compare)
Force-pushed from fdc48d3 to 4aa9e51 (compare)
Force-pushed from 4aa9e51 to 49e43da (compare)
Add a jitter to the periodic flush policy to avoid flushing a large number of segments in a short time.
Signed-off-by: xiaofanluan <xiaofan.luan@zilliz.com>
Force-pushed from 49e43da to 559f151 (compare)
related to #28549
pr: #28626