
Segment did not perform as expected, querynode memory growth was uneven #23477

Closed
yesletgo opened this issue Apr 18, 2023 · 5 comments
Assignees
Labels
kind/bug Issues or changes related to a bug · stale Indicates no updates for 30 days · triage/accepted Indicates an issue or PR is ready to be actively worked on.
Milestone

Comments

@yesletgo

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: 2.2.5
- Deployment mode (standalone or cluster): cluster
- MQ type (rocksmq, pulsar or kafka): pulsar
- SDK version (e.g. pymilvus v2.0.0rc2): 2.2.3
- OS (Ubuntu or CentOS): CentOS
- CPU/Memory: 84 cores / 600G
- GPU:
- Others:

Current Behavior

[screenshots attached]

After manual flush:
[screenshot attached]
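
For reference, a manual flush like the one above can be triggered from pymilvus (the SDK listed in the environment). A minimal sketch; the collection name and connection details below are assumptions, not taken from this issue:

# flush_example.py (illustrative only)
from pymilvus import connections, Collection

connections.connect(host="127.0.0.1", port="19530")  # adjust to your Milvus proxy
collection = Collection("my_collection")             # hypothetical collection name
collection.flush()                                   # seals growing segments so they can be persisted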

Expected Behavior

No response

Steps To Reproduce

No response

Milvus Log

https://video-tag-lib-1251808348.cos.ap-beijing.myqcloud.com/milvus_log/milvus-log.tar.gz

Anything else?

No response

yesletgo added the kind/bug and needs-triage labels on Apr 18, 2023
@yanliang567
Contributor

/assign @congqixia
It is weird that there are many segments with row_count=0.
/unassign
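
As a side note, zero-row segments like these can be spotted from the client side with pymilvus's utility.get_query_segment_info. A minimal sketch; the collection name and connection details are assumptions:

# list_zero_row_segments.py (illustrative only)
from pymilvus import connections, utility

connections.connect(host="127.0.0.1", port="19530")          # adjust to your Milvus proxy

for seg in utility.get_query_segment_info("my_collection"):  # hypothetical collection name
    if seg.num_rows == 0:
        print(f"segment {seg.segmentID}: 0 rows, mem_size={seg.mem_size} bytes")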

yanliang567 added the triage/accepted label on Apr 18, 2023
yanliang567 added this to the 2.2.6 milestone on Apr 18, 2023
yanliang567 removed the needs-triage label on Apr 18, 2023
@congqixia
Contributor

@yesletgo from the screenshots and the log, it looks like the datanode cannot watch the channel in time. It's a known issue and shall be fixed in 2.2.6.
You could update this configuration as a quick workaround:

# milvus.yaml
dataCoord:
...
  channel:
    watchTimeoutInterval:  120 # default value is 30

@yesletgo
Author

@yesletgo from the screenshots and the log, it looks like the datanode cannot watch the channel in time. It's a known issue and shall be fixed in 2.2.6. You could update this configuration as a quick workaround:

# milvus.yaml
dataCoord:
...
  channel:
    watchTimeoutInterval:  120 # default value is 30

The configuration has been updated and the datacoord has been restarted, but the behavior is still the same.
[screenshot attached]

milvus-log2.tar.gz
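
To quantify the uneven memory growth across query nodes, the same segment info can be aggregated per node. A minimal sketch; field names such as nodeIds/nodeID vary between Milvus versions, and the collection name and connection details are assumptions:

# mem_per_querynode.py (illustrative only)
from collections import defaultdict
from pymilvus import connections, utility

connections.connect(host="127.0.0.1", port="19530")          # adjust to your Milvus proxy

mem_per_node = defaultdict(int)
for seg in utility.get_query_segment_info("my_collection"):  # hypothetical collection name
    node_ids = list(getattr(seg, "nodeIds", [])) or [getattr(seg, "nodeID", -1)]
    for node_id in node_ids:
        mem_per_node[node_id] += seg.mem_size

for node_id, mem in sorted(mem_per_node.items()):
    print(f"query node {node_id}: {mem / 1024 / 1024:.1f} MiB loaded")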

@stale

stale bot commented Aug 3, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

stale bot added the stale label on Aug 3, 2023
yanliang567 modified the milestones: 2.2.12, 2.2.13 on Aug 4, 2023
stale bot removed the stale label on Aug 4, 2023
@stale

stale bot commented Sep 4, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

stale bot added the stale label on Sep 4, 2023
stale bot closed this as completed on Sep 13, 2023
Development

No branches or pull requests

3 participants