[DocDB] Tserver hits soft limit and not reclaiming back memory | Rejecting Write request: Soft memory limit exceeded #11723
Closed
shantanugupta-yb opened this issue on Mar 11, 2022 · 1 comment · May be fixed by ryan-ally/yugabyte-db#213
Labels: area/docdb (YugabyteDB core features) · kind/bug (This issue is a bug) · priority/medium (Medium priority issue)
Comments
yugabyte-ci added the kind/bug and priority/medium labels on Jun 8, 2022
This is not a bug; this is just the per-tablet memory overhead we have from the rocksdb arena and metric histograms.
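To make the maintainer's point concrete, per-tablet overhead scales linearly with tablet count, so even an idle node can sit near its soft limit. A minimal back-of-the-envelope sketch follows; the per-tablet figures are hypothetical placeholders, not measured YugabyteDB values.

```python
def estimated_idle_overhead_mb(num_tablets, arena_kb=1024, histograms_kb=256):
    """Rough idle-memory estimate: tablets * (arena + metric-histogram
    overhead). The per-tablet sizes are assumed for illustration."""
    return num_tablets * (arena_kb + histograms_kb) / 1024.0

# A few hundred tablets on a small node already costs hundreds of MB:
print(estimated_idle_overhead_mb(300))  # -> 375.0 (MB)
```

On a 2 GB node, overhead of this order is enough to keep consumption near the soft limit even with no workload running.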
This was referenced Nov 30, 2023
Jira Link: DB-1085
Description
While testing for the YB Cloud free tier (RF1, 2GB memory), the tserver was hitting the soft memory limit and writes were being rejected with the error "Rejecting Write request: Soft memory limit exceeded (at 92.86% of capacity), score: 0.00".
Ideally, when the tserver has hit the soft memory limit and no active workload is running, the background thread should reclaim memory.
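For context on the log line above, a score-based soft limit typically rejects probabilistically between the soft and hard limits. The sketch below illustrates the idea only; it is not YugabyteDB's actual admission-control code.

```python
import random

def should_reject_write(consumption, soft_limit, hard_limit, score=None):
    """Sketch of score-based admission control under a soft memory limit.

    Below the soft limit every write is accepted; at or above the hard
    limit every write is rejected. In between, a write is rejected when
    its random score falls below the fraction of the soft-to-hard range
    already consumed, so rejections ramp up with memory pressure."""
    if consumption >= hard_limit:
        return True
    if consumption < soft_limit:
        return False
    pressure = (consumption - soft_limit) / (hard_limit - soft_limit)
    if score is None:
        score = random.random()  # drawn per request
    return score < pressure
```

Under this model, a score of 0.00 (as in the log line) loses every comparison, so any consumption over the soft limit leads to rejection.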
I also wanted to check the current allocations held in memory, so I took the difference between two memory snapshots. The first snapshot had allocations of 270MB and the second had 610MB. Out of that ~340MB difference, below are the top 5 allocations:
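The snapshot-diff step described above can be sketched as follows. The dict-of-sites format here is an assumption for illustration; real heap snapshots (e.g. tcmalloc/pprof dumps) would need to be parsed into this shape first.

```python
def top_allocation_growth(before, after, n=5):
    """Diff two {allocation_site: size_mb} snapshots and return the n
    sites whose usage grew the most between them."""
    sites = set(before) | set(after)
    growth = {s: after.get(s, 0) - before.get(s, 0) for s in sites}
    return sorted(growth.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical example with made-up site names and sizes:
print(top_allocation_growth({"arena": 10, "cache": 5},
                            {"arena": 100, "cache": 5, "histograms": 20},
                            n=2))  # -> [('arena', 90), ('histograms', 20)]
```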
The rocksdb version we use appears to be v4.5, and I saw that the bug "rocksdb::Arena::AllocateNewBlock allocate memory grows without limit #8371" (facebook/rocksdb#8371) was recently fixed; that fix is missing from the rocksdb version we are using.
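The failure mode that issue describes can be illustrated with a toy arena allocator: block sizes grow geometrically, and nothing is freed until the whole arena is dropped, so an uncapped block size lets memory grow without bound. This is a sketch of the general technique, not rocksdb's actual implementation.

```python
class Arena:
    """Toy bump allocator: allocations are carved out of large blocks,
    and blocks are only released when the whole arena is destroyed.
    The max_block cap is what bounds the geometric growth; without it,
    repeated large requests keep inflating the block size."""

    def __init__(self, initial_block=4096, max_block=1 << 20):
        self.next_block = initial_block
        self.max_block = max_block    # cap that keeps growth bounded
        self.blocks = []              # blocks live as long as the arena
        self.free = 0                 # bytes left in the current block

    def allocate(self, n):
        if n > self.free:
            # Start a fresh block: geometric growth, bounded by the cap
            # (unless a single request is itself larger than the cap).
            size = max(n, min(self.next_block, self.max_block))
            self.blocks.append(bytearray(size))
            self.free = size
            self.next_block = min(self.next_block * 2, self.max_block)
        self.free -= n

    def memory_usage(self):
        return sum(len(b) for b in self.blocks)
```

Note that `allocate` never shrinks `memory_usage()`; that is why arena-backed memory is not reclaimed by freeing individual objects, only by dropping the arena itself.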