TIDB Instantly runs out of memory when adding index #27687
Comments
Thanks for reporting this bug! I unrolled the stack.
Do you have trouble if the primary key is not a clustered index?
@ichn-hu thanks! I was able to drop the table, insert random data, and hit the same issue. So it appears reproducible, but it needs a large data set and I'm not sure how to reproduce it easily. I'll try to upload the data set soon. I tried with a non-clustered index and can't reproduce the issue, but since it is hard to reproduce I'm not 100% sure.
@shellderp In addition, could you please provide all the logs from the execution of this ADD INDEX?
Did not change these:
I'm attaching logs up to the OOM crash, but I didn't see anything interesting. Maybe this is somehow caused by negative values? My clustered primary key contains random numbers from LONG_MIN to LONG_MAX, including large negative numbers. I just tested inserting only positive numbers for the clustered key in a new table, and I can't reproduce this issue, even at 40 million records in the table. Are negative integers supported in TiDB clustered primary keys? Unfortunately I have to support an old data set that includes negative primary keys.
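To make the negative-key question above concrete, here is a minimal sketch of that kind of test, assuming a hypothetical table `t` with a single-column BIGINT clustered primary key; the schema, column names, and index name are illustrative, not the reporter's actual table:

```sql
-- Hypothetical minimal check: negative BIGINT values in a clustered primary key,
-- followed by the ADD INDEX that triggers the reported memory growth.
CREATE TABLE t (
    id BIGINT NOT NULL,
    v  BIGINT,
    PRIMARY KEY (id) CLUSTERED
);

-- Values near both ends of the signed 64-bit range, including large negatives.
INSERT INTO t VALUES
    (-9223372036854775807, 1),
    (-1, 2),
    (0, 3),
    (9223372036854775807, 4);

-- Index name idx_v is an assumption; on a table this small the backfill finishes
-- instantly, so a real repro would still need the multi-million-row data set
-- described in this thread.
ALTER TABLE t ADD INDEX idx_v (v);
```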
Please check whether the issue should be labeled with 'affects-x.y' or 'fixes-x.y.z', and then remove 'needs-more-info' label.
Bug Report
1. Minimal reproduce step (Required)
TiDB deployed with the Kubernetes AWS deployment on a dedicated node with 32 GB of memory (https://docs.pingcap.com/tidb-in-kubernetes/stable/deploy-on-aws-eks/)
Created a table with the following schema
Inserted 3 million records with random values (I don't know how to reproduce this trivially; see the bulk-load sketch after this list)
I ran:
No other queries are running on the cluster.
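Since the report notes the data set is hard to reproduce trivially, here is one hedged way to approximate it, reusing the hypothetical table `t` from the sketch above; the seed rows, the self-reading INSERT, and the column names are all assumptions, not the reporter's actual load process:

```sql
-- Assumed seed rows for the hypothetical table t (not the reporter's data).
INSERT IGNORE INTO t (id, v) VALUES (1, 1), (2, 2), (3, 3), (4, 4);

-- Each run re-reads the current rows and inserts new ones keyed by random signed
-- 64-bit values (negative and positive), roughly doubling the table; repeat until
-- it holds a few million rows. IGNORE skips the rare duplicate-key collision.
INSERT IGNORE INTO t (id, v)
SELECT CAST((RAND() * 2 - 1) * 9223372036854775000 AS SIGNED),
       FLOOR(RAND() * 1000)
FROM t;
```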
2. What did you expect to see? (Required)
Add index ok
3. What did you see instead? (Required)
TiDB uses 30+ GB almost immediately and is repeatedly OOMKilled
Row count in the admin show ddl jobs output is much higher than the actual rows in the table.
I see this log as well - is it related?
Same issue as #22453
When the DDL job is cancelled, memory usage drops to under 1 GB
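For context on how a job like this is inspected and cancelled, these are the standard TiDB admin statements; the job id below is a placeholder, not one taken from this report:

```sql
-- List DDL jobs, including the running add-index job and its reported row count.
ADMIN SHOW DDL JOBS;

-- Cancel the backfill; 123 is a placeholder for the job id shown above.
ADMIN CANCEL DDL JOBS 123;
```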
I had no issue migrating a table with a similar number of records but fewer columns and indexes. I don't know what is special about this table that causes the issue.
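One diagnostic that might help narrow this down (an assumption on my part, not something from the report): check whether the memory blow-up scales with TiDB's index-backfill settings, which bound the reorg worker count and batch size:

```sql
-- Current backfill settings (names are standard TiDB system variables).
SHOW VARIABLES LIKE 'tidb_ddl_reorg%';

-- Minimal settings, to see whether peak memory drops when the backfill is throttled;
-- defaults are version dependent, and 32 is the documented lower bound for batch size.
SET GLOBAL tidb_ddl_reorg_worker_cnt = 1;
SET GLOBAL tidb_ddl_reorg_batch_size = 32;
```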
4. What is your TiDB version? (Required)
5.7.25-TiDB-v5.2.0