Performance Degradation When Upgrading to v1.60 #8063
Hi, thank you for raising the issue. From your code snippet, it seems you are not using distributed training?
Using Rabit to sync across an AWS cluster for distributed training on AWS SageMaker, following their official example.
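For context, a minimal sketch of the Rabit lifecycle around a training call, assuming the `xgboost.rabit` Python API that ships with 0.90/1.6. The tracker host, port, and worker count below are placeholders (on SageMaker the container supplies them), not values taken from the official example:

```python
import numpy as np
import xgboost as xgb

# Placeholder tracker settings; in the SageMaker container these are
# provided by the environment rather than hard-coded.
rabit_args = [
    b"DMLC_TRACKER_URI=10.0.0.1",
    b"DMLC_TRACKER_PORT=9099",
    b"DMLC_NUM_WORKER=2",
]

xgb.rabit.init(rabit_args)
try:
    # In the real job each worker loads its own shard of the data;
    # synthetic data keeps the sketch self-contained.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))
    y = rng.integers(0, 2, size=1000)
    dtrain = xgb.DMatrix(X, label=y)

    params = {"tree_method": "approx", "objective": "binary:logistic"}
    booster = xgb.train(params, dtrain, num_boost_round=10)

    if xgb.rabit.get_rank() == 0:  # only rank 0 persists the model
        booster.save_model("model.json")
finally:
    xgb.rabit.finalize()
```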
I've tried several configurations, and the training time was slow each time.
Thank you for running these experiments. It's probably due to the number of features. The approx tree method was recently rewritten, and the new version might be less efficient for wide datasets: #7214 (comment). Also, the parameter sketch_eps has been replaced by max_bin, so the default sketch size may differ from before.
Do you have any suggestions for recovering the old level of performance on wide datasets like this (5K to 10K features)? We can try setting max_bin = 63; if I understand correctly, accuracy-wise it should be on par with what we had before. Are there any other settings that would help? The main reason we use the "approx" method over the "hist" method for many of our workloads is that "approx" used far less memory than "hist" in version 0.9. Is that still expected to hold in 1.6? We want to upgrade to 1.6 because of all the great new features since 0.9, like early stopping and categorical feature support. However, an 8x increase in running time is prohibitive in our case.
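For concreteness, a sketch of the parameter change being discussed, using the max_bin = 63 value proposed above (256 is the 1.6 default); the objective and thread count here are just placeholders:

```python
params = {
    "tree_method": "approx",
    "max_bin": 63,                   # down from the 1.6 default of 256
    "nthread": 48,                   # placeholder: one thread per vCPU
    "objective": "binary:logistic",  # placeholder objective
}
```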
I recently tried updating from 0.90 to 1.6. However, my distributed training job (using the approx method) is now ~8 times slower. On version 0.90, each boosting round took about 25 seconds; on version 1.6, each boosting round takes around 3 minutes.
Even the hist method on v1.6 is slower than approx on v0.90.
My dataset has ~5000 features and 500K rows. The same parameters and the same data are used for both runs; the only difference is the XGBoost version. I cannot share the dataset since it is a work dataset. Roughly 20% of the values in the data are null.
One thing I've noticed on version 0.90: increasing the nthread parameter decreases the time per boosting round, and decreasing it increases the time. This makes sense.
However, on version 1.6, increasing or decreasing nthread doesn't seem to have any effect. I'm wondering if this is related in some way.
Here's the relevant code snippet:
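The snippet itself is not reproduced here, so below is a minimal single-node sketch of the setup described (approx tree method, explicit nthread), with a small callback that prints the wall-clock time of each boosting round. The dataset is a scaled-down synthetic stand-in with ~20% missing values, and everything except tree_method and nthread is a placeholder:

```python
import time

import numpy as np
import xgboost as xgb

# Scaled-down synthetic stand-in for the real 500K x 5000 dataset,
# with roughly 20% of the values set to NaN (treated as missing).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 500))
X[rng.random(X.shape) < 0.2] = np.nan
y = rng.integers(0, 2, size=X.shape[0])
dtrain = xgb.DMatrix(X, label=y)

class RoundTimer(xgb.callback.TrainingCallback):
    """Print the wall-clock time of each boosting round."""

    def before_iteration(self, model, epoch, evals_log):
        self._start = time.perf_counter()
        return False  # False means "do not stop training"

    def after_iteration(self, model, epoch, evals_log):
        print(f"round {epoch}: {time.perf_counter() - self._start:.2f}s")
        return False

params = {
    "tree_method": "approx",
    "nthread": 48,                   # 48 vCPUs per ml.m5.12xlarge
    "objective": "binary:logistic",  # placeholder objective
}
xgb.train(params, dtrain, num_boost_round=10, callbacks=[RoundTimer()])
```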
Using 2 ml.m5.12xlarge instances (48 vCPUs, 192 GiB RAM each) per training job on AWS SageMaker. Python 3.7.