[Doc] how to configure regarding to stage-level #9727
Conversation
doc/tutorials/spark_estimator.rst
Outdated
@@ -128,7 +128,7 @@ Write your PySpark application
 ==============================

 Below snippet is a small example for training xgboost model with PySpark. Notice that we are
-using a list of feature names and the additional parameter ``device``:
+using a list of feature names instead of vector features and the additional parameter ``device``:
I think this is a little bit confusing: what are "vector features"? Also, in "and the additional parameter ...", does the "and" continue "vector features" or "a list of feature names"?
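For context, a minimal sketch of the two input styles the sentence contrasts. The column names (f1, f2, f3, label) and the exact estimator arguments are illustrative assumptions, not taken from the tutorial:

    from pyspark.ml.feature import VectorAssembler
    from xgboost.spark import SparkXGBClassifier

    # "Vector features": assemble raw columns into a single vector column
    # and pass that column's name to features_col.
    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    clf_vector = SparkXGBClassifier(features_col="features", label_col="label")

    # "A list of feature names": pass the raw columns directly, plus the
    # additional ``device`` parameter to train on GPU.
    clf_names = SparkXGBClassifier(
        features_col=["f1", "f2", "f3"],
        label_col="label",
        device="cuda",
    )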
doc/tutorials/spark_estimator.rst
Outdated
By executing the aforementioned command, the XGBoost application will be submitted with python environment created by pip or conda,
specifying a request for 1 GPU and 12 CPUs per executor. During the ETL phase, a total of 12 tasks will be executed concurrently.
Suggested change:
-By executing the aforementioned command, the XGBoost application will be submitted with python environment created by pip or conda,
-specifying a request for 1 GPU and 12 CPUs per executor. During the ETL phase, a total of 12 tasks will be executed concurrently.
+The submit command sends the Python environment created by pip or conda along with the specification of GPU allocation. During the ETL phase, a total of 12 tasks will be executed concurrently.
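As a rough illustration of the setup this paragraph describes, here is a hedged sketch of the per-executor resources expressed as Spark configuration in PySpark. Only the "1 GPU and 12 CPUs per executor" figures come from the text above; the archive name and the choice to set these via SparkSession.builder rather than spark-submit flags are assumptions:

    from pyspark.sql import SparkSession

    # Sketch: request 1 GPU and 12 CPU cores per executor, so the ETL stage
    # can run 12 tasks concurrently on each executor.
    spark = (
        SparkSession.builder
        .config("spark.executor.cores", "12")
        .config("spark.executor.resource.gpu.amount", "1")
        # Ship the pip/conda-packed environment to the executors
        # (archive name is hypothetical).
        .config("spark.archives", "pyspark_env.tar.gz#environment")
        .getOrCreate()
    )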
Co-authored-by: Jiaming Yuan <jm.yuan@outlook.com>