chore(deps): update dependency xgboost to v1 - autoclosed #109
Closed
Conversation
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 4 times, most recently from d43360f to c3bc995 on May 12, 2020 09:39
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from c3bc995 to 8e974d8 on May 17, 2020 09:54
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 6 times, most recently from e41a7b1 to 0bd3eb7 on June 7, 2020 04:24
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 3 times, most recently from cd2aa3e to 0ebe0fa on June 18, 2020 23:53
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 3 times, most recently from 14988f9 to dcf0285 on July 1, 2020 16:33
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from dcf0285 to ef2e640 on July 13, 2020 13:29
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from ef2e640 to 2e2ce2d on August 4, 2020 17:25
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 2e2ce2d to ed66d48 on August 23, 2020 03:37
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 2 times, most recently from 1fae8dd to 017c2a8 on September 15, 2020 08:01
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 017c2a8 to d835229 on September 19, 2020 15:17
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from d835229 to 8fabf76 on October 9, 2020 13:35
renovate bot changed the title from "chore(deps): update dependency xgboost to v1 - autoclosed" to "chore(deps): update dependency xgboost to v1" on Jun 8, 2021
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 2968bed to 9b60337 on June 8, 2021 11:16
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 9b60337 to 0feb755 on October 18, 2021 23:44
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 0feb755 to 7a5f868 on March 7, 2022 09:55
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 7a5f868 to 20b1476 on March 26, 2022 16:36
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 20b1476 to eff4b52 on April 16, 2022 05:00
⚠ Artifact update problem
Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is. ♻ Renovate will retry this branch, including artifacts, only when one of the following happens:
The artifact failure details are included below:
File name: poetry.lock
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from eff4b52 to e64a9a3 on May 15, 2022 19:44
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 2 times, most recently from 1297226 to 28a666f on June 27, 2022 10:44
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 28a666f to c4bfaf3 on August 22, 2022 15:47
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from c4bfaf3 to c9e1314 on November 20, 2022 11:53
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from c9e1314 to 00f896d on March 16, 2023 21:53
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 2 times, most recently from 8ed4288 to b43dfdf on March 30, 2023 20:17
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from b43dfdf to 475afd1 on May 30, 2023 06:03
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 475afd1 to 5cf2bf0 on June 19, 2023 06:59
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 5cf2bf0 to 8183edd on July 6, 2023 16:15
renovate bot force-pushed the feature/renovate-xgboost-1.x branch 3 times, most recently from 062d015 to fb74e76 on July 23, 2023 13:18
renovate bot force-pushed the feature/renovate-xgboost-1.x branch from fb74e76 to b8dba5b on July 23, 2023 14:59
renovate bot changed the title from "chore(deps): update dependency xgboost to v1" to "chore(deps): update dependency xgboost to v1 - autoclosed" on Sep 13, 2023
This PR contains the following updates:
xgboost: 0.90 -> 1.7.6
Release Notes
dmlc/xgboost (xgboost)
v1.7.6: 1.7.6 Patch Release (Compare Source)
This is a patch release for bug fixes. The CRAN package for the R binding is kept at 1.7.5.
Bug Fixes
QuantileDMatrix fixes. (#9096)
Document
Maintenance
Additional artifacts:
You can verify the downloaded packages by running the following command on your Unix shell:
Experimental binary packages for R with CUDA enabled
Source tarball
Link in GitHub release assets
v1.7.5: 1.7.5 Patch Release (Compare Source)
1.7.5 (2023 Mar 30)
This is a patch release for bug fixes.
Additional artifacts:
You can verify the downloaded packages by running the following command on your Unix shell:
Experimental binary packages for R with CUDA enabled
Source tarball
Link in GitHub release assets
v1.7.4: 1.7.4 Patch Release (Compare Source)
1.7.4 (2023 Feb 16)
This is a patch release for bug fixes.
Artifacts
xgboost_r_gpu_win64_1.7.4.tar.gz: Download
v1.7.3: 1.7.3 Patch Release (Compare Source)
1.7.3 (2023 Jan 6)
This is a patch release for bug fixes.
get_params no longer returns internally configured values. (#8634)
Artifacts
You can verify the downloaded packages by running the following command on your Unix shell:
v1.7.2: 1.7.2 Patch Release (Compare Source)
v1.7.2 (2022 Dec 8)
This is a patch release for bug fixes.
Work with newer thrust and libcudacxx (#8432)
Support null value in CUDA array interface namespace. (#8486)
Use getsockname instead of SO_DOMAIN on AIX. (#8437)
[pyspark] Make QDM optional based on a cuDF check (#8471)
[pyspark] Sort qid for SparkRanker. (#8497)
[dask] Properly await async method client.wait_for_workers. (#8558)
[R] Fix CRAN test notes. (#8428)
[doc] Fix outdated document [skip ci]. (#8527)
[CI] Fix GitHub Action mismatched glibcxx. (#8551)
Artifacts
You can verify the downloaded packages by running this on your Unix shell:
v1.7.1: 1.7.1 Patch Release
v1.7.1 (2022 November 3)
This is a patch release to incorporate the following hotfix:
v1.7.0: Release 1.7.0 stable (Compare Source)
Note: the source distribution of Python XGBoost 1.7.0 was defective (#8415). Since PyPI does not allow us to replace existing artifacts, we released a 1.7.0.post0 version with the new source distribution. Everything in 1.7.0.post0 is otherwise identical to 1.7.0.
v1.7.0 (2022 Oct 20)
We are excited to announce the feature-packed XGBoost 1.7 release. The release note will walk through some of the major new features first, then summarize other improvements and language-binding-specific changes.
PySpark
XGBoost 1.7 features initial support for PySpark integration. The new interface is adapted from the existing PySpark XGBoost interface developed by Databricks, with additional features like QuantileDMatrix and rapidsai plugin (GPU pipeline) support. The new Spark XGBoost Python estimators not only benefit from PySpark ML facilities for powerful distributed computing but also enjoy the rest of the Python ecosystem. Users can define a custom objective, callbacks, and metrics in Python and use them with this interface on distributed clusters. The support is labeled as experimental, with more features to come in future releases. For a brief introduction, please visit the tutorial on XGBoost's document page. (#8355, #8344, #8335, #8284, #8271, #8283, #8250, #8231, #8219, #8245, #8217, #8200, #8173, #8172, #8145, #8117, #8131, #8088, #8082, #8085, #8066, #8068, #8067, #8020, #8385)
Due to its initial support status, the new interface has some limitations; categorical features and multi-output models are not yet supported.
Development of categorical data support
More progress on the experimental support for categorical features. In 1.7, XGBoost can handle missing values in categorical features and adds a new parameter, max_cat_threshold, which limits the number of categories that can be used in the split evaluation. The parameter is enabled when the partitioning algorithm is used and helps prevent over-fitting. Also, the sklearn interface can now accept the feature_types parameter to use data types other than dataframe for categorical features. (#8280, #7821, #8285, #8080, #7948, #7858, #7853, #8212, #7957, #7937, #7934)
Experimental support for federated learning and new communication collective
An exciting addition to XGBoost is the experimental federated learning support. The federated learning is implemented with a gRPC federated server that aggregates allreduce calls, and federated clients that train on local data and use existing tree methods (approx, hist, gpu_hist). Currently, this only supports horizontal federated learning (samples are split across participants, and each participant has all the features and labels). Future plans include vertical federated learning (features split across participants), and stronger privacy guarantees with homomorphic encryption and differential privacy. See Demo with NVFlare integration for example usage with nvflare.
As part of the work, XGBoost 1.7 has replaced the old rabit module with the new collective module as the network communication interface, with added support for runtime backend selection. In previous versions, the backend was defined at compile time and could not be changed once built. In this new release, users can choose between rabit and federated. (#8029, #8351, #8350, #8342, #8340, #8325, #8279, #8181, #8027, #7958, #7831, #7879, #8257, #8316, #8242, #8057, #8203, #8038, #7965, #7930, #7911)
The feature is available in the public PyPI binary package for testing.
Quantile DMatrix
Before 1.7, XGBoost had an internal data structure called DeviceQuantileDMatrix (and its distributed version). We have now extended its support to CPU and renamed it to QuantileDMatrix. This data structure is used for optimizing memory usage for the hist and gpu_hist tree methods. The new feature helps reduce CPU memory usage significantly, especially for dense data. The new QuantileDMatrix can be initialized from both CPU and GPU data, and regardless of where the data comes from, the constructed instance can be used by both the CPU and GPU algorithms, including training and prediction (with some conversion overhead if the device of the data and the training algorithm don't match). Also, a new parameter, ref, is added to QuantileDMatrix, which can be used to construct validation/test datasets. Lastly, it is set as the default in the scikit-learn interface when a supported tree method is specified by users. (#7889, #7923, #8136, #8215, #8284, #8268, #8220, #8346, #8327, #8130, #8116, #8103, #8094, #8086, #7898, #8060, #8019, #8045, #7901, #7912, #7922)
Mean absolute error
The mean absolute error is a new member of the collection of objectives in XGBoost. It is noteworthy because MAE has a zero Hessian value, which is unusual for XGBoost, as XGBoost relies on Newton optimization. Without valid Hessian values, the convergence speed can be slow. As part of the support for MAE, we added line searches into the XGBoost training algorithm to overcome the difficulty of training without valid Hessian values. In the future, we will extend the line search to other objectives where it is appropriate, for faster convergence. (#8343, #8107, #7812, #8380)
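A toy sketch of why MAE is hard for Newton optimization (plain Python; the function is hypothetical and not XGBoost's implementation):

```python
# Toy sketch of an MAE custom objective: the gradient is the sign of the
# residual and the Hessian is identically zero, so a plain Newton step
# (grad / hess) is undefined -- hence the need for line search.
def mae_objective(preds, labels):
    """Return (gradient, hessian) lists for mean absolute error."""
    grad = [(p > y) - (p < y) for p, y in zip(preds, labels)]  # sign(p - y)
    hess = [0.0] * len(preds)  # zero Hessian everywhere
    return grad, hess

grad, hess = mae_objective([0.5, 2.0, 1.0], [1.0, 1.0, 1.0])
# grad == [-1, 1, 0], hess == [0.0, 0.0, 0.0]
```

Because every Hessian entry is zero, the usual second-order leaf-weight formula divides by zero; a line search picks the step size instead.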
XGBoost on Browser
With the help of the pyodide project, you can now run XGBoost on browsers. (#7954, #8369)
Experimental IPv6 Support for Dask
With the growing adoption of the new internet protocol, XGBoost has joined the club. In the latest release, the Dask interface can be used on IPv6 clusters; see XGBoost's Dask tutorial for details. (#8225, #8234)
Optimizations
We have new optimizations for both the hist and gpu_hist tree methods to make XGBoost's training even more efficient.
Hist
Hist now supports an optional by-column histogram build, which is automatically configured based on various conditions of the input data. This helps the XGBoost CPU hist algorithm scale better with different shapes of training datasets. (#8233, #8259) Also, the build-histogram kernel can now better utilize CPU registers. (#8218)
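As a rough illustration of what a histogram build does (a toy plain-Python sketch; names and layout are illustrative, not XGBoost's internals):

```python
# Simplified sketch of a gradient-histogram build (illustrative only, not
# XGBoost's actual kernel). Row gradients are accumulated into per-bin
# buckets for one feature; a by-column traversal walks one feature's bin
# column at a time, which can be friendlier to the cache for some data shapes.
def build_histogram(binned_column, gradients, n_bins):
    """Accumulate row gradients into the bin buckets of one feature."""
    hist = [0.0] * n_bins
    for bin_idx, grad in zip(binned_column, gradients):
        hist[bin_idx] += grad
    return hist

# Four rows binned into 3 buckets for a single feature, with their gradients.
hist = build_histogram([0, 2, 1, 2], [0.5, -1.0, 0.25, 0.75], n_bins=3)
# hist == [0.5, 0.25, -0.25]
```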
GPU Hist
GPU hist performance is significantly improved for wide datasets. GPU hist now supports batched node build, which reduces kernel latency and increases throughput. The improvement is particularly significant when growing deep trees with the default depthwise policy. (#7919, #8073, #8051, #8118, #7867, #7964, #8026)
Breaking Changes
Breaking changes made in the 1.7 release are summarized below.
The grow_local_histmaker updater is removed. This updater is rarely used in practice and has no tests. We decided to remove it and have XGBoost focus on other, more efficient algorithms. (#7992, #8091)
The rabit module is replaced with the new collective module. It is a drop-in replacement with added runtime backend selection; see the federated learning section for more details. (#8257)
General new features and improvements
Before diving into package-specific changes, some general new features other than those listed at the beginning are summarized here.
DMatrix and QuantileDMatrix can now return their data to users. In previous versions, only getters for meta info like labels were available. The new method is available in Python (DMatrix::get_data) and C. (#8269, #8323)
Fixes
Some noteworthy bug fixes that are not related to a specific language binding are listed in this section.
Python Package
Python 3.8 is now the minimum required Python version. (#8071)
More progress on type hint support. Except for the new PySpark interface, the XGBoost module is fully typed. (#7742, #7945, #8302, #7914, #8052)
XGBoost now validates the feature names in inplace_predict, which also affects the predict function in scikit-learn estimators, as it uses inplace_predict internally. (#8359)
Users can now get the data from DMatrix using DMatrix::get_data or QuantileDMatrix::get_data.
Show the libxgboost.so path in build info. (#7893)
Raise an import error when using the sklearn module while scikit-learn is missing. (#8049)
Use config_context in the sklearn interface. (#8141)
Validate features for inplace prediction. (#8359)
Pandas dataframe handling is refactored to reduce data fragmentation. (#7843)
Support more pandas nullable types. (#8262)
Remove the pyarrow workaround. (#7884)
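The raise-on-missing-sklearn behavior above follows the common guarded-import pattern, which can be sketched as follows (illustrative only; this is not xgboost's actual code, and require_sklearn is a hypothetical name):

```python
# Guarded-import sketch: defer the hard failure until the optional
# dependency is actually used, then raise ImportError with an
# actionable message instead of failing at module import time.
try:
    from sklearn.base import BaseEstimator  # optional dependency probe
    SKLEARN_INSTALLED = True
except ImportError:
    SKLEARN_INSTALLED = False

def require_sklearn():
    """Raise a descriptive ImportError if scikit-learn is unavailable."""
    if not SKLEARN_INSTALLED:
        raise ImportError(
            "scikit-learn is required to use the sklearn interface; "
            "install it with `pip install scikit-learn`."
        )
```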
Binary wheel size
We aim to enable as many features as possible in XGBoost's default binary distribution on PyPI (the package installed with pip), but there is an upper limit on the size of the binary wheel. In 1.7, XGBoost reduces the wheel size by pruning unused CUDA architectures. (#8179, #8152, #8150)
Fixes
Some noteworthy fixes are listed here:
Fix potential error in the DMatrix constructor on 32-bit platforms. (#8369)
Maintenance work
Apply isort and black to selected files. (#8137, #8096)
Remove use_label_encoder in XGBClassifier. The label encoder had already been deprecated and removed in the previous version; these changes only affect the indicator parameter. (#7822)
Documents
R Package
We briefly summarize improvements for the R package here:
JVM Packages
The consistency between JVM packages and other language bindings is greatly improved in 1.7; improvements range from the model serialization format to the default values of hyper-parameters.
timeoutRequestWorkers is now removed. With the support for barrier mode, this parameter is no longer needed. (#7839)
Documents
(… d70e59f, #7806)
Maintenance
CI and Tests
pytest-timeout is added as an optional dependency for running Python tests to keep the test time in check. (#7772, #8291, #8286, #8276, #8306, #8287, #8243, #8313, #8235, #8288, #8303, #8142, #8092, #8333, #8312, #8348)
v1.6.2: 1.6.2 Patch Release (Compare Source)
This is a patch release for bug fixes.
v1.6.1: 1.6.1 Patch Release (Compare Source)
v1.6.1 (2022 May 9)
This is a patch release for bug fixes and Spark barrier mode support. The R package is unchanged.
Experimental support for categorical data
JVM packages
We replaced the old parallelism tracker with Spark barrier mode to improve the robustness of the JVM package and fix the GPU training pipeline.
Artifacts
You can verify the downloaded packages by running this on your Unix shell:
v1.6.0: Release 1.6.0 stable (Compare Source)
v1.6.0 (2022 Apr 16)
After a long period of development, XGBoost v1.6.0 is packed with many new features and
improvements. We summarize them in the following sections starting with an introduction to
some major new features, then moving on to language binding specific changes including new
features and notable bug fixes for that binding.
Development of categorical data support
This version of XGBoost features new improvements and full coverage of experimental categorical data support in the Python and C packages with the tree model. The hist, approx, and gpu_hist tree methods all now support training with categorical data. Also, partition-based categorical splits are introduced in this release. This split type was first available in LightGBM in the context of gradient boosting. The previous XGBoost release supported one-hot splits, where the splitting criterion is of the form x \in {c}, i.e. the categorical feature x is tested against a single candidate. The new release allows for more expressive conditions: x \in S, where the categorical feature x is tested against multiple candidates. Moreover, it is now possible to use any tree algorithm (hist, approx, gpu_hist) when creating categorical splits. For more information, please see our tutorial on categorical data, along with the examples linked on that page. (#7380, #7708, #7695, #7330, #7307, #7322, #7705, #7652, #7592, #7666, #7576, #7569, #7529, #7575, #7393, #7465, #7385, #7371, #7745, #7810)
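The two split forms can be contrasted with a minimal plain-Python sketch (illustrative only, not XGBoost code):

```python
# Illustrative contrast of the two categorical split forms described above.
def one_hot_split(x, c):
    """Previous releases: x in {c}, tested against a single category."""
    return x == c

def partition_split(x, s):
    """1.6 partition-based split: x in S, tested against a set of categories."""
    return x in s

# A row whose categorical feature is "blue":
goes_left_one_hot = one_hot_split("blue", "red")                # False
goes_left_partition = partition_split("blue", {"red", "blue"})  # True
```

The partition form can send several categories down the same branch in one split, which is what makes it more expressive than the one-hot form.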
In the future, we will continue to improve categorical data support with new features and optimizations. Also, we look forward to bringing the feature beyond the Python binding; contributions and feedback are welcome! Lastly, given its experimental status, the behavior might be subject to change, especially the default values of related hyper-parameters.
Experimental support for multi-output model
XGBoost 1.6 features initial support for the multi-output model, which includes multi-output regression and multi-label classification. Along with this, the XGBoost classifier has proper support for base margin without the need for the user to flatten the input. In this initial support, XGBoost builds one model for each target, similar to the sklearn meta estimator; for more details, please see our quick introduction. (#7365, #7736, #7607, #7574, #7521, #7514, #7456, #7453, #7455, #7434, #7429, #7405, #7381)
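The one-model-per-target strategy can be sketched in plain Python with a trivial stand-in estimator (illustrative; XGBoost's actual implementation differs):

```python
# Sketch of the "one model per target" strategy described above, using a
# trivial stand-in estimator (predicts the column mean) instead of XGBoost.
class MeanEstimator:
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_] * len(X)

def fit_multi_output(X, Y):
    """Fit one estimator per target column, like a sklearn meta estimator."""
    n_targets = len(Y[0])
    return [MeanEstimator().fit(X, [row[t] for row in Y]) for t in range(n_targets)]

X = [[1.0], [2.0], [3.0]]
Y = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]  # two targets per row
models = fit_multi_output(X, Y)
preds = [m.predict(X) for m in models]
# preds == [[2.0, 2.0, 2.0], [20.0, 20.0, 20.0]]
```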
External memory support
External memory support for both the approx and hist tree methods is considered feature complete in XGBoost 1.6. Building upon the iterator-based interface introduced in the previous version, both hist and approx now iterate over each batch of data during training and prediction. In previous versions, hist concatenated all the batches into an internal representation, which is removed in this version. As a result, users can expect higher scalability in terms of data size but might experience lower performance due to disk IO. (#7531, #7320, #7638, #7372)
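The trade-off above can be illustrated with a plain-Python sketch (conceptual only; this is not XGBoost's data-iterator API):

```python
# Conceptual sketch of the difference between concatenating all batches up
# front and re-iterating them per pass, as described above.
def batches():
    """Stand-in for data living on disk, yielded one chunk at a time."""
    yield [1, 2, 3]
    yield [4, 5]

# Old hist behavior: concatenate everything into one in-memory copy.
concatenated = [row for batch in batches() for row in batch]

# 1.6 behavior: each pass re-iterates the batches, so peak memory is one
# batch, traded for extra disk IO on every pass.
total = 0
for batch in batches():
    total += sum(batch)
# concatenated == [1, 2, 3, 4, 5]; total == 15
```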
Rewritten approx
The approx tree method is rewritten based on the existing hist tree method. The rewrite closes the feature gap between approx and hist and improves performance. The behavior of approx should now be more aligned with hist and gpu_hist. Here is a list of user-visible changes:
Supports both max_leaves and max_depth.
Supports grow_policy.
Uses max_bin to replace sketch_eps.
Better performance when the depthwise policy is used.
New serialization format
Based on the existing JSON serialization format, we introduce UBJSON support as a more efficient alternative. Both formats will be available in the future, and we plan to gradually phase out support for the old binary model format. Users can opt to use the different formats in the serialization function by providing the file extension json or ubj. Also, the save_raw function in all supported language bindings gains a new parameter for exporting the model in different formats; available options are json, ubj, and deprecated. See the document for the language binding you are using for details. Lastly, the default internal serialization format is set to UBJSON, which affects Python pickle and R RDS. (#7572, #7570, #7358, #7571, #7556, #7549, #7416)
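The extension-to-format mapping described above can be sketched with a hypothetical helper (illustrative only; this function is not part of the XGBoost API):

```python
# Hypothetical helper mirroring the documented behavior: the serialization
# format follows the file extension, with the old binary format deprecated.
def model_format_for(filename):
    if filename.endswith(".json"):
        return "json"        # JSON serialization
    if filename.endswith(".ubj"):
        return "ubj"         # UBJSON, the more efficient alternative
    return "deprecated"      # old binary model format, being phased out

fmt = model_format_for("model.ubj")
# fmt == "ubj"
```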
General new features and improvements
Aside from the major new features mentioned above, some others are summarized here:
… interface. (#7399, #7553)
seed_per_iteration is removed; distributed training should now generate results closer to single-node training when sampling is used. (#7009)
huber_slope is introduced for the Pseudo-Huber objective.
… environments. (#7654, #7704)
aucpr is rewritten for better performance and GPU support. (#7297, #7368)
… (#7589, #7588, #7687)
… max_leaves and max_depth is now unified. (#7302, #7551)
… XGBoost, evaluation results might differ slightly for each run due to parallel reduction for floating-point values, which is now addressed. (#7362, #7303, #7316, #7349)
… gpu_hist. (#7507)
Performance improvements
Most of the performance improvements are integrated into other refactors during feature development. The approx tree method should see significant performance gains for many datasets, as mentioned in the previous section, while the hist tree method also enjoys improved performance with the removal of the internal pruner, along with some other refactoring. Lastly, gpu_hist no longer synchronizes the device during training. (#7737)
General bug fixes
This section lists bug fixes that are not specific to any language binding.
num_parallel_tree is now a model parameter instead of a training hyper-parameter, which fixes model IO with random forest. (#7751)
… libsvm and scipy sparse matrix input where the column index might not be sorted. (#7731)
… the maximum value of int32. (#7565)
… incorrect value for iteration_range is provided. (#7409)
Changes in the Python package
Other than the changes in Dask, the XGBoost Python package gained some new features and improvements, along with small bug fixes.
… now be able to run pip install xgboost to install XGBoost.
… libomp from Homebrew, as the XGBoost wheel now bundles the libomp.dylib library.
… behavior. XGBoost can now output transformed prediction values when a custom objective is not supplied. See our explanation in the tutorial for details.
Parameters in fit that are not related to input data are moved into the constructor and can be set by set_params. (#6751, #7420, #7375, #7369)
… pipeline. (#7512)
get_group is introduced for DMatrix to allow users to get the group information in the custom objective function. (#7564)
… **kwargs. (#7629)
feature_names_in_ is defined for all sklearn estimators like XGBRegressor to follow the convention of sklearn. (#7526)
… states. (#7685)
… (#7667, #7377, #7360, #7498, #7438, #7667, #7752, #7749, #7751)
Changes in the Dask interface
Please see the introduction and API document for reference. (#7645, #7581)
DMatrix construction in Dask now honors the thread configuration. (#7337)
… nthread configuration using the Dask sklearn interface.
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR has been generated by Mend Renovate. View repository job log here.