
chore(deps): update dependency xgboost to v1 - autoclosed #109

Closed
wants to merge 1 commit into from

Conversation

renovate[bot]

@renovate renovate bot commented May 9, 2020

Mend Renovate

This PR contains the following updates:

Package: xgboost
Change: 0.90 -> 1.7.6

Release Notes

dmlc/xgboost (xgboost)

v1.7.6: 1.7.6 Patch Release

Compare Source

This is a patch release for bug fixes. The CRAN package for the R binding is kept at 1.7.5.

Bug Fixes
  • Fix distributed training with mixed dense and sparse partitions. (#​9272)
  • Fix monotone constraints on CPU with large trees. (#​9122)
  • [spark] Make the spark model have the same UID as its estimator (#​9022)
  • Optimize prediction with QuantileDMatrix. (#​9096)
Document
Maintenance
Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
0a54300dd274b98b7f039acffa006bec4875dace041fd9288422306fe7c379ca  xgboost.tar.gz
990fb3c54be7ce53365389f2eb82ce3c1f2e78735b4605ddd2ddb0d47a15d3c3  xgboost_r_gpu_linux_1.7.6.tar.gz
a48fc64bce774bb76eddade6dc6df1d4fc25199a0c17dc66cdfa50cedd3282ad  xgboost_r_gpu_win64_1.7.6.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_1.7.6.tar.gz: Download
  • xgboost_r_gpu_win64_1.7.6.tar.gz: Download

Source tarball
Link in GitHub release assets

v1.7.5: 1.7.5 Patch Release

Compare Source

1.7.5 (2023 Mar 30)

This is a patch release for bug fixes.

  • The C++ requirement is updated to C++17, and CUDA 11.8 is now the default CUDA toolkit (CTK). (#8860, #8855, #8853)
  • Fix import for pyspark ranker. (#​8692)
  • Fix Windows binary wheel to be compatible with Poetry (#​8991)
  • Fix GPU hist with column sampling. (#​8850)
  • Make sure the iterative DMatrix is properly initialized. (#​8997)
  • [R] Update link in a document. (#​8998)
Additional artifacts:

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
69a8cf4958e2cea5d492948968d765b856f60d336fbd4367d8176de95898ad7a  xgboost.tar.gz
0098f8d1cf5646d75c7d9dafa7e11b8d57441384f86a004b181cd679ef9677d1  xgboost_r_gpu_linux_1.7.5.tar.gz
a23b9744fcff8b53325604935b239c4cfef8a047ca5f4e57ea2b1011382314ee  xgboost_r_gpu_win64_1.7.5.tar.gz

Experimental binary packages for R with CUDA enabled

  • xgboost_r_gpu_linux_1.7.5.tar.gz: Download
  • xgboost_r_gpu_win64_1.7.5.tar.gz: Download

Source tarball
Link in GitHub release assets

v1.7.4: 1.7.4 Patch Release

Compare Source

1.7.4 (2023 Feb 16)

This is a patch release for bug fixes.

  • [R] Fix OpenMP detection on macOS. #​8684
  • [Python] Make sure input numpy array is aligned. #​8690
  • Fix feature interaction with column sampling in gpu_hist evaluator. #​8754
  • Fix GPU L1 error. #​8749
  • [PySpark] Fix feature types param #​8772
  • Fix ranking with quantile dmatrix and group weight. #​8762
  • Fix CPU bin compression with categorical data. #​8809

Artifacts

xgboost_r_gpu_win64_1.7.4.tar.gz: Download

v1.7.3: 1.7.3 Patch Release

Compare Source

1.7.3 (2023 Jan 6)

This is a patch release for bug fixes.

  • [Breaking] XGBoost Sklearn estimator method get_params no longer returns internally configured values. (#​8634)
  • Fix linalg iterator, which may crash the L1 error. (#​8603)
  • Fix loading pickled GPU sklearn estimator with a CPU-only XGBoost build. (#​8632)
  • Fix inference with unseen categories with categorical features. (#​8591, #​8602)
  • CI fixes. (#​8620, #​8631, #​8579)
Artifacts

You can verify the downloaded packages by running the following command on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
0b6aa86b93aec2b3e7ec6f53a696f8bbb23e21a03b369dc5a332c55ca57bc0c4  xgboost.tar.gz

v1.7.2: 1.7.2 Patch Release

Compare Source

v1.7.2 (2022 Dec 8)

This is a patch release for bug fixes.

  • Work with newer thrust and libcudacxx (#​8432)

  • Support null value in CUDA array interface namespace. (#​8486)

  • Use getsockname instead of SO_DOMAIN on AIX. (#​8437)

  • [pyspark] Make QDM optional based on a cuDF check (#​8471)

  • [pyspark] sort qid for SparkRanker. (#​8497)

  • [dask] Properly await async method client.wait_for_workers. (#​8558)

  • [R] Fix CRAN test notes. (#​8428)

  • [doc] Fix outdated document [skip ci]. (#​8527)

  • [CI] Fix github action mismatched glibcxx. (#​8551)

Artifacts

You can verify the downloaded packages by running this on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
15be5a96e86c3c539112a2052a5be585ab9831119cd6bc3db7048f7e3d356bac  xgboost_r_gpu_linux_1.7.2.tar.gz
0dd38b08f04ab15298ec21c4c43b17c667d313eada09b5a4ac0d35f8d9ba15d7  xgboost_r_gpu_win64_1.7.2.tar.gz

v1.7.1: 1.7.1 Patch Release

v1.7.1 (2022 November 3)

This is a patch release to incorporate the following hotfix:

  • Add back xgboost.rabit for backwards compatibility (#​8411)

v1.7.0: Release 1.7.0 stable

Compare Source

Note. The source distribution of Python XGBoost 1.7.0 was defective (#​8415). Since PyPI does not allow us to replace existing artifacts, we released 1.7.0.post0 version to upload the new source distribution. Everything in 1.7.0.post0 is identical to 1.7.0 otherwise.

v1.7.0 (2022 Oct 20)

We are excited to announce the feature-packed XGBoost 1.7 release. This release note walks through some of the major new features first, then summarizes other improvements and language-binding-specific changes.

PySpark

XGBoost 1.7 features initial support for PySpark integration. The new interface is adapted from the existing PySpark XGBoost interface developed by Databricks, with additional features like QuantileDMatrix and RAPIDS plugin (GPU pipeline) support. The new Spark XGBoost Python estimators not only benefit from PySpark ML facilities for powerful distributed computing but also enjoy the rest of the Python ecosystem. Users can define a custom objective, callbacks, and metrics in Python and use them with this interface on distributed clusters. The support is labeled as experimental, with more features to come in future releases. For a brief introduction, please visit the tutorial on XGBoost's documentation page. (#8355, #8344, #8335, #8284, #8271, #8283, #8250, #8231, #8219, #8245, #8217, #8200, #8173, #8172, #8145, #8117, #8131, #8088, #8082, #8085, #8066, #8068, #8067, #8020, #8385)

Due to its initial support status, the new interface has some limitations: categorical features and multi-output models are not yet supported.

Development of categorical data support

More progress on the experimental support for categorical features. In 1.7, XGBoost can handle missing values in categorical features and adds a new parameter, max_cat_threshold, which limits the number of categories considered in split evaluation. The parameter is enabled when the partitioning algorithm is used and helps prevent over-fitting. Also, the sklearn interface can now accept the feature_types parameter, so categorical features can be marked with input types other than DataFrame. (#8280, #7821, #8285, #8080, #7948, #7858, #7853, #8212, #7957, #7937, #7934)

Experimental support for federated learning and new communication collective

An exciting addition to XGBoost is the experimental federated learning support. Federated learning is implemented with a gRPC federated server that aggregates allreduce calls, and federated clients that train on local data using existing tree methods (approx, hist, gpu_hist). Currently, only horizontal federated learning is supported (samples are split across participants, and each participant has all the features and labels). Future plans include vertical federated learning (features split across participants) and stronger privacy guarantees with homomorphic encryption and differential privacy. See the demo with NVFlare integration for example usage with nvflare.

As part of the work, XGBoost 1.7 has replaced the old rabit module with the new collective module as the network communication interface, with added support for runtime backend selection. In previous versions, the backend was defined at compile time and could not be changed once built. In this new release, users can choose between rabit and federated. (#8029, #8351, #8350, #8342, #8340, #8325, #8279, #8181, #8027, #7958, #7831, #7879, #8257, #8316, #8242, #8057, #8203, #8038, #7965, #7930, #7911)

The feature is available in the public PyPI binary package for testing.

Quantile DMatrix

Before 1.7, XGBoost had an internal data structure called DeviceQuantileDMatrix (and its distributed version). We have now extended its support to CPU and renamed it QuantileDMatrix. This data structure is used for optimizing memory usage for the hist and gpu_hist tree methods. The new feature helps reduce CPU memory usage significantly, especially for dense data. The new QuantileDMatrix can be initialized from both CPU and GPU data, and regardless of where the data comes from, the constructed instance can be used by both the CPU and GPU algorithms, including training and prediction (with some conversion overhead if the data and the training algorithm are on different devices). Also, a new parameter, ref, is added to QuantileDMatrix, which can be used to construct validation/test datasets. Lastly, it's set as the default in the scikit-learn interface when a supported tree method is specified by users. (#7889, #7923, #8136, #8215, #8284, #8268, #8220, #8346, #8327, #8130, #8116, #8103, #8094, #8086, #7898, #8060, #8019, #8045, #7901, #7912, #7922)

Mean absolute error

The mean absolute error is a new member of the collection of objectives in XGBoost. It's noteworthy since MAE has a zero Hessian, which is unusual for XGBoost as XGBoost relies on Newton optimization. Without valid Hessian values, the convergence speed can be slow. As part of the support for MAE, we added line searches to the XGBoost training algorithm to overcome the difficulty of training without valid Hessian values. In the future, we will extend the line search to other objectives where it's appropriate, for faster convergence. (#8343, #8107, #7812, #8380)

XGBoost on Browser

With the help of the pyodide project, you can now run XGBoost on browsers. (#​7954, #​8369)

Experimental IPv6 Support for Dask

With the growing adoption of the new internet protocol, XGBoost joined the club. In the latest release, the Dask interface can be used on IPv6 clusters; see XGBoost's Dask tutorial for details. (#8225, #8234)

Optimizations

We have new optimizations for both the hist and gpu_hist tree methods to make XGBoost's training even more efficient.

  • Hist
    Hist now supports an optional by-column histogram build, which is automatically configured based on various conditions of the input data. This helps the XGBoost CPU hist algorithm scale better with different shapes of training datasets. (#8233, #8259) Also, the histogram build kernel can now better utilize CPU registers. (#8218)

  • GPU Hist
    GPU hist performance is significantly improved for wide datasets. GPU hist now supports batched node build, which reduces kernel latency and increases throughput. The improvement is particularly significant when growing deep trees with the default depthwise policy. (#​7919, #​8073, #​8051, #​8118, #​7867, #​7964, #​8026)

Breaking Changes

Breaking changes made in the 1.7 release are summarized below.

  • The grow_local_histmaker updater is removed. This updater is rarely used in practice and has no test. We decided to remove it and have XGBoost focus on other, more efficient algorithms. (#7992, #8091)
  • Single precision histogram is removed due to its lack of accuracy caused by significant floating point error. In some cases the error can be difficult to detect due to log-scale operations, which makes the parameter dangerous to use. (#​7892, #​7828)
  • Deprecated CUDA architectures are no longer supported in the release binaries. (#​7774)
  • As part of the federated learning development, the rabit module is replaced with the new collective module. It's a drop-in replacement with added runtime backend selection; see the federated learning section for more details. (#8257)
General new features and improvements

Before diving into package-specific changes, some general new features other than those listed at the beginning are summarized here.

  • Users of DMatrix and QuantileDMatrix can get the data from XGBoost. In previous versions, only getters for meta info like labels are available. The new method is available in Python (DMatrix::get_data) and C. (#​8269, #​8323)
  • In previous versions, the GPU histogram tree method may generate phantom gradients for missing values due to floating point error. We fixed this error in this release, and XGBoost is now much better equipped to handle floating point errors when training on GPU. (#8274, #8246)
  • Parameter validation is no longer experimental. (#​8206)
  • C pointer parameters and JSON parameters are rigorously checked. (#8254)
  • Improved handling of JSON model input. (#​7953, #​7918)
  • Support IBM i OS (#​7920, #​8178)
Fixes

Some noteworthy bug fixes that are not related to specific language binding are listed in this section.

  • Rename misspelled config parameter for pseudo-Huber (#​7904)
  • Fix feature weights with nested column sampling. (#​8100)
  • Fix loading DMatrix binary in distributed env. (#​8149)
  • Force auc.cc to be statically linked for unusual compiler platforms. (#​8039)
  • New logic for detecting libomp on macos (#​8384).
Python Package
  • Python 3.8 is now the minimum required Python version. (#​8071)

  • More progress on type hint support. Except for the new PySpark interface, the XGBoost module is fully typed. (#​7742, #​7945, #​8302, #​7914, #​8052)

  • XGBoost now validates the feature names in inplace_predict, which also affects the predict function in scikit-learn estimators as it uses inplace_predict internally. (#​8359)

  • Users can now get the data from DMatrix using DMatrix::get_data or QuantileDMatrix::get_data.

  • Show libxgboost.so path in build info. (#​7893)

  • Raise import error when using the sklearn module while scikit-learn is missing. (#​8049)

  • Use config_context in the sklearn interface. (#​8141)

  • Validate features for inplace prediction. (#​8359)

  • Pandas dataframe handling is refactored to reduce data fragmentation. (#​7843)

  • Support more pandas nullable types (#​8262)

  • Remove pyarrow workaround. (#​7884)

  • Binary wheel size
    We aim to enable as many features as possible in XGBoost's default binary distribution on PyPI (package installed with pip), but there's an upper limit on the size of the binary wheel. In 1.7, XGBoost reduces the size of the wheel by pruning unused CUDA architectures. (#8179, #8152, #8150)

  • Fixes
    Some noteworthy fixes are listed here:

    • Fix the Dask interface with the latest cupy. (#​8210)
    • Check cuDF lazily to avoid potential errors with cuda-python. (#​8084)
  • Fix potential error in DMatrix constructor on 32-bit platform. (#​8369)

  • Maintenance work

  • Documents

    • [dask] Fix potential error in demo. (#​8079)
    • Improved documentation for the ranker. (#​8356, #​8347)
    • Indicate lack of py-xgboost-gpu on Windows (#​8127)
    • Clarification for feature importance. (#​8151)
    • Simplify Python getting started example (#​8153)
R Package

We summarize improvements for the R package briefly here:

  • Feature info, including names and types, is now passed to DMatrix in preparation for categorical feature support. (#804)
  • XGBoost 1.7 can now gracefully load old R models from RDS for better compatibility with third-party tuning libraries. (#7864)
  • The R package now can be built with parallel compilation, along with fixes for warnings in CRAN tests. (#​8330)
  • Emit error early if DiagrammeR is missing (#​8037)
  • Fix R package Windows build. (#​8065)
JVM Packages

The consistency between the JVM packages and other language bindings is greatly improved in 1.7; improvements range from the model serialization format to the default values of hyper-parameters.

  • Java package now supports feature names and feature types for DMatrix in preparation for categorical feature support. (#​7966)
  • Models trained by the JVM packages can now be safely used with other language bindings. (#​7896, #​7907)
  • Users can specify the model format when saving models with a stream. (#​7940, #​7955)
  • The default value for training parameters is now sourced from XGBoost directly, which helps JVM packages be consistent with other packages. (#​7938)
  • Set the correct objective if the user doesn't explicitly set it (#​7781)
  • Auto-detection of MUSL is replaced by system properties (#​7921)
  • Improved error message for launching tracker. (#​7952, #​7968)
  • Fix a race condition in parameter configuration. (#​8025)
  • [Breaking] timeoutRequestWorkers is now removed. With the support for barrier mode, this parameter is no longer needed. (#​7839)
  • Dependencies updates. (#​7791, #​8157, #​7801, #​8240)
Documents
Maintenance
CI and Tests

v1.6.2: 1.6.2 Patch Release

Compare Source

This is a patch release for bug fixes.

  • Remove pyarrow workaround. (#​7884)
  • Fix monotone constraint with tuple input. (#​7891)
  • Verify shared object version at load. (#​7928)
  • Fix LTR with weighted Quantile DMatrix. (#​7975)
  • Fix Python package source install. (#​8036)
  • Limit max_depth to 30 for GPU. (#​8098)
  • Fix compatibility with the latest cupy. (#​8129)
  • [dask] Deterministic rank assignment. (#​8018)
  • Fix loading DMatrix binary in distributed env. (#​8149)

v1.6.1: 1.6.1 Patch Release

Compare Source

v1.6.1 (2022 May 9)

This is a patch release for bug fixes and Spark barrier mode support. The R package is unchanged.

Experimental support for categorical data
JVM packages

We replaced the old parallelism tracker with Spark barrier mode to improve the robustness of the JVM package and fix the GPU training pipeline.

Artifacts

You can verify the downloaded packages by running this on your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
2633f15e7be402bad0660d270e0b9a84ad6fcfd1c690a5d454efd6d55b4e395b  ./xgboost.tar.gz

v1.6.0: Release 1.6.0 stable

Compare Source

v1.6.0 (2022 Apr 16)

After a long period of development, XGBoost v1.6.0 is packed with many new features and
improvements. We summarize them in the following sections starting with an introduction to
some major new features, then moving on to language binding specific changes including new
features and notable bug fixes for that binding.

Development of categorical data support

This version of XGBoost features new improvements and full coverage of experimental
categorical data support in the Python and C packages with tree models. All of hist,
approx, and gpu_hist now support training with categorical data. Also, a partition-based
categorical split is introduced in this release. This split type was first made available
in LightGBM in the context of gradient boosting. The previous XGBoost release supported
one-hot splits, where the splitting criterion is of the form x \in {c}, i.e. the
categorical feature x is tested against a single candidate. The new release allows for
more expressive conditions: x \in S, where the categorical feature x is tested against
multiple candidates. Moreover, it is now possible to use any tree algorithm (hist,
approx, gpu_hist) when creating categorical splits.
For more information, please see our tutorial on categorical data, along with
examples linked on that page. (#​7380, #​7708, #​7695, #​7330, #​7307, #​7322, #​7705,
#​7652, #​7592, #​7666, #​7576, #​7569, #​7529, #​7575, #​7393, #​7465, #​7385, #​7371, #​7745, #​7810)

In the future, we will continue to improve categorical data support with new features and
optimizations. Also, we look forward to bringing the feature beyond the Python binding;
contributions and feedback are welcome! Lastly, because of its experimental status, the
behavior might be subject to change, especially the default values of related
hyper-parameters.

Experimental support for multi-output model

XGBoost 1.6 features initial support for the multi-output model, which includes
multi-output regression and multi-label classification. Along with this, the XGBoost
classifier has proper support for base margin without the need for the user to flatten the
input. In this initial support, XGBoost builds one model for each target, similar to the
sklearn meta estimator; for more details, please see our quick introduction.

(#​7365, #​7736, #​7607, #​7574, #​7521, #​7514, #​7456, #​7453, #​7455, #​7434, #​7429, #​7405, #​7381)

External memory support

External memory support for both approx and hist tree method is considered feature
complete in XGBoost 1.6. Building upon the iterator-based interface introduced in the
previous version, both hist and approx now iterate over each batch of data during
training and prediction. In previous versions, hist concatenated all the batches into
an internal representation, which is removed in this version. As a result, users can
expect higher scalability in terms of data size but might experience lower performance due
to disk IO. (#​7531, #​7320, #​7638, #​7372)

Rewritten approx

The approx tree method is rewritten based on the existing hist tree method. The
rewrite closes the feature gap between approx and hist and improves the performance.
Now the behavior of approx should be more aligned with hist and gpu_hist. Here is a
list of user-visible changes:

  • Supports both max_leaves and max_depth.
  • Supports grow_policy.
  • Supports monotonic constraint.
  • Supports feature weights.
  • Uses max_bin to replace sketch_eps.
  • Supports categorical data.
  • Faster performance on many datasets.
  • Improved performance and robustness for distributed training.
  • Supports prediction cache.
  • Significantly better performance for external memory when depthwise policy is used.
New serialization format

Based on the existing JSON serialization format, we introduce UBJSON support as a more
efficient alternative. Both formats will be available in the future and we plan to
gradually phase out support for the old
binary model format. Users can opt to use the different formats in the serialization
function by providing the file extension json or ubj. Also, the save_raw function in
all supported language bindings gains a new parameter for exporting the model in different
formats; available options are json, ubj, and deprecated. See the document for the
language binding you are using for details. Lastly, the default internal serialization
format is set to UBJSON, which affects Python pickle and R RDS. (#​7572, #​7570, #​7358,
#​7571, #​7556, #​7549, #​7416)

General new features and improvements

Aside from the major new features mentioned above, some others are summarized here:

  • Users can now access the build information of XGBoost binary in Python and C
    interface. (#​7399, #​7553)
  • Auto-configuration of seed_per_iteration is removed; distributed training should now
    generate results closer to single-node training when sampling is used. (#7009)
  • A new parameter huber_slope is introduced for the Pseudo-Huber objective.
  • During source build, XGBoost can choose cub in the system path automatically. (#​7579)
  • XGBoost now honors the CPU counts from CFS, which is usually set in docker
    environments. (#​7654, #​7704)
  • The metric aucpr is rewritten for better performance and GPU support. (#​7297, #​7368)
  • Metric calculation is now performed in double precision. (#​7364)
  • XGBoost no longer mutates the global OpenMP thread limit. (#​7537, #​7519, #​7608, #​7590,
    #​7589, #​7588, #​7687)
  • The default behavior of max_leaves and max_depth is now unified. (#7302, #7551)
  • CUDA fat binary is now compressed. (#​7601)
  • Deterministic result for evaluation metric and linear model. In previous versions of
    XGBoost, evaluation results might differ slightly for each run due to parallel reduction
    for floating-point values, which is now addressed. (#​7362, #​7303, #​7316, #​7349)
  • XGBoost now uses double for GPU Hist node sum, which improves the accuracy of
    gpu_hist. (#​7507)
Performance improvements

Most of the performance improvements are integrated into other refactors during feature
developments. The approx tree method should see significant performance gains for many datasets as
mentioned in the previous section, while the hist tree method also enjoys improved
performance with the removal of the internal pruner along with some other
refactoring. Lastly, gpu_hist no longer synchronizes the device during training. (#​7737)

General bug fixes

This section lists bug fixes that are not specific to any language binding.

  • The num_parallel_tree is now a model parameter instead of a training hyper-parameter,
    which fixes model IO with random forest. (#​7751)
  • Fixes in CMake script for exporting configuration. (#​7730)
  • XGBoost can now handle unsorted sparse input. This includes text file formats like
    libsvm and scipy sparse matrix where column index might not be sorted. (#​7731)
  • Fix tree param feature type, this affects inputs with the number of columns greater than
    the maximum value of int32. (#​7565)
  • Fix external memory with gpu_hist and subsampling. (#​7481)
  • Check the number of trees in inplace predict, this avoids a potential segfault when an
    incorrect value for iteration_range is provided. (#​7409)
  • Fix non-stable result in cox regression (#​7756)
Changes in the Python package

Other than the changes in Dask, the XGBoost Python package gained some new features and
improvements along with small bug fixes.

  • Python 3.7 is now the minimum required Python version. (#7682)
  • Pre-built binary wheel for Apple Silicon. (#​7621, #​7612, #​7747) Apple Silicon users will
    now be able to run pip install xgboost to install XGBoost.
  • macOS users no longer need to install libomp from Homebrew, as the XGBoost wheel now
    bundles the libomp.dylib library.
  • There are new parameters for users to specify the custom metric with new
    behavior. XGBoost can now output transformed prediction values when a custom objective is
    not supplied. See our explanation in the tutorial for details.
  • For the sklearn interface, following the estimator guideline from scikit-learn, all
    parameters in fit that are not related to input data are moved into the constructor
    and can be set by set_params. (#​6751, #​7420, #​7375, #​7369)
  • Apache Arrow format is now supported, which can bring better performance to users'
    pipelines. (#7512)
  • Pandas nullable types are now supported (#​7760)
  • A new function get_group is introduced for DMatrix to allow users to get the group
    information in the custom objective function. (#​7564)
  • More training parameters are exposed in the sklearn interface instead of relying on the
    **kwargs. (#​7629)
  • A new attribute feature_names_in_ is defined for all sklearn estimators like
    XGBRegressor to follow the convention of sklearn. (#​7526)
  • More work on Python type hint. (#​7432, #​7348, #​7338, #​7513, #​7707)
  • Support the latest pandas Index type. (#​7595)
  • Fix for Feature shape mismatch error on s390x platform (#​7715)
  • Fix using feature names for constraints with multiple groups (#​7711)
  • We clarified the behavior of the callback function when it contains mutable
    states. (#​7685)
  • Lastly, there are some code cleanups and maintenance work. (#​7585, #​7426, #​7634, #​7665,
    #​7667, #​7377, #​7360, #​7498, #​7438, #​7667, #​7752, #​7749, #​7751)
Changes in the Dask interface
  • The Dask module now supports user-supplied host IP and port address of the scheduler
    node. Please see the introduction and API document for reference. (#7645, #7581)
  • Internal DMatrix construction in dask now honors thread configuration. (#7337)
  • A fix for nthread configuration when using the Dask sklearn interface.

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 4 times, most recently from d43360f to c3bc995 Compare May 12, 2020 09:39
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from c3bc995 to 8e974d8 Compare May 17, 2020 09:54
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 6 times, most recently from e41a7b1 to 0bd3eb7 Compare June 7, 2020 04:24
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 3 times, most recently from cd2aa3e to 0ebe0fa Compare June 18, 2020 23:53
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 3 times, most recently from 14988f9 to dcf0285 Compare July 1, 2020 16:33
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from dcf0285 to ef2e640 Compare July 13, 2020 13:29
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from ef2e640 to 2e2ce2d Compare August 4, 2020 17:25
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 2e2ce2d to ed66d48 Compare August 23, 2020 03:37
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 2 times, most recently from 1fae8dd to 017c2a8 Compare September 15, 2020 08:01
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 017c2a8 to d835229 Compare September 19, 2020 15:17
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from d835229 to 8fabf76 Compare October 9, 2020 13:35
@renovate renovate bot closed this Jun 8, 2021
@renovate renovate bot deleted the feature/renovate-xgboost-1.x branch June 8, 2021 10:12
@renovate renovate bot changed the title chore(deps): update dependency xgboost to v1 - autoclosed chore(deps): update dependency xgboost to v1 Jun 8, 2021
@renovate renovate bot restored the feature/renovate-xgboost-1.x branch June 8, 2021 11:12
@renovate renovate bot reopened this Jun 8, 2021
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 2968bed to 9b60337 Compare June 8, 2021 11:16
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 9b60337 to 0feb755 Compare October 18, 2021 23:44
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 0feb755 to 7a5f868 Compare March 7, 2022 09:55
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 7a5f868 to 20b1476 Compare March 26, 2022 16:36
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 20b1476 to eff4b52 Compare April 16, 2022 05:00

renovate bot commented Apr 16, 2022

⚠ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: poetry.lock
[14:59:12.186] INFO (9): Installing tool python v3.11.4...
installing v2 tool python v3.11.4
linking tool python v3.11.4
Python 3.11.4
pip 23.2.1 from /opt/containerbase/tools/python/3.11.4/lib/python3.11/site-packages/pip (python 3.11)
[14:59:19.038] INFO (9): Installed tool python in 6.8s.
[14:59:19.144] INFO (148): Installing tool poetry v1.2.2...
installing v2 tool poetry v1.2.2
linking tool poetry v1.2.2
Poetry (version 1.2.2)
[14:59:28.017] INFO (148): Installed tool poetry in 8.8s.
Creating virtualenv pytorch-tabnet-FjHJ_RR9-py3.11 in /home/ubuntu/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...

/usr/local/bin/docker: line 4: .: filename argument required
.: usage: . filename [arguments]

The current project's Python requirement (>=3.7) is not compatible with some of the required packages Python requirement:
  - xgboost requires Python >=3.8, so it will not be satisfied for Python >=3.7,<3.8

Because pytorch-tabnet depends on xgboost (1.7.6) which requires Python >=3.8, version solving failed.

  • Check your dependencies Python requirement: The Python requirement can be specified via the `python` or `markers` properties
    
    For xgboost, a possible solution would be to set the `python` property to ">=3.8"

    https://python-poetry.org/docs/dependency-specification/#python-restricted-dependencies,
    https://python-poetry.org/docs/dependency-specification/#using-environment-markers
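A hypothetical pyproject.toml fragment implementing the fix suggested in the log above (package names and versions follow the log; adjust to your project):

```toml
[tool.poetry.dependencies]
# Option 1: raise the project floor to match xgboost 1.7.6's requirement.
python = ">=3.8"
xgboost = "1.7.6"

# Option 2: keep python = ">=3.7" and restrict xgboost instead:
# xgboost = { version = "1.7.6", python = ">=3.8" }
```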

@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from eff4b52 to e64a9a3 Compare May 15, 2022 19:44
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 2 times, most recently from 1297226 to 28a666f Compare June 27, 2022 10:44
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 28a666f to c4bfaf3 Compare August 22, 2022 15:47
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from c4bfaf3 to c9e1314 Compare November 20, 2022 11:53
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from c9e1314 to 00f896d Compare March 16, 2023 21:53
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 2 times, most recently from 8ed4288 to b43dfdf Compare March 30, 2023 20:17
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from b43dfdf to 475afd1 Compare May 30, 2023 06:03
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 475afd1 to 5cf2bf0 Compare June 19, 2023 06:59
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from 5cf2bf0 to 8183edd Compare July 6, 2023 16:15
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch 3 times, most recently from 062d015 to fb74e76 Compare July 23, 2023 13:18
@renovate renovate bot force-pushed the feature/renovate-xgboost-1.x branch from fb74e76 to b8dba5b Compare July 23, 2023 14:59
@renovate renovate bot changed the title chore(deps): update dependency xgboost to v1 chore(deps): update dependency xgboost to v1 - autoclosed Sep 13, 2023
@renovate renovate bot closed this Sep 13, 2023
@renovate renovate bot deleted the feature/renovate-xgboost-1.x branch September 13, 2023 03:00