This repository has been archived by the owner on Apr 2, 2022. It is now read-only.

chore(deps): update dependency xgboost to v1 - autoclosed #461

Conversation

renovate-bot
Contributor

@renovate-bot renovate-bot commented Aug 4, 2021


This PR contains the following updates:

Package: xgboost
Change: ==0.81 -> ==1.4.2

Release Notes

dmlc/xgboost

v1.4.2

Compare Source

This is a patch release for the Python package with the following fixes:

You can verify the downloaded source code xgboost.tar.gz by running the following command in your Unix shell:

echo "3ffd4a90cd03efde596e51cadf7f344c8b6c91aefd06cc92db349cd47056c05a *xgboost.tar.gz" | shasum -a 256 --check

v1.4.1

Compare Source

This is a bug fix release.

  • Fix GPU implementation of AUC on some large datasets. (#​6866)

You can verify the downloaded source code xgboost.tar.gz by running the following command in your Unix shell:

echo "f3a37e5ddac10786e46423db874b29af413eed49fd9baed85035bbfee6fc6635 *xgboost.tar.gz" | shasum -a 256 --check

v1.4.0

Compare Source

Introduction of pre-built binary package for R, with GPU support

Starting with release 1.4.0, users now have the option of installing {xgboost} without
having to build it from source. This is particularly advantageous for users who want
to take advantage of the GPU algorithm (gpu_hist), as previously they'd have to build
{xgboost} from source using CMake and NVCC. Now installing {xgboost} with GPU
support is as easy as: R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz. (#6827)

See the instructions at https://xgboost.readthedocs.io/en/latest/build.html

Improvements on prediction functions

XGBoost has many prediction types, including shap value computation and inplace prediction.
In 1.4 we overhauled the underlying prediction functions for the C API and Python API with a
unified interface; a minimal usage sketch follows the list below. (#6777, #6693, #6653, #6662, #6648, #6668, #6804)

  • Starting with 1.4, the sklearn interface uses inplace prediction by default when the
    input data type is supported.
  • Users can use inplace prediction with the dart booster and enable GPU acceleration just
    like gbtree.
  • All prediction functions with tree models are now thread-safe, and inplace prediction has
    been improved with base_margin support.
  • A new set of C predict functions are exposed in the public interface.
  • A user-visible change is a newly added parameter called strict_shape. See
    https://xgboost.readthedocs.io/en/latest/prediction.html for more details.
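
A minimal sketch of the new prediction options using synthetic data; strict_shape and inplace_predict are the features described above, while the data and parameters are illustrative only:

import numpy as np
import xgboost as xgb

# synthetic data, for illustration only
X = np.random.rand(100, 10)
y = np.random.randint(2, size=100)
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

# strict_shape keeps the output shape well defined (see the prediction document above)
pred = bst.predict(xgb.DMatrix(X), strict_shape=True)

# inplace prediction skips DMatrix construction; thread-safe for tree models
pred_inplace = bst.inplace_predict(X)
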
Improvement on Dask interface
  • Starting with 1.4, the Dask interface is considered feature-complete, which means
    all of the models found in the single-node Python interface are now supported in Dask,
    including but not limited to ranking and random forest. Also, the prediction function
    is significantly faster and supports shap value computation. A minimal
    training-and-prediction sketch follows this section.

    • Most of the parameters found in the single-node sklearn interface are supported by
      the Dask interface. (#6471, #6591)
    • Implements learning to rank. On the Dask interface, we use the newly added support of
      query ID to enable group structure. (#​6576)
    • The Dask interface has Python type hints support. (#​6519)
    • All models can be safely pickled. (#​6651)
    • Random forest estimators are now supported. (#​6602)
    • Shap value computation is now supported. (#​6575, #​6645, #​6614)
    • Evaluation result is printed on the scheduler process. (#​6609)
    • DaskDMatrix (and device quantile dmatrix) now accepts all meta-information. (#​6601)
  • Prediction optimization. We enhanced and sped up the prediction function for the
    Dask interface. See the latest Dask tutorial page in our documentation for an overview of
    how you can optimize it even further. (#6650, #6645, #6648, #6668)

  • Bug fixes

    • If you are using the latest Dask and distributed where distributed.MultiLock is
      present, XGBoost supports training multiple models on the same cluster in
      parallel. (#​6743)
    • Fixed a bug where XGBoost might use a different client object internally when
      dask.client is used to launch an async task. (#6722)
  • Other improvements on documents, blogs, tutorials, and demos. (#​6389, #​6366, #​6687,
    #​6699, #​6532, #​6501)
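
A minimal training-and-prediction sketch for the Dask interface, assuming a local dask.distributed cluster; the array shapes, chunking and parameters are illustrative only:

import dask.array as da
import xgboost as xgb
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=2)
client = Client(cluster)

# synthetic, chunked data living on the Dask workers
X = da.random.random((1000, 10), chunks=(100, 10))
y = da.random.randint(0, 2, size=1000, chunks=100)

dtrain = xgb.dask.DaskDMatrix(client, X, y)
output = xgb.dask.train(client, {"objective": "binary:logistic"},
                        dtrain, num_boost_round=10)
booster = output["booster"]    # trained Booster
history = output["history"]    # evaluation results, printed on the scheduler

pred = xgb.dask.predict(client, booster, X)   # returns a dask array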

Python package

With changes from Dask and general improvements to prediction, we have made some
enhancements to the general Python interface and to IO for booster information. Starting
from 1.4, booster feature names and types can be saved into the JSON model. Also, some
model attributes like best_iteration and best_score are restored upon model load. On the
sklearn interface, some attributes are now implemented as Python object properties with
better documentation. A save-and-load sketch appears after the bug-fix list below.

  • Breaking change: All data parameters in prediction functions are renamed to X
    for better compliance with the sklearn estimator interface guidelines.

  • Breaking change: XGBoost used to generate some pseudo feature names with DMatrix
    when inputs like np.ndarray don't have column names. The procedure is removed to
    avoid conflict with other inputs. (#​6605)

  • Early stopping with training continuation is now supported. (#​6506)

  • Optional imports for Dask and cuDF are now lazy. (#6522)

  • As mentioned in the prediction improvement summary, the sklearn interface uses inplace
    prediction whenever possible. (#​6718)

  • Booster information like feature names and feature types is now saved into the JSON
    model file. (#6605)

  • All DMatrix interfaces including DeviceQuantileDMatrix and counterparts in Dask
    interface (as mentioned in the Dask changes summary) now accept all the meta-information
    like group and qid in their constructor for better consistency. (#​6601)

  • Booster attributes are restored upon model load so users don't have to call attr
    manually. (#​6593)

  • On sklearn interface, all models accept base_margin for evaluation datasets. (#​6591)

  • Improvements over the setup script including smaller sdist size and faster installation
    if the C++ library is already built (#​6611, #​6694, #​6565).

  • Bug fixes for Python package:

    • Don't validate feature when number of rows is 0. (#​6472)
    • Move metric configuration into booster. (#​6504)
    • Calling XGBModel.fit() should clear the Booster by default (#​6562)
    • Support _estimator_type. (#​6582)
    • [dask, sklearn] Fix predict proba. (#​6566, #​6817)
    • Restore unknown data support. (#​6595)
    • Fix learning rate scheduler with cv. (#​6720)
    • Fixes small typo in sklearn documentation (#​6717)
    • [python-package] Fix class Booster: feature_types = None (#​6705)
    • Fix divide by 0 in feature importance when no split is found. (#​6676)
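
A hedged save-and-load sketch for the new booster IO behaviour; the data, file name and feature names are illustrative, and the printed result assumes the feature information round-trips through the JSON file as described above:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(50, 3), np.random.rand(50)
dtrain = xgb.DMatrix(X, label=y, feature_names=["f_a", "f_b", "f_c"])
bst = xgb.train({"objective": "reg:squarederror"}, dtrain, num_boost_round=5)

bst.save_model("model.json")      # feature names/types are stored in the JSON file

loaded = xgb.Booster()
loaded.load_model("model.json")
print(loaded.feature_names)       # expected: ['f_a', 'f_b', 'f_c']
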
JVM package
  • [jvm-packages] fix early stopping doesn't work even without custom_eval setting (#​6738)
  • fix potential TaskFailedListener's callback won't be called (#​6612)
  • [jvm] Add ability to load booster direct from byte array (#​6655)
  • [jvm-packages] JVM library loader extensions (#​6630)
R package
  • R documentation: Make construction of DMatrix consistent.
  • Fix R documentation for xgb.train. (#​6764)
ROC-AUC

We re-implemented the ROC-AUC metric in XGBoost. The new implementation supports
multi-class classification and has better support for learning-to-rank tasks that are not
binary. It also has a better-defined average in distributed environments, with additional
handling for invalid datasets. (#6749, #6747, #6797)
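
A short sketch of using the re-implemented AUC with a multi-class objective; the data is synthetic and the parameter values are illustrative:

import numpy as np
import xgboost as xgb

X = np.random.rand(300, 5)
y = np.random.randint(0, 3, size=300)      # three classes
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "multi:softprob",
    "num_class": 3,
    "eval_metric": "auc",                  # multi-class AUC, new in 1.4
    "tree_method": "hist",
}
# prints train-auc for each boosting round
bst = xgb.train(params, dtrain, num_boost_round=10, evals=[(dtrain, "train")])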

Global configuration.

Starting from 1.4, XGBoost's Python, R and C interfaces support a new global configuration
model where users can specify some global parameters. Currently, the supported parameters are
verbosity and use_rmm. The latter is experimental; see the rmm plugin demo and the
related README file for details. (#6414, #6656)
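
A minimal sketch of the global configuration interface as exposed in the Python package; the verbosity values are illustrative:

import xgboost as xgb

xgb.set_config(verbosity=2)        # set a global parameter
print(xgb.get_config())            # e.g. {'verbosity': 2, 'use_rmm': False}

# temporarily silence XGBoost within a context manager
with xgb.config_context(verbosity=0):
    pass                           # train or predict here without log output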

Other New features.
  • Better handling for input data types that support __array_interface__. For some
    data types including GPU inputs and scipy.sparse.csr_matrix, XGBoost employs
    __array_interface__ for processing the underlying data. Starting from 1.4, XGBoost
    can accept arbitrary array strides (which means column-major is supported) without
    making data copies, potentially reducing a significant amount of memory consumption.
    Also version 3 of __cuda_array_interface__ is now supported. (#​6776, #​6765, #​6459,
    #​6675)
  • Improved parameter validation: feeding XGBoost parameters that contain
    whitespace now triggers an error. (#6769)
  • For Python and R packages, file paths containing the home indicator ~ are supported.
  • As mentioned in the Python changes summary, the JSON model can now save feature
    information of the trained booster. The JSON schema is updated accordingly. (#​6605)
  • Development of categorical data support continues, with newly added support for weighted
    data and the dart booster. (#6508, #6693)
  • As mentioned in Dask change summary, ranking now supports the qid parameter for
    query groups. (#​6576)
  • DMatrix.slice can now consume a numpy array. (#​6368)
Other breaking changes
  • Aside from the feature name generation, there are two breaking changes:
    • Drop saving binary format for memory snapshot. (#​6513, #​6640)
    • Change default evaluation metric for binary:logitraw objective to logloss (#​6647)
CPU Optimization
  • Aside from the general changes to the predict function, some optimizations were applied to the
    CPU implementation. (#6683, #6550, #6696, #6700)
  • Performance of sampling initialization in hist is also improved. (#6410)
Notable fixes in the core library

These fixes do not reside in any particular language binding:

  • Fixes for gamma regression. This includes checking for invalid input values, fixes for
    gamma deviance metric, and better floating point guard for gamma negative log-likelihood
    metric. (#​6778, #​6537, #​6761)
  • Random forest with gpu_hist might generate low accuracy in previous versions. (#​6755)
  • Fix a bug in GPU sketching when data size exceeds limit of 32-bit integer. (#​6826)
  • Memory consumption fix for row-major adapters (#​6779)
  • Don't estimate sketch batch size when rmm is used. (#​6807) (#​6830)
  • Fix in-place predict with missing value. (#​6787)
  • Re-introduce double buffer in UpdatePosition, to fix perf regression in gpu_hist (#​6757)
  • Pass correct split_type to GPU predictor (#​6491)
  • Fix DMatrix feature names/types IO. (#​6507)
  • Use view for SparsePage exclusively to avoid some data access races. (#​6590)
  • Check for invalid data. (#​6742)
  • Fix relocatable include in CMakeList (#​6734) (#​6737)
  • Fix DMatrix slice with feature types. (#​6689)
Other deprecation notices:
  • This release will be the last release to support CUDA 10.0. (#​6642)

  • Starting in the next release, the Python package will require Pip 19.3+ due to the use
    of the manylinux2014 tag. Also, CentOS 6, RHEL 6 and other old distributions will not be
    supported.

Known issue:

The macOS build of the JVM packages doesn't support multi-threading out of the box. To enable
multi-threading with the JVM packages, macOS users will need to build them from source.
See https://xgboost.readthedocs.io/en/latest/jvm/index.html#installation-from-source

Doc
  • Dedicated page for tree_method parameter is added. (#​6564, #​6633)
  • [doc] Add FLAML as a fast tuning tool for XGBoost (#​6770)
  • Add document for tests directory. [skip ci] (#​6760)
  • Fix doc string of config.py to use correct versionadded (#​6458)
  • Update demo for prediction. (#​6789)
  • [Doc] Document that AUCPR is for binary classification/ranking (#​5899)
  • Update the C API comments (#​6457)
  • Fix document. [skip ci] (#​6669)
Maintenance: Testing, continuous integration
  • Use CPU input for test_boost_from_prediction. (#​6818)
  • [CI] Upload xgboost4j.dll to S3 (#​6781)
  • Update dmlc-core submodule (#​6745)
  • [CI] Use manylinux2010_x86_64 container to vendor libgomp (#​6485)
  • Add conda-forge badge (#​6502)
  • Fix merge conflict. (#​6512)
  • [CI] Split up main.yml, add mypy. (#​6515)
  • [Breaking] Upgrade cuDF and RMM to 0.18 nightlies; require RMM 0.18+ for RMM plugin (#​6510)
  • "featue_map" typo changed to "feature_map" (#​6540)
  • Add script for generating release tarball. (#​6544)
  • Add credentials to .gitignore (#​6559)
  • Remove warnings in tests. (#​6554)
  • Update dmlc-core submodule and conform to new API (#​6431)
  • Suppress hypothesis health check for dask client. (#​6589)
  • Fix pylint. (#​6714)
  • [CI] Clear R package cache (#​6746)
  • Exclude dmlc test on github action. (#​6625)
  • Tests for regression metrics with weights. (#​6729)
  • Add helper script and doc for releasing pip package. (#​6613)
  • Support pylint 2.7.0 (#​6726)
  • Remove R cache in github action. (#​6695)
  • [CI] Do not mix up stashed executable built for ARM and x86_64 platforms (#​6646)
  • [CI] Add ARM64 test to Jenkins pipeline (#​6643)
  • Disable s390x and arm64 tests on travis for now. (#​6641)
  • Move sdist test to action. (#​6635)
  • [dask] Rework base margin test. (#​6627)
Maintenance: Refactor code for legibility and maintainability
  • Improve OpenMP exception handling (#​6680)
  • Improve string view to reduce string allocation. (#​6644)
  • Simplify Span checks. (#​6685)
  • Use generic dispatching routine for array interface. (#​6672)

You can verify the downloaded source code xgboost.tar.gz by running the following command in your Unix shell:

echo "ff77130a86aebd83a8b996c76768a867b0a6e5012cce89212afc3df4c4ee6b1c *xgboost.tar.gz" | shasum -a 256 --check

v1.3.3

Compare Source

  • Fix regression on best_ntree_limit. (#​6616)

v1.3.2

Compare Source

v1.3.1

  • Enable loading model from <1.0.0 trained with objective='binary:logitraw' (#​6517)
  • Fix handling of print period in EvaluationMonitor (#​6499)
  • Fix a bug in metric configuration after loading model. (#​6504)
  • Fix save_best early stopping option (#​6523)
  • Remove cupy.array_equal, since it's not compatible with cuPy 7.8 (#​6528)

You can verify the downloaded source code xgboost.tar.gz by running the following command in your Unix shell:

echo "fd51e844dd0291fd9e7129407be85aaeeda2309381a6e3fc104938b27fb09279 *xgboost.tar.gz" | shasum -a 256 --check

v1.2.1

Compare Source

This patch release applies the following patches to the 1.2.0 release:

  • Hide C++ symbols from dmlc-core (#​6188)

v1.2.0

Compare Source

XGBoost4J-Spark now supports the GPU algorithm (#​5171)
  • Now XGBoost4J-Spark is able to leverage NVIDIA GPU hardware to speed up training.
  • There is ongoing work on accelerating the rest of the data pipeline with NVIDIA GPUs (#5950, #5972).
XGBoost now supports CUDA 11 (#​5808)
  • It is now possible to build XGBoost with CUDA 11. Note that we do not yet distribute pre-built binaries built with CUDA 11; all current distributions use CUDA 10.0.
Better guidance for persisting XGBoost models in an R environment (#​5940, #​5964)
  • Users are strongly encouraged to use xgb.save() and xgb.save.raw() instead of saveRDS(). This is so that the persisted models can be accessed with future releases of XGBoost.
  • The previous release (1.1.0) had problems loading models that were saved with saveRDS(). This release adds a compatibility layer to restore access to the old RDS files. Note that this is meant to be a temporary measure; users are advised to stop using saveRDS() and migrate to xgb.save() and xgb.save.raw().
New objectives and metrics
  • The pseudo-Huber loss reg:pseudohubererror is added (#5647). The corresponding metric is mphe. Right now, the slope is hard-coded to 1; a minimal sketch follows this list.
  • The Accelerated Failure Time objective for survival analysis (survival:aft) is now accelerated on GPUs (#​5714, #​5716). The survival metrics aft-nloglik and interval-regression-accuracy are also accelerated on GPUs.
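
A minimal sketch of the pseudo-Huber objective and its metric; the data and parameter values are illustrative:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(200, 4), np.random.rand(200)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "reg:pseudohubererror",   # pseudo-Huber loss, slope fixed at 1
    "eval_metric": "mphe",                 # mean pseudo-Huber error
}
bst = xgb.train(params, dtrain, num_boost_round=20, evals=[(dtrain, "train")])
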
Improved integration with scikit-learn
  • Added n_features_in_ attribute to the scikit-learn interface to store the number of features used (#​5780). This is useful for integrating with some scikit-learn features such as StackingClassifier. See this link for more details.
  • XGBoostError now inherits ValueError, which conforms to scikit-learn's exception requirement (#5696).
Improved integration with Dask
  • The XGBoost Dask API now exposes an asynchronous interface (#​5862). See the document for details.
  • Zero-copy ingestion of GPU arrays via DaskDeviceQuantileDMatrix (#​5623, #​5799, #​5800, #​5803, #​5837, #​5874, #​5901): Previously, the Dask interface had to make 2 data copies: one for concatenating the Dask partition/block into a single block and another for internal representation. To save memory, we introduce DaskDeviceQuantileDMatrix. As long as Dask partitions are resident in the GPU memory, DaskDeviceQuantileDMatrix is able to ingest them directly without making copies. This matrix type wraps DeviceQuantileDMatrix.
  • The prediction function now returns GPU Series type if the input is from Dask-cuDF (#​5710). This is to preserve the input data type.
Robust handling of external data types (#​5689, #​5893)
  • As we support more and more external data types, the handling logic has proliferated all over the code base and become hard to keep track of. It also became unclear how missing values and threads are handled. We refactored the Python package code to collect all data handling logic in a central location, and now we have an explicit list of all supported data types.
Improvements in GPU-side data matrix (DeviceQuantileDMatrix)
  • The GPU-side data matrix now implements its own quantile sketching logic, so that data don't have to be transported back to the main memory (#​5700, #​5747, #​5760, #​5846, #​5870, #​5898). The GK sketching algorithm is also now better documented.
    • Now we can load extremely sparse datasets like URL, although performance is still sub-optimal.
  • The GPU-side data matrix now exposes an iterative interface (#​5783), so that users are able to construct a matrix from a data iterator. See the Python demo.
New language binding: Swift (#​5728)
Robust model serialization with JSON (#​5772, #​5804, #​5831, #​5857, #​5934)
  • We continue efforts from the 1.0.0 release to adopt JSON as the format to save and load models robustly.
  • JSON model IO is significantly faster and produces smaller model files.
  • Round-trip reproducibility is guaranteed, via the introduction of an efficient float-to-string conversion algorithm known as the Ryū algorithm. The conversion is locale-independent, producing consistent numeric representation regardless of the locale setting of the user's machine.
  • We fixed an issue in loading large JSON files to memory.
  • It is now possible to load a JSON file from a remote source such as S3.
Performance improvements
  • CPU hist tree method optimization
    • Skip missing lookup in hist row partitioning if data is dense. (#​5644)
    • Specialize training procedures for CPU hist tree method on distributed environment. (#​5557)
    • Add single point histogram for CPU hist. Previously the gradient histogram for CPU hist was hard-coded to be 64-bit; now users can specify the parameter single_precision_histogram to use a 32-bit histogram instead for faster training performance. (#5624, #5811)
  • GPU hist tree method optimization
    • Removed some unnecessary synchronizations and better memory allocation pattern. (#​5707)
    • Optimize GPU Hist for wide datasets. Previously, for wide datasets the atomic operation was performed on global memory; now it can run on shared memory for faster histogram building. But there's a known small regression on GeForce cards with dense data. (#5795, #5926, #5948, #5631)
API additions
  • Support passing fmap to the importance plot (#5719). Now the importance plot can show actual feature names instead of the default ones; a sketch follows this list.
  • Support 64-bit seed. (#5643)
  • A new C API XGBoosterGetNumFeature is added for getting the number of features in a booster (#5856).
  • Feature names and feature types are now stored in the C++ core and saved in the binary DMatrix (#5858).
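
A hedged sketch of passing a feature map to the importance plot; featmap.txt is a hypothetical file written on the fly in the "<index> <name> <type>" format used by the XGBoost demos, and matplotlib is assumed to be installed:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 3), np.random.rand(100)
bst = xgb.train({"objective": "reg:squarederror"}, xgb.DMatrix(X, label=y),
                num_boost_round=10)

# hypothetical feature map: one "<index> <name> <type>" line per feature ("q" = quantitative)
with open("featmap.txt", "w") as f:
    f.write("0 age q\n1 height q\n2 weight q\n")

xgb.plot_importance(bst, fmap="featmap.txt")   # bars are labelled age/height/weight
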
Breaking: The predict() method of DaskXGBClassifier now produces class predictions (#​5986). Use predict_proba() to obtain probability predictions.
  • Previously, DaskXGBClassifier.predict() produced probability predictions. This is inconsistent with the behavior of other scikit-learn classifiers, where predict() returns class predictions. We made a breaking change in the 1.2.0 release so that DaskXGBClassifier.predict() now correctly produces class predictions and thus behaves like other scikit-learn classifiers. Furthermore, we introduce the predict_proba() method for obtaining probability predictions, again to be in line with other scikit-learn classifiers.
Breaking: Custom evaluation metric now receives raw prediction (#​5954)
  • Previously, the custom evaluation metric received a transformed prediction result when used with a classifier. Now the custom metric will receive a raw (untransformed) prediction and will need to transform the prediction itself; a sketch follows this list. See demo/guide-python/custom_softmax.py for an example.
  • This change is to make the custom metric behave consistently with the custom objective, which already receives raw prediction (#​5564).
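
A hedged sketch of the new behaviour for a binary classifier trained through xgb.train: the custom metric defined below assumes it receives the raw margin, as described above, and applies the sigmoid itself. The names and data are illustrative; for the multi-class case see demo/guide-python/custom_softmax.py:

import numpy as np
import xgboost as xgb

X = np.random.rand(200, 4)
y = np.random.randint(2, size=200)
dtrain = xgb.DMatrix(X, label=y)

def error_from_margin(predt, dmat):
    # as of 1.2 predt is the raw (untransformed) margin; transform it ourselves
    prob = 1.0 / (1.0 + np.exp(-predt))
    labels = dmat.get_label()
    return "custom-error", float(np.mean((prob > 0.5) != labels))

bst = xgb.train({"objective": "binary:logistic", "disable_default_eval_metric": 1},
                dtrain, num_boost_round=10,
                evals=[(dtrain, "train")], feval=error_from_margin)
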
Breaking: XGBoost4J-Spark now requires Spark 3.0 and Scala 2.12 (#​5836, #​5890)
  • Starting with version 3.0, Spark can manage GPU resources and allocate them among executors.
  • Spark 3.0 dropped support for Scala 2.11 and now only supports Scala 2.12. Thus, XGBoost4J-Spark also only supports Scala 2.12.
Breaking: XGBoost Python package now requires Python 3.6 and later (#​5715)
  • Python 3.6 has many useful features such as f-strings.
Breaking: XGBoost now adopts the C++14 standard (#​5664)
  • Make sure to use a sufficiently modern C++ compiler that supports C++14, such as Visual Studio 2017, GCC 5.0+, and Clang 3.4+.
Bug-fixes
  • Fix a data race in the prediction function (#​5853). As a byproduct, the prediction function now uses a thread-local data store and became thread-safe.
  • Restore capability to run prediction when the test input has fewer features than the training data (#​5955). This capability is necessary to support predicting with LIBSVM inputs. The previous release (1.1) had broken this capability, so we restore it in this version with better tests.
  • Fix OpenMP build with CMake for R package, to support CMake 3.13 (#​5895).
  • Fix Windows 2016 build (#​5902, #​5918).
  • Fix edge cases in scikit-learn interface with Pandas input by disabling feature validation. (#​5953)
  • [R] Enable weighted learning to rank (#​5945)
  • [R] Fix early stopping with custom objective (#​5923)
  • Fix NDK Build (#​5886)
  • Add missing explicit template specializations for greater portability (#​5921)
  • Handle empty rows in data iterators correctly (#​5929). This bug affects file loader and JVM data frames.
  • Fix IsDense (#​5702)
  • [jvm-packages] Fix wrong method name setAllowZeroForMissingValue (#​5740)
  • Fix shape inference for Dask predict (#​5989)
Usability Improvements, Documentation
  • [Doc] Document that CUDA 10.0 is required (#​5872)
  • Refactored command line interface (CLI). Now the CLI is able to handle user errors and output basic documentation. (#5574)
  • Better error handling in Python: use raise from syntax to preserve full stacktrace (#​5787).
  • The JSON model dump now has a formal schema (#​5660, #​5818). The benefit is to prevent dump_model() function from breaking. See this document to understand the difference between saving and dumping models.
  • Add a reference to the GPU external memory paper (#​5684)
  • Document more objective parameters in the R package (#​5682)
  • Document the existence of pre-built binary wheels for MacOS (#​5711)
  • Remove max.depth in the R gblinear example. (#​5753)
  • Added conda environment file for building docs (#​5773)
  • Mention dask blog post in the doc, which introduces using Dask with GPU and some internal workings. (#​5789)
  • Fix rendering of Markdown docs (#​5821)
  • Document new objectives and metrics available on GPUs (#​5909)
  • Better message when no GPU is found. (#​5594)
  • Remove the use of silent parameter from R demos. (#​5675)
  • Don't use masked array in array interface. (#​5730)
  • Update affiliation of @​terrytangyuan: Ant Financial -> Ant Group (#​5827)
  • Move dask tutorial closer other distributed tutorials (#​5613)
  • Update XGBoost + Dask overview documentation (#​5961)
  • Show n_estimators in the docstring of the scikit-learn interface (#​6041)
  • Fix a typo in a docstring of the scikit-learn interface (#5980)
Maintenance: testing, continuous integration, build system
  • [CI] Remove CUDA 9.0 from CI (#​5674, #​5745)
  • Require CUDA 10.0+ in CMake build (#​5718)
  • [R] Remove dependency on gendef for Visual Studio builds (fixes #​5608) (#​5764). This enables building XGBoost with GPU support with R 4.x.
  • [R-package] Reduce duplication in configure.ac (#​5693)
  • Bump com.esotericsoftware to 4.0.2 (#​5690)
  • Migrate some tests from AppVeyor to GitHub Actions to speed up the tests. (#​5911, #​5917, #​5919, #​5922, #​5928)
  • Reduce cost of the Jenkins CI server (#​5884, #​5904, #​5892). We now enforce a daily budget via an automated monitor. We also dramatically reduced the workload for the Windows platform, since the cloud VM cost is vastly greater for Windows.
  • [R] Set up automated R linter (#​5944)
  • [R] replace uses of T and F with TRUE and FALSE (#​5778)
  • Update Docker container 'CPU' (#​5956)
  • Simplify CMake build with modern CMake techniques (#​5871)
  • Use hypothesis package for testing (#​5759, #​5835, #​5849).
  • Define _CRT_SECURE_NO_WARNINGS to remove unneeded warnings in MSVC (#​5434)
  • Run all Python demos in CI, to ensure that they don't break (#​5651)
  • Enhance nvtx support (#​5636). Now we can use unified timer between CPU and GPU. Also CMake is able to find nvtx automatically.
  • Speed up python test. (#​5752)
  • Add helper for generating batches of data. (#​5756)
  • Add c-api-demo to .gitignore (#​5855)
  • Add option to enable all compiler warnings in GCC/Clang (#​5897)
  • Make Python model compatibility test runnable locally (#​5941)
  • Add cupy to Windows CI (#​5797)
  • [CI] Fix cuDF install; merge 'gpu' and 'cudf' test suite (#​5814)
  • Update rabit submodule (#​5680, #​5876)
  • Force colored output for Ninja build. (#​5959)
  • [CI] Assign larger /dev/shm to NCCL (#​5966)
  • Add missing Pytest marks to AsyncIO unit test (#​5968)
  • [CI] Use latest cuDF and dask-cudf (#​6048)
  • Add CMake flag to log C API invocations, to aid debugging (#​5925)
  • Fix a unit test on CLI, to handle RC versions (#​6050)
  • [CI] Use mgpu machine to run gpu hist unit tests (#​6050)
  • [CI] Build GPU-enabled JAR artifact and deploy to xgboost-maven-repo (#​6050)
Maintenance: Refactor code for legibility and maintainability
  • Remove dead code in DMatrix initialization. (#​5635)
  • Catch dmlc error by ref. (#​5678)
  • Refactor the gpu_hist split evaluation in preparation for batched nodes enumeration. (#​5610)
  • Remove column major specialization. (#​5755)
  • Remove unused imports in Python (#​5776)
  • Avoid including c_api.h in header files. (#​5782)
  • Remove unweighted GK quantile, which is unused. (#​5816)
  • Add Python binding for rabit ops. (#​5743)
  • Implement Empty method for host device vector. (#​5781)
  • Remove print (#​5867)
  • Enforce tree order in JSON (#​5974)
Acknowledgement

Contributors: Nan Zhu (@​CodingCat), @​LionOrCatThatIsTheQuestion, Dmitry Mottl (@​Mottl), Rory Mitchell (@​RAMitchell), @​ShvetsKS, Alex Wozniakowski (@​a-wozniakowski), Alexander Gugel (@​alexanderGugel), @​anttisaukko, @​boxdot, Andy Adinets (@​canonizer), Ram Rachum (@​cool-RR), Elliot Hershberg (@​elliothershberg), Jason E. Aten, Ph.D. (@​glycerine), Philip Hyunsu Cho (@​hcho3), @​jameskrach, James Lamb (@​jameslamb), James Bourbeau (@​jrbourbeau), Peter Jung (@​kongzii), Lorenz Walthert (@​lorenzwalthert), Oleksandr Kuvshynov (@​okuvshynov), Rong Ou (@​rongou), Shaochen Shi (@​shishaochen), Yuan Tang (@​terrytangyuan), Jiaming Yuan (@​trivialfis), Bobby Wang (@​wbo4958), Zhang Zhang (@​zhangzhang10)

Reviewers: Nan Zhu (@​CodingCat), @​LionOrCatThatIsTheQuestion, Hao Yang (@​QuantHao), Rory Mitchell (@​RAMitchell), @​ShvetsKS, Egor Smirnov (@​SmirnovEgorRu), Alex Wozniakowski (@​a-wozniakowski), Amit Kumar (@​aktech), Avinash Barnwal (@​avinashbarnwal), @​boxdot, Andy Adinets (@​canonizer), Chandra Shekhar Reddy (@​chandrureddy), Ram Rachum (@​cool-RR), Cristiano Goncalves (@​cristianogoncalves), Elliot Hershberg (@​elliothershberg), Jason E. Aten, Ph.D. (@​glycerine), Philip Hyunsu Cho (@​hcho3), Tong He (@​hetong007), James Lamb (@​jameslamb), James Bourbeau (@​jrbourbeau), Lee Drake (@​leedrake5), DougM (@​mengdong), Oleksandr Kuvshynov (@​okuvshynov), RongOu (@​rongou), Shaochen Shi (@​shishaochen), Xu Xiao (@​sperlingxx), Yuan Tang (@​terrytangyuan), Theodore Vasiloudis (@​thvasilo), Jiaming Yuan (@​trivialfis), Bobby Wang (@​wbo4958), Zhang Zhang (@​zhangzhang10)

v1.1.1

Compare Source

This patch release applies the following patches to the 1.1.0 release:

  • CPU performance improvement in the PyPI wheels (#​5720)
  • Fix loading old model. (#​5724)
  • Install pkg-config file (#​5744)

v1.1.0

Compare Source

Better performance on multi-core CPUs (#​5244, #​5334, #​5522)
  • Poor performance scaling of the hist algorithm for multi-core CPUs has been under investigation (#3810). #5244 concludes the ongoing effort to improve performance scaling on multi-core CPUs, in particular Intel CPUs. Roadmap: #5104
  • #​5334 makes steps toward reducing memory consumption for the hist tree method on CPU.
  • #​5522 optimizes random number generation for data sampling.
Deterministic GPU algorithm for regression and classification (#​5361)
  • GPU algorithm for regression and classification tasks is now deterministic.
  • Roadmap: #​5023. Currently only single-GPU training is deterministic. Distributed training with multiple GPUs is not yet deterministic.
Improve external memory support on GPUs (#​5093, #​5365)
  • Starting from 1.0.0 release, we added support for external memory on GPUs to enable training with larger datasets. Gradient-based sampling (#​5093) speeds up the external memory algorithm by intelligently sampling a subset of the training data to copy into the GPU memory. Learn more about out-of-core GPU gradient boosting.
  • GPU-side data sketching now works with data from external memory (#​5365).
Parameter validation: detection of unused or incorrect parameters (#​5477, #​5569, #​5508)
  • A mis-spelled training parameter is a common user mistake. In previous versions of XGBoost, mis-spelled parameters were silently ignored. Starting with the 1.0.0 release, XGBoost produces a warning message if there are any unused training parameters. The 1.1.0 release makes parameter validation available to the scikit-learn interface (#5477) and the R binding (#5569).
Thread-safe, in-place prediction method (#​5389, #​5512)
  • Previously, the prediction method was not thread-safe (#5339). This release adds a new API function inplace_predict() that is thread-safe. It is now possible to serve concurrent requests for prediction using a shared model object; a sketch follows this list.
  • It is now possible to compute predictions in place for selected data formats (numpy.ndarray / scipy.sparse.csr_matrix / cupy.ndarray / cudf.DataFrame / pd.DataFrame) without creating a DMatrix object.
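
A hedged sketch of serving concurrent prediction requests from one shared Booster via the new inplace_predict(); the thread pool, data and batch sizes are illustrative:

from concurrent.futures import ThreadPoolExecutor

import numpy as np
import xgboost as xgb

X, y = np.random.rand(500, 8), np.random.rand(500)
bst = xgb.train({"objective": "reg:squarederror"}, xgb.DMatrix(X, label=y),
                num_boost_round=20)

def handle_request(batch):
    # no DMatrix construction and no lock needed: inplace_predict is thread-safe
    return bst.inplace_predict(batch)

batches = [np.random.rand(32, 8) for _ in range(16)]   # simulated incoming requests
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, batches))
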
Addition of Accelerated Failure Time objective for survival analysis (#​4763, #​5473, #​5486, #​5552, #​5553)
  • Survival analysis (regression) models the time it takes for an event of interest to occur. The target label is potentially censored, i.e. the label is a range rather than a single number. We added a new objective survival:aft to support survival analysis, along with a new API to specify the ranged labels; a sketch follows this list. Check out the tutorial and the demos.
  • GPU support is work in progress (#5714).
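
A hedged sketch of the AFT objective with ranged (possibly right-censored) labels; the bound arrays and parameter values are illustrative only:

import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
# ranged labels: np.inf in the upper bound marks a right-censored observation
y_lower = np.random.rand(100) * 10
y_upper = y_lower + np.where(np.random.rand(100) < 0.3, np.inf, 2.0)

dtrain = xgb.DMatrix(X)
dtrain.set_float_info("label_lower_bound", y_lower)
dtrain.set_float_info("label_upper_bound", y_upper)

params = {
    "objective": "survival:aft",
    "eval_metric": "aft-nloglik",
    "aft_loss_distribution": "normal",
    "aft_loss_distribution_scale": 1.0,
}
bst = xgb.train(params, dtrain, num_boost_round=20, evals=[(dtrain, "train")])
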
Improved installation experience on Mac OSX (#​5597, #​5602, #​5606, #​5701)
  • It only takes two commands to install the XGBoost Python package: brew install libomp followed by pip install xgboost. The installed XGBoost will use all CPU cores. Even better, starting with this release, we distribute pre-compiled binary wheels targeting Mac OSX. Now the install command pip install xgboost finishes instantly, as it no longer compiles the C++ source of XGBoost. The last three Mac versions (High Sierra, Mojave, Catalina) are supported.
  • R package: the 1.1.0 release fixes the error Initializing libomp.dylib, but found libomp.dylib already initialized (#​5701)
Ranking metrics are now accelerated on GPUs (#​5380, #​5387, #​5398)
GPU-side data matrix to ingest data directly from other GPU libraries (#​5420, #​5465)
  • Previously, data in GPU memory had to be copied back to the main memory before it could be used by XGBoost. Starting with the 1.1.0 release, XGBoost provides a dedicated interface (DeviceQuantileDMatrix) so that it can ingest data from GPU memory directly; a sketch follows this list. The result is that XGBoost interoperates better with GPU-accelerated data science libraries, such as cuDF, cuPy, and PyTorch.
  • Set device in device dmatrix. (#5596)
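
A hedged sketch of ingesting GPU-resident cuPy arrays directly; it assumes a CUDA-enabled build of XGBoost, an available GPU, and illustrative shapes and parameters:

import cupy as cp
import xgboost as xgb

# data already resident in GPU memory
X = cp.random.rand(10000, 20)
y = cp.random.rand(10000)

# DeviceQuantileDMatrix ingests the GPU arrays directly, avoiding a copy to host memory
dtrain = xgb.DeviceQuantileDMatrix(X, label=y)

params = {"tree_method": "gpu_hist", "objective": "reg:squarederror"}
bst = xgb.train(params, dtrain, num_boost_round=10)
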
Robust model serialization with JSON (#​5123, #​5217)
  • We continue efforts from the 1.0.0 release to adopt JSON as the format to save and load models robustly. Refer to the release note for 1.0.0 to learn more.
  • It is now possible to store internal configuration of the trained model (Booster) object in R as a JSON string (#​5123, #​5217).
Improved integration with Dask
  • Pass through verbose parameter for dask fit (#​5413)
  • Use DMLC_TASK_ID. (#​5415)
  • Order the prediction result. (#​5416)
  • Honor nthreads from dask worker. (#​5414)
  • Enable grid searching with scikit-learn. (#​5417)
  • Check non-equal when setting threads. (#​5421)
  • Accept other inputs for prediction. (#​5428)
  • Fix missing value for scikit-learn interface. (#​5435)
XGBoost4J-Spark: Check number of columns in the data iterator (#​5202, #​5303)
  • Before, the native layer in XGBoost did not know the number of columns (features) ahead of time and had to guess it by counting the feature index while ingesting data. This method has a failure mode in the distributed setting: if the training data is highly sparse, some features may be completely missing in one or more worker partitions. Thus, one or more workers may deduce an incorrect data shape, leading to crashes or silently wrong models.
  • Enforce correct data shape by passing the number of columns explicitly from the JVM layer into the native layer.
Major refactoring of the DMatrix class
  • Continued from 1.0.0 release.
  • Remove update prediction cache from predictors. (#​5312)
  • Predict on Ellpack. (#​5327)
  • Partial rewrite EllpackPage (#​5352)
  • Use ellpack for prediction only when sparsepage doesn't exist. (#​5504)
  • RFC: #​4354, Roadmap: #​5143
Breaking: XGBoost Python package now requires Pip 19.0 and higher (#​5589)
  • Your Linux machine may have an old version of Pip and may attempt to install a source package, leading to long installation time. This is because we are now using the manylinux2010 tag in the binary wheel release. Ensure you have Pip 19.0 or newer.

Configuration

📅 Schedule: At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box.

This PR has been generated by WhiteSource Renovate. View repository job log here.

@renovate-bot renovate-bot requested a review from a team as a code owner August 4, 2021 12:23
@renovate-bot renovate-bot force-pushed the renovate/xgboost-1.x branch 2 times, most recently from 604d347 to e0f07cf on August 9, 2021 18:07
@renovate-bot renovate-bot force-pushed the renovate/xgboost-1.x branch from e0f07cf to 265a321 on August 9, 2021 20:57
@renovate-bot renovate-bot changed the title from chore(deps): update dependency xgboost to v1 to chore(deps): update dependency xgboost to v1 - autoclosed on Aug 10, 2021
@renovate-bot renovate-bot deleted the renovate/xgboost-1.x branch August 10, 2021 08:45