Releases: dmlc/xgboost
Release 1.6.0 stable
v1.6.0 (2022 Apr 16)
After a long period of development, XGBoost v1.6.0 is packed with many new features and
improvements. We summarize them in the following sections starting with an introduction to
some major new features, then moving on to language binding specific changes including new
features and notable bug fixes for that binding.
Development of categorical data support
This version of XGBoost features new improvements and full coverage of experimental categorical data support in the Python and C packages with tree models. The `hist`, `approx`, and `gpu_hist` tree methods all now support training with categorical data. Also, partition-based categorical splits are introduced in this release. This split type was first available in LightGBM in the context of gradient boosting. The previous XGBoost release supported one-hot splits, where the splitting criterion is of the form `x \in {c}`, i.e. the categorical feature `x` is tested against a single candidate. The new release allows for more expressive conditions: `x \in S`, where the categorical feature `x` is tested against multiple candidates. Moreover, it is now possible to use any tree algorithm (`hist`, `approx`, `gpu_hist`) when creating categorical splits.
For more information, please see our tutorial on categorical data, along with examples linked on that page. (#7380, #7708, #7695, #7330, #7307, #7322, #7705, #7652, #7592, #7666, #7576, #7569, #7529, #7575, #7393, #7465, #7385, #7371, #7745, #7810)
In the future, we will continue to improve categorical data support with new features and optimizations. We also look forward to bringing the feature beyond the Python binding; contributions and feedback are welcome! Lastly, given its experimental status, the behavior might be subject to change, especially the default values of related hyper-parameters.
Experimental support for multi-output model
XGBoost 1.6 features initial support for multi-output models, which includes multi-output regression and multi-label classification. Along with this, the XGBoost classifier has proper support for base margin without the need for the user to flatten the input. In this initial support, XGBoost builds one model for each target, similar to the sklearn meta estimator; for more details, please see our quick introduction.
(#7365, #7736, #7607, #7574, #7521, #7514, #7456, #7453, #7455, #7434, #7429, #7405, #7381)
External memory support
External memory support for both the `approx` and `hist` tree methods is considered feature complete in XGBoost 1.6. Building upon the iterator-based interface introduced in the previous version, both `hist` and `approx` now iterate over each batch of data during training and prediction. In previous versions, `hist` concatenated all the batches into an internal representation, which is removed in this version. As a result, users can expect higher scalability in terms of data size but might experience lower performance due to disk IO. (#7531, #7320, #7638, #7372)
Rewritten approx
The `approx` tree method is rewritten based on the existing `hist` tree method. The rewrite closes the feature gap between `approx` and `hist` and improves performance. The behavior of `approx` should now be more aligned with `hist` and `gpu_hist`. Here is a list of user-visible changes:
- Supports both `max_leaves` and `max_depth`.
- Supports `grow_policy`.
- Supports monotonic constraints.
- Supports feature weights.
- Uses `max_bin` to replace `sketch_eps`.
- Supports categorical data.
- Faster performance for many of the datasets.
- Improved performance and robustness for distributed training.
- Supports the prediction cache.
- Significantly better performance for external memory when the `depthwise` grow policy is used.
New serialization format
Based on the existing JSON serialization format, we introduce UBJSON support as a more efficient alternative. Both formats will be available in the future, and we plan to gradually phase out support for the old binary model format. Users can opt into either format in the serialization functions by providing the file extension `json` or `ubj`. Also, the `save_raw` function in all supported language bindings gains a new parameter for exporting the model in different formats; the available options are `json`, `ubj`, and `deprecated`. See the document for the language binding you are using for details. Lastly, the default internal serialization format is set to UBJSON, which affects Python pickle and R RDS. (#7572, #7570, #7358, #7571, #7556, #7549, #7416)
General new features and improvements
Aside from the major new features mentioned above, some others are summarized here:
- Users can now access the build information of the XGBoost binary in the Python and C interfaces. (#7399, #7553)
- Auto-configuration of `seed_per_iteration` is removed; distributed training should now generate results closer to single-node training when sampling is used. (#7009)
- A new parameter `huber_slope` is introduced for the Pseudo-Huber objective.
- During source builds, XGBoost can automatically pick up CUB from the system path. (#7579)
- XGBoost now honors the CPU count from CFS, which is usually set in Docker environments. (#7654, #7704)
- The metric `aucpr` is rewritten for better performance and GPU support. (#7297, #7368)
- Metric calculation is now performed in double precision. (#7364)
- XGBoost no longer mutates the global OpenMP thread limit. (#7537, #7519, #7608, #7590, #7589, #7588, #7687)
- The default behavior of `max_leaves` and `max_depth` is now unified. (#7302, #7551)
- The CUDA fat binary is now compressed. (#7601)
- Deterministic results for evaluation metrics and the linear model. In previous versions of XGBoost, evaluation results might differ slightly between runs due to parallel reduction of floating-point values; this is now addressed. (#7362, #7303, #7316, #7349)
- XGBoost now uses double precision for the GPU Hist node sum, which improves the accuracy of `gpu_hist`. (#7507)
Performance improvements
Most of the performance improvements are integrated into other refactors during feature development. `approx` should see significant performance gains on many datasets, as mentioned in the previous section, while the `hist` tree method also enjoys improved performance with the removal of the internal pruner along with some other refactoring. Lastly, `gpu_hist` no longer synchronizes the device during training. (#7737)
General bug fixes
This section lists bug fixes that are not specific to any language binding.
- `num_parallel_tree` is now a model parameter instead of a training hyper-parameter, which fixes model IO with random forests. (#7751)
- Fixes in the CMake script for exporting configuration. (#7730)
- XGBoost can now handle unsorted sparse input. This includes text file formats like LIBSVM and scipy sparse matrices where column indices might not be sorted. (#7731)
- Fix tree param feature type; this affects inputs with more columns than the maximum value of int32. (#7565)
- Fix external memory with `gpu_hist` and subsampling. (#7481)
- Check the number of trees in inplace predict; this avoids a potential segfault when an incorrect value for `iteration_range` is provided. (#7409)
- Fix non-stable results in Cox regression. (#7756)
Changes in the Python package
Other than the changes in Dask, the XGBoost Python package gained some new features and
improvements along with small bug fixes.
- Python 3.7 is now the lowest supported Python version. (#7682)
- Pre-built binary wheels for Apple Silicon. (#7621, #7612, #7747) Apple Silicon users can now run `pip install xgboost` to install XGBoost.
- macOS users no longer need to install `libomp` from Homebrew, as the XGBoost wheel now bundles the `libomp.dylib` library.
- There are new parameters for users to specify a custom metric with new behavior. XGBoost can now output transformed prediction values when a custom objective is not supplied. See our explanation in the tutorial for details.
- For the sklearn interface, following the estimator guideline from scikit-learn, all parameters in `fit` that are not related to input data are moved into the constructor and can be set by `set_params`. (#6751, #7420, #7375, #7369)
- The Apache Arrow format is now supported, which can bring better performance to users' pipelines. (#7512)
- Pandas nullable types are now supported. (#7760)
- A new function `get_group` is introduced for `DMatrix` to allow users to get the group information in a custom objective function. (#7564)
- More training parameters are exposed in the sklearn interface instead of relying on `**kwargs`. (#7629)
- A new attribute `feature_names_in_` is defined for all sklearn estimators like `XGBRegressor` to follow the sklearn convention. (#7526)
- More work on Python type hints. (#7432, #7348, #7338, #7513, #7707)
- Support the latest pandas Index type. (#7595)
- Fix the "Feature shape mismatch" error on the s390x platform. (#7715)
- Fix using feature names for constraints with multiple groups. (#7711)
- We clarified the behavior of the callback function when it contains mutable state. (#7685)
- Lastly, there are some code cleanups and maintenance work. (#7585, #7426, #7634, #7665, #7667, #7377, #7360, #7498, #7438, #7667, #7752, #7749, #7751)
Changes in the Dask interface
- The Dask module now supports a user-supplied host IP and port address for the scheduler node. Please see the introduction and
...
Release candidate of version 1.6.0
1.5.2 Patch Release
This is a patch release for compatibility with latest dependencies and bug fixes.
- [dask] Fix asyncio with latest dask and distributed.
- [R] Fix single sample SHAP prediction.
- [Python] Update python classifier to indicate support for latest Python versions.
- [Python] Fix with latest mypy and pylint.
- Fix indexing type for bitfield, which may affect missing value and categorical data.
- Fix `num_boosted_rounds` for the linear model.
- Fix early stopping with the linear model.
1.5.1 Patch Release
This is a patch release for compatibility with the latest dependencies and bug fixes. Also, all GPU-compatible binaries are built with CUDA 11.0.
- [Python] Handle missing values in dataframes with category dtype. (#7331)
- [R] Fix R CRAN failures about prediction and some compiler warnings.
- [JVM packages] Fix compatibility with latest Spark. (#7438, #7376)
- Support building with CTK 11.5. (#7379)
- Check user input for iteration in inplace predict.
- Handle the `OMP_THREAD_LIMIT` environment variable.
- [doc] Fix broken links. (#7341)
Artifacts
You can verify the downloaded packages by running this on your Unix shell:
echo "<hash> <artifact>" | shasum -a 256 --check
3a6cc7526c0dff1186f01b53dcbac5c58f12781988400e2d340dda61ef8d14ca xgboost_r_gpu_linux_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
6f74deb62776f1e2fd030e1fa08b93ba95b32ac69cc4096b4bcec3821dd0a480 xgboost_r_gpu_win64_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
565dea0320ed4b6f807dbb92a8a57e86ec16db50eff9a3f405c651d1f53a259d xgboost.tar.gz
Release 1.5.0 stable
This release comes with many exciting new features and optimizations, along with some bug
fixes. We will describe the experimental categorical data support and the external memory
interface independently. Package-specific new features will be listed in respective
sections.
Development on categorical data support
In version 1.3, XGBoost introduced an experimental feature for handling categorical data natively, without one-hot encoding. XGBoost can fit categorical splits in decision trees. (Currently, the generated splits are of the form `x \in {v}`, where the input is compared to a single category value. A future version of XGBoost will generate splits that compare the input against a list of multiple category values.)
Most of the other features, including prediction, SHAP value computation, feature importance, and model plotting, were revised to natively handle categorical splits. Also, all Python interfaces, including the native interface with and without quantized `DMatrix`, the scikit-learn interface, and the Dask interface, now accept categorical data with a wide range of supported data structures, including numpy/cupy arrays and cuDF/pandas/modin dataframes. In practice, the following are required for enabling categorical data support during training:
- Use the Python package.
- Use `gpu_hist` to train the model.
- Use the JSON model file format for saving the model.
Once the model is trained, it can be used with most of the features that are available on
the Python package. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/categorical.html
Related PRs: (#7011, #7001, #7042, #7041, #7047, #7043, #7036, #7054, #7053, #7065, #7213, #7228, #7220, #7221, #7231, #7306)
Next steps:
- Revise the CPU training algorithm to handle categorical data natively and generate categorical splits
- Extend the CPU and GPU algorithms to generate categorical splits of the form `x \in S`, where the input is compared with multiple category values. (#7081)
External memory
This release features a brand-new interface and implementation for external memory (also known as out-of-core training). (#6901, #7064, #7088, #7089, #7087, #7092, #7070, #7216) The new implementation leverages the data iterator interface, which is currently used to create `DeviceQuantileDMatrix`. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html#data-iterator
During the development of this new interface, `lz4` compression was removed. (#7076)
Please note that external memory support is still experimental and not ready for
production use yet. All future development will focus on this new interface and users are
advised to migrate. (You are using the old interface if you are using a URL suffix to use
external memory.)
New features in Python package
- Support the numpy array interface and all numeric types from numpy in `DMatrix` construction and `inplace_predict`. (#6998, #7003) XGBoost no longer makes a data copy when the input is a numpy array view.
- The early stopping callback in Python has a new `min_delta` parameter to control the stopping behavior. (#7137)
- The Python package now supports calculating feature scores for the linear model, which is also available in the R package. (#7048)
- The Python interface now supports configuring constraints using feature names instead of feature indices.
- Type hint support for more Python code, including the scikit-learn interface and the rabit module. (#6799, #7240)
- Add a tutorial for XGBoost-Ray. (#6884)
New features in R package
- In 1.4 we added a new prediction function in the C API, which is used by the Python package. This release revises the R package to use the new prediction function as well. A new parameter `iteration_range` for the predict function is available, which can be used to specify the range of trees for running prediction. (#6819, #7126)
- The R package now supports the `nthread` parameter in `DMatrix` construction. (#7127)
New features in JVM packages
- Support GPU dataframes and `DeviceQuantileDMatrix`. (#7195) Constructing `DMatrix` with GPU data structures and the interface for quantized `DMatrix` were first introduced in the Python package and are now available in the xgboost4j package.
- JVM packages now support saving and getting early stopping attributes. (#7095) Here is a quick example in Java. (#7252)
General new features
- We now have a pre-built binary package for R on Windows with GPU support. (#7185)
- CUDA compute capability 86 is now part of the default CMake build configuration, with newly added support for CUDA 11.4. (#7131, #7182, #7254)
- XGBoost can be compiled using the system CUB provided by CUDA 11.x installations. (#7232)
Optimizations
The performance of both `hist` and `gpu_hist` has been significantly improved in 1.5 with the following optimizations:
- GPU multi-class model training now supports the prediction cache. (#6860)
- GPU histogram building is sped up, and the overall training time is 2-3 times faster on large datasets. (#7180, #7198) In addition, we removed the parameter `deterministic_histogram`; the GPU algorithm is now always deterministic.
- CPU hist has an optimized procedure for data sampling. (#6922)
- More performance optimizations in regression and binary classification objectives on CPU. (#7206)
- Tree model dump is now performed in parallel. (#7040)
Breaking changes
- `n_gpus` was deprecated in the 1.0 release and is now removed.
- Feature grouping in the CPU hist tree method is removed; it was disabled long ago. (#7018)
- The C API for quantile DMatrix is changed to be consistent with the new external memory implementation. (#7082)
Notable general bug fixes
- XGBoost no longer changes the global CUDA device ordinal when `gpu_id` is specified. (#6891, #6987)
- Fix the `gamma` negative log-likelihood evaluation metric. (#7275)
- Fix integer values of `verbose_eval` for the `xgboost.cv` function in Python. (#7291)
- Remove an extra sync in CPU hist for dense data, which could lead to incorrect tree node statistics. (#7120, #7128)
- Fix a bug in GPU hist when the data size is larger than `UINT32_MAX` with missing values. (#7026)
- Fix a thread safety issue in prediction with the `softmax` objective. (#7104)
- Fix a thread safety issue in CPU SHAP value computation. (#7050) Please note that all prediction functions in Python are thread-safe.
- Fix model slicing. (#7149, #7078)
- Work around a bug in old GCC which can lead to a segfault during construction of DMatrix. (#7161)
- Fix histogram truncation in GPU hist, which can lead to slightly-off results. (#7181)
- Fix loading GPU linear model pickle files on CPU-only machines. (#7154)
- Check whether an input value is duplicated when the CPU quantile queue is full. (#7091)
- Fix parameter loading with training continuation. (#7121)
- Fix the CMake interface for exposing the C library by specifying dependencies. (#7099)
- Callbacks and early stopping are explicitly disabled for the scikit-learn interface random forest estimators. (#7236)
- Fix a compilation error on x86 (32-bit machines). (#6964)
- Fix CPU memory usage with extremely sparse datasets. (#7255)
- Fix a bug in the GPU multi-class AUC implementation with weighted data. (#7300)
Python package
Other than the items mentioned in the previous sections, there are some Python-specific
improvements.
- Change the development release postfix to `dev`. (#6988)
- Fix early stopping behavior with the MAPE metric. (#7061)
- Fixed an incorrect feature mismatch error message. (#6949)
- Add `predictor` to the sklearn constructor. (#7000, #7159)
- Re-enable feature validation in `predict_proba`. (#7177)
- The scikit-learn interface regression estimator now passes the scikit-learn estimator check and is fully compatible with scikit-learn utilities. `__sklearn_is_fitted__` is implemented as part of the changes. (#7130, #7230)
- Conform to the latest pylint. (#7071, #7241)
- Support the latest pandas range index in DMatrix construction. (#7074)
- Fix DMatrix construction from pandas series. (#7243)
- Fix typos and grammatical mistakes in error messages. (#7134)
- [dask] Disable work stealing explicitly for training tasks. (#6794)
- [dask] Set the dataframe index in predict. (#6944)
- [dask] Fix prediction on dataframes with latest dask. (#6969)
- [dask] Fix dask predict on `DaskDMatrix` with `iteration_range`. (#7005)
- [dask] Disallow importing non-dask estimators from xgboost.dask. (#7133)
R package
Improvements other than new features on R package:
- Optimization for updating R handles in-place (#6903)
- Removed the magrittr dependency. (#6855, #6906, #6928)
- The R package now hides all C++ symbols to avoid conflicts. (#7245)
- Other maintenance including code cleanups, document updates. (#6863, #6915, #6930, #6966, #6967)
JVM packages
Improvements other than new features on JVM packages:
- Constructors with implicit missing value are deprecated due to confusing behaviors. (#7225)
- Reduce scala-compiler, scalatest dependency scopes (#6730)
- Making the Java library loader emit helpful error messages on missing dependencies. (#6926)
- JVM packages now use the Python tracker in XGBoost instead of dmlc. The tracker in XGBoost is shared between the JVM packages and Python Dask and enjoys better maintenance. (#7132)
- Fix "key not found: train" error. (#6842)
- Fix model loading from stream. (#7067)
General document improvements
Release candidate of version 1.5.0
1.4.2 Patch Release
This is a patch release for Python package with following fixes:
- Handle the latest version of `cupy.ndarray` in `inplace_predict`. (#6933)
- Ensure the output array from `predict_leaf` is `(n_samples, )` when there is only 1 tree. 1.4.0 outputs `(n_samples, 1)`. (#6889)
- Fix empty dataset handling with multi-class AUC. (#6947)
- Handle object type from pandas in `inplace_predict`. (#6927)
You can verify the downloaded source code xgboost.tar.gz by running this on your unix shell:
echo "3ffd4a90cd03efde596e51cadf7f344c8b6c91aefd06cc92db349cd47056c05a *xgboost.tar.gz" | shasum -a 256 --check
1.4.1 Patch Release
This is a bug fix release.
- Fix GPU implementation of AUC on some large datasets. (#6866)
You can verify the downloaded source code xgboost.tar.gz by
running this on your unix shell:
echo "f3a37e5ddac10786e46423db874b29af413eed49fd9baed85035bbfee6fc6635 *xgboost.tar.gz" | shasum -a 256 --check
Release 1.4.0 stable
Introduction of pre-built binary package for R, with GPU support
Starting with release 1.4.0, users now have the option of installing `{xgboost}` without having to build it from source. This is particularly advantageous for users who want to take advantage of the GPU algorithm (`gpu_hist`), as previously they'd have to build `{xgboost}` from source using CMake and NVCC. Now installing `{xgboost}` with GPU support is as easy as: `R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz`. (#6827)
See the instructions at https://xgboost.readthedocs.io/en/latest/build.html
Improvements on prediction functions
XGBoost has many prediction types, including SHAP value computation and inplace prediction. In 1.4 we overhauled the underlying prediction functions for the C API and Python API with a unified interface. (#6777, #6693, #6653, #6662, #6648, #6668, #6804)
- Starting with 1.4, sklearn interface prediction uses inplace predict by default when the input data is supported.
- Users can use inplace predict with the `dart` booster and enable GPU acceleration just like `gbtree`.
- All prediction functions with tree models are now thread-safe. Inplace predict is improved with `base_margin` support.
- A new set of C predict functions is exposed in the public interface.
- A user-visible change is a newly added parameter called `strict_shape`. See https://xgboost.readthedocs.io/en/latest/prediction.html for more details.
Improvement on Dask interface
- Starting with 1.4, the Dask interface is considered feature-complete, which means all of the models found in the single-node Python interface are now supported in Dask, including but not limited to ranking and random forest. Also, the prediction function is significantly faster and supports SHAP value computation.
  - Most of the parameters found in the single-node sklearn interface are supported by the Dask interface. (#6471, #6591)
  - Implements learning to rank. On the Dask interface, we use the newly added support for query ID to enable group structure. (#6576)
  - The Dask interface has Python type hints support. (#6519)
  - All models can be safely pickled. (#6651)
  - Random forest estimators are now supported. (#6602)
  - SHAP value computation is now supported. (#6575, #6645, #6614)
  - Evaluation results are printed on the scheduler process. (#6609)
  - `DaskDMatrix` (and the device quantile dmatrix) now accepts all meta-information. (#6601)
- Prediction optimization. We enhanced and sped up the prediction function for the Dask interface. See the latest Dask tutorial page in our document for an overview of how you can optimize it even further. (#6650, #6645, #6648, #6668)
- Bug fixes
- Other improvements on documents, blogs, tutorials, and demos. (#6389, #6366, #6687, #6699, #6532, #6501)
Python package
With changes from Dask and general improvements on prediction, we have made some enhancements to the general Python interface and IO for booster information. Starting from 1.4, booster feature names and types can be saved into the JSON model. Also, some model attributes like `best_iteration` and `best_score` are restored upon model load. On the sklearn interface, some attributes are now implemented as Python object properties with better documentation.
- Breaking change: All `data` parameters in prediction functions are renamed to `X` for better compliance with the sklearn estimator interface guidelines.
- Breaking change: XGBoost used to generate some pseudo feature names with `DMatrix` when inputs like `np.ndarray` don't have column names. The procedure is removed to avoid conflicts with other inputs. (#6605)
- Early stopping with training continuation is now supported. (#6506)
- Optional imports for Dask and cuDF are now lazy. (#6522)
- As mentioned in the prediction improvement summary, the sklearn interface uses inplace prediction whenever possible. (#6718)
- Booster information like feature names and feature types is now saved into the JSON model file. (#6605)
- All `DMatrix` interfaces, including `DeviceQuantileDMatrix` and counterparts in the Dask interface (as mentioned in the Dask changes summary), now accept all the meta-information like `group` and `qid` in their constructors for better consistency. (#6601)
- Booster attributes are restored upon model load, so users don't have to call `attr` manually. (#6593)
- On the sklearn interface, all models accept `base_margin` for evaluation datasets. (#6591)
- Improvements to the setup script, including a smaller sdist size and faster installation if the C++ library is already built. (#6611, #6694, #6565)
- Bug fixes for the Python package:
  - Don't validate features when the number of rows is 0. (#6472)
  - Move metric configuration into the booster. (#6504)
  - Calling `XGBModel.fit()` should clear the Booster by default. (#6562)
  - Support `_estimator_type`. (#6582)
  - [dask, sklearn] Fix predict proba. (#6566, #6817)
  - Restore unknown data support. (#6595)
  - Fix learning rate scheduler with cv. (#6720)
  - Fix a small typo in the sklearn documentation. (#6717)
  - [python-package] Fix class Booster: feature_types = None. (#6705)
  - Fix divide by 0 in feature importance when no split is found. (#6676)
JVM package
- [jvm-packages] fix early stopping doesn't work even without custom_eval setting (#6738)
- fix potential TaskFailedListener's callback won't be called (#6612)
- [jvm] Add ability to load booster direct from byte array (#6655)
- [jvm-packages] JVM library loader extensions (#6630)
R package
- R documentation: Make construction of DMatrix consistent.
- Fix R documentation for xgb.train. (#6764)
ROC-AUC
We re-implemented the ROC-AUC metric in XGBoost. The new implementation supports
multi-class classification and has better support for learning to rank tasks that are not
binary. Also, it has a better-defined average on distributed environments with additional
handling for invalid datasets. (#6749, #6747, #6797)
Global configuration
Starting from 1.4, XGBoost's Python, R and C interfaces support a new global configuration model where users can specify some global parameters. Currently, the supported parameters are `verbosity` and `use_rmm`. The latter is experimental; see the rmm plugin demo and related README file for details. (#6414, #6656)
Other new features
- Better handling for input data types that support `__array_interface__`. For some data types, including GPU inputs and `scipy.sparse.csr_matrix`, XGBoost employs `__array_interface__` for processing the underlying data. Starting from 1.4, XGBoost can accept arbitrary array strides (which means column-major is supported) without making data copies, potentially reducing memory consumption significantly. Also, version 3 of `__cuda_array_interface__` is now supported. (#6776, #6765, #6459, #6675)
- Improved parameter validation: feeding XGBoost parameters that contain whitespace now triggers an error. (#6769)
- For the Python and R packages, file paths containing the home indicator `~` are supported.
- As mentioned in the Python changes summary, the JSON model can now save feature information of the trained booster. The JSON schema is updated accordingly. (#6605)
- Development of categorical data support continues, with newly added weighted data support and `dart` booster support. (#6508, #6693)
- As mentioned in the Dask change summary, ranking now supports the `qid` parameter for query groups. (#6576)
- `DMatrix.slice` can now consume a numpy array. (#6368)
Other breaking changes
- Aside from the feature name generation, there are 2 breaking changes:
CPU Optimization
- Aside from the general changes to the predict function, some optimizations are applied to the CPU implementation. (#6683, #6550, #6696, #6700)
- Performance for sampling initialization in `hist` is also improved. (#6410)
Notable fixes in the core library
These fixes do not reside in particular language bindings:
- Fixes for gamma regression. This includes checking for invalid input values, fixes for the gamma deviance metric, and a better floating point guard for the gamma negative log-likelihood metric. (#6778, #6537, #6761)
- Random forest with `gpu_hist` might generate low accuracy in previous versions. (#6755)
- Fix a bug in GPU sketching when the data size exceeds the limit of a 32-bit integer. (#6826)
- Memory consumption fix for row-major adapters. (#6779)
- Don't estimate the sketch batch size when rmm is used. (#6807, #6830)
- Fix in-place predict with missing values. (#6787)
- Re-introduce the double buffer in UpdatePosition to fix a performance regression in `gpu_hist`. (#6757)
- Pass the correct split_type to the GPU predictor. (#6491)
- Fix DMatrix feature names/types IO. (#6507)
- Use a view for `SparsePage` exclusively to avoid some data access races. (#6590)
- Check for invalid data. (#6742)
- Fix relocatable include in CMakeLists. (#6734, #6737)
- Fix DMatrix slice with feature types. (#6689)
Other deprecation notices:
- This release will be the last release to support CUDA 10.0. (#6642)
- Starting in the next release, the Python package will require pip 19.3+ due to the use of the manylinux2014 tag. Also, CentOS 6, RHEL 6 and other old distributions will not be supported.
Known issue:
MacOS build of the JVM packages doesn't support multi-threading out of the box. To enable
mul...
1.3.3 Patch Release
- Fix regression on `best_ntree_limit`. (#6616)