Upstream updates + data set merging + cost effective gradient boosting (#2)

* [ci] removed temp brew hotfix and deprecated sudo option (microsoft#1951)

* removed brew hotfix and deprecated sudo option on Travis

* removed brew hotfix on Azure

* updated Boost docs (microsoft#1955)

* removed warnings about types in comparison ([-Wsign-compare]) (microsoft#1953)

* removed comparison warning

* fixed spacing

* [docs] ask to provide LightGBM version for issue (microsoft#1958)

* [R] Fix multiclass demo (microsoft#1940)

* Fix multiclass custom objective demo

* Use option not to boost from average instead of setting init score explicitly

* Reference microsoft#1846 when turning off boost_from_average

* Add trailing whitespace

* [R] Correcting lgb.prepare output comment (microsoft#1831)

* Correcting lgb.prepare output comment

* updated Roxygen files

* [docs] bump xcode version in docs (microsoft#1952)

* fix typo

* [docs] Added the links to the libraries used (microsoft#1962)

* Added links to the libraries used.

* Fixing the header

* Fixes

* ot -> to

* [docs] fixed minor typos in documentation (microsoft#1959)

* fixed minor typos in documentation

* fixed typo in gpu_tree_learner.cpp

* Update .gitignore

* support to override some parameters in Dataset (microsoft#1876)

* add warnings for override parameters of Dataset

* fix pep8

* add feature_penalty

* refactor

* add R's code

* Update basic.py

* Update basic.py

* fix parameter bug

* Update lgb.Dataset.R

* fix a bug

* Fix build on macOS Mojave (microsoft#1923)

* Fix build on macOS Mojave

Fixed microsoft#1898

- https://iscinumpy.gitlab.io/post/omp-on-high-sierra/
- https://cliutils.gitlab.io/modern-cmake/chapters/packages/OpenMP.html
- Homebrew/homebrew-core#20589

* update setup.py

* update docs

* fix setup.py

* update docs

* update docs

* update setup.py

* update docs

* [tests][python] added tests for metrics' behavior and fixed case for multiclass task with custom objective (microsoft#1954)

* added metrics test for standard interface

* simplified code

* less trees

* less trees

* use dummy custom objective and metric

* added tests for multiclass metrics aliases

* fixed bug in case of custom obj and num_class > 1

* added metric test for sklearn wrapper

* [python][R][docs] added possibility to install with Visual Studio 2019 Preview (microsoft#1956)

* Found error from microsoft#1939 (microsoft#1974)

* fix more edge cases in mape (microsoft#1977)

* fix R's overflow (microsoft#1960)

* [tests][python] added test for huge string model (microsoft#1964)

* added test for huge string model

* fixed tree sizes field

* simplified model structure

* fixed test and added try/except

* fix nan in eval results (microsoft#1973)

* always save the score of the first round in early stopping

fix microsoft#1971

* avoid using std::log on non-positive numbers

* remove unnecessary changes

* add tests

* Update test_sklearn.py

* enhanced tests

* fix microsoft#1981

* [python] added OpenMP options for python-package installation (microsoft#1975)

* added OpenMP options for python-package installation

* fixed grammar typo

* improved model loading routines (microsoft#1979)

* [ci] refined command status check  (microsoft#1980)

* refined command status check

* refined Appveyor

* redirect all warnings to stdout

* cpplint whitespaces and new lines (microsoft#1986)

* fix microsoft#1994

* [docs] Fixed OpenCL Debian package name typo (microsoft#1995)

[docs] Fixed OpenCL Debian package name typo

* [python] convert datatable to numpy directly (microsoft#1970)

* convert datatable to numpy directly

* fix according to comments

* updated more docstrings

* simplified isinstance check

* Update compat.py

* [R-package] Fix demos not using lgb.Dataset.create.valid (microsoft#1993)

* Hand edit broken commit

* Hand edit broken commit

* Hand edit broken commit

* Hand edit broken commit

* 2.2.3 release (microsoft#1987)

* Update DESCRIPTION

* Update DESCRIPTION

* update version number at master branch (microsoft#1996)

* Update VERSION.txt

* Update .appveyor.yml

* Update DESCRIPTION

* Initial attempt to implement appending features in-memory to another data set

The intent is for this to enable munging files together easily, without needing to round-trip via numpy or write multiple copies to disk.
In turn, that enables working more efficiently with data sets that were written separately.
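The core of the merge can be sketched in a few lines of Python. This is a hypothetical, simplified model of the C++ side (datasets here are plain dicts, not real `Dataset` objects); the actual implementation moves FeatureGroup pointers, which is why the donor dataset becomes invalid afterwards:

```python
def add_features_from(target, other):
    """Append `other`'s feature columns to `target` in place, row-aligned.

    Both arguments are dicts with 'num_data' (row count) and
    'feature_groups' (a list of per-feature column lists).
    """
    if target["num_data"] != other["num_data"]:
        raise ValueError("datasets must have the same number of rows")
    # Move, not copy: the feature groups now belong to `target`,
    # so `other` must not be used after this call.
    target["feature_groups"].extend(other["feature_groups"])
    other["feature_groups"] = None  # invalidate the donor
    return target

a = {"num_data": 3, "feature_groups": [[1, 2, 3]]}
b = {"num_data": 3, "feature_groups": [[4, 5, 6], [7, 8, 9]]}
add_features_from(a, b)
print(len(a["feature_groups"]))  # → 3
```

No numpy round-trip and no disk copy: only references to the existing columns change owner.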

* Implement Dataset.dump_text, and fix small bug in appending of group bin boundaries.

Dumping to text enables us to compare results, without having to worry about issues like features being reordered.

* Add basic tests for validation logic for add_features_from.

* Remove various internal mapping items from dataset text dumps

These are too sensitive to the exact feature order chosen, which is not visible to the user.
Including them in tests appears unnecessary, as the data dumping code should provide enough coverage.

* Add test that add_features_from results in identical data sets according to dump_text.

* Add test that booster behaviour after using add_features_from matches that of training on the full data

This checks:
- That training after add_features_from works at all
- That add_features_from does not cause training to misbehave

* Expose feature_penalty and monotone_types/constraints via get_field

These getters allow us to check that add_features_from does the right thing with these vectors.

* Add tests that add_features correctly handles feature_penalty and monotone_constraints.

* Ensure add_features_from properly frees the added dataset and add unit test for this

Since add_features_from moves the feature group pointers from the added dataset to the dataset being added to, the added dataset is invalid after the call.
We must ensure we do not try to access this handle.

* Remove some obsolete TODOs

* Tidy up DumpTextFile by using a single iterator for each feature

These iterators were also passed around as raw pointers and never freed; that is now fixed.

* Factor out offsetting logic in AddFeaturesFrom
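The offsetting logic amounts to shifting the added dataset's group bin boundaries past the target's last boundary, so the merged bin numbering stays contiguous. A minimal sketch (the helper name is hypothetical; boundaries here are cumulative bin counts per feature group):

```python
def offset_bin_boundaries(target_boundaries, added_boundaries):
    """Append `added_boundaries` to `target_boundaries`, shifting each
    added boundary by the target's current total bin count."""
    offset = target_boundaries[-1] if target_boundaries else 0
    return target_boundaries + [b + offset for b in added_boundaries]

# target owns bins [0, 4) and [4, 7); the added group owns bins [0, 3)
merged = offset_bin_boundaries([4, 7], [3])
print(merged)  # → [4, 7, 10]
```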

* Remove obsolete TODO

* Remove another TODO

This one is debatable: test code can be a bit messy and duplicate-heavy, and factoring it out tends to end badly.
Leaving this for now; will revisit if adding more tests later becomes a mess.

* Add documentation for newly-added methods.

* Initial work towards add_data_from

This currently only merges the feature groups and updates num_data_.
It does not deal with Metadata or non-dense bins yet.

* Fix bug where dense bin copy of num_data_ wasn't updated

* Small bug fix in dense_bin.hpp, initial implementation of Merge for 4-bits bin.
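Merging 4-bit bins is fiddlier than the byte-per-row dense case because two rows share one byte, so the appended values must be re-packed starting at the target's next free nibble. A rough Python model of that repacking (illustrative only, not the actual C++ code):

```python
def get_nibble(data, idx):
    """Read the 4-bit bin value for row `idx` from a packed bytearray."""
    byte = data[idx >> 1]
    return byte & 0x0F if idx % 2 == 0 else byte >> 4

def merge_4bit(dst, dst_rows, src, src_rows):
    """Append src's 4-bit values after dst's, repacking nibble alignment."""
    out = bytearray((dst_rows + src_rows + 1) // 2)
    pos = 0
    for data, rows in ((dst, dst_rows), (src, src_rows)):
        for i in range(rows):
            v = get_nibble(data, i)
            if pos % 2 == 0:
                out[pos >> 1] |= v        # low nibble
            else:
                out[pos >> 1] |= v << 4   # high nibble
            pos += 1
    return out

dst = bytearray([0x21])         # rows with bin values 1, 2
src = bytearray([0x43, 0x05])   # rows with bin values 3, 4, 5
merged = merge_4bit(dst, 2, src, 3)
print([get_nibble(merged, i) for i in range(5)])  # → [1, 2, 3, 4, 5]
```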

* Add unit test for dense bin case of add_data_from, and refactor tests slightly.

* Initial implementation of Merge for sparse bins and unit tests for it.

* Ensure we test merging sparse data sets after loading them from binary

This seems silly, but push_buffers_ aren't populated if the data was loaded from a binary file.
This forces us to reconstruct the index,value form of the data in the target bin before merging.
Adding this test ensures that code is covered.
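Once the added bin is back in (row, value) form, the sparse merge itself is simple; the work in this commit is mostly about being able to reconstruct that form when the push buffers are gone. A toy sketch (names are hypothetical):

```python
def merge_sparse(dst_pairs, dst_rows, src_pairs):
    """Merge a sparse bin into dst.

    dst_pairs: list of (row_index, bin_value) for the target bin.
    src_pairs: the added bin in the same form — reconstructed from its
               compressed storage when push buffers are unavailable,
               e.g. after the dataset was loaded from a binary file.
    Row indices of the added data are shifted past dst's rows.
    """
    return dst_pairs + [(row + dst_rows, val) for row, val in src_pairs]

merged = merge_sparse([(0, 2), (5, 1)], 8, [(1, 3), (4, 2)])
print(merged)  # → [(0, 2), (5, 1), (9, 3), (12, 2)]
```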

* Add labels to text dumps.

* Add weights to text dumps.

* Ensure add_data_from properly merges labels.

* Ensure metadata appends weights correctly, and unit test for it.

* Implement metadata merging for query bits

This is currently not covered by unit tests.

* Check datasets are aligned before merging.

This catches the majority of obvious errors, e.g. not having the same number of features or having different bin mappings.
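A Python sketch of the kind of alignment check meant here (hypothetical structure: each dataset carries one bin-mapper description per feature; the real code compares BinMapper objects in C++):

```python
def check_aligned(a, b):
    """Raise if two datasets cannot be merged row-wise into one."""
    if len(a["bin_mappers"]) != len(b["bin_mappers"]):
        raise ValueError("datasets must have the same number of features")
    for i, (ma, mb) in enumerate(zip(a["bin_mappers"], b["bin_mappers"])):
        if ma != mb:
            raise ValueError("feature %d: bin mappings differ" % i)

a = {"bin_mappers": [[0.5, 1.5], [10.0, 20.0]]}
b = {"bin_mappers": [[0.5, 1.5], [10.0, 20.0]]}
check_aligned(a, b)  # same layout: no error raised
```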

* Add test that booster behaviour is preserved by add_data_from.

* Add configuration parameters for CEGB.

* Add skeleton CEGB tree learner

Like the original CEGB version, this inherits from SerialTreeLearner.
Currently, it changes nothing from the original.

* Track features used in CEGB tree learner.

* Pull CEGB tradeoff and coupled feature penalty from config.

* Implement finding best splits for CEGB

This is heavily based on the serial version, but adds the coupled feature penalties.

* Set proper defaults for cegb parameters.

* Ensure sanity checks don't switch off CEGB.

* Implement per-data-point feature penalties in CEGB.

* Implement split penalty and remove unused parameters.
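Putting the pieces together, the penalized gain used when choosing splits looks roughly like the following. This is an illustrative sketch of the CEGB idea, not LightGBM's exact code; the parameter names are hypothetical stand-ins for the cegb_* config options:

```python
def cegb_gain(split_gain, feature, n_data_in_split, features_used,
              tradeoff, penalty_split, penalty_coupled, penalty_lazy):
    """Cost-penalized split gain, roughly following CEGB:
    - the split penalty is paid for every split;
    - the coupled penalty is paid once, the first time a feature is
      used anywhere in the model;
    - the lazy penalty is paid per data point reaching the split.
    """
    cost = penalty_split
    if feature not in features_used:
        cost += penalty_coupled[feature]
    cost += penalty_lazy[feature] * n_data_in_split
    return split_gain - tradeoff * cost

# feature 0 not used yet: cost = 1.0 + 2.0 + 0.5 * 2 = 4.0
print(cegb_gain(10.0, 0, 2, set(), 1.0, 1.0, {0: 2.0}, {0: 0.5}))  # → 6.0
```

With a high enough tradeoff, an expensive-but-slightly-better feature loses to a cheap one, which is the whole point of cost-efficient boosting.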

* Merge changes from CEGB tree learner into serial tree learner

* Represent features_used_in_data by a bitset, to reduce the memory overhead of CEGB, and add sanity checks for the lengths of the penalty vectors.
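The bitset replaces one bool (typically a byte) per entry with a single bit, an eightfold memory reduction for the per-data-point usage flags. A minimal Python equivalent of such a bitset:

```python
class Bitset:
    """Fixed-size set of flags stored one bit each in a bytearray."""

    def __init__(self, n_bits):
        self.data = bytearray((n_bits + 7) // 8)

    def set(self, i):
        self.data[i >> 3] |= 1 << (i & 7)

    def test(self, i):
        return bool(self.data[i >> 3] & (1 << (i & 7)))

used = Bitset(1000)
used.set(42)
print(used.test(42), used.test(43))  # → True False
```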
remcob-gr authored and alisterw committed Feb 13, 2019
1 parent e2d0812 commit ceff18c
Showing 121 changed files with 2,202 additions and 550 deletions.
7 changes: 4 additions & 3 deletions .appveyor.yml
@@ -1,4 +1,4 @@
-version: 2.2.3.{build}
+version: 2.2.4.{build}

 image: Visual Studio 2015
 platform: x64
@@ -30,7 +30,7 @@ install:
 - ps: $env:LGB_VER = (Get-Content VERSION.txt).trim()
 - conda config --set always_yes yes --set changeps1 no
 - conda update -q conda
-- conda create -q -n test-env python=%PYTHON_VERSION% matplotlib nose numpy pandas pytest python-graphviz scikit-learn scipy
+- conda create -q -n test-env python=%PYTHON_VERSION% matplotlib nose numpy pandas psutil pytest python-graphviz scikit-learn scipy
 - activate test-env

 build_script:
@@ -48,8 +48,9 @@ test_script:
 (Get-Content "plot_example.py").replace('graph.render(view=True)', 'graph.render(view=False)') | Set-Content "plot_example.py"
 - ps: >-
 foreach ($file in @(Get-ChildItem *.py)) {
+@("import sys, warnings", "warnings.showwarning = lambda message, category, filename, lineno, file=None, line=None: sys.stdout.write(warnings.formatwarning(message, category, filename, lineno, line))") + (Get-Content $file) | Set-Content $file
 python $file
-if ($LastExitCode -ne 0) { $host.SetShouldExit($LastExitCode) }
+if (!$?) { $host.SetShouldExit(-1) }
 } # run all examples
 - cd %APPVEYOR_BUILD_FOLDER%\examples\python-guide\notebooks
 - conda install -y -n test-env ipywidgets notebook
2 changes: 1 addition & 1 deletion .ci/test.sh
@@ -58,7 +58,7 @@ if [[ $TASK == "if-else" ]]; then
 exit 0
 fi

-conda install -q -y -n $CONDA_ENV matplotlib nose numpy pandas pytest python-graphviz scikit-learn scipy
+conda install -q -y -n $CONDA_ENV matplotlib nose numpy pandas psutil pytest python-graphviz scikit-learn scipy

 if [[ $OS_NAME == "macos" ]] && [[ $COMPILER == "clang" ]]; then
 sudo ln -sf `ls -d "$(brew --cellar libomp)"/*/lib`/* $CONDA_PREFIX/lib || exit -1 # fix "OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized." (OpenMP library conflict due to conda's MKL)
25 changes: 13 additions & 12 deletions .ci/test_windows.ps1
@@ -1,42 +1,43 @@
 function Check-Output {
-param( [int]$ExitCode )
-if ($ExitCode -ne 0) {
-$host.SetShouldExit($ExitCode)
+param( [bool]$success )
+if (!$success) {
+$host.SetShouldExit(-1)
 Exit -1
 }
 }

 if ($env:TASK -eq "regular") {
 mkdir $env:BUILD_SOURCESDIRECTORY/build; cd $env:BUILD_SOURCESDIRECTORY/build
-cmake -DCMAKE_GENERATOR_PLATFORM=x64 .. ; cmake --build . --target ALL_BUILD --config Release ; Check-Output $LastExitCode
+cmake -DCMAKE_GENERATOR_PLATFORM=x64 .. ; cmake --build . --target ALL_BUILD --config Release ; Check-Output $?
 cd $env:BUILD_SOURCESDIRECTORY/python-package
-python setup.py install --precompile ; Check-Output $LastExitCode
+python setup.py install --precompile ; Check-Output $?
 cp $env:BUILD_SOURCESDIRECTORY/Release/lib_lightgbm.dll $env:BUILD_ARTIFACTSTAGINGDIRECTORY
 cp $env:BUILD_SOURCESDIRECTORY/Release/lightgbm.exe $env:BUILD_ARTIFACTSTAGINGDIRECTORY
 }
 elseif ($env:TASK -eq "sdist") {
 cd $env:BUILD_SOURCESDIRECTORY/python-package
-python setup.py sdist --formats gztar ; Check-Output $LastExitCode
-cd dist; pip install @(Get-ChildItem *.gz) -v ; Check-Output $LastExitCode
+python setup.py sdist --formats gztar ; Check-Output $?
+cd dist; pip install @(Get-ChildItem *.gz) -v ; Check-Output $?
 }
 elseif ($env:TASK -eq "bdist") {
 cd $env:BUILD_SOURCESDIRECTORY/python-package
-python setup.py bdist_wheel --plat-name=win-amd64 --universal ; Check-Output $LastExitCode
-cd dist; pip install @(Get-ChildItem *.whl) ; Check-Output $LastExitCode
+python setup.py bdist_wheel --plat-name=win-amd64 --universal ; Check-Output $?
+cd dist; pip install @(Get-ChildItem *.whl) ; Check-Output $?
 cp @(Get-ChildItem *.whl) $env:BUILD_ARTIFACTSTAGINGDIRECTORY
 }

 $tests = $env:BUILD_SOURCESDIRECTORY + $(If ($env:TASK -eq "sdist") {"/tests/python_package_test"} Else {"/tests"}) # cannot test C API with "sdist" task
-pytest $tests ; Check-Output $LastExitCode
+pytest $tests ; Check-Output $?

 if ($env:TASK -eq "regular") {
 cd $env:BUILD_SOURCESDIRECTORY/examples/python-guide
 @("import matplotlib", "matplotlib.use('Agg')") + (Get-Content "plot_example.py") | Set-Content "plot_example.py"
 (Get-Content "plot_example.py").replace('graph.render(view=True)', 'graph.render(view=False)') | Set-Content "plot_example.py"
 foreach ($file in @(Get-ChildItem *.py)) {
-python $file ; Check-Output $LastExitCode
+@("import sys, warnings", "warnings.showwarning = lambda message, category, filename, lineno, file=None, line=None: sys.stdout.write(warnings.formatwarning(message, category, filename, lineno, line))") + (Get-Content $file) | Set-Content $file
+python $file ; Check-Output $?
 } # run all examples
 cd $env:BUILD_SOURCESDIRECTORY/examples/python-guide/notebooks
 conda install -y -n $env:CONDA_ENV ipywidgets notebook
-jupyter nbconvert --ExecutePreprocessor.timeout=180 --to notebook --execute --inplace *.ipynb ; Check-Output $LastExitCode # run all notebooks
+jupyter nbconvert --ExecutePreprocessor.timeout=180 --to notebook --execute --inplace *.ipynb ; Check-Output $? # run all notebooks
 }
2 changes: 2 additions & 0 deletions .github/ISSUE_TEMPLATE.md
@@ -14,6 +14,8 @@ CPU/GPU model:

 C++/Python/R version:

+LightGBM version or commit hash:
+
 ## Error message

 <!-- Paste error log here -->
3 changes: 3 additions & 0 deletions .gitignore
@@ -390,3 +390,6 @@ R-package/src/src/
 lightgbm_r/*
 lightgbm*.tar.gz
 lightgbm.Rcheck/
+
+# Files generated by aspell
+**/*.bak
2 changes: 0 additions & 2 deletions .travis.yml
@@ -1,5 +1,4 @@
 language: cpp
-sudo: required
 dist: trusty

 git:
@@ -38,7 +37,6 @@ matrix:
 before_install:
 - test -n $CC && unset CC
 - test -n $CXX && unset CXX
-- export HOMEBREW_LOGS="$HOME/brew_logs" # brew hotfix
 - export HOME_DIRECTORY="$HOME"
 - export BUILD_DIRECTORY="$TRAVIS_BUILD_DIR"
 - if [[ $TRAVIS_OS_NAME == "osx" ]]; then
3 changes: 1 addition & 2 deletions .vsts-ci.yml
@@ -107,7 +107,6 @@ jobs:
 inputs:
 updateConda: false
 - script: |
-echo "##vso[task.setvariable variable=HOMEBREW_LOGS]$AGENT_HOMEDIRECTORY/brew_logs"
 echo "##vso[task.setvariable variable=HOME_DIRECTORY]$AGENT_HOMEDIRECTORY"
 echo "##vso[task.setvariable variable=BUILD_DIRECTORY]$BUILD_SOURCESDIRECTORY"
 echo "##vso[task.setvariable variable=OS_NAME]macos"
@@ -147,7 +146,7 @@ jobs:
 createCustomEnvironment: true
 updateConda: true
 environmentName: $(CONDA_ENV)
-packageSpecs: 'python=$(PYTHON_VERSION) matplotlib nose numpy pandas pytest python-graphviz scikit-learn scipy'
+packageSpecs: 'python=$(PYTHON_VERSION) matplotlib nose numpy pandas psutil pytest python-graphviz scikit-learn scipy'
 createOptions: '-q'
 - powershell: $(Build.SourcesDirectory)/.ci/test_windows.ps1
 displayName: Test
4 changes: 2 additions & 2 deletions CMakeLists.txt
@@ -211,8 +211,8 @@ endif(USE_MPI)

 if(USE_OPENMP)
 if(CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
-TARGET_LINK_LIBRARIES(lightgbm ${OpenMP_libomp_LIBRARY})
-TARGET_LINK_LIBRARIES(_lightgbm ${OpenMP_libomp_LIBRARY})
+TARGET_LINK_LIBRARIES(lightgbm OpenMP::OpenMP_CXX)
+TARGET_LINK_LIBRARIES(_lightgbm OpenMP::OpenMP_CXX)
 endif()
 endif(USE_OPENMP)
4 changes: 2 additions & 2 deletions R-package/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: lightgbm
 Type: Package
 Title: Light Gradient Boosting Machine
-Version: 2.2.3
-Date: 2018-11-6
+Version: 2.2.4
+Date: 2019-02-05
 Authors@R: c(
 person("Guolin", "Ke", email = "guolin.ke@microsoft.com", role = c("aut", "cre")),
 person("Damien", "Soukhavong", email = "damien.soukhavong@skema.edu", role = c("ctb")),
4 changes: 4 additions & 0 deletions R-package/R/lgb.Dataset.R
@@ -492,6 +492,10 @@ Dataset <- R6::R6Class(
 update_params = function(params) {

 # Parameter updating
+if (!lgb.is.null.handle(private$handle)) {
+lgb.call("LGBM_DatasetUpdateParam_R", ret = NULL, private$handle, lgb.params2str(params))
+return(invisible(self))
+}
 private$params <- modifyList(private$params, params)
 return(invisible(self))
2 changes: 1 addition & 1 deletion R-package/demo/basic_walkthrough.R
@@ -95,7 +95,7 @@ print(paste("sum(abs(pred2-pred))=", sum(abs(pred2 - pred))))
 #--------------------Advanced features ---------------------------
 # To use advanced features, we need to put data in lgb.Dataset
 dtrain <- lgb.Dataset(data = train$data, label = train$label, free_raw_data = FALSE)
-dtest <- lgb.Dataset(data = test$data, label = test$label, free_raw_data = FALSE)
+dtest <- lgb.Dataset.create.valid(dtrain, data = test$data, label = test$label)

 #--------------------Using validation set-------------------------
 # valids is a list of lgb.Dataset, each of them is tagged with name
2 changes: 1 addition & 1 deletion R-package/demo/boost_from_prediction.R
@@ -5,7 +5,7 @@ require(methods)
 data(agaricus.train, package = "lightgbm")
 data(agaricus.test, package = "lightgbm")
 dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
-dtest <- lgb.Dataset(agaricus.test$data, label = agaricus.test$label)
+dtest <- lgb.Dataset.create.valid(dtrain, data = agaricus.test$data, label = agaricus.test$label)

 valids <- list(eval = dtest, train = dtrain)
 #--------------------Advanced features ---------------------------
22 changes: 11 additions & 11 deletions R-package/demo/categorical_features_prepare.R
@@ -31,24 +31,24 @@ str(bank)
 # For this task, we use lgb.prepare
 # The function transforms the data into a fittable data
 #
-# Classes 'data.table' and 'data.frame': 4521 obs. of 17 variables:
+# Classes data.table and 'data.frame': 4521 obs. of 17 variables:
 # $ age : int 30 33 35 30 59 35 36 39 41 43 ...
-# $ job : chr "unemployed" "services" "management" "management" ...
-# $ marital : chr "married" "married" "single" "married" ...
-# $ education: chr "primary" "secondary" "tertiary" "tertiary" ...
-# $ default : chr "no" "no" "no" "no" ...
+# $ job : num 11 8 5 5 2 5 7 10 3 8 ...
+# $ marital : num 2 2 3 2 2 3 2 2 2 2 ...
+# $ education: num 1 2 3 3 2 3 3 2 3 1 ...
+# $ default : num 1 1 1 1 1 1 1 1 1 1 ...
 # $ balance : int 1787 4789 1350 1476 0 747 307 147 221 -88 ...
-# $ housing : chr "no" "yes" "yes" "yes" ...
-# $ loan : chr "no" "yes" "no" "yes" ...
-# $ contact : chr "cellular" "cellular" "cellular" "unknown" ...
+# $ housing : num 1 2 2 2 2 1 2 2 2 2 ...
+# $ loan : num 1 2 1 2 1 1 1 1 1 2 ...
+# $ contact : num 1 1 1 3 3 1 1 1 3 1 ...
 # $ day : int 19 11 16 3 5 23 14 6 14 17 ...
-# $ month : chr "oct" "may" "apr" "jun" ...
+# $ month : num 11 9 1 7 9 4 9 9 9 1 ...
 # $ duration : int 79 220 185 199 226 141 341 151 57 313 ...
 # $ campaign : int 1 1 1 4 1 2 1 2 2 1 ...
 # $ pdays : int -1 339 330 -1 -1 176 330 -1 -1 147 ...
 # $ previous : int 0 4 1 0 0 3 2 0 0 2 ...
-# $ poutcome : chr "unknown" "failure" "failure" "unknown" ...
-# $ y : chr "no" "no" "no" "no" ...
+# $ poutcome : num 4 1 1 4 4 1 2 4 4 1 ...
+# $ y : num 1 1 1 1 1 1 1 1 1 1 ...
 bank <- lgb.prepare(data = bank)
 str(bank)
5 changes: 3 additions & 2 deletions R-package/demo/categorical_features_rules.R
@@ -70,8 +70,9 @@ my_data_test <- as.matrix(bank_test[, 1:16, with = FALSE])
 # The categorical features can be passed to lgb.train to not copy and paste a lot
 dtrain <- lgb.Dataset(data = my_data_train,
 label = bank_train$y)
-dtest <- lgb.Dataset(data = my_data_test,
-label = bank_test$y)
+dtest <- lgb.Dataset.create.valid(dtrain,
+data = my_data_test,
+label = bank_test$y)

 # We can now train a model
 model <- lgb.train(list(objective = "binary",
2 changes: 1 addition & 1 deletion R-package/demo/cross_validation.R
@@ -3,7 +3,7 @@ require(lightgbm)
 data(agaricus.train, package = "lightgbm")
 data(agaricus.test, package = "lightgbm")
 dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
-dtest <- lgb.Dataset(agaricus.test$data, label = agaricus.test$label)
+dtest <- lgb.Dataset.create.valid(dtrain, data = agaricus.test$data, label = agaricus.test$label)

 nrounds <- 2
 param <- list(num_leaves = 4,
2 changes: 1 addition & 1 deletion R-package/demo/early_stopping.R
@@ -6,7 +6,7 @@ data(agaricus.train, package = "lightgbm")
 data(agaricus.test, package = "lightgbm")

 dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
-dtest <- lgb.Dataset(agaricus.test$data, label = agaricus.test$label)
+dtest <- lgb.Dataset.create.valid(dtrain, data = agaricus.test$data, label = agaricus.test$label)

 # Note: for customized objective function, we leave objective as default
 # Note: what we are getting is margin value in prediction
21 changes: 13 additions & 8 deletions R-package/demo/multiclass_custom_objective.R
@@ -8,18 +8,21 @@ data(iris)
 # For instance: 0, 1, 2, 3, 4, 5...
 iris$Species <- as.numeric(as.factor(iris$Species)) - 1

-# We cut the data set into 80% train and 20% validation
+# Create imbalanced training data (20, 30, 40 examples for classes 0, 1, 2)
+train <- as.matrix(iris[c(1:20, 51:80, 101:140), ])
 # The 10 last samples of each class are for validation
-
-train <- as.matrix(iris[c(1:40, 51:90, 101:140), ])
 test <- as.matrix(iris[c(41:50, 91:100, 141:150), ])

 dtrain <- lgb.Dataset(data = train[, 1:4], label = train[, 5])
 dtest <- lgb.Dataset.create.valid(dtrain, data = test[, 1:4], label = test[, 5])
-valids <- list(test = dtest)
+valids <- list(train = dtrain, test = dtest)

 # Method 1 of training with built-in multiclass objective
+# Note: need to turn off boost from average to match custom objective
+# (https://github.com/Microsoft/LightGBM/issues/1846)
 model_builtin <- lgb.train(list(),
 dtrain,
+boost_from_average = FALSE,
 100,
 valids,
 min_data = 1,
@@ -29,7 +32,8 @@ model_builtin <- lgb.train(list(),
 metric = "multi_logloss",
 num_class = 3)

-preds_builtin <- predict(model_builtin, test[, 1:4], rawscore = TRUE)
+preds_builtin <- predict(model_builtin, test[, 1:4], rawscore = TRUE, reshape = TRUE)
+probs_builtin <- exp(preds_builtin) / rowSums(exp(preds_builtin))

 # Method 2 of training with custom objective function

@@ -64,7 +68,6 @@ custom_multiclass_metric = function(preds, dtrain) {
 return(list(name = "error",
 value = -mean(log(prob[cbind(1:length(labels), labels + 1)])),
 higher_better = FALSE))
-
 }

 model_custom <- lgb.train(list(),
@@ -78,8 +81,10 @@ model_custom <- lgb.train(list(),
 eval = custom_multiclass_metric,
 num_class = 3)

-preds_custom <- predict(model_custom, test[, 1:4], rawscore = TRUE)
+preds_custom <- predict(model_custom, test[, 1:4], rawscore = TRUE, reshape = TRUE)
+probs_custom <- exp(preds_custom) / rowSums(exp(preds_custom))

-identical(preds_builtin, preds_custom)
+# compare predictions
+stopifnot(identical(probs_builtin, probs_custom))
+stopifnot(identical(preds_builtin, preds_custom))
3 changes: 2 additions & 1 deletion R-package/man/lgb.Dataset.Rd

14 changes: 9 additions & 5 deletions R-package/man/lgb.cv.Rd

8 changes: 4 additions & 4 deletions R-package/man/lgb.train.Rd
2 changes: 1 addition & 1 deletion R-package/src/install.libs.R
@@ -53,7 +53,7 @@ if (!use_precompile) {
 } else {
 try_vs <- 0
 local_vs_def <- ""
-vs_versions <- c("Visual Studio 15 2017 Win64", "Visual Studio 14 2015 Win64")
+vs_versions <- c("Visual Studio 15 2017 Win64", "Visual Studio 14 2015 Win64", "Visual Studio 16 2019")
 for(vs in vs_versions){
 vs_def <- paste0(" -G \"", vs, "\"")
 tmp_cmake_cmd <- paste0(cmake_cmd, vs_def)
2 changes: 1 addition & 1 deletion README.md
@@ -130,4 +130,4 @@ Huan Zhang, Si Si and Cho-Jui Hsieh. "[GPU Acceleration for Large-scale Tree Boo
 License
 -------

-This project is licensed under the terms of the MIT license. See [LICENSE](https://github.com/Microsoft/LightGBM/blob/master/LICENSE) for addtional details.
+This project is licensed under the terms of the MIT license. See [LICENSE](https://github.com/Microsoft/LightGBM/blob/master/LICENSE) for additional details.
2 changes: 1 addition & 1 deletion VERSION.txt
@@ -1 +1 @@
-2.2.3
+2.2.4
2 changes: 1 addition & 1 deletion docs/Development-Guide.rst
@@ -73,7 +73,7 @@ Refer to `docs README <./README.rst>`__.
 C API
 -----

-Refere to the comments in `c\_api.h <https://github.com/Microsoft/LightGBM/blob/master/include/LightGBM/c_api.h>`__.
+Refer to the comments in `c\_api.h <https://github.com/Microsoft/LightGBM/blob/master/include/LightGBM/c_api.h>`__.

 High Level Language Package
 ---------------------------