[tests][python] added test for huge string model #1964
Conversation
@@ -828,3 +829,20 @@ def preprocess_data(dtrain, dtest, params):
     results = lgb.cv(params, dataset, num_boost_round=10, fpreproc=preprocess_data)
     self.assertIn('multi_logloss-mean', results)
     self.assertEqual(len(results['multi_logloss-mean']), 10)

+    @unittest.skipIf(psutil.virtual_memory().free / 1024 / 1024 / 1024 < 3, 'not enough RAM')
To be honest, I don't know whether it's cross-platform or not.
On my Windows machine with Python 3.6, sys.getsizeof(new_model_str)
shows approximately 2 GB.
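The ~2 GB figure can be sanity-checked at a small scale: for an ASCII str, CPython's sys.getsizeof reports the character count plus a fixed object overhead of a few dozen bytes (a rough sketch; the exact overhead varies between Python versions):

```python
import sys

# A 1 MB ASCII string stands in for the ~2 GB model string in the test.
s = 'x' * 1_000_000
print(len(s))            # number of characters
print(sys.getsizeof(s))  # len(s) plus a small fixed CPython object overhead
```

So a string slightly longer than 2**31 characters needs a bit over 2 GB of RAM just for itself, before any copies made while building it.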
Or maybe it's better to wrap it in
try:
    …
except MemoryError:
    pass
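A sketch of how such a guard could look inside a test (the body, threshold, and test name are placeholders, not the actual PR code; skipTest is used instead of a bare pass so the skip shows up in the test report):

```python
import unittest

class HugeModelTest(unittest.TestCase):
    def test_model_size_greater_than_max_int32(self):
        try:
            # Placeholder allocation; the real test builds a >2**31-char model string.
            model_str = 'x' * (2 ** 20)
        except MemoryError:
            self.skipTest('not enough RAM to build the model string')
        self.assertGreater(len(model_str), 0)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(HugeModelTest))
```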
It seems some tests throw a MemoryError...
Yeah!
I'll add try/except, but I don't know whether I should also bump the free memory requirement (and to what value). What do you think?
@StrikerRUS I think we can keep them both. Can we use a memory size larger than 4GB and smaller than 8GB, such as 7GB?
@guolinke OK, I agree with this. One more question: did you mean 7GB of free RAM or overall?
At present the test requires 3GB of free RAM, because the big string alone occupies 2GB.
Oh, I forgot it's a signed type. In that case, maybe a total of 4GB is enough for us.
https://github.com/Microsoft/LightGBM/blob/cbae0c1b1681dedd6a519759c13964f4d3ee7e35/tests/python_package_test/test_engine.py#L833
Unfortunately, it caused even more MemoryErrors. I'll try with 4 GB of free memory.
There is no difference in results for 4, 5, or 6 GB. Only a requirement of 7 GB of free memory makes this test be skipped, which results in green CI checks.
https://github.com/Microsoft/LightGBM/blob/e72ee0335cb5c05462ce3dddbe517d1eb4447bc8/tests/python_package_test/test_engine.py#L833
Seems that this skipIf is not actually useful...
It turned out that it's too much; I think it's caused by the fact that the calculation of the number of trees is performed via:

one_tree = model_str[model_str.find('Tree=1'):model_str.find('end of trees')]
one_tree_size = len(one_tree)
one_tree = one_tree.replace('Tree=1', 'Tree={}')
multiplier = int(2**31 / (one_tree_size + len(str(one_tree_size)) + 1))
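In that formula the 2**31-character budget is divided by the per-tree cost: the tree text itself plus that tree's entry in the tree_sizes header (the decimal digits of the size plus one separator character). With a hypothetical tree size, just to illustrate the arithmetic:

```python
# Hypothetical numbers, purely to illustrate the formula used in the test.
one_tree_size = 1500  # characters in one serialized tree (made-up value)
# Per-tree cost = tree text + its 'tree_sizes' entry (digits of the size + 1 separator).
per_tree_cost = one_tree_size + len(str(one_tree_size)) + 1
multiplier = int(2 ** 31 / per_tree_cost)
print(per_tree_cost)  # 1505
print(multiplier)     # 1426899
```

Note that this ignores the fact that 'Tree=N' labels grow wider as N grows, which is exactly the problem discussed below.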
@guolinke Now the test fails not with MemoryError, but with ci/test.sh: line 120: 4299 Killed pytest $BUILD_DIRECTORY/tests.
Should I add a magic number here to take into account the growing size of a tree (due to Tree=1, ..., 100, ..., 10000) and decrease the difference between the actual model size and the desired one? Or is there any other easy way to calculate the needed number of trees?
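One magic-number-free way to account for the growing width of the Tree=N label would be to iterate until the running total reaches the target. This is a sketch under the assumption that each copied tree differs from the 'Tree=1' template only in the digits of its index (trees_needed is a hypothetical helper, not code from the PR):

```python
def trees_needed(one_tree_size, target=2 ** 31, start_index=2):
    """Count the copied trees needed to reach `target` characters, charging each
    tree its base size plus the extra digits in its 'Tree=N' label (the template
    was built from 'Tree=1', which holds a single digit)."""
    total = 0
    n = 0
    index = start_index
    while total < target:
        extra_digits = len(str(index)) - 1  # digits beyond the one in 'Tree=1'
        total += one_tree_size + extra_digits
        n += 1
        index += 1
    return n

print(trees_needed(1500, target=100_000))  # small target for illustration: 67
```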
Actually, the tree_sizes field could be deleted directly. This will slow down model loading, but I think it is okay for a test.
BTW, I think the simplest solution is padding with many \r\n characters, as it will increase the length of the model string and will be ignored by model parsing.
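The padding idea in miniature (a small target length stands in for 2**31, and the model string here is a made-up stub, not a real LightGBM model):

```python
# Hypothetical miniature of the padding trick: grow a 'model string' to a
# target length with characters the parser is expected to ignore.
model_str = 'Tree=0\nleaf_value=0.5\nend of trees\n'
target_len = 1024  # stands in for 2**31 in the real test
padded = model_str + '\n' * (target_len - len(model_str))
print(len(padded))  # 1024
```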
Oh, tree_sizes can be simply removed, that's great!
Now, some fixed small number of trees (100) is appended to the model and then \n-padding is used to reach 2**31 total model size. Unfortunately, it causes weird memory consumption behaviour during the model loading process; please see the screenshot below:
Compare with the behaviour for a simply huge number of trees without \n-padding (multiplier=1370000 instead of 100, and the total model size is even bigger than with the \n-padding solution):
UPD: This leads to the test either being skipped or failing with MemoryError 😢
That is weird, it should be in this loop...
https://github.com/Microsoft/LightGBM/blob/b37065dbd5510079aac341a34895f4854c782fcb/src/boosting/gbdt_model_text.cpp#L442-L459
I tried to append \n characters at the model's end and got the same weird behavior.
index 049c682..b1791ea 100644
--- a/tests/python_package_test/test_engine.py
+++ b/tests/python_package_test/test_engine.py
@@ -843,10 +843,10 @@ class TestEngine(unittest.TestCase):
multiplier = 100
total_trees = multiplier + 2
new_model_str = (model_str[:model_str.find('tree_sizes')]
- + '\n' * (2**31 - one_tree_size * total_trees)
+ model_str[model_str.find('Tree=0'):model_str.find('end of trees')]
+ (one_tree * multiplier).format(*range(2, total_trees))
- + model_str[model_str.find('end of trees'):])
+ + model_str[model_str.find('end of trees'):]
+ + '\n' * (2**31 - one_tree_size * total_trees))
self.assertGreater(len(new_model_str), 2**31)
I tried to use spaces instead of new lines. The situation became much better.
index 049c682..b06ce16 100644
--- a/tests/python_package_test/test_engine.py
+++ b/tests/python_package_test/test_engine.py
@@ -843,7 +843,7 @@ class TestEngine(unittest.TestCase):
multiplier = 100
total_trees = multiplier + 2
new_model_str = (model_str[:model_str.find('tree_sizes')]
- + '\n' * (2**31 - one_tree_size * total_trees)
+ + '\n\n\n' + ' ' * (2**31 - one_tree_size * total_trees)
+ model_str[model_str.find('Tree=0'):model_str.find('end of trees')]
+ (one_tree * multiplier).format(*range(2, total_trees))
+ model_str[model_str.find('end of trees'):])
But surprisingly it loaded 101 trees instead of 102...
After that I tried to move the whitespace to the model's end: everything became even better, and the correct number of trees was loaded.
index 049c682..31743b8 100644
--- a/tests/python_package_test/test_engine.py
+++ b/tests/python_package_test/test_engine.py
@@ -843,10 +843,11 @@ class TestEngine(unittest.TestCase):
multiplier = 100
total_trees = multiplier + 2
new_model_str = (model_str[:model_str.find('tree_sizes')]
- + '\n' * (2**31 - one_tree_size * total_trees)
+ + '\n\n'
+ model_str[model_str.find('Tree=0'):model_str.find('end of trees')]
+ (one_tree * multiplier).format(*range(2, total_trees))
- + model_str[model_str.find('end of trees'):])
+ + model_str[model_str.find('end of trees'):]
+ + ' ' * (2**31 - one_tree_size * total_trees))
self.assertGreater(len(new_model_str), 2**31)
bst.model_from_string(new_model_str, verbose=False)
self.assertEqual(bst.num_trees(), total_trees)
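The final construction in miniature: copies of a one-tree template are appended after the original trees, the tree_sizes header is dropped, and trailing spaces pad the string past the target. Tiny sizes and a made-up stub model are used here purely for illustration; the slicing mirrors the diff above:

```python
# Made-up stub standing in for a real serialized model.
model_str = ('tree_sizes=30\n'
             'Tree=0\nleaf=1\n'
             'Tree=1\nleaf=2\n'
             'end of trees\n')
one_tree = model_str[model_str.find('Tree=1'):model_str.find('end of trees')]
one_tree_size = len(one_tree)
one_tree = one_tree.replace('Tree=1', 'Tree={}')
multiplier = 100
total_trees = multiplier + 2
target = 4096  # stands in for 2**31 in the real test
new_model_str = (model_str[:model_str.find('tree_sizes')]  # drops tree_sizes header
                 + '\n\n'
                 + model_str[model_str.find('Tree=0'):model_str.find('end of trees')]
                 + (one_tree * multiplier).format(*range(2, total_trees))
                 + model_str[model_str.find('end of trees'):]
                 + ' ' * (target - one_tree_size * total_trees))  # space padding at the end
print(len(new_model_str) > target)          # True
print(new_model_str.count('Tree='))         # 102
```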
great! can this pass the test now?
Haven't tried yet.
BTW, it seems that this line is the root cause of the weird memory consumption:
https://github.com/Microsoft/LightGBM/blob/b96849918d802f750657b3dc3d17324e070993a2/python-package/lightgbm/basic.py#L318
Do you know why spaces in the middle of the model cause the wrong number of trees to be loaded?
But surprisingly it loaded 101 trees instead of 102...
because it will break here:
https://github.com/Microsoft/LightGBM/blob/b37065dbd5510079aac341a34895f4854c782fcb/src/boosting/gbdt_model_text.cpp#L455
Maybe we should remove this break on the cpp side.
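Why a mid-model blank line drops a tree can be seen with a toy line-based parser that, like the C++ loop linked above, stops at the first empty line. This is a Python sketch of the behavior, not the actual LightGBM loading code:

```python
def count_trees(model_str):
    trees = 0
    for line in model_str.split('\n'):
        if line.strip().startswith('Tree='):
            trees += 1
        elif line.strip() == '':
            break  # the loader stops at the first blank (or whitespace-only) line
    return trees

with_gap = 'Tree=0\nleaf=1\n\n\nTree=1\nleaf=2\n'
no_gap = 'Tree=0\nleaf=1\nTree=1\nleaf=2\n'
print(count_trees(with_gap))  # 1: the blank lines hide the second tree
print(count_trees(no_gap))    # 2
```

This also matches the observation above that padding placed after 'end of trees' is harmless, since the loop never reaches it.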
Oh, got it! Thanks!
maybe we should remove this break in cpp side.
Not sure that it's actually needed. Unexpected whitespace in a model file is not something that can happen during save/load use cases, and manual editing is not something we support. That question was just my curiosity.
@guolinke I think this PR is ready for review. The test is skipped at:
Quite a strange list, but I think it's OK...