Releases: uclamii/model_tuner
Model Tuner 0.0.24a
Model Tuner Version 0.0.24a Changelog
- Updated `.gitignore` to include doctrees
- Added `pickleObjects` tests and updated requirements; tests passed
- Added bootstrapper test; tests passed
- Added multi-class test script
- Updated Metrics Output:
  - Added optional threshold print inside `return_metrics`
  - KFold metric printing:
    - Added new input `per_fold_print` to allow the user to return per-fold metrics; otherwise the average is returned (see the sketch at the end of this release's notes)
    - Added `tqdm` output for KFold metric printing
    - Fixed KFold average output in `report_model_metrics`
- Added a regression test and updated `report_model_metrics` to work with regression and multi-class
- Augmented the `predict_proba` test and `train_val_test_split`
- Fixed `pipeline_steps` arg in model definition
- Refactored `metrics_df` in `report_model_metrics` for aesthetics
- Unit Tests:
  - XGB early stopping multi-class test
  - Added fit method tests
  - Added early stopping test
  - Added `get_best_score_params()` tests
  - Added `return_bootstrap_metrics()` tests
  - Added tests for `get_preprocessing_and_feature_selection_pipeline` and `get_feature_selection_pipeline`
  - Tested init; passed tests
  - Tested `get_preprocessing_pipeline`
- Imbalance Sampler:
  - Added `process_imbalance_sampler()` tests; passed
  - Renamed `process_imbalance_sampler()` to `verify_imbalance_sampler`
- Made `return_dict` optional in `return_metrics`
- Added `openpyxl` versions for all Python versions in `requirements.txt`
- Refactored metrics, fold-wise metrics, and fold-wise `con_mat`, `class_labels`
- Cleaned notebooks dir
- Renamed notebooks to `py_example_scripts`, linted files, and cleaned code
- Added `model_tuner` version print to scripts
- Added fix for sorting of `pipeline_steps`, which is now optional:
  - Added required `model_tuner` import to `xgb_multi.py`
  - Added required `model_tuner` import to `multi_class_test.py`
- Added `catboost_multi_class.py` script
- Removed `pip` dependency from requirements
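To make the metrics changes above concrete, here is a minimal, hypothetical sketch. Only `return_metrics`, `per_fold_print`, `return_dict`, and `model_type` come from these notes; the `Model` constructor, `grid_search_param_tuning`, and the remaining keyword names are assumptions about the library's API rather than something this changelog confirms.

```python
# Hypothetical sketch only -- keyword names not listed in the changelog are assumptions.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from model_tuner import Model  # assumed import path

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X, y = pd.DataFrame(X), pd.Series(y)

model = Model(
    name="lr_demo",                    # assumed parameter
    estimator_name="lr",               # assumed parameter
    estimator=LogisticRegression(),
    model_type="classification",       # must be "classification" or "regression" (0.0.20a)
    kfold=True,                        # assumed flag so KFold metrics are reported
    grid=[{"lr__C": [0.1, 1.0]}],      # assumed grid format
)

model.grid_search_param_tuning(X, y)   # assumed tuning entry point
model.fit(X, y)

# per_fold_print=True (new in 0.0.24a) prints metrics for each fold instead of only
# the average; return_dict (now optional) controls whether a dict of metrics is returned.
metrics = model.return_metrics(X, y, per_fold_print=True, return_dict=True)
```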
Model Tuner 0.0.23a
- Fixed a bug found when calibrating early stopping models
- Fixed early stopping in Column Transformer application
Model Tuner 0.0.22a
- Fixed an issue where the feature selection name was not referenced correctly, causing a bug when printing selected feature names with the updated pipeline.
- Removed resolved print statements from April 2024.
Model Tuner 0.0.21a
- Specified the pipeline class; otherwise the method just returned a list
- Removed the need to specify `self.estimator` when it is called
- Generalized (renamed) `"K Best Features"` to just `"Best Features"` inside the returns of `return_metrics()`
- Generalized (renamed) `k_best_features` to `best_features`
Model Tuner 0.0.20a
- Added flexibility between `boolean` and `None` for stratification inputs
- Added custom exception for non-pandas inputs in `return_bootstrap_metrics` (see the sketch after this list)
- Enforced the required `model_type` input to be specified as `"classification"` or `"regression"`
- Removed extraneous `"="` print below `pipeline_steps`
- Handled missing `pipeline_steps` when using `imbalance_sampler`
- Updated requirements for `python==3.11`
- Fixed SMOTE for early stopping
- Removed extra `model_type` input from `xgb_early_test.py`
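A small, hedged illustration of the 0.0.20a constraints above: the pandas conversion reflects the new custom exception for non-pandas inputs to `return_bootstrap_metrics`, while the `metrics` keyword and the reuse of a fitted `model` object (as in the earlier sketch) are assumptions, not something the changelog specifies.

```python
# Hypothetical sketch of the 0.0.20a constraints; names not in the changelog are assumptions.
import numpy as np
import pandas as pd

# model_type must now be exactly "classification" or "regression";
# anything else is expected to be rejected.

# return_bootstrap_metrics raises a custom exception for non-pandas inputs,
# so convert NumPy arrays before calling it.
X_test = pd.DataFrame(np.random.rand(50, 5))
y_test = pd.Series(np.random.randint(0, 2, size=50))

boot = model.return_bootstrap_metrics(X_test, y_test, metrics=["roc_auc"])  # 'metrics' kwarg is assumed
```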
Model Tuner 0.0.19a
- Requirements updated again to make the library compatible with Google Colab out of the box.
- Fixed a bug in the `fit()` method where `best_params` wasn't defined if we didn't specify a score.
- Threshold bug now actually fixed; specificity and other metrics should reflect this. (Defaults to 0.5 if `optimal_threshold` is not specified.)
Model Tuner 0.0.18a
- Updated requirements to include `numpy` versions `<1.26` for Python 3.8-3.11. This should stop a rerun occurring when using the library on Google Colab.
Model Tuner 0.0.17a
Major fixes:
- The verbosity variable is now popped from the parameters before the fit
- Bug with `ColumnTransformer` early stopping fixed (the validation set is now transformed correctly)
- Return metrics now has a consistent naming convention
- `report_model_metrics` now uses the correct threshold in all cases
- Default values updated for `train_val_test_split`
- `tune_threshold_Fbeta` is now called with the correct number of parameters in all cases (a generic illustration follows at the end of this release's notes)
- Requirements updates: `XGBoost` updated to `2.1.2` for later Python versions.
Minor changes:
- `help(model_tuner)` should now be correctly formatted in Google Colab
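For context on the `tune_threshold_Fbeta` fix, the following is a generic, self-contained illustration of what F-beta-based threshold tuning does (sweep candidate thresholds and keep the one that maximizes F-beta); it is not the library's own implementation.

```python
# Generic illustration of F-beta threshold tuning; NOT model_tuner's tune_threshold_Fbeta.
import numpy as np
from sklearn.metrics import fbeta_score

def tune_threshold_fbeta_sketch(y_true, y_proba, beta=1.0):
    """Return the probability threshold that maximizes the F-beta score."""
    thresholds = np.linspace(0.01, 0.99, 99)
    scores = [fbeta_score(y_true, (y_proba >= t).astype(int), beta=beta) for t in thresholds]
    return float(thresholds[int(np.argmax(scores))])

# Example: y_true and y_proba would normally come from a validation split.
y_true = np.array([0, 0, 1, 1, 1])
y_proba = np.array([0.2, 0.4, 0.35, 0.8, 0.7])
print(tune_threshold_fbeta_sketch(y_true, y_proba, beta=2.0))
```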
Model Tuner 0.0.16a
Version 0.0.16a
- Custom pipeline steps now updated (our pipeline usage has been completely changed; it should now order itself and support non-named steps) and always ensures correct order
- This fixed multiple other issues that were occurring with logging of imbalanced-learn
- Reporting model metrics now works.
- `AutoKeras` code deprecated and removed.
- `KFold` bug introduced because of `CatBoost`; this has now been fixed.
- Pretty print of pipeline.
- Boosting variable has been renamed.
- Version constraints have been updated and refactored.
- `tune_threshold_Fbeta` has been cleaned up to remove unused parameters.
- `train_val_test` unnecessary `self` removed and taken outside of the class method.
- Deprecated `setup.py` in favor of `pyproject.toml` per the forthcoming `pip` 25 update.
Model Tuner 0.0.15a
Version 0.0.15a
Contains all previous fixes relating to:
- `CatBoost` support (early stopping, and support involving resetting estimators).
- Pipeline steps now support hyperparameter tuning of the resamplers (`SMOTE`, `ADASYN`, etc.); see the sketch below.
- Removed older implementations of impute and scaling and moved onto supporting only custom `pipeline_steps`.
- Fixed bugs in stratification with regard to length mismatch of the dependent variable when using column names to stratify.
- Cleaned and removed multiple lines of unused code and unused initialisation parameters.
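A hedged sketch of what resampler tuning through the pipeline might look like: `pipeline_steps`, `imbalance_sampler`, `SMOTE`, and `model_type` appear in these notes, while the other constructor keywords, the `smote__` grid prefix, and where exactly the sampler is attached are assumptions rather than the library's documented API.

```python
# Hypothetical sketch -- step names, grid prefixes, and most constructor keywords are assumptions.
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier
from model_tuner import Model  # assumed import path

model = Model(
    name="xgb_smote_demo",                     # assumed parameter
    estimator_name="xgb",                      # assumed parameter
    estimator=XGBClassifier(),
    model_type="classification",
    pipeline_steps=[("scaler", StandardScaler())],
    imbalance_sampler=SMOTE(),                 # sampler placement here is an assumption
    grid=[{
        "xgb__max_depth": [3, 5],
        # per 0.0.15a, resampler hyperparameters can be tuned alongside the estimator;
        # the "smote__" prefix assumes the sampler step is named "smote" in the pipeline
        "smote__k_neighbors": [3, 5],
    }],
)
```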