
Releases: mle-infrastructure/mle-hyperopt

Minor fixes πŸ”§

27 Aug 07:34

[v0.0.10] - [08/2024]

  • Fix rounding and numpy version

Fix pickle5 and MutableMapping Python Version Bug

13 Jun 09:49

[v0.0.9] - [06/2023]

Fixed

  • Fix MutableMapping to also work with Python 3.10.
  • Fix pickle5 install requirement to only apply for Python < 3.8.

πŸ§‘β€βš•οΈ Dependency fix `scikit-optimize`

08 Mar 13:01

[v0.0.8] - [03/2023]

Fixed

  • Minor fixes for broken scikit-optimize dependency.

Bug fix mle-search & nested configurations

20 Feb 15:52

Added

  • Log reloading helper for post-processing.

Fixed

  • Bug fix in mle-search for imports of dependencies: needed to append the script path.
  • Bug fix for cleaning nested dictionaries: make sure not to delete the entire sub-dictionary.

mle-search CLI ⌨️ & nested param dictionaries

20 Feb 10:37

Added

  • Adds a command line interface for running a sequential search given a Python script <script>.py containing a function main(config), a default configuration file <base>.yaml and a search configuration <search>.yaml. The main function should return a single scalar performance score (a minimal example script is sketched after this list). You can then start the search via:

    mle-search <script>.py --base_config <base>.yaml --search_config <search>.yaml --num_iters <search_iters>
    

    Or short via:

    mle-search <script>.py -base <base>.yaml -search <search>.yaml -iters <search_iters>
    
  • Adds docstrings to all functionalities.
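
For illustration, a minimal <script>.py compatible with this interface might look as follows. The dummy objective and the assumption that config behaves like a flat dictionary of merged base/search parameters are placeholders, not part of the release:

    # train.py - minimal sketch of a script consumable by mle-search.
    # Assumes `config` exposes the merged base + proposed search parameters
    # as key-value pairs; the "training" below is a dummy stand-in.
    def main(config):
        """Run one training job and return a single scalar score."""
        lrate = config["lrate"]            # proposed by the search strategy
        batch_size = config["batch_size"]  # taken from base or search config
        # ... train your model here ...
        score = (lrate - 0.3) ** 2 + 0.01 * batch_size  # dummy objective
        return score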

Changed

  • Make it possible to optimize parameters in nested dictionaries. Added helpers flatten_config and unflatten_config for mapping 'sub1/sub2/vname' <-> {sub1: {sub2: {vname: v}}} (see the sketch after this list).
  • Make start-up message also print fixed parameter settings.
  • Cleaned up the decorator with the help of the Strategies wrapper.
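
For intuition, here is an illustrative re-implementation of the flattening behavior described above; it sketches the intended 'sub1/sub2/vname' <-> nested dictionary mapping and is not the library's actual flatten_config/unflatten_config code:

    # Illustrative sketch - mirrors the described behavior, not the library code.
    def flatten(nested: dict, prefix: str = "", sep: str = "/") -> dict:
        """Flatten {sub1: {sub2: {vname: v}}} into {'sub1/sub2/vname': v}."""
        flat = {}
        for key, value in nested.items():
            name = f"{prefix}{sep}{key}" if prefix else key
            if isinstance(value, dict):
                flat.update(flatten(value, name, sep))
            else:
                flat[name] = value
        return flat

    def unflatten(flat: dict, sep: str = "/") -> dict:
        """Invert flatten: {'sub1/sub2/vname': v} -> {sub1: {sub2: {vname: v}}}."""
        nested = {}
        for name, value in flat.items():
            parts = name.split(sep)
            node = nested
            for part in parts[:-1]:
                node = node.setdefault(part, {})
            node[parts[-1]] = value
        return nested

    assert unflatten(flatten({"sub1": {"sub2": {"vname": 3}}})) == {"sub1": {"sub2": {"vname": 3}}}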

Successive Halving, Hyperband & PBT

05 Jan 14:48

Added

  • Adds the possibility to store and reload entire strategies as a pkl file (as asked for in issue #2).
  • Adds an improvement method indicating whether a score is better than the best stored one.
  • Adds a save option for the best plot.
  • Adds args, kwargs into the decorator.
  • Adds synchronous Successive Halving (SuccessiveHalvingSearch - issue #3); see the sketch after this list.
  • Adds synchronous HyperBand (HyperbandSearch - issue #3).
  • Adds synchronous PBT (PBTSearch - issue #4).
  • Adds an option to save the log in the tell method.
  • Adds a small torch MLP example for SH/Hyperband/PBT with logging/scheduler.
  • Adds welcome/update print messages for strategy-specific info.
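
As a rough sketch of how one of the new synchronous strategies plugs into the ask - eval - tell loop; the search_config keys shown here (min_budget, num_arms, halving_coeff) and the dummy objective are assumptions for illustration, not taken from these notes:

    from mle_hyperopt import SuccessiveHalvingSearch

    # Budget settings below are assumed for illustration only.
    strategy = SuccessiveHalvingSearch(
        real={"lrate": {"begin": 0.1, "end": 0.5, "prior": "log-uniform"}},
        search_config={"min_budget": 1, "num_arms": 8, "halving_coeff": 2},
        seed_id=42,
    )

    configs = strategy.ask()                             # current batch of arms
    scores = [(c["lrate"] - 0.3) ** 2 for c in configs]  # dummy objective
    strategy.tell(configs, scores)                       # update strategy; surviving arms advance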

Changed

  • Major internal restructuring:
    • clean_data: Get rid of extra data provided in configuration file
    • tell_search: Update model of search strategy (e.g. SMBO/Nevergrad)
    • log_search: Add search specific log data to evaluation log
    • update_search: Refine search space/change active strategy etc.
  • Also allow storing checkpoints of trained models in the tell method.
  • Fix the logging message when the log is stored.
  • Make the json serializer more robust for numpy data types (a sketch of such an encoder follows this list).
  • Robust type checking with isinstance(self.log[0]["objective"], (float, int, np.integer, np.float))
  • Update the notebook to include an mle-scheduler example.
  • Make PBT explore robust for integer/categorical-valued hyperparameters.
  • Calculate total batches & their sizes for Hyperband.
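
The more robust numpy serialization roughly corresponds to an encoder like the one below (an illustrative sketch, not the library's code):

    import json
    import numpy as np

    class NumpyEncoder(json.JSONEncoder):
        """Illustrative encoder mapping numpy scalars/arrays to plain JSON types."""
        def default(self, obj):
            if isinstance(obj, np.integer):
                return int(obj)
            if isinstance(obj, np.floating):
                return float(obj)
            if isinstance(obj, np.ndarray):
                return obj.tolist()
            return super().default(obj)

    print(json.dumps({"objective": np.float32(0.25), "steps": np.arange(3)}, cls=NumpyEncoder))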

Bug Fix Data Types & internal refactor πŸ”Ί

10 Dec 11:14
v0.0.4

Bump up version string

`CoordinateSearch` Bug Fix + Rich Logging

24 Oct 09:43
  • Fixes the active grid search dimension updating in CoordinateSearch. We have to account for the fact that previous coordinates are not re-evaluated after switching the active variable.
  • Generalizes NevergradSearch to wrap around all search strategies.
  • Adds rich logging to all console print statements.
  • Updates documentation and adds text to getting_started.ipynb.

Space Refinement & Maximize Objective

20 Oct 16:14
  • Fixes import bug when using PyPi installation.
  • Enhances documentation and test coverage.
  • Adds search space refinement for the Nevergrad and SMBO search strategies via refine_after and refine_top_k:
    strategy = SMBOSearch(
        real={"lrate": {"begin": 0.1, "end": 0.5, "prior": "uniform"}},
        integer={"batch_size": {"begin": 1, "end": 5, "prior": "uniform"}},
        categorical={"arch": ["mlp", "cnn"]},
        search_config={
            "base_estimator": "GP",
            "acq_function": "gp_hedge",
            "n_initial_points": 5,
            "refine_after": 5,
            "refine_top_k": 2,
        },
        seed_id=42,
        verbose=True,
    )
  • Adds an additional boolean strategy option maximize_objective to maximize the objective instead of performing the default black-box minimization (see the sketch below).
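
For example, switching from minimization to maximization might look like this; passing the flag at construction time and the RandomSearch space are assumptions reused from the base API example for illustration:

    from mle_hyperopt import RandomSearch

    # maximize_objective=True treats higher scores as better when telling results.
    strategy = RandomSearch(
        real={"lrate": {"begin": 0.1, "end": 0.5, "prior": "uniform"}},
        maximize_objective=True,
    )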

Welcome to MLE-Hyperopt

16 Oct 15:15

Base API implementation:

from mle_hyperopt import RandomSearch

# Instantiate random search class
strategy = RandomSearch(real={"lrate": {"begin": 0.1,
                                        "end": 0.5,
                                        "prior": "log-uniform"}},
                        integer={"batch_size": {"begin": 32,
                                                "end": 128,
                                                "prior": "uniform"}},
                        categorical={"arch": ["mlp", "cnn"]})

# Simple ask - eval - tell API
configs = strategy.ask(5)
values = [train_network(**c) for c in configs]
strategy.tell(configs, values)