matsci-opt-benchmarks (WIP)

A collection of benchmarking problems and datasets for testing the performance of advanced optimization algorithms in materials science and chemistry, targeting a variety of "hard" problems that involve one or more of: constraints, heteroskedasticity, multiple objectives, multiple fidelities, and high dimensionality.

Several materials-science-specific resources related to datasets, surrogate models, and benchmarks already exist:

  • Matbench focuses on materials property prediction using composition and/or crystal structure
  • Olympus focuses on small datasets generated via experimental self-driving laboratories
  • Foundry focuses on delivering ML-ready datasets in materials science and chemistry
  • Matbench-genmetrics focuses on generative modeling for crystal structure using metrics inspired by guacamol and CDVAE

In March 2021, pymatgen reorganized its code into namespace packages, which make it easier to distribute a collection of related subpackages and modules under an umbrella project. Separately, PyScaffold is a project generator for high-quality Python packages, ready to be shared on PyPI and installable via pip; it also supports namespace package configurations. Our plan for this repository is to host pip-installable packages that allow for loading datasets, surrogate models, and benchmarks for recent manuscripts from the Sparks group. We will look into hosting the datasets via Foundry and exposing the surrogate models via the Olympus API. We will likely log results to a MongoDB database via MongoDB Atlas and later take a snapshot of the dataset for Foundry. Initially, we plan to use a basic scikit-learn model, such as RandomForestRegressor or GradientBoostingRegressor, with cross-validated hyperparameter optimization via RandomizedSearchCV or HalvingRandomSearchCV, as the surrogate model.
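As a rough illustration of the kind of surrogate fit described above, here is a minimal sketch on placeholder data (the inputs, objective, and search space are invented for illustration; this is not the benchmark code itself):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                       # placeholder inputs
y = X.sum(axis=1) + rng.normal(scale=0.1, size=200)  # placeholder noisy objective

# a small, illustrative search space; a real benchmark would tune this more carefully
param_distributions = {
    "n_estimators": [10, 50, 100],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions,
    n_iter=5,
    cv=3,
    random_state=0,
)
search.fit(X, y)
surrogate = search.best_estimator_  # cross-validated surrogate model
```

HalvingRandomSearchCV could be swapped in for RandomizedSearchCV with the same `param_distributions` when the number of candidate configurations is large.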

What will really differentiate the contribution of this repository is the modeling of non-Gaussian, heteroskedastic noise, where the noise can be a complex function of the input parameters. This contrasts with Gaussian, homoskedastic noise, where the noise for a given parameter is both Gaussian and fixed in scale [Wikipedia].
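To make the distinction concrete, here is a toy NumPy sketch (purely synthetic data) contrasting homoskedastic Gaussian noise with heteroskedastic, right-skewed noise whose scale depends on the input:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=2000)
signal = np.sin(2 * np.pi * x)

# homoskedastic, Gaussian: one fixed noise scale everywhere
y_homo = signal + rng.normal(scale=0.05, size=x.size)

# heteroskedastic, non-Gaussian: skewed gamma noise whose scale grows with x
scale = 0.01 + 0.2 * x**2
noise = rng.gamma(shape=2.0, scale=scale) - 2.0 * scale  # zero-mean but right-skewed
y_hetero = signal + noise
```

In `y_hetero`, the spread of the residuals near x = 1 is an order of magnitude larger than near x = 0, and the residual distribution is asymmetric, which a single Gaussian noise term cannot capture.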

The goal is to win a "Turing test" of sorts for the surrogate model, where the model is indistinguishable from the true, underlying objective function.

To accomplish this, we plan to:

  • Run repeats for every set of parameters and fit separate models for quantiles of the noise distribution
  • Get a large enough quasi-random sampling of the search space to accurately model intricate interactions between parameters (i.e., the response surface)
  • Train a classification model that short-circuits the regression model: return NaN values for inaccessible regions of objective functions and return the regression model values for accessible regions
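A hypothetical sketch of the quantile-model and classifier-short-circuit ideas together, on synthetic data (the feasibility boundary, models, and `predict` helper are placeholders, not the actual benchmark code):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
feasible = X[:, 0] + X[:, 1] > -0.5  # hypothetical accessible region
y = np.where(feasible, X[:, 0] ** 2 + rng.normal(scale=0.1, size=500), np.nan)

# classifier decides whether a candidate point is in an accessible region
clf = RandomForestClassifier(random_state=0).fit(X, feasible)

# separate quantile models capture the noise distribution, not just the mean
quantile_models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(
        X[feasible], y[feasible]
    )
    for q in (0.05, 0.5, 0.95)
}

def predict(X_new, q=0.5):
    """NaN where the classifier predicts inaccessible, else the q-th quantile."""
    pred = quantile_models[q].predict(X_new)
    return np.where(clf.predict(X_new), pred, np.nan)
```

The classifier "short-circuits" the regressors: points it labels inaccessible never reach the quantile models, mirroring how the true objective functions return NaN in inaccessible regions.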

Our plans for implementation include:

  • Packing fraction of a random 3D packing of spheres as a function of the number of spheres, 6 parameters that define three separate truncated log-normal distributions, and 3 parameters that define the weight fractions [code] [paper1] [paper2] [data]
  • Discrete intensity vs. wavelength spectra (measured experimentally via a spectrophotometer) as a function of red, green, and blue LED powers and three sensor settings: number of integration steps, integration time per step, and signal gain [code] [paper]
  • Two error metrics (RMSE and MAE) and two hardware performance metrics (runtime and memory) of a CrabNet regression model trained on the Matbench experimental band gap dataset, as a function of 23 CrabNet hyperparameters reframed as a composition-based optimization task [code] [paper]

Quick start

pip install matsci-opt-benchmarks

Not implemented yet

from matsci_opt_benchmarks.core import MatSciOpt

mso = MatSciOpt(dataset="crabnet_hyperparameter")
# mso = MatSciOpt(dataset="particle_packing")

print(mso.features)
# list of input parameter names for the chosen dataset

# `parameterization` is a dict mapping each feature name to a value
results = mso.predict(parameterization)

Generate benchmark from existing dataset

import pandas as pd
from matsci_opt_benchmarks.core import Benchmark

# load dataset
dataset_name = "dummy"
dataset_path = f"data/external/{dataset_name}.csv"
dataset = pd.read_csv(dataset_path)

# define inputs/outputs (and parameter types? if so, then Ax-like dict)
parameter_names = [...]
output_names = [...]

X = dataset[parameter_names]
y = dataset[output_names]

bench = Benchmark()
bench.fit(X=X, Y=y)
y_pred = bench.predict(X.head(5))
print(y_pred)
# [[...], [...], ...]

bench.save(fpath=f"models/{dataset_name}")
bench.upload(zenodo_id=zenodo_id)

# upload to HuggingFace
...

Installation

In order to set up the necessary environment:

  1. review and uncomment what you need in environment.yml and create an environment matsci-opt-benchmarks with the help of conda:
    conda env create -f environment.yml
    
  2. activate the new environment with:
    conda activate matsci-opt-benchmarks
    

NOTE: The conda environment will have matsci-opt-benchmarks installed in editable mode. Some changes, e.g. in setup.cfg, might require you to run pip install -e . again.

Optional and needed only once after git clone:

  1. install several pre-commit git hooks with:

    pre-commit install
    # You might also want to run `pre-commit autoupdate`

    and check out the configuration under .pre-commit-config.yaml. The -n, --no-verify flag of git commit can be used to deactivate pre-commit hooks temporarily.

  2. install nbstripout git hooks to remove the output cells of committed notebooks with:

    nbstripout --install --attributes notebooks/.gitattributes

    This is useful to avoid large diffs due to plots in your notebooks. A simple nbstripout --uninstall will revert these changes.

Then take a look into the scripts and notebooks folders.

Dependency Management & Reproducibility

  1. Always keep your abstract (unpinned) dependencies updated in environment.yml and, if you want to ship and install your package via pip later on, in setup.cfg as well.
  2. Create concrete dependencies as environment.lock.yml for the exact reproduction of your environment with:
    conda env export -n matsci-opt-benchmarks -f environment.lock.yml
    For multi-OS development, consider using --no-builds during the export.
  3. Update your current environment with respect to a new environment.lock.yml using:
    conda env update -f environment.lock.yml --prune

Project Organization

├── AUTHORS.md              <- List of developers and maintainers.
├── CHANGELOG.md            <- Changelog to keep track of new features and fixes.
├── CONTRIBUTING.md         <- Guidelines for contributing to this project.
├── Dockerfile              <- Build a docker container with `docker build .`.
├── LICENSE.txt             <- License as chosen on the command-line.
├── README.md               <- The top-level README for developers.
├── configs                 <- Directory for configurations of model & application.
├── data
│   ├── external            <- Data from third party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
├── docs                    <- Directory for Sphinx documentation in rst or md.
├── environment.yml         <- The conda environment file for reproducibility.
├── models                  <- Trained and serialized models, model predictions,
│                              or model summaries.
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for
│                              ordering), the creator's initials and a description,
│                              e.g. `1.0-fw-initial-data-exploration`.
├── pyproject.toml          <- Build configuration. Don't change! Use `pip install -e .`
│                              to install for development or to build `tox -e build`.
├── references              <- Data dictionaries, manuals, and all other materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures             <- Generated plots and figures for reports.
├── scripts                 <- Analysis and production scripts which import the
│                              actual Python packages, e.g. train_model.
├── setup.cfg               <- Declarative configuration of your project.
├── setup.py                <- [DEPRECATED] Use `python setup.py develop` to install for
│                              development or `python setup.py bdist_wheel` to build.
├── src
│   ├── particle_packing    <- Python package for the particle packing benchmark.
│   └── crabnet_hyperparameter <- Python package for the CrabNet hyperparameter benchmark.
├── tests                   <- Unit tests which can be run with `pytest`.
├── .coveragerc             <- Configuration for coverage reports of unit tests.
├── .isort.cfg              <- Configuration for git hook that sorts imports.
└── .pre-commit-config.yaml <- Configuration of pre-commit git hooks.

Note

This project has been set up using PyScaffold 4.3.1 and the dsproject extension 0.7.2.
