PiML (or π·ML, /ˈpaɪ·ˈem·ˈel/) is a new Python toolbox for Interpretable Machine Learning model development and validation. Through low-code automation and high-code programming, PiML supports various machine learning models in the following two categories:
- Inherently interpretable models:
  - EBM: Explainable Boosting Machine (Nori et al., 2019; Lou et al., 2013)
  - GAMI-Net: Generalized Additive Model with Structured Interactions (Yang, Zhang, and Sudjianto, 2021)
  - ReLU-DNN: Deep ReLU Networks using the Aletheia Unwrapper (Sudjianto et al., 2020)
- Arbitrary black-box models, e.g.:
  - LightGBM or XGBoost of varying depth
  - RandomForest of varying depth
  - Residual Deep Neural Networks
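As a plain scikit-learn illustration of the black-box category above (not the PiML API itself), the sketch below fits random forests "of varying depth" on a synthetic regression task and compares held-out R²; the dataset, depths, and seeds are arbitrary choices for demonstration:

```python
# Illustration only: black-box models "of varying depth" via scikit-learn.
# The dataset and depth values are arbitrary assumptions for this sketch,
# not part of PiML.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_friedman1(n_samples=1000, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = {}
for depth in (3, 7):
    rf = RandomForestRegressor(max_depth=depth, random_state=0)
    rf.fit(X_train, y_train)
    scores[depth] = rf.score(X_test, y_test)  # held-out R^2
    print(f"max_depth={depth}: R^2 = {scores[depth]:.3f}")
```

Deeper trees typically fit better but are harder to interpret, which is the trade-off PiML's diagnostics are meant to surface.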
Run the following script to download and install PiML v0.1.0 in Google Colab:

```python
!pip install wget
import wget
url = "https://github.com/SelfExplainML/PiML-Toolbox/releases/download/V0.1.0/PiML-0.1.0-cp37-cp37m-linux_x86_64.whl"
wget.download(url, 'PiML-0.1.0-cp37-cp37m-linux_x86_64.whl')
!pip install PiML-0.1.0-cp37-cp37m-linux_x86_64.whl
```
After installation, initialize an experiment and run the low-code panels step by step:

```python
from piml import Experiment

exp = Experiment(platform="colab")

exp.data_loader()      # load the dataset
exp.data_summary()     # summarize data statistics
exp.data_prepare()     # train/test split and preprocessing
exp.eda()              # exploratory data analysis
exp.model_train()      # train the built-in interpretable models
exp.model_explain()    # post-hoc explainability
exp.model_interpret()  # inherent interpretability
exp.model_diagnose()   # model diagnostics and validation
exp.model_compare()    # compare registered models
```
PiML also supports arbitrary black-box models through high-code programming. For example, train a LightGBM regressor with max depth 7 and register it to the experiment:

```python
from lightgbm import LGBMRegressor

pipeline = exp.make_pipeline(LGBMRegressor(max_depth=7))
pipeline.fit()
exp.register(pipeline=pipeline, name='LGBM')
```
Then, compare it to inherently interpretable models (e.g. EBM and GAMI-Net):

```python
exp.model_compare()
```
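Outside of PiML, the spirit of such a comparison can be sketched with plain scikit-learn: fit an interpretable model (here a shallow decision tree, standing in for EBM/GAMI-Net, which scikit-learn does not provide) and a deeper black-box ensemble on the same split, then compare held-out error. All model and data choices below are illustrative assumptions, not the output of `exp.model_compare()`:

```python
# Illustrative interpretable-vs-black-box comparison (not the PiML panel).
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=1000, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "shallow tree (interpretable)": DecisionTreeRegressor(max_depth=3, random_state=0),
    "gradient boosting (black-box)": GradientBoostingRegressor(max_depth=7, random_state=0),
}
mse = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    mse[name] = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse[name]:.3f}")
```

The gap between the two test errors is the accuracy cost (or lack thereof) of interpretability on this task; PiML's comparison panel reports this kind of trade-off across all registered models.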
"PiML: A Python Toolbox for Interpretable Machine Learning Model Development and Validation" (Sudjianto, Zhang*, Yang, Su, Zeng and Nair, 2022)
```bibtex
@article{sudjianto2022piml,
  title={PiML: A Python Toolbox for Interpretable Machine Learning Model Development and Validation},
  author={Sudjianto, Agus and Zhang*, Aijun and Yang, Zebin and Su, Yu and Zeng, Ningzhou and Nair, Vijay},
  journal={To appear},
  year={2022}
}
```