What and How of Machine Learning Transparency

Building Bespoke Explainability Tools with Interoperable Algorithmic Components

Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and hold them accountable[^1]. New transparency approaches are therefore developed at breakneck speed to peek inside these black boxes and interpret their decisions. Many of these techniques are introduced as monolithic tools, giving the impression of one-size-fits-all and end-to-end algorithms with limited customisability. However, such approaches are often composed of multiple interchangeable modules that need to be tuned to the problem at hand to produce meaningful explanations[^2]. This repository holds a collection of interactive, hands-on training materials (offered as Jupyter Notebooks) that provide guidance through the process of building and evaluating bespoke modular surrogate explainers for black-box predictions of tabular data. These resources cover the three core building blocks of this technique, as introduced by the bLIMEy meta-algorithm: interpretable representation composition, data sampling and explanation generation[^2].
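
To give a flavour of this modularity, the sketch below assembles the three building blocks with plain NumPy and scikit-learn. It is a minimal illustration only, not the tutorial's code: the quartile-based interpretable representation, Gaussian sampling, RBF kernel weighting, ridge surrogate and all variable names are assumptions made for this example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A toy black box and the data point whose prediction we want to explain.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=42).fit(X, y)
instance = X[0]

# 1. Interpretable representation: binary indicators of whether each feature
#    of a data point falls into the same quartile bin as the explained instance.
quartiles = np.percentile(X, [25, 50, 75], axis=0)

def to_interpretable(data):
    data = np.atleast_2d(data)
    binary = np.empty(data.shape, dtype=int)
    for j in range(data.shape[1]):
        bins = quartiles[:, j]
        binary[:, j] = np.digitize(data[:, j], bins) == np.digitize(instance[j], bins)
    return binary

# 2. Data sampling: draw points in the neighbourhood of the explained instance.
rng = np.random.default_rng(42)
samples = rng.normal(loc=instance, scale=X.std(axis=0), size=(1000, X.shape[1]))

# 3. Explanation generation: fit a distance-weighted linear surrogate in the
#    interpretable space to mimic the black box around the instance.
explained_class = black_box.predict(instance.reshape(1, -1))[0]
targets = black_box.predict_proba(samples)[:, explained_class]
distances = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-distances ** 2 / (2 * distances.std() ** 2))  # RBF kernel
surrogate = Ridge(alpha=1).fit(
    to_interpretable(samples), targets, sample_weight=weights)

# The coefficients indicate how much "being in the same quartile bin as the
# explained instance" contributes to the probability of the explained class.
for name, coefficient in zip(load_iris().feature_names, surrogate.coef_):
    print(f'{name}: {coefficient:+.3f}')
```

Each of the three modules can be swapped independently, e.g. a different sampler or a sparse surrogate model; choosing, tuning and evaluating such components is what the notebooks cover.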

The following materials are available (follow the links to see their respective descriptions):

  • Hands-on Resources (Jupyter Notebooks) – notebooks directory.
  • Presentation Slides – slides directory.
  • Video Recordings – YouTube playlist.

These resources were used to deliver a hands-on tutorial at ECML-PKDD 2020; see https://events.fat-forensics.org/2020_ecml-pkdd for more details, or see the accompanying paper published in the Journal of Open Source Education. The notebooks can either be launched online (via MyBinder or Google Colab) or on a personal machine (Python 3.7 or higher is required). The latter can be achieved with the following steps:

  1. Clone this repository and enter its directory.
    git clone --depth 1 https://github.com/fat-forensics/Surrogates-Tutorial.git
    cd Surrogates-Tutorial
  2. Install the Python dependencies.
    pip install -r notebooks/requirements.txt
  3. Launch Jupyter Lab.
    jupyter lab
  4. Navigate to the notebooks directory and open the desired notebook.

Note that the code is licenced under BSD 3-Clause and the text is covered by CC BY-NC-SA 4.0. The CONTRIBUTING.md file provides contribution guidelines. To reference this repository and the training materials it provides, please use:

@article{sokol2022what,
  title={What and How of Machine Learning Transparency:
         {Building} Bespoke Explainability Tools with Interoperable
         Algorithmic Components},
  author={Sokol, Kacper and Hepburn, Alexander and
          Santos-Rodriguez, Raul and Flach, Peter},
  journal={Journal of Open Source Education},
  volume={5},
  number={58},
  pages={175},
  publisher={The Open Journal},
  year={2022},
  doi={10.21105/jose.00175},
  url={https://events.fat-forensics.org/2020_ecml-pkdd}
}

or refer to the CITATION.cff file.

[^1]: Sokol, K., & Flach, P. (2021). Explainability is in the mind of the beholder: Establishing the foundations of explainable artificial intelligence. arXiv preprint arXiv:2112.14466. https://doi.org/10.48550/arXiv.2112.14466

[^2]: Sokol, K., Hepburn, A., Santos-Rodriguez, R., & Flach, P. (2019). bLIMEy: Surrogate prediction explanations beyond LIME. Workshop on Human-Centric Machine Learning (HCML 2019) at the 33rd Conference on Neural Information Processing Systems (NeurIPS). https://doi.org/10.48550/arXiv.1910.13016