🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
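A minimal sketch of the typical Shapash workflow around an already fitted model; the dataset, model, and method calls below follow the `SmartExplainer` API as I understand it, and the import path and argument placement may differ between Shapash versions.

```python
# Hypothetical end-to-end sketch (not Shapash's official example): wrap a
# fitted scikit-learn model with SmartExplainer for global and local explanations.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from shapash import SmartExplainer  # shapash >= 2.x style import (assumed)

data = load_diabetes(as_frame=True)
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

xpl = SmartExplainer(model=model)
xpl.compile(x=X)                        # compute contributions (SHAP-based by default)
xpl.plot.features_importance()          # global feature importance plot
summary = xpl.to_pandas(max_contrib=3)  # per-prediction top feature contributions
print(summary.head())
```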
A foundational Haxe framework for cross-platform development
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Python implementation of two low-light image enhancement techniques via illumination map estimation
Qt-DAB, a general software DAB (DAB+) decoder with a (slight) focus on showing the signal
InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library built on PaddlePaddle (飞桨).
Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
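For context, the Shapley value that this reading list centers on assigns each player $i$ in a cooperative game $(N, v)$ its marginal contribution averaged over all coalitions:

$$
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)
$$

In the machine-learning setting, $N$ is the feature set and $v(S)$ is the model's expected output when only the features in $S$ are known.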
C# LIME protocol implementation
Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin to the domain of time series classification
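As a reference point for such adaptations, here is a minimal sketch of the original LIME workflow on tabular data using the `lime` package; the dataset, model, and parameter choices are illustrative assumptions.

```python
# Fit a black-box classifier, then explain one prediction with a local,
# interpretable surrogate model via LIME.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single instance: which features pushed the prediction where?
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())
```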
Multicycles.org aggregates more than 300 vehicle-sharing services (bikes, scooters, mopeds, and cars) on one map. Demo app for the Data Flow API; see https://flow.fluctuo.com
Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
ProjectFNF is a mostly quality-of-life engine for Friday Night Funkin'. It is easy to understand and super flexible.
Implementation of the paper "LIME: Low-Light Image Enhancement via Illumination Map Estimation", written for my graduation thesis.
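A conceptual sketch of the paper's core idea rather than the repository's actual code: estimate an illumination map from the per-pixel channel maximum, refine it (a Gaussian blur stands in here for the paper's structure-aware optimization), gamma-correct it, and divide it out. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_lowlight(img, gamma=0.8, sigma=3.0, eps=1e-3):
    """img: float array in [0, 1] with shape (H, W, 3)."""
    # Initial illumination estimate: per-pixel maximum over the RGB channels
    illumination = img.max(axis=2)
    # Crude smoothing; the paper instead solves a structure-aware refinement
    illumination = gaussian_filter(illumination, sigma=sigma)
    # Gamma correction brightens dark regions while limiting amplification
    illumination = np.clip(illumination, eps, 1.0) ** gamma
    # Retinex-style recovery: reflectance = observation / illumination
    return np.clip(img / illumination[..., None], 0.0, 1.0)
```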
Overview of different model interpretability libraries.
A port of Friday Night Funkin' v0.2.8 made by rebuilding the code via reverse engineering.
Local explanations with uncertainty 💐!