Explainability is a quality sometimes required in the context of Artificial Intelligence (AI). The European Union's proposed AI Act will likely include obligations for some AI systems to provide explanations of decisions that affect user rights.


A system S is explainable with respect to an aspect X of S relative to an addressee A in context C if and only if there is an entity E (the explainer) who, by giving a corpus of information I (the explanation of X), enables A to understand X of S in C.
L. Chazette, W. Brunotte and T. Speith, "Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue," 2021 IEEE 29th International Requirements Engineering Conference (RE), Notre Dame, IN, USA, 2021, pp. 197-208, doi: 10.1109/RE51729.2021.00025.

Explainable AI (XAI) [..] either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this.
Wikipedia

[..] local explainability helps answer the question, “for this particular example, why did the model make this particular decision?”

Cohort explainability is the process of understanding to what degree your model’s features contribute to its predictions over a subset of your data.

We consider an ML engineer to have access to global model explainability if, across all predictions, they are able to attribute which features contributed most to the model's decisions.
Towards Data Science
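The three scopes above (local, cohort, global) can be illustrated with a minimal sketch. This uses a hypothetical linear scoring model where each feature's attribution is simply weight times value; the feature names, weights, and data are invented for illustration, and real systems would typically use an attribution library such as SHAP instead.

```python
# Hypothetical linear model: score(x) = sum(weights[f] * x[f]).
# For a linear model, a natural per-feature attribution is weight * value.
weights = {"income": 0.5, "debt": -0.8, "age": 0.1}

# Invented example data (e.g. loan applicants).
applicants = [
    {"income": 4.0, "debt": 1.0, "age": 30},
    {"income": 2.0, "debt": 3.0, "age": 45},
    {"income": 5.0, "debt": 0.5, "age": 22},
]

def local_attribution(x):
    """Local: why did the model score THIS particular example?"""
    return {f: weights[f] * x[f] for f in weights}

def cohort_attribution(cohort):
    """Cohort: mean absolute contribution over a subset of the data."""
    n = len(cohort)
    return {
        f: sum(abs(weights[f] * x[f]) for x in cohort) / n
        for f in weights
    }

# Local explainability: one prediction.
print(local_attribution(applicants[0]))

# Cohort explainability: a subset, e.g. applicants under 35.
young = [x for x in applicants if x["age"] < 35]
print(cohort_attribution(young))

# Global explainability: the same aggregation over ALL predictions.
print(cohort_attribution(applicants))
```

Note that global explainability here is just cohort explainability applied to the full dataset, which mirrors how the three definitions differ only in the scope of data they aggregate over.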