A workshop for ML practitioners who want to make their models more understandable, both to themselves and to decision makers.
Safety and reliability concerns are major obstacles to the adoption of AI in practice. In addition, European regulation will make explaining model decisions a requirement for so-called high-risk AI applications.
As models grow in size and complexity, they become less interpretable. Explainability in machine learning attempts to address this problem by providing insight into the inference process of a model. It can be used as a tool to make models more trustworthy, reliable, transferable, fair, and robust. It is not without problems of its own, however: different explanation algorithms often report contradictory explanations for the same phenomenon.
Contents
We consider the machine learning pipeline through the lens of explainability. In a series of hands-on use cases, participants will be introduced to methods for:
- Exploratory data analysis
- Feature selection and engineering
- Model selection
- Model evaluation and visualization
- Model interpretation and explanation
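To give a flavor of model interpretation, the sketch below implements permutation importance, one common model-agnostic explanation technique: shuffle one feature at a time and measure how much the model's error increases. The toy model and data here are invented for illustration and are not taken from the workshop material.

```python
import random

random.seed(0)

# Toy data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3.0 * row[0] + 0.3 * row[1] for row in X]

def model(row):
    # Stand-in for a trained model; here simply the true function.
    return 3.0 * row[0] + 0.3 * row[1]

def mse(X, y):
    # Mean squared error of the model on a dataset.
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

baseline = mse(X, y)

def permutation_importance(j):
    # Shuffle column j across rows and measure the increase in error.
    col = [row[j] for row in X]
    random.shuffle(col)
    X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
    return mse(X_perm, y) - baseline

scores = [permutation_importance(j) for j in range(3)]
print(scores)  # feature 0 scores highest, feature 2 near zero
```

Because the model ignores feature 2 entirely, shuffling that column leaves predictions unchanged, so its importance is exactly zero; the ranking of the other two features mirrors their weight in the target.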