Taking a peek under the hood: Interpreting black box models
A model’s interpretability is just as important as its performance. In some industries, it matters even more. Unfortunately, some high-performing models, like neural networks and ensemble methods, act more like black boxes. As practitioners, we are often asked to make a trade-off: interpretability or performance. Fortunately, as these complex models increase in popularity, there are ways to take a peek under the hood and interpret them. In this talk, we’ll present SHAP (SHapley Additive exPlanations), proposed by Lundberg et al., a model-agnostic method to add to your data science toolbox for interpreting machine learning models, and show how you can use it to explain your own models.
- Why interpretability is important
- Current limitations with interpretability measures
- Introducing SHAP and its origins
- Case Study: Applying SHAP to interpret a financial model
This talk will be most valuable to novice and intermediate data science practitioners who want another tool to add to their arsenal. A general understanding of the following would be helpful but not necessary; the audience should still be able to walk away with a high-level understanding of SHAP:
- Familiarity with linear regression
- Familiarity with at least one machine learning algorithm, e.g., random forests or neural networks
- Cooperative game theory
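For readers curious about the game-theoretic origin mentioned above: SHAP builds on Shapley values from cooperative game theory, which attribute a coalition's total payoff fairly across its players. The following is a minimal, stdlib-only sketch of the exact Shapley value formula applied to a hypothetical toy game (it is an illustration of the underlying idea, not the SHAP library or its optimized estimators):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a small cooperative game.

    players: list of hashable player ids
    value: function mapping a frozenset of players to that coalition's payoff
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # Average player i's marginal contribution over all coalitions
        # of the other players, weighted by |S|!(n-|S|-1)!/n!.
        for size in range(n):
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value(s | {i}) - value(s))
        phi[i] = total
    return phi

# Hypothetical toy game: the payoff is simply the coalition's size,
# so every player contributes exactly 1 and should get a Shapley value of 1.
vals = shapley_values(["a", "b", "c"], lambda s: len(s))
print(vals)  # {'a': 1.0, 'b': 1.0, 'c': 1.0}
```

In SHAP, the "players" become a model's input features and the payoff becomes the model's prediction, which is what makes the method model-agnostic; the exact enumeration above is exponential in the number of players, which is why the SHAP paper introduces efficient approximations.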
Jennifer is the Lead Data Scientist at Sun Life’s Canadian Analytics Centre of Excellence, helping the company build intelligent data solutions to better serve its clients. Her past experience in the field includes The Globe and Mail, Scribd and Slyce. She holds a Master’s in Machine Learning from University College London and a B.Math from the University of Waterloo. Jennifer is a strong proponent of gender diversity in her field and partners with the University of Waterloo to support young women pursuing careers in STEM.