Interpretable Machine Learning - Fairness, Accountability and Transparency in ML systems
The good news is that building fair, accountable, and transparent machine learning systems is possible. The bad news is that it’s harder than many blogs and software package docs would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are still very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!
This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining the following viable techniques for debugging, explaining, and testing machine learning models:
- Model visualizations including decision tree surrogate models, individual conditional expectation (ICE) plots, partial dependence plots, and residual analysis.
- Reason code generation techniques like LIME, Shapley explanations, and treeinterpreter.
- Sensitivity analysis.

Plenty of guidance on when, and when not, to use these techniques will also be shared, and the talk will conclude with guidelines for testing the generated explanations themselves for accuracy and stability.
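To make the first technique above concrete, here is a minimal sketch of a decision tree surrogate model using scikit-learn. The dataset, model choices, and parameter values are illustrative assumptions, not material from the talk: a shallow, human-readable tree is trained to mimic a "black-box" model's predictions, and its fidelity (agreement with the black box) is measured.

```python
# Minimal decision tree surrogate sketch (illustrative settings, assuming
# scikit-learn is installed): approximate a black-box model with a shallow tree.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data and a complex "black-box" model
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate is fit on the black box's *predictions*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple surrogate agrees with the black box
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))

# Human-readable rules that approximate the black-box model's behavior
print(export_text(surrogate))
```

The key design choice is training on the black box's outputs rather than the original labels: the surrogate then explains the model, not the data, and the fidelity score tells you how trustworthy that approximate explanation is.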
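Sensitivity analysis, also listed above, can be sketched with nothing beyond scikit-learn. The helper function, perturbation size, and dataset below are illustrative assumptions: each feature is shifted by a small amount and the mean absolute change in the model's predictions is recorded as a crude sensitivity score.

```python
# Minimal sensitivity-analysis sketch (illustrative helper and settings,
# assuming scikit-learn is installed): perturb each feature and observe
# how much the model's predictions move.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def sensitivity(model, X, feature, delta):
    """Mean absolute prediction change when `feature` is shifted by `delta`."""
    X_shift = X.copy()
    X_shift[:, feature] += delta
    return np.mean(np.abs(model.predict(X_shift) - model.predict(X)))

# One score per feature: larger means predictions react more to that feature
scores = [sensitivity(model, X, j, delta=0.5) for j in range(X.shape[1])]
```

A model whose predictions swing wildly under tiny, plausible perturbations is a red flag for stability, which is exactly the property the closing section of the talk suggests testing explanations against.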
Prerequisite: basic familiarity with machine learning concepts.
The demo deck is currently being prepared. The repository can be found here: https://github.com/sayakpaul/Benchmarking-and-MLI-experiments-on-the-Adult-dataset
Outline of the talk:
- What is Machine Learning Interpretability?
- Why should you care about Machine Learning Interpretability?
- Why is Machine Learning Interpretability difficult?
- What is the Value Proposition of Machine Learning Interpretability?
- How can Machine Learning Interpretability be practiced? (several examples)
- Can Machine Learning Interpretability be tested? (General recommendations and tool-based observations)
Preview video: https://www.loom.com/share/d3607487a04e4f71b4dbdc77f03dba3a
The learning outcomes of this talk:
By the end of the session, attendees will have a clear idea of the importance of fairness, accountability, and transparency in machine learning and how these hold up in real-world scenarios. They will also see real examples justifying the importance of interpretability in ML systems, and will get to know some of the tools used in this regard (such as LIME, Shapley explanations, etc.).
My previous slide decks can be checked here: https://github.com/sayakpaul/TalksGiven
I blog on a daily basis. All of my blog posts can be found here: https://sites.google.com/view/spsayakpaul#h.p_3NSyRc-OMiTm
In my current role at DataCamp, I develop projects for DataCamp Projects and create exercises for DataCamp Practice. I previously worked at TCS Research and Innovation (TRDDC) as a developer in the field of Data Privacy. Before that, I worked as a Web Services Developer at TCS (Kolkata area) for a major US information company, in the field of Communication and Media Interface. I am also working with Dr. Anupam Ghosh and my beloved college juniors on machine learning research and tinkering; currently, we are working on applying machine learning to phonocardiogram classification. Recently, I became an Intel Software Innovator.
My interests broadly lie in areas like Machine Learning Interpretability and full-stack Data Science. I aspire to a career in Data Science where I can interpret models and communicate the results effectively.