Interpretable Machine Learning - Fairness, Accountability and Transparency in ML systems

sayakpaul


11 Votes

Description:

The good news is that building fair, accountable, and transparent machine learning systems is possible. The bad news is that it's harder than many blogs and software package docs would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!

This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining the following viable techniques for debugging, explaining, and testing machine learning models:

  • Model visualizations, including decision tree surrogate models, individual conditional expectation (ICE) plots, partial dependence plots, and residual analysis (see the sketch after this list).
  • Reason code generation techniques such as LIME, Shapley explanations, and Tree-interpreter.
  • Sensitivity analysis.

Plenty of guidance on when, and when not, to use these techniques will also be shared, and the talk will conclude with guidelines for testing the generated explanations themselves for accuracy and stability.
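To make the first bullet concrete, here is a minimal, illustrative sketch (not taken from the talk materials or the linked repository) of partial dependence / ICE plots and a decision tree surrogate. It assumes scikit-learn 1.0 or later for PartialDependenceDisplay; the dataset and feature names are arbitrary choices for the example.

```python
# Minimal sketch: PDP/ICE plots and a global surrogate tree for a boosted model.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=42).fit(X, y)

# ICE curves (one line per sampled row) with the partial dependence curve
# overlaid; kind="both" requests both, subsample limits the number of ICE lines.
PartialDependenceDisplay.from_estimator(
    model, X, features=["MedInc", "AveOccup"], kind="both", subsample=50
)
plt.show()

# Global surrogate: fit a shallow, human-readable tree to the complex model's
# own predictions and inspect its rules as an approximate explanation.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=42)
surrogate.fit(X, model.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))
```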

Prerequisites:

Basic familiarity with machine learning concepts.

Content URLs:

The demo deck is currently being prepared. The repository can be found here: https://github.com/sayakpaul/Benchmarking-and-MLI-experiments-on-the-Adult-dataset

Outline of the talk:

  • What is Machine Learning Interpretability?
  • Why should you care about Machine Learning Interpretability?
  • Why is Machine Learning Interpretability difficult?
  • What is the Value Proposition of Machine Learning Interpretability?
  • How can Machine Learning Interpretability be practiced? (several examples)
  • Can Machine Learning Interpretability be tested? (general recommendations and tool-based observations; a simple check is sketched after this outline)
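As a very rough illustration of what "testing" an explanation can mean, here is a hypothetical helper (an assumption for this proposal, not something taken from the talk or from any library): it explains an instance and a slightly perturbed copy of it, then compares the top-ranked features from the two explanations.

```python
# Hypothetical stability check: do small input perturbations change the top-k
# features an explainer reports for a single prediction?
import numpy as np

def top_feature_overlap(explain_fn, x, noise_scale=0.01, k=5, seed=0):
    """explain_fn maps a 1-D row to a list of (feature_name, weight) pairs,
    e.g. what LIME's explanation.as_list() returns."""
    rng = np.random.default_rng(seed)
    x_perturbed = x + rng.normal(scale=noise_scale * (np.abs(x) + 1e-8), size=x.shape)

    def top_k(pairs):
        return {name for name, _ in sorted(pairs, key=lambda p: -abs(p[1]))[:k]}

    a, b = top_k(explain_fn(x)), top_k(explain_fn(x_perturbed))
    return len(a & b) / len(a | b)  # Jaccard overlap: 1.0 means identical top-k
```

With LIME, for instance, `explain_fn` could wrap `explain_instance(...).as_list()`; an overlap close to 1.0 suggests the explanation is stable under small input perturbations.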

Preview video: https://www.loom.com/share/d3607487a04e4f71b4dbdc77f03dba3a

And this is the learning outcome of this talk:
By the end of the session, the attendees will have a clear idea of the importance of fairness, accountability, and transparency in machine learning and how it holds up in real-world scenarios. They will also get to see some real examples justifying the importance of interpretability of ML systems. They will get to know about some of the tools used in this regard (such as LIME, Shapley values, etc.); an illustrative LIME sketch follows.
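Here is a minimal LIME sketch of the kind of tool mentioned above (it assumes the `lime` package is installed; the dataset and model are placeholders, not the Adult-dataset experiments from the repository linked earlier):

```python
# Minimal reason-code sketch with LIME on a stand-in dataset and model.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=42).fit(data.data, data.target)

# LIME fits a simple local model around one prediction and reports which
# features pushed that single prediction up or down ("reason codes").
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # [(feature condition, local weight), ...]
```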

My previous slide decks can be checked here: https://github.com/sayakpaul/TalksGiven

I blog on a daily basis. All of my blog posts can be found here: https://sites.google.com/view/spsayakpaul#h.p_3NSyRc-OMiTm

Speaker Info:

In my current role at DataCamp, I develop projects for DataCamp Projects. I am also responsible for creating exercises for DataCamp Practice. I previously worked at TCS Research and Innovation (TRDDC) as a developer in the field of data privacy. Prior to that, I worked as a Web Services Developer at TCS (Kolkata area) for a US information major in the field of communication and media interface. I am also working with Dr. Anupam Ghosh and my beloved college juniors on machine learning research and tinkering. Currently, we are working on the application of machine learning to phonocardiogram classification. Recently, I became an Intel Software Innovator.

My interests broadly lie in areas like Machine Learning Interpretability and Full-Stack Data Science. I aspire to a career in Data Science where I can interpret models and communicate the results effectively.

Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Intermediate
Last Updated:

Hello Sayak,

Thanks for the proposal. Yours is the first proposal for the 2019 edition of PyCon India.
You'll hear from us soon as we start with the review process.

Regards,
Abhishek

Abhishek Yadav (~zerothabhishek)

Thank you :)

sayakpaul

Nice topic @sayak - Wish you good luck!

amrrs

Glad you liked the proposal. :)

sayakpaul

Great topic, all the best. Looking forward to seeing this at the conference.

desmond00

Hello Sayak,

We have put together a set of best practices for proposals - please take a look. Your proposal is already fairly detailed, and it will be great if you add the outline of the talk, the slides, and a two-minute preview video.

Regards,

Abhishek Yadav (~zerothabhishek)

Hello Abhishek. The outline is already mentioned. I will add the preliminary slides and a two-minute demo video. Thank you for mentioning it.

sayakpaul

Hi Abhishek. I updated the proposal with a two-minute preview video. However, my schedule is a bit tied up, so preparing the slides within this month would be a problem for me since there is a lot to cover, and briefly at that. It will take some time.

sayakpaul

Thanks Sayak.
We have some time for the slides - I'm hoping to get started with expert reviews sometime in early May. If you can get them ready by then, that would be okay.

Abhishek Yadav (~zerothabhishek)
