Towards a more transparent AI - Decrypting ML models using LIME

LAISHA WADHWA (~laisha77)





With the wide range of libraries and frameworks available for building ML models, machine learning has become a black box these days. Model explainability is therefore vital, but it is hard to describe a model's decision boundary in simple terms. With LIME it's easy to produce faithful explanations and decrypt almost any ML model. This talk introduces the LIME and SHAP libraries and shows how they make interpreting model outputs easy.

The talk will revolve around the current state of explainability in AI and what we can do about it.


Origin [5 minutes]

  • Speaker introduction
  • Introduction of the talk
  • Need and current state of explainable AI

Theory [10 minutes]

  • How do we interpret models currently
  • How do LIME and SHAP work
  • ML algorithmic tradeoff
  • Traditional approaches vs LIME and SHAP
  • Graphical explanation of locality aware loss
  • Features of LIME

Demo/Hands-on [10 minutes]

  • Live-code an image classifier and show the functioning of LIME.
  • Play with different classes of LIME to show model agnostic behavior with text and tabular data.
  • Which LIME class to use when?

Conclusion [5 minutes]

  • Pros and Cons of using LIME
  • Key takeaways
  • Q & A

Why Interpretability in AI?

The most common question all ML enthusiasts have is: why was this prediction made, or which variables caused the prediction?

  • While model validation tricks like cross-validation and grid search only answer this question from the perspective of the entire data set, feature importance explains it at the data-set level: which features are important in predicting the target.
  • Feature importance lets you verify hypotheses and check whether the model is overfitting to noise, but it is hard to diagnose specific model predictions with it.

From the business perspective:

  • In today's business-centric world, there is a renewed focus on model interpretability. With ML finding multiple use cases for elevating businesses, it has become vital to interpret the model, build trust in it (because there's money at stake!), and understand how it behaves on any given data.
  • For companies building smart AI assistants for healthcare and assistive apps for budget planning and other daily chores, it is very important to build potential customers' trust in what they do and how they do it. Explanations are MORE CRITICAL to the business than PERFORMANCE.
  • If we think about it, what good is a high-performance model that predicts employee churn if we can't tell which features are causing people to quit? We need explanations to improve business decision making, not just performance.
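To make the distinction concrete, here is a minimal sketch (using scikit-learn's iris data purely as a stand-in) of global feature importance: it tells you which features matter over the whole data set, but cannot explain any single prediction.

```python
# Global feature importance: one score per feature, aggregated over
# the entire data set -- it answers "which features matter overall?",
# not "why did the model make *this* prediction?".
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(data.data, data.target)

for name, score in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {score:.3f}")
```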

Content (Theory)

LIME: Local Interpretable Model-agnostic Explanations.

How many times have we all been stuck when our model performs well for some labels but fails for spurious data?

Let's say you are trying to predict whether a person with a certain credit score can repay a loan if approved. You trained a model that predicts well for most of the data points but fails for some. In other words, we cannot understand its learning or figure out its spurious conclusions. In such cases, we are left wondering: what went wrong during training? That's where the magical tool LIME comes into the picture. I'll be introducing a Python library called LIME, which tackles model interpretability by producing locally faithful explanations, thus explaining the decision boundaries of our model in a human-understandable form.

I'll be giving real-life examples that are visually easy to interpret and comprehend (for text and images, including CNN networks). In the era of Deep Learning and Machine Learning, all industries from healthcare to computer vision face a common problem: explainable AI. The LIME library is model agnostic and serves use cases across text, images, and tabular data. LIME is an ideal model explainer. It uses a representation that humans understand, irrespective of the actual features used by the model. This is called an interpretable representation. The interpretable representation varies with the type of data we work with. For instance:

  • Text: It represents the presence/absence of words.
  • Image: It represents the presence/absence of superpixels (contiguous patches of similar pixels).
  • Tabular data: It is a weighted combination of columns.

  • I'll be comparing the traditional methodologies (EDA, evaluation metrics, t-SNE plots, etc.) used for understanding a model's output with how LIME segments regions in an image to highlight what contributed to a certain prediction.

  • The mathematical formulation of the locality-aware loss will be covered through the example of a wolves-and-dogs classifier.
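For reference, the objective from the original LIME paper (Ribeiro et al., 2016): the explanation is the interpretable model $g$ that minimizes a locality-aware loss plus a complexity penalty $\Omega(g)$,

```latex
\xi(x) = \operatorname*{arg\,min}_{g \in G} \; \mathcal{L}(f, g, \pi_x) + \Omega(g)
```

where the locality-aware loss weights perturbed samples $z$ by their proximity $\pi_x(z) = \exp(-D(x,z)^2/\sigma^2)$ to the instance being explained:

```latex
\mathcal{L}(f, g, \pi_x) = \sum_{z, z' \in \mathcal{Z}} \pi_x(z)\,\bigl(f(z) - g(z')\bigr)^2
```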

  • During the introduction of LIME, the following classes will be covered: LimeTextExplainer, LimeImageExplainer, and LimeTabularExplainer.

  • In order to build trust in our model, we tend to run multiple cross-validations and perform hold-out set validation. Though these simulations give an aggregated view of model performance over unknown data, they don't help in understanding why some predictions are correct while others are wrong, nor can we trace our model's decision path.


  1. Digit classification and cat–dog classification using LIME: understanding why we get false positives, and decrypting the decision boundary of image classifiers.
  2. Text analysis using LIME: you'll be working with StackOverflow question and tag data. After training a logistic regression model, you'll see how much each term contributes to the predicted tag. The demos will be available on Google Colab as prefabricated notebooks; all material and prerequisites will be released through a GitHub repository.

Who should attend it?

Whether you are a novice, have some experience, or are an expert in ML, LIME is a tool for everyone. It explains a prediction so that even non-experts can compare and improve an untrustworthy model through feature engineering. LIME is an ideal model explainer for anyone: a data scientist, a business analyst, or a researcher.

Key Takeaways

  • At the end of the talk, you'll know how to explain your model to almost anyone with a few lines of code! This makes it easier for you to sell your business idea.
  • You'll be equipped to create model-agnostic, locally faithful explanations for any kind of ML/DL model, be it an image classifier, a text classifier, or a regression algorithm.
  • You'll learn about the classes available to build a model interpreter of your own, and how to explore further through the open-source codebase.


Prerequisites

  • Python Basics
  • Basics of ML

Video URL:

Speaker Info:

I am a Data Engineer based in India. I have been working with Python for over 3 years now, and I am a big-time Machine Learning aficionado. In the past few years I have worked on Computer Vision and Music Analysis related research. At work I build applications at scale; outside of it I build AI- and ML-based applications for social good. I love participating in hackathons: I am a multiple-hackathon winner (Microsoft AI hackathon, Sabre Hack, Amex AI hackathon, Icertis Blockchain and AIML hackathon, Mercedes Benz Digital Challenge), and people often call me "The Hackathon Girl". As a tech enthusiast, I enjoy sharing my knowledge and work with the community. I am a tech speaker (Pyconf Hyd 2019), tech blogger, podcast host, hackathon mentor at MLH hacks, technical content creator at Omdena, and Global Ambassador at Women.Tech Network. I believe in hacking my way through life, one bit at a time.

Speaker Links:

Github: laishawadhwa

LinkedIn: laisha-wadhwa

Twitter: laishawadhwa

Medium: laisha.w16_85978


Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Intermediate
Last Updated: