Interpret Computer Vision Models in Python

Harshavardhan T (~harshavardhan07)


Description:

We have a vast number of architectures/backbones for image classification, object detection, 3D pose estimation, keypoint detection, inpainting, self-supervised tasks, etc. Most models are judged by accuracy metrics such as precision and recall on the validation and test sets. While accuracy scores are important to note, they do not explain the reasoning behind a model's decisions. Model interpretability gives us the ability to understand the model at multiple levels. We will show experiments with different types of saliency maps, such as FullGrad, Integrated Gradients, Grad-CAM, and SmoothGrad, and how to use them to improve model performance.
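For context, here is a minimal sketch of the simplest of these techniques, a vanilla gradient saliency map, in PyTorch. The choice of ResNet-18, the "cat.jpg" filename, and torchvision >= 0.13 (for the weights argument) are illustrative assumptions, not details from the talk:

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Load a pretrained classifier; any torchvision backbone would do.
    model = models.resnet18(weights="IMAGENET1K_V1").eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # "cat.jpg" is a placeholder input image.
    img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
    img.requires_grad_()                    # track gradients w.r.t. input pixels

    logits = model(img)
    logits[0, logits.argmax()].backward()   # backprop the top class score

    # Vanilla saliency: max absolute gradient across colour channels.
    saliency = img.grad.abs().max(dim=1)[0].squeeze()   # (224, 224) heat map

Pixels with large gradient magnitude are the ones the class score is most sensitive to, which is the basic intuition the fancier methods build on.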

Talk Outline:

  • Why model interpretability - a practitioner's perspective - 3 mins
  • Gradient descent and visualizing ConvNets - 2 mins
  • Saliency maps overview and experiments (see the SmoothGrad sketch after this outline) - 10 mins
  • Evaluating saliency maps and producing a saliency score (self-made approach) - 2 mins
  • Using interpretability to build better models - 3 mins
  • Future steps - 2 mins
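The SmoothGrad variant mentioned above averages input gradients over several noise-perturbed copies of the image. A rough sketch, reusing the model and img from the earlier snippet; the sample count and noise level are illustrative defaults, not the speaker's settings:

    import torch

    def smooth_grad(model, img, target_class, n_samples=25, sigma=0.1):
        """Average input gradients over noisy copies of the image."""
        grads = torch.zeros_like(img)
        for _ in range(n_samples):
            # Perturb the input with Gaussian noise, then backprop as before.
            noisy = (img + sigma * torch.randn_like(img)).detach().requires_grad_()
            logits = model(noisy)
            logits[0, target_class].backward()   # gradient of the target class score
            grads += noisy.grad
        # Average the gradients, then collapse colour channels as before.
        return (grads / n_samples).abs().max(dim=1)[0].squeeze()

Averaging over noisy copies suppresses the high-frequency speckle that makes raw gradient maps hard to read, at the cost of n_samples extra forward/backward passes.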

Prerequisites:

  • Knowledge of deep learning for image classification
  • Basic understanding of backpropagation and ConvNets

Video URL:

https://youtu.be/_1vXm6MPMlg

Speaker Info:

Harshavardhan is a deep learning computer vision engineer at Toyota Connected. Over the years, he has applied computer vision techniques to build products in robotics, autonomous cars, infotainment systems, and medical devices. He believes that understanding how ML models work can help us build more reliable, simpler, faster, and better models.

Speaker Links:

https://www.youtube.com/watch?v=AruupR4MsOY (first speaker)
https://www.youtube.com/channel/UC1vgr9As8TdFkZy1k7jAYSg (YouTube channel)

He has also spoken at a few other public meetup events, but links are not available.

Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Intermediate
Last Updated: