Explaining Convolutional Neural Networks using Class Activation Maps

Soumya (~aymuos15)



Description

Convolutional Neural Networks (CNNs) are the first tool that crosses anyone's mind when tackling a vision problem. Yet understanding why the network produces the results it does remains largely incomprehensible. This may not seem important when recognizing handwritten digits, but it is of utmost importance in sensitive environments such as disease detection and autonomous vehicles.


  • The problem: How can one understand the inner workings of convolutional neural networks?
  • The solution: Class Activation Maps (a minimal sketch follows this list)
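
As a taste of the content, here is a minimal sketch of the original CAM computation, assuming an architecture that ends in global average pooling followed by a single fully connected layer. The model, layer name, and random input are illustrative stand-ins, not the talk's exact code:

```python
# Minimal CAM sketch: weight the final conv block's feature maps by the
# fully connected weights of the target class. All names below are
# illustrative; pretrained weights would be loaded in practice.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()

features = {}
# Capture the (1, C, H, W) activations of the last convolutional block.
model.layer4.register_forward_hook(lambda m, i, o: features.update(maps=o))

x = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed image
logits = model(x)
cls = logits.argmax(dim=1).item()    # class to explain

w = model.fc.weight[cls]             # (C,) weights of the target class
cam = torch.einsum("c,chw->hw", w, features["maps"][0])
cam = F.relu(cam)                    # keep positive evidence only
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
```

This construction only works for architectures ending in global average pooling plus a linear layer, which is precisely the restriction Grad-CAM removes.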

Who is the talk for?

  • Anyone who has worked with CNNs before
  • Anyone who wants a way to debug CNNs and uncover biases present in their solutions
  • Anyone who works with vision problems
  • Anyone who loves machine learning

Outline

  • What are CNNs and why do we need them to be explainable? [4 Minutes]
  • Why Class Activation Maps? [3 Minutes]
  • What are Class Activation Maps? [3 Minutes]
  • How do we know if we're in the right direction? (Metrics to measure CAMs) [1 Minute]
  • Grad-CAM and Grad-CAM++: methods which generalise CAMs (see the sketch after this outline) [3 Minutes]
  • Smooth Grad-CAM++ [2 Minutes]
  • Problems with gradient-based CAMs [2 Minutes]
  • Score-CAM (a non-gradient-based CAM) [2 Minutes]
  • What we've done to improve on Score-CAM: SS-CAM [4 Minutes]
  • Questions
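
As a reference point for the Grad-CAM part of the outline, here is a minimal sketch: Grad-CAM generalises CAM to arbitrary architectures by weighting each feature map with the spatially averaged gradient of the class score with respect to that map. The model, hooks, and input are again illustrative assumptions:

```python
# Minimal Grad-CAM sketch (illustrative; not the talk's exact code).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()     # pretrained weights in practice

acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(maps=o))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: grads.update(maps=go[0]))

x = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed image
score = model(x)[0].max()            # score of the top class
model.zero_grad()
score.backward()                     # populates grads["maps"]

alpha = grads["maps"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1) weights
cam = F.relu((alpha * acts["maps"]).sum(dim=1))       # (1, H, W) map
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Score-CAM, covered towards the end, replaces these gradient weights with the class scores of activation-masked copies of the input, sidestepping noisy or saturated gradients.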

All topics will be demonstrated with an XDeep implementation, except SS-CAM.

Details

The increasing complexity of convolutional networks has helped develop state-of-the-art solutions to hard vision problems. Yet on several occasions a model may begin to misclassify, and even when it successfully completes the desired task, there is no way of knowing whether it is truly identifying the desired objective. There is also the problem of data bias, and the debugging that follows to reduce that bias and improve the model's performance.

An explainable model gives the user confidence in its findings. The generated activation maps reveal the CNN's intuition and help in debugging: one can see exactly where the model is going wrong. This added confidence eases deployment in sensitive environments, where every output needs to be accounted for.

Attendees will learn how exactly a CNN works and what it "sees" while performing various tasks. The code demonstrations will also help them apply these techniques in their own tasks and research endeavours.
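
The demonstrations build up to heatmap overlays of the kind shown in the slides. A minimal sketch, with a random image and activation map standing in for real data:

```python
# Overlay a coarse class activation map on the input image.
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F

img = np.random.rand(224, 224, 3)    # stand-in for a preprocessed RGB image
cam = torch.rand(7, 7)               # stand-in for a computed activation map

# Upsample the coarse map to the image resolution.
heat = F.interpolate(cam[None, None], size=img.shape[:2],
                     mode="bilinear", align_corners=False)[0, 0]

plt.imshow(img)
plt.imshow(heat.numpy(), cmap="jet", alpha=0.4)  # translucent heatmap
plt.axis("off")
plt.savefig("cam_overlay.png")
```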

Takeaways

  • A knowledge base to start exploring explainable CNN models
  • A concrete understanding of the intuitions followed by CNNs

Prerequisites:

  1. A basic knowledge of the layers present in a convolutional neural network
  2. A basic knowledge of coding convolutional neural networks using PyTorch
  3. College-level linear algebra

Video URL:

https://drive.google.com/file/d/1ECZt_zDF2UhMerl9WbSRoRxGmN_4Hp8Y/view?usp=sharing

Content URLs:

https://arxiv.org/abs/2006.14255

https://github.com/datamllab/xdeep - for implementation purposes.

https://docs.google.com/presentation/d/1LyABbGoYFe1x4OkJ0meHB78rNDNt-uLPwKCd_CwTcIA/edit?usp=sharing - PPT Slides

Speaker Info:

Soumya Snigdha Kundu is a CSE undergrad at SRM Institute of Science and Technology. His research interests lie in the field of deep learning and machine vision. He is currently interning with a few professors in the field of machine vision to improve his subject-matter knowledge. His current goal is to secure admission for his higher education. He also loves playing football and watching anime.

Speaker Links:

https://www.linkedin.com/in/soumya-snigdha-kundu-84b812183/

Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Intermediate
Last Updated: