Seeing Is Believing - Visualizing Convolutional Neural Networks

Shruti Ganapathhy Subramanian (~shruti87)


Ever wondered what kind of filters convolutional neural networks learn? Ever wondered what they “see”?

Ever felt like you are working with a “black box” and had no idea what is going on within?

To understand your neural networks better, you sometimes have to put yourself in their shoes and see what they see, and visualizations help you do just that. They can be leveraged to understand the features your model has learned and, ultimately, to interpret its results. Not only will this help you build better models, it can also help you tailor your training data to better suit your use case.

In this workshop, we will explore various state-of-the-art visualization techniques used by deep learning practitioners around the world. In a hands-on session, you'll learn the intuition behind each technique and implement the topics outlined below.

Outline + Time break-up:

  1. Introduction to CNN Visualizations - 5 min
    • Understanding the intuition behind visualizations and why they are needed
  2. Hands on session - Understanding the theory and implementing the following: - 2 hrs
    1. Visualizing filters and activation maps - 15 min
    2. Deconvolution - 30 min
    3. Class Activation Maps - 30 min
    4. Saliency Maps - 30 min
    5. t-SNE - 15 min
  3. Using visualization in your daily CNN tasks - 15 min
  4. Q&A and comparing notes - 10 min
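As a taste of the first hands-on topic, here is a minimal NumPy sketch of how learned convolutional filters are prepared for display: each filter is min-max scaled to [0, 1] so it can be rendered as a small image tile. The function name `filters_to_grid` is illustrative, not part of any library.

```python
import numpy as np

def filters_to_grid(weights):
    """Normalize each conv filter to [0, 1] so it can be shown as an image.

    weights: array of shape (out_channels, in_channels, k, k)
    Returns an array of the same shape with each filter min-max scaled.
    """
    w = weights.astype(np.float64)
    flat = w.reshape(w.shape[0], -1)
    mins = flat.min(axis=1).reshape(-1, 1, 1, 1)
    maxs = flat.max(axis=1).reshape(-1, 1, 1, 1)
    return (w - mins) / (maxs - mins + 1e-8)

# Random weights stand in for a trained model's first conv layer.
rng = np.random.default_rng(0)
filters = rng.normal(size=(8, 3, 3, 3))
grid = filters_to_grid(filters)
```

In the workshop, `filters` would come from a trained model (e.g. the first layer of a Keras or PyTorch network), and the scaled tiles would be arranged in a grid with matplotlib.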
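The Class Activation Maps segment can also be previewed in a few lines. Assuming a network that ends in global average pooling followed by a single linear layer (the setting in which plain CAM applies), the heatmap for a class is just a weighted sum of the last conv layer's feature maps, weighted by that class's linear weights. The shapes below are illustrative placeholders.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM for a network ending in global average pooling + one linear layer.

    feature_maps: (K, H, W) activations of the last conv layer
    fc_weights:   (num_classes, K) weights of the final linear layer
    Returns an (H, W) heatmap of spatial evidence for the chosen class.
    """
    # Contract the K channels against the class's K weights.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    return np.maximum(cam, 0.0)  # keep only positive evidence for display

# Toy activations and weights stand in for a trained model's tensors.
rng = np.random.default_rng(1)
fmaps = rng.normal(size=(16, 7, 7))
fc = rng.normal(size=(10, 16))
heatmap = class_activation_map(fmaps, fc, class_idx=3)
```

For display, the heatmap is typically upsampled to the input image's resolution and overlaid on it.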
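Saliency maps, too, reduce to one idea: the gradient of a class score with respect to the input pixels tells you which pixels matter. As a self-contained sketch (a toy one-hidden-layer ReLU network with the gradient written out by hand, rather than a real CNN with autograd):

```python
import numpy as np

def saliency(W1, W2, x, class_idx):
    """Vanilla gradient saliency for a tiny one-hidden-layer ReLU network.

    f_c(x) = W2[c] @ relu(W1 @ x); the saliency of input i is
    |d f_c / d x_i|, computed here by hand with the chain rule.
    """
    h = W1 @ x
    mask = (h > 0).astype(x.dtype)       # derivative of ReLU
    grad = (W2[class_idx] * mask) @ W1   # chain rule back to the input
    return np.abs(grad)

# Toy weights and input stand in for a trained classifier and an image.
rng = np.random.default_rng(2)
W1 = rng.normal(size=(32, 64))   # hidden x input
W2 = rng.normal(size=(10, 32))   # classes x hidden
x = rng.normal(size=64)
sal = saliency(W1, W2, x, class_idx=0)
```

With a real CNN you would let the framework's autograd compute the same gradient and reshape it back into image form.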


Prerequisites:

  1. A basic understanding of Convolutional Neural Networks; preferably experience with training and evaluating models.
  2. Basic knowledge of math and linear algebra

Speaker Links:

We are part of the Computer Vision and Machine Learning team at Mad Street Den.
Connect with us at:

Check out our previous talks on Deep Learning as part of Women Who Code:

  1. An Introduction to Deep Learning
  2. Complex Model Structures

Id: 1392
Section: Data Science, Machine Learning and AI
Type: Workshop
Target Audience: Intermediate
Last Updated: