Deep Learning fundamentals in Python: Implementing MLPs, Convnets, Recurrent Nets and Reinforcement Learning algorithms from scratch

muktabh mayank (~muktabh)




Writing a neural network is one thing; making its training fast is another. Ideally, programming a neural network should be neither hard (as in Theano) nor abstract (as in Keras). Yet we have to use these frameworks to speed up training, since they compile the code for the CPU/GPU. This talk is about implementing these neural networks without the aid of any framework (except numpy, or autograd at worst), so that we can strip away the complexity and see how the algorithms actually run. Bare-minimal code is free of framework abstractions, so we can focus on the algorithm rather than the framework.
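As a taste of the bare-minimal style the talk advocates, here is a sketch of training a single linear layer with a hand-derived gradient in plain numpy (the data, learning rate, and iteration count are illustrative choices, not code from the talk):

```python
import numpy as np

# Toy regression data: targets come from a known weight vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w

# Gradient descent with the MSE gradient written out by hand:
# d/dw mean((xw - y)^2) = 2 x^T (xw - y) / n
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    pred = x @ w
    grad_w = 2 * x.T @ (pred - y) / len(x)
    w -= lr * grad_w
```

Everything the frameworks hide, the forward pass, the gradient, and the update, fits in a handful of lines here; the talk scales this same pattern up to full networks.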

Here is what we will walk through:

  1. Implementing a Multi-Layer Perceptron using Numpy, highlighting stages like forward propagation, backpropagation, training, and testing.

  2. How is a Recurrent Neural Network different from a simple MLP (Multi-Layer Perceptron)?

  3. Implementing a Recurrent Neural Network to remember sequences.

  4. How do you backpropagate errors through a convolution? Using Numpy + Autograd to write a Convolutional Neural Network.

  5. How do Convnets work? The significance of Maxpool and ReLU.

  6. Introduction to Reinforcement Learning.

  7. Implementing a Reinforcement Learning Neural Network.
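As a flavour of the first item above, a minimal MLP with one hidden layer, trained on XOR with hand-derived backprop in plain numpy, might look like this (layer sizes, learning rate, and iteration count are illustrative choices, not the talk's actual code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

initial_loss = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

for _ in range(5000):
    # Forward propagation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagation: MSE gradients derived by hand via the chain rule,
    # using sigmoid'(z) = s(z) * (1 - s(z)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

final_loss = np.mean((out - y) ** 2)
```

The forward pass, the backward pass, and the parameter updates are each a few lines of matrix algebra; this is the structure the talk walks through before moving on to recurrent and convolutional variants.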


The audience should know how to program with numpy.

Speaker Info:

Muktabh is a cofounder of ParallelDots, a Gurgaon-based AI startup. He leads the R&D efforts at the firm and loves solving hard data problems. He has four years of experience in Machine Learning across various sectors. He is one of the most viewed authors on Quora on Deep Learning and other programming topics.

Speaker Links:

Section: Scientific Computing
Type: Talks
Target Audience: Intermediate
Last Updated:
