Optimizing Deep Convolutional Neural Networks for Speed and Performance





In this talk, I will focus on techniques to run DCNNs as efficiently as possible, with three goals in mind:

1) Decrease running time on CPU
2) Decrease running time on GPU
3) Increase performance in case of very little data

I will teach a few strategies that allow neural networks to train and run much faster on CPUs without compromising accuracy. Some of them involve changing the neural network itself, while others focus on using tools to enhance performance. I have run very deep neural networks in real time on CPUs using some of the techniques that will be presented.
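As a hedged illustration of the "change the network itself" kind of strategy (this is one common trick, not necessarily the exact one from the talk), replacing a standard convolution with a depthwise-separable one cuts the weight count dramatically, which is a large part of why such networks run faster on CPUs. A quick parameter count shows the effect:

```python
# Hypothetical example: parameter counts for a standard vs. a
# depthwise-separable convolution layer (biases ignored). This is one
# common way to make a network cheaper on CPU, shown for illustration.

def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution: c_in * c_out * k * k."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (c_in * k * k weights) followed by a
    1x1 pointwise convolution (c_in * c_out weights)."""
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    standard = conv_params(128, 128, 3)             # 147456 weights
    separable = separable_conv_params(128, 128, 3)  # 17536 weights
    print(standard, separable, round(standard / separable, 1))  # ~8.4x fewer
```

The same swap is available in Keras as the `SeparableConv2D` layer, so the change is often a one-line edit to the model definition.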

The latter half of the talk will focus on increasing performance when there is very little data. I have personally achieved accuracies above 90% with deep neural networks when only a few hundred training images were available. I'll share some of those intuitions in this talk, covering concepts such as initialization, normalization, tweaking the learning rate, regularization, and when (and when not) to fine-tune.
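As one concrete example of the learning-rate tweaking mentioned above (a minimal sketch with assumed values; the schedule used in the talk may differ), a step-decay schedule can be written as a plain function and, in Keras, plugged into training via the `keras.callbacks.LearningRateScheduler` callback:

```python
# Minimal sketch of a step-decay learning-rate schedule. The base rate,
# drop factor, and drop interval here are assumed values for illustration.
# With Keras, pass it to training as:
#   model.fit(..., callbacks=[keras.callbacks.LearningRateScheduler(step_decay)])

def step_decay(epoch, base_lr=0.01, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return base_lr * (drop ** (epoch // epochs_per_drop))

if __name__ == "__main__":
    for epoch in (0, 9, 10, 20):
        print(epoch, step_decay(epoch))  # 0.01, 0.01, 0.005, 0.0025
```

Dropping the rate in steps like this lets the network take large steps early and settle into a minimum later, which tends to matter more when the training set is small.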

The whole talk will focus on using Python to run deep neural networks with Keras (on a Theano backend), one of the most popular deep learning libraries, used widely by amateurs and professionals alike.

This will not be a purely theoretical talk. I'll demonstrate how changing certain parameters (variables in Keras) affects performance in terms of speed, accuracy, and model size.


Necessary:

1) An understanding of how DCNNs work

2) A background on Machine Learning

Recommended:

1) Knowledge of Keras, since the demos will use it. However, the concepts remain the same across frameworks.

Content URLs:

Links to previous talks/workshops I have given:

1) https://docs.google.com/presentation/d/1-Jm7Fx5kFFe67iqUK6NdP8ebDbzC7PZ9OAeQDoAvZ2M/edit?usp=sharing

2) https://docs.google.com/presentation/d/1D-7FNAXVpZffS9ph1nSUsHtwUQAxomasJ_a7ATfgaX8/edit?usp=sharing

Speaker Info:

Arush Kakkar is a co-founder at Agrex.ai, an AI-based video analytics platform, and has a wide variety of experience with Deep Learning across domains such as medical imaging, NLP, and object detection and tracking. He has worked on adding intelligence to self-driving cars and drones (with the team that currently handles Amazon's drone program, AVG, TU Graz). He was also the team lead of the Solar Car team at Delhi Technological University and is the author of a book on the Raspberry Pi, "Raspberry Pi by Example". As a personal project, he is also building a self-driving car that uses end-to-end Deep Learning to navigate without any hard-coded algorithms.

He is also part of the "25 Under 25" program by Campus Diaries in the field of Science and Tech.

Speaker Links:






Section: Scientific Computing
Type: Talks
Target Audience: Intermediate
Last Updated: