Model Optimization 101
sayakpaul
Description:
State-of-the-art deep learning models are often heavy in size, which makes them extremely challenging to deploy. Think of deploying to mobile devices, Raspberry Pis, and similar embedded hardware: their compute capabilities cannot support such heavyweight models. This talk is about optimizing deep learning models to make them eligible for deployment to embedded devices. We will cover techniques like pruning and quantization, and along the way we will discuss practical tips and tricks to make them work really well. These techniques not only help us develop lighter, faster, performant models but also make them more energy-efficient.
The outline of the talk is as follows:
- What is model optimization?
- Why should we care about it?
- Different areas of a model to optimize
- Mapping areas to optimization techniques
- Quantization
- Pruning
- Considerations & best practices
- Further directions and Q&A
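To give a flavor of the quantization item in the outline, here is a minimal NumPy sketch of affine int8 quantization, the scheme post-training quantization tools are built on. The talk itself uses TensorFlow Lite's tooling; `quantize_int8` and `dequantize` below are illustrative stand-ins, not TF Lite APIs:

```python
import numpy as np

def quantize_int8(weights):
    """Affine (asymmetric) quantization of float32 weights to int8.

    Returns the quantized tensor plus the scale and zero-point needed
    to recover the floats: w ~= scale * (q - zero_point).
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    # Ensure 0.0 is exactly representable so zero-padding stays zero.
    w_min, w_max = min(w_min, 0.0), max(w_max, 0.0)
    qmin, qmax = -128, 127
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate float32 weights."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
# Reconstruction error is bounded by roughly one quantization step.
max_err = np.abs(weights - recovered).max()
```

This is the trade-off the talk explores: each weight shrinks from 32 bits to 8 (a ~4x size reduction), at the cost of a small, bounded reconstruction error.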
Here's a reference deck I prepared on this topic: http://bit.ly/mo-101.
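Pruning, the other technique in the outline, can be sketched in the same spirit. The snippet below implements global magnitude pruning in plain NumPy; `magnitude_prune` is a hypothetical helper for illustration, not part of the TensorFlow Model Optimization Toolkit the talk uses:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity`
    fraction of the entries are zero (global magnitude pruning)."""
    k = int(weights.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(42)
w = rng.normal(size=(8, 8))
pruned = magnitude_prune(w, sparsity=0.5)
achieved = 1.0 - np.count_nonzero(pruned) / pruned.size
```

The resulting sparse weight tensors compress well and, with suitable kernels, run faster. In practice (as covered in the talk), sparsity is usually ramped up gradually during fine-tuning rather than applied in one shot like this.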
Prerequisites:
- Machine learning developers who have worked on image models (in Keras).
- Machine Learning Engineers looking for ways to optimize models for deployment purposes.
Video URL:
https://www.youtube.com/watch?v=u5_JorytAcI
Content URLs:
- Here's a repository that reflects my work done so far in the area of Model Optimization with TensorFlow Lite: https://github.com/sayakpaul/Adventures-in-TensorFlow-Lite.
- I have published a number of faster, lighter, and more efficient semantic segmentation models on TF Hub: https://tfhub.dev/s?publisher=sayakpaul.
- Here are the links to some of the decks I prepared on similar topics:
Speaker Info:
I am currently with PyImageSearch where I apply deep learning to solve real-world problems in computer vision and bring some of the solutions to edge devices. I am also responsible for providing Q&A support to PyImageSearch readers.
Previously, at DataCamp, I developed projects (here and here) and practice pools (here). Prior to DataCamp, I worked at TCS Research and Innovation (TRDDC) on data privacy, where I was part of TCS's critically acclaimed GDPR solution, Crystal Ball.
Outside of work, I enjoy writing technical articles and speaking at developer meetups and conferences. My interests broadly lie in areas like machine learning interpretability and full-stack data science.
Speaker Links:
- Personal website: https://sayak.dev/
- GitHub: https://github.com/sayakpaul/
- Twitter: https://twitter.com/RisingSayak
- LinkedIn: https://www.linkedin.com/in/sayak-paul/