Resource Utilization as a Metric for Machine Learning
Akshay Bahadur (~akshaybahadur21)
The advent of machine learning, along with its integration with big data, has enabled users to efficiently develop solutions for innumerable use cases. A machine learning model consists of an algorithm that draws meaningful correlations from the data without being tightly coupled to a specific set of rules. It is crucial to explain the subtle nuances of the network along with the use case we are trying to solve. As technology has advanced, the quantity of data has grown, which in turn has increased the resources needed to process that data while building a model. The central question, however, is how to develop lightweight models while keeping the performance of the system intact. To connect the dots, we will talk about developing applications specifically aimed at providing equally accurate results without consuming many resources. This is achieved by using image processing techniques along with optimizing the network architecture.
The presentation will include code excerpts for the preprocessing and computer vision steps that filter unwanted background out of the data. Each excerpt will be followed by a demo of how the changes work in real time. For instance, I will take up a research paper by NVIDIA on behavioral cloning for self-driving cars. We can reduce the number of trainable parameters of the model proposed in the paper by roughly 50% by using an optimized CNN model, thus saving on training and prediction time (the total trainable parameters, as per the model described in the research paper, are 132,501; with my implementation, we only need to train 80,213 parameters). We will start by formulating and addressing a strong problem statement, followed by a thorough literature review. Once these are taken care of, we will discuss data gathering, followed by algorithm evaluation and future scope. During each demo, I will talk about the models and algorithms used, why the literature review is the most important phase of your project, and how contributing to the community ultimately helps you.
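To see why trimming a CNN pays off, recall that a convolutional layer's trainable parameters scale as (kernel height × kernel width × input channels + 1) × filters, so halving the filter count halves that layer's parameters. A quick sanity check in plain Python (the layer sizes below are illustrative only, not NVIDIA's actual architecture):

```python
def conv_params(kh, kw, in_ch, filters):
    # Each filter has kh*kw*in_ch weights plus one bias term.
    return (kh * kw * in_ch + 1) * filters

# Halving the filter count of a hypothetical 5x5 conv layer
# over 24 input channels halves its trainable parameters:
full = conv_params(5, 5, 24, 36)   # 21636
slim = conv_params(5, 5, 24, 18)   # 10818
```

The same arithmetic, applied layer by layer, is how one accounts for the drop from 132,501 to 80,213 parameters in the optimized model.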
- MNIST [10 mins]
- Autopilot (NVIDIA) [15 mins]
- Emojinator [15 mins]
- Malaria Detection [10 mins]
- Quick, Draw (Google) [15 mins]
Techniques for minimizing CPU resource usage
- Normalization of data (on the MNIST dataset, unnormalized data takes 371 µs/step at 22% accuracy, whereas normalized data takes 323 µs/step at 73% accuracy) [20 mins]
- Stripping channels from the images: instead of all 3 color channels, we can use only one, or use them separately to train the model. [10 mins]
- Hyperparameter tuning and how it affects the epoch training rate
- Rescaling/augmentation of the data. [10 mins]
- Designing filters to isolate the object/region of interest and remove excessive background noise. [20 mins]
- Using the fit_generator capability of Keras/TensorFlow: instead of loading the entire dataset at once, which might exhaust RAM, we can use multiprocessing to load data batch-wise at runtime. [20 mins]
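The normalization point above can be sketched minimally with NumPy (the timings and accuracies quoted are from the MNIST experiment; this snippet only shows the transformation itself):

```python
import numpy as np

# Raw MNIST-style images: uint8 pixel values in [0, 255].
images = np.random.randint(0, 256, size=(64, 28, 28), dtype=np.uint8)

# Scaling to [0, 1] keeps activations and gradients well-ranged,
# which is what speeds up convergence versus raw 0-255 inputs.
normalized = images.astype(np.float32) / 255.0
```

In Keras the same effect is usually obtained with a `rescale=1/255` argument to the data generator or a dedicated rescaling layer.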
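Channel stripping can likewise be sketched in a few lines — here collapsing RGB to a single channel with the standard luminosity weights, so the model sees a third of the input values (a minimal NumPy sketch; OpenCV's `cv2.cvtColor` does the same in practice):

```python
import numpy as np

# A batch of RGB frames: (N, H, W, 3), uint8.
frames = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)

# Collapse the 3 color channels into 1 using luminosity weights;
# the trailing axis is kept so the tensor still fits a conv layer.
weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
gray = (frames.astype(np.float32) @ weights)[..., np.newaxis]
# gray now has shape (4, 32, 32, 1)
```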
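For the rescaling/augmentation item, a minimal sketch of both ideas — downsampling to shrink the input the network must process, and a random flip-and-shift augmentation (real pipelines would typically use Keras' ImageDataGenerator or tf.image instead):

```python
import numpy as np

def augment(img, rng):
    """Random horizontal flip plus a small horizontal shift."""
    if rng.random() < 0.5:
        img = img[:, ::-1]            # horizontal flip
    shift = int(rng.integers(-2, 3))  # shift by -2..2 pixels
    return np.roll(img, shift, axis=1)

# Rescaling: downsampling by 2 in each dimension quarters the
# number of pixels fed to the network.
img = np.arange(64, dtype=np.float32).reshape(8, 8)
small = img[::2, ::2]

rng = np.random.default_rng(0)
aug = augment(img, rng)
```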
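The filter-design item can be illustrated with the crudest possible background remover — a brightness threshold mask. The hand-designed filters in the talk are more involved; this is only a sketch of the principle:

```python
import numpy as np

def mask_background(gray, threshold=50):
    """Zero out pixels at or below the threshold, keeping only the
    brighter region of interest."""
    mask = gray > threshold
    return gray * mask

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200          # bright "object" on a dark background
filtered = mask_background(img)
```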
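The batch-wise loading idea can be sketched as a plain Python generator of the shape `fit_generator` expects. Note that `load_image` here is a hypothetical stand-in for a real per-sample decoder, and the `(28, 28, 1)` shape is illustrative:

```python
import numpy as np

def load_image(path):
    # Hypothetical stand-in: a real loader would read and decode
    # the image file at `path`.
    return np.zeros((28, 28, 1), dtype=np.float32)

def batch_generator(paths, labels, batch_size=32):
    """Yield (X, y) batches on demand instead of materializing the
    whole dataset in RAM. In Keras this can be passed to
    model.fit_generator (or model.fit in newer versions)."""
    n = len(paths)
    while True:                       # Keras expects an endless generator
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            X = np.stack([load_image(p) for p in np.array(paths)[batch]])
            y = np.array(labels)[batch]
            yield X, y
```

With `workers > 1` and `use_multiprocessing=True`, Keras runs such loaders in parallel processes, which is the multiprocessing angle mentioned above.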
Target audience and outcome
This tutorial is aimed at machine learning practitioners with relevant experience in the field; a basic understanding of neural networks and image processing would be highly appreciated. By the end of the session, the audience will have a clearer understanding of building optimized vision-based models that can run on low-resource hardware. In a developing country like India, the crux of the problem lies in the heavy resources required for computation. With this tutorial, I want to share my insight on developing learning models frugally and efficiently.
- Basic understanding of, and coding experience in, Python
- Basic understanding of Machine Learning
- Basic concepts of image processing
Akshay Bahadur’s interest in computer science was sparked while he was working on a women’s safety application aimed at women’s welfare in India, and since then he has been incessantly tackling social issues in India through technology. He is currently working alongside Google on an Indian sign language recognition system (ISLAR), specifically aimed at running in low-resource environments in developing countries. His ambition is to make valuable contributions to the ML community and leave a message of perseverance and tenacity.
He is one of 8 Google Developer Experts (Machine Learning) from India, along with being one of 150 members worldwide of the Intel Software Innovator program.
- Invited by Google to present my research on ISLAR (Indian Sign Language) at Google headquarters in California.
- Received "The Most Influential Young Data Scientist of the Year 2019" by The International Society of Data Scientists for my contributions in the field of Machine Learning.
- Tutorial acceptance at the IEEE Winter Conference on Applications of Computer Vision (WACV 2020) - “Minimizing CPU utilization for Deep Networks”.
- Delegate at the 2020 Harvard College Conference in Cambridge, Massachusetts, USA.
- Awarded Top Innovator Award (2019) by Intel.
- Contributed to Google’s open-source project (Quick, Draw) and NVIDIA’s open source project (Autopilot).
- Presented my work along with the Google TensorFlow team at TensorFlow Roadshow (Bangalore).
Presenting author details
- GDE Summit, California 2019
- Data Hack Summit 2019
- GDG DevFest Kolkata 2019
- Open Data Science Conference (ODSC), India 2019
- Indian Institute of Science, 2019
- Open Data Science Conference (ODSC), Boston 2019
- Open Data Science Conference (ODSC), India 2018
- DeepCognition Workshop
- Institute of Analytics [Part 1] [Part 2]
- Microsoft Advanced Analytics User group