Running TensorFlow Models on a $35 Device

Soham Chatterjee (~soham48)




tl;dr: As the volume of data you collect grows, it becomes important to move your models away from the cloud and to where your data is being generated, to reduce latency, improve security and save internet bandwidth.

This talk will be about how you can run trained TensorFlow models on Edge devices and how you can use Edge Computing accelerators like the Neural Compute Stick to make your models run even faster.

Long Version

There are a lot of very compelling reasons for shifting computation away from the cloud and onto the edge, the most important being latency. Here, latency refers to the time it takes to send data to a server and then receive the response. The few seconds of delay this causes might not be a problem for your smart home applications, but in an industrial setting, those few precious seconds, or even milliseconds, can cause a machine to break down or even take lives. Furthermore, many industrial processes happen in places where running an internet line may not be possible: a mine, for example. And even when an internet connection is possible, most companies are hesitant to send data over it and risk exposing that data to hackers, prompting them to keep their data in-house. Finally, if you have a lot of sensors, you will probably be streaming data on the order of gigabytes every hour. It does not make sense for companies to pay for the bandwidth to send that much data when most of it will be discarded anyway. Thus, it is important to shift all that computation to where the data is being generated.
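To make the bandwidth argument concrete, here is a quick back-of-the-envelope calculation. The sensor count, sample size and filtering ratio are illustrative assumptions, not figures from any real deployment:

```python
# Back-of-the-envelope: how much data a modest sensor array streams
# to the cloud, versus what survives after filtering at the edge.
# All figures below are illustrative assumptions.

SENSORS = 200                 # sensors on the factory floor (assumed)
SAMPLE_BYTES = 512            # bytes per reading (assumed)
SAMPLES_PER_SECOND = 100      # sampling rate per sensor (assumed)

bytes_per_hour = SENSORS * SAMPLE_BYTES * SAMPLES_PER_SECOND * 3600
gb_per_hour = bytes_per_hour / 1e9

# Suppose edge filtering keeps only anomalous windows, ~1% of the raw stream.
KEEP_RATIO = 0.01
gb_uploaded = gb_per_hour * KEEP_RATIO

print(f"raw stream:      {gb_per_hour:.1f} GB/hour")   # ~36.9 GB/hour
print(f"after filtering: {gb_uploaded:.2f} GB/hour")   # ~0.37 GB/hour
```

Even this modest setup streams tens of gigabytes per hour; filtering at the edge cuts the upload by two orders of magnitude.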

This talk will be about how to move your existing TensorFlow models to Edge devices like the Raspberry Pi. It will also introduce other Edge Computing hardware, like the Neural Compute Stick, that can make your models run even faster on a Raspberry Pi.

Why Attend this talk

This talk will give the audience an understanding of Edge devices and Edge Computing. You will also learn best practices for deploying models on the Edge. The live demos will give the audience an idea of how to run TensorFlow models on embedded devices.

Topics covered:

  • Edge Computing and Raspberry Pi - 5 Minutes
  • TensorFlow Models - 5 minutes
  • Demo on how to run models on the Edge - 10 minutes
  • Demo with Benchmarking tests - 5 minutes
  • Q/A - 5 minutes
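As a preview of the benchmarking demo, per-inference latency can be measured with a minimal timing harness like the one below. This is a generic sketch: `benchmark` and the dummy model are hypothetical names introduced here, and `model` stands in for whatever callable wraps your actual TensorFlow inference:

```python
import statistics
import time


def benchmark(model, inputs, warmup=5, runs=50):
    """Time a callable over `runs` invocations and report latency stats.

    `model` stands in for any inference function (e.g. a wrapper around
    a TensorFlow session run); `inputs` is whatever that function expects.
    """
    for _ in range(warmup):              # warm-up calls are not timed
        model(inputs)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        model(inputs)
        latencies.append((time.perf_counter() - start) * 1000)  # ms
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * len(latencies))],
    }


# Example with a dummy "model" so the harness runs anywhere:
dummy = lambda xs: sum(v * v for v in xs)
stats = benchmark(dummy, list(range(1000)))
print(f"mean: {stats['mean_ms']:.3f} ms, p95: {stats['p95_ms']:.3f} ms")
```

Warm-up runs matter on edge devices because first-call costs (cache fills, graph initialization) would otherwise skew the numbers; reporting a p95 alongside the mean also exposes latency spikes that averages hide.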


Requirements:

  • Python 3.5
  • TensorFlow 1.7

Content URLs:

Rough draft of slides

Speaker Info:

I have been working in the field of ML for the last year. I am currently working as a Deep Learning Research Engineering Intern at Saama Technologies, where I am using TensorFlow to reduce the time taken for clinical trials and help patients get medicines more quickly.

Before that, my primary work was with the University of Cambridge, where I used TensorFlow to create a model that optimizes the design of Gallium Nitride circuits. This work was published at WiPDA, one of the world's largest conferences on power electronics.

In my second year of undergraduate studies, I realized that engineers should have more practical knowledge, so I started a student-run, cross-disciplinary research lab called Next Tech Lab. As part of the lab, I won the Smart India Hackathon for creating an app that detects electricity theft. I have also published several research papers in IEEE and Elsevier venues.

I am also an active member of the Indian Deep Learning Community, and I write articles on topics such as convolutional filter types and data correlation in machine learning.

I believe in spreading knowledge and teaching others about Machine Learning.

Speaker Links:

Section: Embedded python
Type: Talks
Target Audience: Advanced
