Let’s make you invisible to surveillance cameras

Pratik Parmar (~HackyRoot)


Description:

The goal of this session is to demystify Machine Learning for attendees and show them how a Machine Learning system (an AI surveillance system) works under the hood. Another outcome of this session is to show that Machine Learning is just a technology, and it’s not foolproof. In fact, many deep learning models are vulnerable to adversarial attacks, i.e., imperceptible but intentionally designed perturbations to the input can cause the model to produce incorrect output.

We’ll show how you can use this to make yourself invisible to such AI surveillance systems; here’s the demo video from the original paper: https://youtu.be/MIbFvK2S9g8. This session aims to be highly engaging and collaborative, and can be adjusted to suit the attendees’ knowledge of AI and programming.

Agenda

The session will kick off with a gentle introduction to Machine Learning that does not involve heavy math or coding, to keep it inclusive. We’ll then dive into how a Machine Learning model works in a real-life scenario, using an image classifier demo.
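For a sense of what the image classifier demo might look like, here is a minimal sketch assuming PyTorch and a torchvision ResNet-18 pretrained on ImageNet; the image file name is a placeholder, and the actual demo code used in the session may differ:

```python
# Minimal image-classifier sketch: classify one image with a pretrained ResNet-18.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a model pretrained on ImageNet and switch to inference mode
model = models.resnet18(pretrained=True)
model.eval()

# Standard ImageNet preprocessing: resize, crop, convert to tensor, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg")           # any test image (placeholder path)
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    probs = torch.nn.functional.softmax(logits[0], dim=0)

top_prob, top_class = probs.max(dim=0)
print(f"Predicted ImageNet class index: {top_class.item()} "
      f"(confidence {top_prob.item():.2%})")
```

In the live demo, the predicted class index would be mapped to a human-readable ImageNet label so the audience can see the model "recognize" the object.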

Once the audience has a feel for how Machine Learning works, we’ll briefly introduce how it is being used for surveillance around the world. We live in an age of mass surveillance: big data, combined with the ever-increasing power of artificial intelligence, means our actions are being recorded and stored like never before. Surveillance is used to keep people safe, but how much privacy are we willing to give up for that safety? That’s where the “Adversarial Attack” comes into the picture.

To introduce participants to the “Adversarial Attack” (a perturbation that looks like a random pattern but can trick an ML model into producing incorrect output), we’ll use this demo. More importantly, we’ll demonstrate how adversarial attacks can be used to make you invisible to such AI surveillance systems. The session will conclude with a discussion with participants on the pros and cons of adversarial attacks.
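As one concrete illustration of how such a perturbation can be computed, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic adversarial attack. It reuses the pretrained classifier and the preprocessed `batch` tensor from the sketch above, and is not necessarily the exact technique used in the linked demo:

```python
# Minimal FGSM sketch: nudge every input pixel a tiny step in the direction that
# increases the model's loss, which is often enough to flip the prediction while
# the change stays nearly imperceptible to a human.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Return a perturbed copy of `image` (a normalized 1x3x224x224 tensor)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Take a fixed-size step (epsilon) along the sign of the input gradient
    return (image + epsilon * image.grad.sign()).detach()

# Example usage with the `batch` tensor from the classifier sketch:
# adversarial = fgsm_attack(batch, true_label=281)  # 281 = "tabby cat" in ImageNet
# print(model(adversarial).argmax(dim=1))           # often no longer the cat class
```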

Basic Outline of the talk:

  1. Machine learning [7 minutes]

    • A visual introduction to machine learning

    • Image classifier demo

  2. Surveillance systems [7 minutes]

    • What AI surveillance systems are

    • Case study of AI surveillance in China

  3. Adversarial attacks [11 minutes]

    • The theory behind adversarial attacks and how they work

    • Demo of an adversarial attack

    • Use of an adversarial patch against AI surveillance systems (see the sketch after this outline)

  4. Q/A [5 minutes]
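To make the adversarial-patch item concrete, here is a hypothetical sketch of overlaying a pre-trained patch onto a camera frame before it reaches a person detector. The patch file, its placement, and the `apply_patch` helper are assumptions for illustration, not the exact setup from the paper demoed in the session:

```python
# Hypothetical sketch: paste a pre-trained adversarial patch onto an image region.
import torch

def apply_patch(image, patch, top, left):
    """Overlay `patch` (3xHxW tensor) onto `image` (3xH_img xW_img tensor) at (top, left)."""
    patched = image.clone()
    h, w = patch.shape[1], patch.shape[2]
    patched[:, top:top + h, left:left + w] = patch
    return patched

# patch = torch.load("adversarial_patch.pt")              # hypothetical pre-trained patch
# patched_frame = apply_patch(frame, patch, top=100, left=80)
# Feeding `patched_frame` to a person detector (the original paper targets YOLOv2)
# can suppress the "person" detection for whoever holds the patch.
```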

Prerequisites:

Basic understanding of AI

Content URLs:

Slides (still being curated): https://slides.com/pratikparmar/let-s-make-you-invisible

Previous Slide decks: https://github.com/HackyRoot/Workshop-Content

Speaker Info:

I'm Pratik Parmar of House Gryffindor, the millionth of my name, bachelor of information technology, tech speaker on machine learning, reader of books, dancer of garba, player of badminton, contributor to open source, and lover of the European charm.

Jokes apart, Pratik is an enthusiastic machine learning developer who's always eager to tinker with new ML frameworks; TensorFlow and PyTorch are his favorite toys. He’s a Microsoft Student Partner and a Machine Learning and Cloud facilitator at Google. He has facilitated 4 MLCC Study Jams in Gujarat, which helped many people get started with ML. He’s in the final year of his engineering degree at SVIT. When he’s not speaking at a conference or studying at college, he loves to travel and cook.

Speaker Links:

  • Medium Blog: http://medium.com/@Hackyroot/
  • LinkedIn: http://linkedin.com/in/pratikparmar1/
  • Twitter: http://twitter.com/hackyroot
  • GitHub: http://github.com/hackyroot

Section: Data Science, Machine Learning and AI
Type: Talks
Target Audience: Beginner
Last Updated: