Tricking a Deep Neural Network with Adversarial Examples
Ojasava Paras (~ojasava)
Adversarial examples are one of the biggest threats to the safety and future of modern deep learning. The goal of this talk is to familiarize the audience with adversarial examples and show how they can trick even a fully trained Deep Neural Network (DNN). Data scientists and researchers all over the world are spending a lot of time on this problem and proposing ideas and solutions.
Adversarial examples have raised a question in everyone's mind: is it safe to deploy AI in the real world? As long as a DNN can be tricked, there will always be room for misuse. For example, suppose we implement an autonomous driving system in a car and train it to detect red and green signals using object detection and image recognition. If someone alters a red signal in a way imperceptible to the human eye so that it is classified as green, it may cause an accident. If misused, this could create total chaos in the real world.
While it is easy to trick a neural network by adding a small, carefully crafted perturbation to its input, computed from the same gradients that back-propagation provides, it is quite difficult to find a solution for this abnormality. There are many ideas, such as adversarial training and other defensive techniques, but no universal solution exists yet. It is therefore necessary to make progress on this problem in the coming years to keep developing deep learning further.
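The input-perturbation idea can be sketched with the Fast Gradient Sign Method (FGSM). This is a minimal illustration, assuming a toy untrained linear classifier rather than any model from the talk: the key point is that the gradient is taken with respect to the input, while the weights stay untouched.

```python
import torch
import torch.nn as nn

# Toy stand-in classifier (hypothetical; any trained model works the same way)
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, label, epsilon=0.1):
    """FGSM: nudge the *input* along the sign of the loss gradient
    w.r.t. the input. The network's weights are never modified."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()                          # gradients flow back to x
    x_adv = x + epsilon * x.grad.sign()      # small, bounded perturbation
    return x_adv.detach()

x = torch.randn(1, 4)        # a stand-in "image"
y = torch.tensor([0])        # its true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())  # perturbation bounded by epsilon
```

Because each component of the perturbation is at most epsilon, the adversarial input can stay visually indistinguishable from the original while still moving the loss uphill.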
The objectives of this talk:
- Discuss neural networks and back-propagation concepts
- Discuss adversarial examples and why they are a threat to DNNs
- Briefly train a Deep Neural Network using PyTorch
- Show how small input changes can fool the same trained DNN
- Discuss various solutions proposed by researchers all over the world to counter this behavior
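The training and defense objectives above can be sketched together with adversarial training, one of the defenses the talk covers: the model is fit on a mix of clean and FGSM-perturbed inputs. This is a minimal sketch on a hypothetical toy dataset, not the talk's actual code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(x, y, epsilon):
    """Craft an FGSM adversarial batch against the current model."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy data (hypothetical): two well-separated Gaussian blobs
x = torch.cat([torch.randn(50, 4) + 2, torch.randn(50, 4) - 2])
y = torch.cat([torch.zeros(50, dtype=torch.long),
               torch.ones(50, dtype=torch.long)])

for epoch in range(100):
    x_adv = fgsm(x, y, epsilon=0.2)  # attack the model as it currently stands
    opt.zero_grad()                  # clear stale grads left by the attack
    # Train on a 50/50 mix of clean and adversarial examples
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()

acc = (model(x).argmax(1) == y).float().mean().item()
print(acc)
```

The mixing weight and epsilon are tuning knobs; adversarial training raises robustness against the attack it trains on, but, as the talk discusses, it is not a universal defense.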
All code will be written and discussed in Python with PyTorch, so basic familiarity with Python syntax and libraries is assumed. Some background in neural networks and back-propagation will also help, though it is not required, since both will be covered in the talk.
GitHub: https://github.com/ojasavaparas/adversarial https://github.com/ojasavaparas/AdversarialNetsPapers
Presentation Slides: https://www.slideshare.net/ojasavaparas/tricking-a-dnn-with-adversarial-examples
I am passionate about deep learning and other areas of machine learning. I am currently pursuing my BTech in CSE at VIT University, Chennai Campus. I spend my free time learning more about deep learning and its real-world applications, and I like training and testing different data sets to contribute something productive to research.