Fooling A Neural Network Using Adversarial Attacks
Shubhi Sareen (~shubhi863)
Deep learning architectures have achieved state-of-the-art performance on computer vision tasks. However, these algorithms have not been tested on diverse datasets consisting of unusual but natural images. This makes such "human-level performance" models extremely susceptible to small perturbations of their inputs, highlighting the vast difference between the processing power of humans and machines. Unlike random noise, these perturbations are intelligently crafted, intentionally generated disturbances added to images in a dataset. Given the widespread use of neural networks in this AI boom, the vulnerability could have unprecedented effects if exploited in the real world. The submission tries to answer whether adversarial examples are simply a fun toy problem for researchers, or an example of a deeper, more chronic frailty in state-of-the-art image recognition architectures. The talk also expands on why it is difficult to defend against such attacks. This talk is aimed at beginner, intermediate, and expert audiences in AI, asking them to raise questions in the field of AI security, and to focus on developing robust architectures and strengthening previously developed models against adversarial attacks.
Introduction - 5 minutes The session starts with why the talk focuses on fooling neural networks in particular, and how convolutional neural networks have achieved human-like performance on various computer vision tasks. I then go on to intuitively explain what adversarial attacks are, and why they are a critical problem in the AI pipeline.
Adversarial Attacks - 10 minutes The next part focuses on the types of attacks (targeted and untargeted), and how simple machine learning concepts like gradients and saliency maps can be exploited to create adversarial attacks. I also expand on the trade-offs between these attacks, and how they represent different intuitions when it comes to generating good attacks.
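To make the gradient-based idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic untargeted attack of the kind the session covers. For clarity it uses a toy logistic-regression "model" in pure NumPy rather than a real CNN; the weights, input, and epsilon below are illustrative assumptions, not from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Return an adversarial copy of x, nudged in the direction that
    increases the cross-entropy loss for the true label y_true (0 or 1)."""
    p = sigmoid(w @ x + b)                 # model's predicted probability
    grad_x = (p - y_true) * w              # d(loss)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)   # one step along the gradient's sign

# Toy example: a 4-pixel "image" the model correctly classifies as class 1.
w = np.array([1.0, -2.0, 3.0, -0.5])
b = 0.1
x = np.array([0.5, -0.5, 0.5, 0.0])
x_adv = fgsm_perturb(x, w, b, y_true=1, epsilon=0.6)

clean_prob = sigmoid(w @ x + b)     # confident and correct on the clean input
adv_prob = sigmoid(w @ x_adv + b)   # flipped by a small per-pixel perturbation
print(clean_prob > 0.5, adv_prob < 0.5)  # → True True
```

The key intuition: the attacker only needs the sign of the loss gradient with respect to the input, so a single backward pass suffices to craft the perturbation.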
Adversarial Defense & What Next? - 10 minutes In the last part, the session focuses on why adversarial attacks are difficult to defend against, and on some intuitive techniques that have proven effective against them. It discusses why and how students, engineers, and researchers should focus on the robustness of their models in addition to their accuracy.
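One of the intuitive defences in this space is adversarial training: perturb each input with an attack step during training and fit the model on the perturbed copy. The sketch below illustrates the idea on the same toy logistic-regression setup; the dataset, learning rate, and epsilon are illustrative assumptions, not material from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epsilon=0.1, lr=0.5, steps=200, adversarial=True):
    """Train a logistic-regression model; if adversarial=True, each
    example is replaced by an FGSM-perturbed copy before the update."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(steps):
        for x, t in zip(X, y):
            if adversarial:
                p = sigmoid(w @ x + b)
                x = x + epsilon * np.sign((p - t) * w)  # worst-case input
            p = sigmoid(w @ x + b)
            w -= lr * (p - t) * x   # standard gradient step on the
            b -= lr * (p - t)       # (possibly perturbed) example
    return w, b

# Tiny linearly separable dataset with a margin larger than epsilon.
X = np.array([[1.0, 1.0], [1.2, 0.9], [-1.0, -1.0], [-0.9, -1.1]])
y = np.array([1, 1, 0, 0])
w, b = train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print((preds == y).all())  # the robustly trained model still fits the clean data
```

The design choice here mirrors the talk's framing: robustness is bought by optimizing against the attacker's perturbation rather than the clean input, which typically costs some clean accuracy on harder datasets.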
Basic understanding of Python and a mind full of curiosity!
A two-time Google intern and recipient of the Upsilon Pi Epsilon Scholarship, Shubhi Sareen will be joining the Google Docs team as a Software Engineer in July 2019. She is passionate about computer vision and digital image processing, with their wide array of applications in diverse fields. She was among the top 10 teams at the Targeted Adversarial Vision Challenge at NeurIPS 2018, working on low-frequency boundary attacks and ensembling them with gradient-based attacks. In her two internships, she worked on the Google Cloud Search and Apps Trust teams on quality and machine learning problems, eventually building deployable solutions. She is the co-founder and chapter head of the Delhi chapter of Women in Machine Learning & Data Science, which supports and promotes women practising, studying, or interested in machine learning and data science, and is also the Director of Women Who Code, Delhi. Having worked on various projects and co-authored research papers in computer vision and machine learning, Shubhi strongly believes in exploring diverse technologies and building applications across multiple domains, with the objective of being well equipped for the challenges awaiting us in an unpredictable future.
I have spoken at various events, including AI Fest 2.0, which featured speakers such as Siraj Raval and Ajinkya Kolhe (http://aifest.iedccoet.org/), MLCC Study Jams organized by Google Developer Agency, Extended IWD Summits, a Data Science Program by CSIR-NISCAIR, webinars reaching a global audience organized by Women Who Code, Python and ML Nerdie, and multiple events for Women Who Code, WiMLDS, and the Google Cloud Developer Community. My talks have focused primarily on neural networks, adversarial attacks, TensorFlow, and the mathematics of ML.
Link to my blog