Differential Privacy and Adversarial Examples

Sadhana Srinivasan (~rotuna)


Description:

In recent times, we have seen a startling rise in data aggregation and in our reliance on machine learning models. This has grave consequences when our data is not protected and when model behaviour can be deliberately manipulated.

Differential privacy is a formal privacy guarantee: by injecting carefully calibrated noise, it ensures that no single individual's data can be extracted from the model. Adversarial examples look like real inputs but are engineered so that ML models make nonsensical predictions on them. Recent research has shown that differential privacy can also be used to defend machine learning models against adversarial attacks.
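
To give a flavour of the two ideas, here is a minimal illustrative sketch in plain NumPy (not code from the talk itself): the classic Laplace mechanism for releasing a differentially private count, and a fast gradient sign method (FGSM) style perturbation. The function names, epsilon values, and toy data are all illustrative assumptions.

import numpy as np

# --- Differential privacy: the Laplace mechanism (illustrative sketch) ---
# A counting query has sensitivity 1: adding or removing one person's
# record changes the count by at most 1. Adding Laplace(1/epsilon) noise
# to the count makes the released value epsilon-differentially private.
def dp_count(data, epsilon=0.5):
    true_count = np.sum(data)  # e.g. how many records have attribute == 1
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# --- Adversarial examples: FGSM-style perturbation (illustrative sketch) ---
# Given the gradient of the loss with respect to the input, nudge every
# pixel by epsilon in the direction that increases the loss. The result
# looks unchanged to a human but can flip the model's prediction.
def fgsm(image, input_gradient, epsilon=0.01):
    perturbed = image + epsilon * np.sign(input_gradient)
    return np.clip(perturbed, 0.0, 1.0)  # keep pixels in a valid range

if __name__ == "__main__":
    records = np.random.randint(0, 2, size=1000)  # toy binary attribute
    print("private count:", dp_count(records, epsilon=0.5))

    img = np.random.rand(28, 28)   # toy "image"
    grad = np.random.randn(28, 28) # stand-in for a real loss gradient
    adv = fgsm(img, grad, epsilon=0.05)
    print("max perturbation:", np.abs(adv - img).max())

In practice, libraries such as TensorFlow Privacy (differentially private training) and CleverHans (adversarial attacks and defences) implement these ideas at scale; the sketch above only conveys the core intuition.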

This talk introduces differential privacy and adversarial examples, and points the audience to the vibrant Python research community around these topics.

Prerequisites:

An understanding of how neural networks work

Speaker Info:

I'm Sadhana Srinivasan. I did my master's in Mathematics at BITS Pilani. I've been coding in Python and working in machine learning for the past three years, and I have taught deep learning and machine learning courses at BITS. I interned at EY, working on chatbots for analytics. I'm currently a research engineer at Saama Technologies, building AI-based solutions for the healthcare industry.

Section: Data science
Type: Talks
Target Audience: Intermediate