
How adversarial noise protects my selfies from the AI deepfake dance TikTok trend

This session will cover the practice of AI security with an example use case: protecting my personal photos against misuse through AI-based deepfake generation. I will first explain the different components of an AI-based deepfake model. Then, I will walk you through a method of altering photos to make them harder for these models to use. We'll then apply what we've learnt against the TikTok-famous AI dance filter and other leading deepfake models to see what happens!
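The photo-altering idea described above is, in spirit, an adversarial perturbation: a small, near-invisible change to each pixel chosen to confuse a model's internal representation of the image. The sketch below is only an illustration of that general idea, not the speaker's actual method — it uses a toy random linear map as a stand-in for a real face encoder, and a single FGSM-style sign step bounded by a small budget `eps`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deepfake model's face encoder: a fixed random linear map
# from a flattened 64-pixel "photo" to a 16-dimensional embedding.
W = rng.normal(size=(16, 64))

def embed(x):
    return W @ x

def perturb(photo, eps=0.03):
    """One FGSM-style step: nudge each pixel in the direction that most
    increases the distance between the new embedding and the original one,
    keeping every pixel change within the small budget eps."""
    target = embed(photo)
    # The gradient of ||embed(x) - target||^2 at x = photo is zero,
    # so start from a tiny random offset, then take one sign step.
    x = photo + rng.normal(scale=1e-3, size=photo.shape)
    grad = 2 * W.T @ (embed(x) - target)
    return np.clip(photo + eps * np.sign(grad), 0.0, 1.0)

photo = rng.uniform(size=64)          # pixels in [0, 1]
protected = perturb(photo)

# Each pixel moves by at most eps, so the photo looks the same to a human...
print(np.max(np.abs(protected - photo)))
# ...but the embedding the model sees has shifted.
print(np.linalg.norm(embed(protected) - embed(photo)))
```

In a real attack the random matrix would be replaced by a pretrained face encoder and the gradient obtained by backpropagation, but the shape of the method — a bounded, gradient-guided pixel change — is the same.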

As AI systems are rapidly adopted and developed, understanding how they can be exploited matters. With this deepfake example, I aim to show that, applied creatively, AI security techniques can give us an additional tool in our cyber toolbox.

This will be a practical and fun session for both users and developers of AI - and anyone else who is interested in learning about the surprising ways machine learning models can fail!

Tania Sadhani

Tania Sadhani is an AI security researcher with Mileva Security Labs, investigating and addressing the unique vulnerabilities of machine learning systems. She has over 3 years of government experience in emerging technologies across technical and strategic roles, covering machine learning and quantum technologies. She recently completed an undergraduate degree in applied data analytics and is continuing a specialisation in computer science.
