Finding actor look-alikes with multi-modal LLMs
Anand S (~anand40)
Description:
Ever mixed up Matt Damon and Mark Wahlberg? Or Daniel Radcliffe and Elijah Wood? Or Margot Robbie and Jaime Pressly?
This talk explores how multi-modal LLMs can be used to create embeddings of actors, identify similar-looking pairs, uncover clusters of look-alikes, and determine which actors resemble each other the most.
- Multi-Modal LLMs: Understand the role of multi-modal LLMs in analyzing and embedding visual and textual data to identify actor similarities.
- Creating Actor Embeddings: Learn about the process of generating embeddings for actors, capturing their unique facial features and characteristics.
- Identifying Look-Alikes: Discover how to find pairs of actors who look strikingly similar using embedding comparisons (a rough sketch of this step follows the list).
- Clustering Similar Actors: Explore techniques for clustering actors based on their visual similarities, revealing interesting groupings and trends in appearance (see the clustering sketch below).
- Similarity Metrics: Dive into the metrics and algorithms used to quantify visual similarity between actors and see how to apply these to real-world data.
- Interactive Demonstrations: Engage with live demonstrations showcasing the identification of actor look-alikes, clustering, and similarity searches.
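The proposal does not name a specific model or library yet, so the following is only a minimal sketch of the embedding-and-comparison step. It assumes actor headshots saved as actors/<Name>.jpg and uses a CLIP-style image encoder from sentence-transformers as a stand-in for whichever multi-modal model the talk settles on:

```python
# Minimal sketch: embed actor headshots and rank the most similar pairs.
# Assumptions (not from the proposal): images live in ./actors/<Name>.jpg,
# and a CLIP-style encoder stands in for the multi-modal model.
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # maps images (and text) into one embedding space

paths = sorted(Path("actors").glob("*.jpg"))
names = [p.stem for p in paths]
embeddings = model.encode(
    [Image.open(p) for p in paths],
    convert_to_tensor=True,
    normalize_embeddings=True,
)

# Cosine similarity between every pair of actors
scores = util.cos_sim(embeddings, embeddings)

# Rank pairs (skipping self-matches) and show the closest look-alikes
pairs = [
    (float(scores[i][j]), names[i], names[j])
    for i in range(len(names))
    for j in range(i + 1, len(names))
]
for score, a, b in sorted(pairs, reverse=True)[:10]:
    print(f"{a} <-> {b}: {score:.3f}")
```

For a few hundred actors, a brute-force pairwise comparison like this is fast enough; an approximate-nearest-neighbour index only pays off at much larger scales.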
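For the clustering step, one possibility (again an assumption, not necessarily what the talk will use) is agglomerative clustering over cosine distance, reusing the `embeddings` and `names` from the sketch above:

```python
# Possible clustering step: group actors whose embeddings sit within a
# cosine-distance threshold of each other. `embeddings` and `names` come
# from the embedding sketch above.
from collections import defaultdict

from sklearn.cluster import AgglomerativeClustering

clusterer = AgglomerativeClustering(
    n_clusters=None,          # let the distance threshold decide the number of clusters
    distance_threshold=0.3,   # tune this: smaller = tighter look-alike groups
    metric="cosine",          # older scikit-learn versions call this `affinity`
    linkage="average",
)
labels = clusterer.fit_predict(embeddings.cpu().numpy())

clusters = defaultdict(list)
for name, label in zip(names, labels):
    clusters[label].append(name)

for label, members in sorted(clusters.items()):
    if len(members) > 1:      # singleton clusters are actors with no close match
        print(f"Cluster {label}: {', '.join(members)}")
```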
Prerequisites:
A working knowledge of Python, REST APIs, and Hollywood
Content URLs:
None so far. I plan to work on this next month.
Speaker Info:
Anand is a co-founder of Gramener, a data science company. He leads a team that automates insights from data and narrates these as visual data stories. He is recognized as one of India's top 10 data scientists and is a regular PyCon speaker.
Anand is a gold medalist from IIM Bangalore, an alumnus of IIT Madras and London Business School, and has previously worked at IBM, Infosys, Lehman Brothers, and BCG.
More importantly, he has hand-transcribed every Calvin & Hobbes strip ever and dreams of watching every film on the IMDb Top 250.