Visualizing LLM Hallucinations

Anand S (~anand40)


Description:

This talk explains how LLMs generate text, why they sometimes produce unexpected results, and what it means for their reliability and creativity.

The talk will help you:

  • Token Generation and Probabilities: Understand how LLMs pick the next token by assigning probabilities to candidates and sampling from them. Learn what the "temperature" parameter does and how it trades creativity against accuracy (a minimal sampling sketch follows this list).
  • Visualizing Hallucinations: Visually explore how LLMs "hallucinate" by generating plausible but incorrect answers, and learn how token log probabilities can signal potential errors (see the log-probability sketch after this list).
  • Favorite Numbers and Biases: Discover the quirks of LLMs by looking at the numbers they prefer when asked to pick one at random, and explore what these favorites reveal about the biases baked into the models (see the tallying sketch after this list).
  • Practical Implications: Learn how to adjust LLM parameters for different use cases, from generating creative ideas to ensuring factual accuracy.
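
To make the temperature idea concrete, here is a minimal, self-contained Python sketch (illustrative only, not taken from the talk) of temperature-scaled sampling over a hypothetical next-token distribution:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Sample one token from a {token: logit} dict after temperature scaling."""
        # Divide each logit by the temperature: low values sharpen the
        # distribution (more deterministic), high values flatten it (more random).
        scaled = {tok: logit / temperature for tok, logit in logits.items()}
        # Softmax: exponentiate and normalize into probabilities.
        max_logit = max(scaled.values())  # subtract the max for numerical stability
        exps = {tok: math.exp(v - max_logit) for tok, v in scaled.items()}
        total = sum(exps.values())
        probs = {tok: v / total for tok, v in exps.items()}
        # Pick a token at random, weighted by its probability.
        return random.choices(list(probs), weights=list(probs.values()))[0], probs

    # Hypothetical logits for the token after "The capital of France is".
    logits = {" Paris": 9.1, " Lyon": 5.3, " the": 4.8, " a": 3.9}

    for t in (0.2, 1.0, 2.0):
        token, probs = sample_next_token(logits, temperature=t)
        print(f"temperature={t}: picked {token!r}, P(' Paris')={probs[' Paris']:.2f}")

At low temperature the highest-probability token wins almost every time; as the temperature rises, unlikely tokens get picked more often, which is where both the creativity and the hallucinations come from.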
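
For the log-probability signal, the sketch below assumes an OpenAI-compatible /v1/chat/completions REST endpoint that supports the logprobs and top_logprobs options; the model name, prompt, and 0.5 threshold are arbitrary choices for illustration, and other providers may use different field names:

    import math
    import os

    import requests

    # Assumes an OpenAI-compatible endpoint; adjust the URL and model for your provider.
    url = "https://api.openai.com/v1/chat/completions"
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    payload = {
        "model": "gpt-4o-mini",          # assumed model name
        "messages": [{"role": "user", "content": "Who wrote the novel Ponniyin Selvan?"}],
        "logprobs": True,                # ask for per-token log probabilities
        "top_logprobs": 3,               # and the top alternatives for each token
        "max_tokens": 30,
    }

    resp = requests.post(url, headers=headers, json=payload, timeout=30)
    resp.raise_for_status()
    tokens = resp.json()["choices"][0]["logprobs"]["content"]

    # Tokens generated with low probability are candidates for closer scrutiny:
    # a confident answer has probabilities near 1, a shaky one does not.
    for item in tokens:
        prob = math.exp(item["logprob"])
        flag = "  <-- low confidence" if prob < 0.5 else ""
        print(f"{item['token']!r:>15}  p={prob:.3f}{flag}")

Rendering these per-token probabilities over the generated text (for example as a color-coded heat map) is one simple way to visualize where the model is guessing rather than knowing.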
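
And for the favorite-numbers experiment, a rough sketch of the tally: ask_llm is a hypothetical helper wrapping the same OpenAI-compatible endpoint as above, and the model name and sample size are arbitrary:

    import collections
    import os

    import requests

    def ask_llm(prompt):
        """Hypothetical helper: one short completion from an OpenAI-compatible endpoint."""
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-4o-mini",  # assumed model name
                "messages": [{"role": "user", "content": prompt}],
                "temperature": 1.0,      # sample, rather than always take the top token
                "max_tokens": 5,
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"].strip()

    # Ask the same question many times and tally the answers.
    counts = collections.Counter(
        ask_llm("Pick a random number between 1 and 100. Reply with the number only.")
        for _ in range(50)
    )
    print(counts.most_common(5))

A uniform random picker would show no strong favorites; the skew in this tally is the bias the talk explores.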

Prerequisites:

A working knowledge of Python and REST APIs

Video URL:

https://www.youtube.com/watch?v=IKOMMYhA528

Speaker Info:

Anand is a co-founder of Gramener, a data science company. He leads a team that automates insights from data and narrates these as visual data stories. He is recognized as one of India's top 10 data scientists and is a regular PyCon speaker.

Anand is a gold medalist from IIM Bangalore, an alumnus of IIT Madras and London Business School, and has previously worked at IBM, Infosys, Lehman Brothers, and BCG.

More importantly, he has hand-transcribed every Calvin & Hobbes strip ever and dreams of watching every film on the IMDb Top 250.

Section: Artificial Intelligence and Machine Learning
Type: Talk
Target Audience: Intermediate