Practical tips for building AI applications using LLMs - Best practices and trade-offs

Sourabh Gawande (~sourabh97)

Description:

Overview

At KushoAI, we’ve built an AI agent that can autonomously perform API testing for you. While building it, we ran into many problems specific to applications built on top of LLMs that rarely come up elsewhere. Since this is a fairly new area of development, we had to spend a lot of time figuring out solutions on our own.

This talk covers the kinds of problems you’ll face while building AI applications, tried-and-tested approaches to solving them, and the trade-offs you need to consider along the way.

We hope attendees will learn from our experience building AI applications and get a head start on their own journey.

Talk outline

  • How to handle LLM inconsistencies while generating structured data

  • How (and why) to implement streaming in your application

  • Background jobs - why do you need them and how to manage them

  • Tools for A/B testing your prompts to find the most effective model for a particular task

  • Prompt observability for debugging

  • Prompt caching for cost-saving

  • Comparison of various LLM APIs available for general use - which ones work better based on the task at hand
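As a taste of the first topic: LLMs asked to emit structured data often return malformed JSON or wrap it in markdown fences. A common mitigation is to validate the output and retry, feeding the parse error back to the model. Below is a minimal sketch; `call_llm` is a hypothetical callable standing in for whatever LLM API you use, and the retry count and prompt wording are illustrative, not from the talk.

```python
import json

def generate_json(prompt, call_llm, max_retries=3):
    """Ask an LLM for JSON and retry on malformed output.

    `call_llm` is a hypothetical wrapper around your LLM API:
    it takes a prompt string and returns the model's raw text.
    """
    last_error = None
    for _ in range(max_retries):
        raw = call_llm(prompt)
        # Models often wrap JSON in markdown fences; strip them first.
        cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
        try:
            return json.loads(cleaned)
        except json.JSONDecodeError as err:
            last_error = err
            # Feed the error back so the model can correct itself.
            prompt = (f"{prompt}\n\nYour previous reply was not valid JSON "
                      f"({err}). Reply with JSON only, no prose.")
    raise ValueError(f"No valid JSON after {max_retries} attempts: {last_error}")

# Demo with a stub LLM that fails once, then succeeds.
replies = iter(["sure, here you go!", '```json\n{"status": "pass"}\n```'])
result = generate_json("Generate a test case as JSON", lambda p: next(replies))
print(result)  # {'status': 'pass'}
```

In practice you would also validate the parsed object against a schema (e.g. with `pydantic` or `jsonschema`), since syntactically valid JSON can still have the wrong shape.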

Prerequisites:

  • Python basics
  • GenAI basics

Speaker Info:

Sourabh Gawande is the co-founder and CTO of KushoAI. He has 9+ years of experience building products for domains ranging from crypto (FalconX) to supply chain (Ninjacart).

Speaker Links:

Section: Artificial Intelligence and Machine Learning
Type: Talk
Target Audience: Intermediate
Last Updated: