AI/ML Pipelines with Python
In this hands-on course, participants will learn how to use Python to build AI/ML pipelines.
Participants will work through a real-life scenario of building an AI/ML pipeline, covering aspects such as:
- Ingesting data
- Cleaning & Transforming data
- Performing Exploratory Data Analysis (EDA) on the dataset
- Running ML models
- Analyzing results
- Conclusion: Stitching it all together as a pipeline
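The stages above can be sketched end-to-end in a few lines. This is only an illustrative outline using scikit-learn, not the workshop's actual notebooks; the dataset here is synthetic and stands in for whatever real data source the exercise uses.

```python
# Minimal sketch of the pipeline stages; synthetic data stands in for a real source.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Ingest data (here: a synthetic placeholder dataset)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 2-3. Cleaning/transforming and EDA would happen here on real data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# 4 & 6. Run an ML model, with the transform and model stitched together as a pipeline
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipe.fit(X_train, y_train)

# 5. Analyze results
acc = accuracy_score(y_test, pipe.predict(X_test))
print(f"accuracy: {acc:.2f}")
```

Wrapping the steps in a `Pipeline` object keeps preprocessing and modeling as one unit, so the same transformations are applied consistently at training and prediction time.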
As part of this exercise, participants will be introduced to various useful Python libraries that every AI/ML engineer should know. The session will also cover other aspects of building a robust, scalable AI/ML pipeline.
This is an intermediate-level hands-on Python course. To benefit from it, participants are expected to have:
- Basic familiarity with Python programming
- Conceptual knowledge of data pipelines, relational data, and big data
- Familiarity with the Jupyter notebook environment
The code, in the form of Jupyter notebooks, is currently a work in progress and will be provided in this GitHub repo well ahead of the workshop.
Arijit Saha is a data professional with over sixteen years of industry experience architecting, designing, and developing large-scale data products, platforms, and solutions for both large and medium-sized enterprises. He is currently busy architecting enterprise AI data platforms and products at Noodle.ai, one of the fastest-growing startups. He is an alumnus of the Business Analytics and Intelligence course at IIM Bangalore. His interests include data architecture, big data analytics, geospatial analytics, and the application of artificial intelligence in enterprises.
Sumit Sen is a software development professional with more than fifteen years of development experience in embedded systems, mobile, and virtualization technologies. He is currently working on the architecture of the AI-as-a-Service offerings of Noodle.ai, an exciting startup in the enterprise AI space. He is passionate about high-performance computing, virtualization, and IoT systems.