· Module 1 - Introduction to Agentic AI
· What is Agentic AI?
· What are Agents? Agentic AI vs AI Agents, Agentic AI vs Generative AI, What are Multi-Agents?
· Agentic AI Frameworks
· Overview of Agentic AI Frameworks
· Module 2 - Phidata: Agentic AI Framework
· Core Concepts
· Agents in Phidata, Models, Tools, Knowledge, Chunking
· Data and Storage
· Vector Databases (VectorDBs), Storage, Embeddings
· Workflows
· Workflow Design and Execution
· Use Cases
· Web Search Agents, Financial Agents, Retrieval-Augmented Generation (RAG) Agents
· Module 3 - LangChain
· Core Components and Data Handling
· Introduction to Basic Components and Modules in LangChain, Data Ingestion with Document Loaders
· Text Splitting Techniques
· Recursive Character Text Splitter, Character Text Splitter, HTML Header Text Splitter, Recursive JSON Splitter
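The idea behind the recursive character splitter covered here can be sketched in plain Python: try separators from coarsest to finest, pack pieces into chunks under a size limit, and recurse on any piece that is still too long. This is a simplified toy for intuition, not LangChain's actual implementation.

```python
def recursive_split(text, chunk_size=50, separators=("\n\n", "\n", " ", "")):
    # Base case: the text already fits in one chunk.
    if len(text) <= chunk_size:
        return [text]
    # Try separators from coarsest (paragraphs) to finest (characters).
    for sep in separators:
        if sep and sep in text:
            chunks, buf = [], ""
            for part in text.split(sep):
                candidate = buf + sep + part if buf else part
                if len(candidate) <= chunk_size:
                    buf = candidate          # piece still fits: keep packing
                elif len(part) <= chunk_size:
                    if buf:
                        chunks.append(buf)   # flush the full chunk
                    buf = part
                else:
                    # A single piece is still too long: recurse with finer separators.
                    if buf:
                        chunks.append(buf)
                    buf = ""
                    chunks.extend(recursive_split(part, chunk_size, separators))
            if buf:
                chunks.append(buf)
            return chunks
    # No separator applies: hard-split by character count.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

Splitting on whitespace first keeps words intact, which is why chunks produced this way tend to be more readable than fixed-width slices.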
· Embeddings and Vector Storage
· OpenAI Embeddings, Ollama Embeddings, Hugging Face Embeddings, Vector Stores: FAISS and ChromaDB, VectorStore and Retriever
· Module 4 - LCEL (LangChain Expression Language)
· Getting Started
· Open-Source Models Using the Groq API
· Building and Deploying
· Building LLM, Prompt, and Structured Output Chains with LCEL, Deploying LangServe Runnables and Chains as APIs
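The `prompt | model | parser` composition style this module teaches can be illustrated with a toy runnable. This is a hypothetical stand-in for intuition, not LangChain's real `Runnable` class; `fake_llm` here is just an uppercasing function, not an actual model call.

```python
class Runnable:
    """Toy stand-in for the LCEL runnable protocol: invoke() runs one
    step, and `a | b` composes two steps into a chain."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: the output of self becomes the input of other.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Mirrors the typical prompt | llm | parser chain shape in LCEL.
prompt = Runnable(lambda topic: f"Tell me about {topic}.")
fake_llm = Runnable(lambda p: {"content": p.upper()})   # pretend model
parser = Runnable(lambda msg: msg["content"])           # pretend output parser

chain = prompt | fake_llm | parser
print(chain.invoke("agents"))  # TELL ME ABOUT AGENTS.
```

The point of the `|` operator is that each stage only needs to agree with its neighbors on input and output types, which is what makes swapping prompts, models, and parsers cheap.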
· Module 5 - LangServe for Efficient AI Deployment
· Overview and Setup
· Overview of LangServe and Its Capabilities, Importance of Efficient AI Model Serving, Key Features and Benefits of LangServe, Setting Up the LangServe Environment, Installing LangServe and Initial Configuration, Configuring Environment Variables and Dependencies
· Model Deployment
· API-Driven Model Serving: How LangServe Bridges AI Models and Applications, Deploying Your Model with LangServe, Creating and Managing Custom Endpoints, Integrations with External Tools
· Module 6 - LangGraph
· Core Concepts
· Introduction, Simple Graph, LangGraph Studio, Chain, Router
· Agents
· Agent, Agent with Memory, Intro to Deployment
· State Concepts
· State Schema, State Reducers, Multiple Schemas
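The reducer idea behind LangGraph state updates can be sketched in plain Python: each state key may declare a reducer that merges a new value into the old one, and keys without a reducer are simply overwritten. This is a simplified illustration of the concept, not the LangGraph API.

```python
import operator

def apply_update(state, update, reducers):
    """Merge an update into state, using a per-key reducer when one exists."""
    new_state = dict(state)
    for key, value in update.items():
        reduce_fn = reducers.get(key)
        if reduce_fn and key in new_state:
            new_state[key] = reduce_fn(new_state[key], value)  # merge
        else:
            new_state[key] = value  # default: overwrite
    return new_state

# Append-style merge for message lists; plain overwrite for everything else.
reducers = {"messages": operator.add}
s = {"messages": ["hi"], "step": 1}
s = apply_update(s, {"messages": ["there"], "step": 2}, reducers)
print(s)  # {'messages': ['hi', 'there'], 'step': 2}
```

Declaring the merge rule per key is what lets several graph nodes write to the same state concurrently without clobbering each other's message history.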
· Message Handling
· Trim and Filter Messages
· Deployment Concepts
· Deployment Concepts, Creating and Connecting to Deployment
· Module 7 - UX and Human-in-the-Loop with LangGraph
· Interaction
· Streaming, Breakpoints, Editing State and Human Feedback, Dynamic Breakpoints
· Time Travel
· Time Travel
· Module 8 - Agentic RAG
· Adaptive RAG
· Adaptive RAG, Adaptive RAG with Cohere, Adaptive RAG Running Locally
· RAG Variants
· Agentic RAG, Corrective RAG (CRAG), CRAG Running Locally, Self-RAG, Self-RAG Running Locally, Self-RAG with a Vector Database
· Module 9 - Designing Multi-Agent Systems with LangGraph
· Agent Design
· Building Agent Nodes in LangGraph, Agent Communication Protocols and Coordination, Defining Tasks and Roles for Agents
· System Design
· Creating Scalable Multi-Agent Systems in LangGraph, Building a Real-World Multi-Agent System
· Module 10 - CrewAI Platform
· Overview
· Definition and Overview, Key Features and Capabilities, Crew Collaboration Framework
· Collaboration and Tools
· AI-Agent Communication, Workflow Automation in CrewAI, Customizing CrewAI, Managing Data Across Agents, Role-Playing, Memory, Tools, Focus, Guardrails, Cooperation, Using LangChain Tools
· Module 11 - LangFlow Overview and Setup
· Introduction and Setup
· What is LangFlow? Overview and Use Cases, Key Features of LangFlow for LLM Applications, Setting Up Your LangFlow Environment
· LangFlow UI and Terminologies
· Understanding LangFlow UI and Workflows, Key Terminologies in LangFlow (Nodes, Chains, Prompts)
· Quick Start
· Quick Start: Creating Your First LangFlow Application
· Core Concepts
· Nodes and Chains: Core Concepts, Understanding LLMs and Their Integration with LangFlow, Pre-built vs. Custom Workflows
· LangChain and Prompt Engineering
· Prompt Engineering Basics in LangFlow, LangChain Integration: Using LangFlow with LangChain
· Commonly Used Nodes
· Exploring Commonly Used LangFlow Nodes
· Module 12 - Integration with Third-Party Tools
· Data Integration
· Connecting LangFlow with Data Sources (SQL, CSV, NoSQL), Using LangFlow with Vector Databases for Embeddings
· API Integration
· API Integration for External Services (REST, GraphQL), LangFlow with OpenAI and Hugging Face Models
· Workflow Automation
· Automating Workflows Using LangFlow, Building Chatbot Applications with LangFlow
· Module 13 - Langfuse for LLM Observability
· Langfuse Overview
· What is Langfuse? Overview and Applications, Importance of Observability in LLMs, Key Features and Benefits of Langfuse, Understanding Langfuse's Integration Ecosystem
· Integration and Monitoring
· Step-by-Step Integration with Popular Frameworks (LangChain, OpenAI, etc.), Setting Up API Calls for Observability, Tracking Key Metrics: Response Times, Costs, and Errors, Monitoring Prompt Effectiveness and Token Usage
· Module 14 - Metrics and Monitoring in LangWatch
· LangWatch Overview
· What is LangWatch? Overview and Use Cases, Key Features of LangWatch in Monitoring Language Models, Connecting LangWatch with LLMs
· API Integration and Setup
· API Integration: Sending Logs and Data to LangWatch, Setting Up Observability in AI Workflows
· Using LangWatch with Frameworks
· Using LangWatch with Popular Frameworks
· Module 15 - LangSmith
· LangSmith Overview
· What is LangSmith? Overview and Key Features, LangSmith in the AI Development Workflow
· Setup and Configuration
· Setting Up LangSmith: Installation and Configuration, Exploring the User Interface and Core Functionalities
· Workflow Management
· Understanding Workflow Pipelines in LangSmith, Creating and Managing AI Workflows, Data Integration in LangSmith, Preprocessing and Cleaning Data, Managing Data Streams and Sources
· Module 16 - Introduction to AutoGen
· Framework Overview
· Overview, Key Concepts: Autonomy, Adaptability, and Inter-Agent Communication, Installation and Environment Setup
· Agentic System Development
· Introduction to Agents, Goals, Environments, and Actions, APIs, Libraries, and Tools Available Within the AutoGen Framework, Designing and Developing Agentic Systems, Framework for Agentic Decision-Making
· Agent Interaction and Learning
· Interaction and Communication Between Agents, Implementing Feedback Loops, Handling Uncertainty and Constraints, Agent Learning and Adaptation, Multi-Agent Collaboration
· Deployment and Monitoring
· Deployment, Monitoring Agent Performance
· Module 17 - End-to-End Agentic AI Projects
· Project-Based Learning
· Agentic AI Projects
· Module 18 - AWS Cloud & Services for Generative AI
· Introduction to AWS Cloud
· Detailed Introduction to AWS Cloud Services, Creating an AWS Account, Creating an IAM User, Understanding Regions and Zones
· AWS Compute and Container Services
· AWS Elastic Container Registry (ECR), AWS Elastic Compute Cloud (EC2), AWS App Runner
· Module 19 - AWS Bedrock
· Introduction to AWS Bedrock
· Amazon Bedrock - Introduction, Bedrock Console Walkthrough, Amazon Bedrock - Architecture
· Bedrock Models and Use Cases
· Bedrock Foundation Models, Bedrock Embeddings, Bedrock Chat Playgrounds
· Bedrock Inference and Pricing
· Amazon Bedrock - Inference Parameters, Bedrock Pricing
· Module 20 - AWS SageMaker
· Overview of AWS SageMaker
· AWS SageMaker Overview, AWS SageMaker Walk-through, AWS SageMaker Studio Overview, AWS SageMaker Studio Walk-through
· Model Deployment with SageMaker
· Choose a Pre-trained Model, SageMaker Endpoint Creation, SageMaker Console Access, Create SageMaker Domain, Open SageMaker Studio, SageMaker Model Deployment
· Module 21 - AWS Lambda
· Overview of AWS Lambda
· Overview of AWS Lambda, Lambda Console Walkthrough, Lambda Permissions Model
· Module 22 - AWS API Gateway
· API Gateway Overview
· AWS API Gateway, RESTful APIs, WebSocket APIs
· Efficient API Development
· Efficient API Development
· Module 23 - Text Summarization with AWS Services
· Integration of AWS Lambda with Bedrock and API Gateway
· Creating an AWS Lambda Function and Upgrading Boto3, Writing the AWS Lambda Function to Connect to the Bedrock Service, Creating a REST API Using AWS API Gateway and Lambda Integration
· Module 24 - Fine-Tuning Foundation Models on Custom Data
· Fine-Tuning Overview
· Fine-Tuning of Foundation Models - Overview, Fine-Tuning of Foundation Models - Architecture
· Hands-On with AWS SageMaker
· Fine-Tuning of Foundation Models - Hands-On with AWS SageMaker
· Module 25 - Project: AWS
· Retrieval-Augmented Generation (RAG) in AWS
· Overview, Setup, Data Transformation and Processing, LLM and Retrieval QA, Frontend and Backend Development
· Building a Chatbot with Llama 3, LangChain & Streamlit
· Overview, Setup, Data Handling and LLM Creation, Frontend and Final Demo
· Module 26 - GCP Basics & Introduction to Vertex AI
· Introduction to Google Cloud and Vertex AI
· What is Vertex AI? Google AI Studio Introduction, Google Cloud Regions & Zones, Google Foundation Models
· Vertex AI Setup and Installation
· Vertex AI Installation, Google Cloud Setup for Production, Vertex AI Overview, Vertex AI Model Garden
· Module 27 - Gemini Models with Vertex AI and Google AI Studio
· Introduction to Google Gemini
· What is Google Gemini? Playing with Gemini, Gemini 1.5 Pro (Preview Only), Gemini 1.0 Pro
· Gemini Embeddings and Retrieval
· Gemini Embeddings, Advanced Information Retrieval with Gemini
· Working with Prompts
· Working with Freeform & Structured Prompts, Working with the Text Chat Prompt
· Multimodal and Text-Based Use Cases
· Generating Code, Unit Testing with the Code Chat Bison Model, Translating Text with the Translation LLM, Summarization, Classification
· Multimodal Applications
· Vision Model, Speech-to-Text & Text-to-Speech, Multimodal Prompts
· Module 28 - Project: GCP
· Retrieval-Augmented Generation (RAG) in GCP
· Overview, Setup, Data Transformation and LLM Context, Frontend and Final Demo
· Building a Chatbot with Gemini Pro, LangChain & Streamlit in GCP
· Overview, Setup, Data Transformation and LLM Creation, Frontend and Final Demo