What you'll learn

The AI revolution has created a new breed of engineer that top companies are desperately hiring: the LLM Engineer. This course is designed to transform you from an AI user into an AI builder, ready to walk into interviews at product companies, AI startups, and global tech giants. Over this intensive summer program, you will master the complete stack that real AI engineers use every day: transformers, prompt engineering, fine-tuning, retrieval-augmented generation (RAG), autonomous agents, and multi-agent systems. You will not just learn theory, you will build production-grade projects using industry tools like LangChain, CrewAI, Hugging Face, and vector databases. By the end, you will have a portfolio of real AI applications, a deployed capstone project, and the confidence to apply for roles like AI Engineer, LLM Engineer, Generative AI Developer, and Applied ML Engineer. If you want to stand out in placements and be the student recruiters notice, this is your launchpad. Learning Outcomes: CO1: Explain the foundations, architecture, and working of transformer-based Large Language Models and intelligent agents, including tokenization, embeddings, and the attention mechanism. CO2: Apply advanced prompt engineering and reasoning strategies, including zero-shot, few-shot, chain-of-thought, and function calling, to build effective LLM-driven applications. CO3: Adapt and customize LLMs using parameter-efficient fine-tuning techniques such as LoRA and QLoRA, and integrate retrieval-augmented generation (RAG) pipelines with vector databases for knowledge-grounded reasoning. CO4: Design and build autonomous, tool-using AI agents and collaborative multi-agent systems using industry frameworks such as LangChain, LangGraph, CrewAI, and AutoGen. CO5: Evaluate LLM and agentic AI systems for accuracy, safety, hallucination, and ethical compliance, and apply guardrails to ensure responsible AI deployment. CO6: Deploy production-ready LLM applications using industry-standard tools and platforms, and demonstrate a complete capstone project suitable for a professional portfolio.

  • Introduction to Large Language Models and intelligent agents, the emergence of Agentic AI, transformer architecture and attention mechanism (intuitive walkthrough), tokenization approaches and embeddings, pre-training objectives, and an overview of popular LLMs including GPT, Claude, Gemini, LLaMA, and Mistral. Students will understand the core building blocks that power modern AI systems.
  • Zero-shot, few-shot, and chain-of-thought prompting strategies, system prompts, role prompting, and structured prompting techniques. Function calling for tool integration, goal-oriented task planning, agent memory and state management basics. Students will learn to design effective prompts and reasoning workflows for real-world LLM applications.
  • When to fine-tune versus prompt versus use RAG, supervised fine-tuning and instruction tuning overview, parameter-efficient fine-tuning using LoRA and QLoRA, quantization techniques for efficient deployment, and reinforcement learning from human feedback (RLHF) intuition. Includes hands-on LoRA fine-tuning of a small model and evaluation of fine-tuned performance.
  • The hallucination problem and why RAG matters, embeddings and semantic search, vector databases including ChromaDB and FAISS, document loading and chunking strategies, indexing and retrieval pipelines, and advanced RAG techniques such as re-ranking and query transformation. Students will build a complete RAG pipeline and a Chat with PDF application.
  • What makes an AI agent including perception, reasoning, and action, the ReAct framework for reasoning and acting in a loop, tool use and API orchestration, LangChain and LangGraph for agent workflows, multi-agent collaboration using CrewAI and AutoGen, and designing agent teams with roles, goals, and task delegation. Includes hands-on projects on building autonomous and collaborative agents.
  • Evaluating LLM applications using standard metrics and methods, safety guardrails and prompt filtering, responsible AI and ethical considerations, and deployment of LLM applications using Streamlit and HuggingFace Spaces. The unit concludes with the capstone project where students define, design, implement, and deploy a complete AI application for their professional portfolio.

Rohit Bharti
Assistant Professor

Rohit Bharti is an Assistant Professor and researcher specializing in Machine Learning, Deep Learning, Generative AI, and Large Language Models (LLMs). He is actively pursuing his PhD with a research focus on applied AI and generative language models, and has hands-on experience in designing and deploying real-world AI systems. He has published research in the domains of machine learning, deep learning, and generative AI, contributing to advancements in applied AI. His work focuses on building scalable AI solutions, including prompt engineering, retrieval-augmented generation (RAG), parameter-efficient fine-tuning of open-source LLMs such as LLaMA and Mistral, and the design of AI agents and multi-agent systems using industry frameworks like LangChain, CrewAI, and AutoGen. His expertise spans the complete LLM engineering lifecycle, from data preprocessing and model adaptation to end-to-end deployment of production-grade AI applications.


Dipen Saini
Assistant Professor

Dipen Saini is an Assistant Professor and researcher specializing in Machine Learning, Deep Learning, Generative AI, and Large Language Models (LLMs). With hands-on experience in Artificial Intelligence since 2012, he has developed strong expertise in designing and deploying real-world AI systems. He has published multiple research papers in the domains of machine learning and deep learning, contributing to advancements in applied AI. His work focuses on building scalable AI solutions, including AI agents with tool integration, fine-tuning LLMs (such as LLaMA, Mistral, and GPT-based models), and developing end-to-end AI pipelines—from data preprocessing to model deployment.