The AI revolution has created a new breed of engineer that top companies are eager to hire: the LLM Engineer. This course is designed to transform you from an AI user into an AI builder, ready to walk into interviews at product companies, AI startups, and global tech giants. Over this intensive summer program, you will master the complete stack that working AI engineers use every day: transformers, prompt engineering, fine-tuning, retrieval-augmented generation (RAG), autonomous agents, and multi-agent systems. You will not just learn theory; you will build production-grade projects using industry tools such as LangChain, CrewAI, Hugging Face, and vector databases. By the end, you will have a portfolio of real AI applications, a deployed capstone project, and the confidence to apply for roles such as AI Engineer, LLM Engineer, Generative AI Developer, and Applied ML Engineer. If you want to stand out in placements and be the student recruiters notice, this is your launchpad.

Learning Outcomes:
CO1: Explain the foundations, architecture, and working of transformer-based Large Language Models (LLMs) and intelligent agents, including tokenization, embeddings, and the attention mechanism.
CO2: Apply advanced prompt engineering and reasoning strategies, including zero-shot, few-shot, chain-of-thought, and function calling, to build effective LLM-driven applications.
CO3: Adapt and customize LLMs using parameter-efficient fine-tuning techniques such as LoRA and QLoRA, and integrate retrieval-augmented generation (RAG) pipelines with vector databases for knowledge-grounded reasoning.
CO4: Design and build autonomous, tool-using AI agents and collaborative multi-agent systems using industry frameworks such as LangChain, LangGraph, CrewAI, and AutoGen.
CO5: Evaluate LLM and agentic AI systems for accuracy, safety, hallucination, and ethical compliance, and apply guardrails to ensure responsible AI deployment.
CO6: Deploy production-ready LLM applications using industry-standard tools and platforms, and demonstrate a complete capstone project suitable for a professional portfolio.
Rohit Bharti is an Assistant Professor and researcher specializing in Machine Learning, Deep Learning, Generative AI, and Large Language Models (LLMs). He is actively pursuing his PhD with a research focus on applied AI and generative language models, and has hands-on experience in designing and deploying real-world AI systems. He has published research in the domains of machine learning, deep learning, and generative AI, contributing to advancements in applied AI. His work focuses on building scalable AI solutions, including prompt engineering, retrieval-augmented generation (RAG), parameter-efficient fine-tuning of open-source LLMs such as LLaMA and Mistral, and the design of AI agents and multi-agent systems using industry frameworks like LangChain, CrewAI, and AutoGen. His expertise spans the complete LLM engineering lifecycle, from data preprocessing and model adaptation to end-to-end deployment of production-grade AI applications.
Dipen Saini is an Assistant Professor and researcher specializing in Machine Learning, Deep Learning, Generative AI, and Large Language Models (LLMs). With hands-on experience in Artificial Intelligence since 2012, he has developed strong expertise in designing and deploying real-world AI systems. He has published multiple research papers in the domains of machine learning and deep learning, contributing to advancements in applied AI. His work focuses on building scalable AI solutions, including AI agents with tool integration, fine-tuning LLMs (such as LLaMA, Mistral, and GPT-based models), and developing end-to-end AI pipelines, from data preprocessing to model deployment.
Email ID for Comments & Feedback: registrar@lpu.co.in
Dr. Vishal Sharma
E-mail: dsr@lpu.co.in
Mobile: +91-9815845414
To continue the conversation with our expert admission counsellors, enter the passcode for career guidance and discover the programmes best suited to you. Counsellors are available from 10 am to 5 pm on all working days.