What you'll learn

This course introduces learners to the rapidly emerging domain of Generative Artificial Intelligence, focusing on text, image, audio, and video synthesis using advanced models such as GPT, DALL·E, and Stable Diffusion. Students gain hands-on experience in prompt engineering, AI tool integration, API-based project development, and responsible AI practices. The course culminates in a capstone project that empowers learners to build deployable generative AI products bridging creativity and computation.

  • This module lays the conceptual foundation required to understand how modern generative AI systems work. Students are introduced to the evolution of AI, the shift from traditional machine learning to deep learning, and the emergence of transformer-based architectures that power today’s large language models (LLMs). The module covers core concepts such as tokenization, embeddings, neural network representation of language, and how models learn patterns from massive datasets. Learners explore different categories of generative models—LLMs, diffusion models, text-to-image systems, and multimodal generators—along with their capabilities and limitations. Real-world applications across industries such as healthcare, education, software development, content creation, and automation are highlighted to provide practical context. The module also introduces responsible AI use, the idea of hallucinations, and the importance of designing safe and reliable AI workflows.
    Key learning outcomes:
    - Understand how generative AI works at a conceptual level
    - Explore the history and evolution of AI leading to modern LLMs
    - Learn about embeddings, tokens, and transformer architecture fundamentals
    - Recognize different types of generative models (LLMs, diffusion, multimodal)
    - Identify real-world applications and use cases across domains
    - Develop awareness of the challenges, risks, and safety considerations in AI
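The ideas of tokenization and embeddings can be previewed with a toy sketch. The vocabulary, the 4-dimensional random vectors, and the whitespace splitting below are illustrative stand-ins only; real LLMs use learned subword tokenizers and high-dimensional trained embeddings.

```python
# Toy illustration of tokenization and embedding lookup.
# Vocabulary and vector values are made up for demonstration.
import random

vocab = {"generative": 0, "ai": 1, "creates": 2, "text": 3, "<unk>": 4}

def tokenize(sentence):
    """Map each lowercase word to an integer token id (unknown words -> <unk>)."""
    return [vocab.get(word, vocab["<unk>"]) for word in sentence.lower().split()]

random.seed(0)
# One small random vector per token id stands in for a learned embedding table.
embeddings = {i: [random.uniform(-1, 1) for _ in range(4)] for i in vocab.values()}

ids = tokenize("Generative AI creates text")
vectors = [embeddings[i] for i in ids]
print(ids)  # → [0, 1, 2, 3]
```

Each sentence becomes a sequence of ids, and each id selects one vector; the model operates on those vectors, never on raw characters.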
  • This module dives deep into the science and strategy of designing effective prompts for Large Language Models (LLMs). Learners explore various prompting frameworks such as zero-shot, few-shot, chain-of-thought reasoning, role-based prompting, and multi-step prompt pipelines. The focus is on how to guide AI models to think logically, generate reliable outputs, and solve real-world problems. Students also develop skills to break complex tasks into simpler sub-problems using prompting strategies.
    Key learning outcomes:
    - Write optimized, structured prompts
    - Improve reasoning quality in AI responses
    - Build prompt workflows for complex tasks
    - Reduce hallucinations using controlled prompting
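Few-shot prompting, one of the frameworks named above, amounts to placing labeled examples before the new input. A minimal sketch, with a made-up sentiment task and example reviews:

```python
# Sketch of a few-shot prompt template; the task and examples are invented.
examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked in a week.", "negative"),
]

def build_few_shot_prompt(new_review):
    """Assemble labeled examples plus the new input into one prompt string."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}\nSentiment: {label}\n")
    # End with the unanswered case so the model continues the pattern.
    lines.append(f"Review: {new_review}\nSentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt("Setup was quick and easy."))
```

The prompt ends mid-pattern ("Sentiment:"), so the model's most natural continuation is the label itself; the same templating idea scales to chain-of-thought and multi-step pipelines.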
  • This module introduces multimodal generative AI systems capable of producing outputs across different media forms such as text, images, audio, and video. Students explore text-to-image tools, video generation models, speech synthesis, image editing through prompts, and cross-modal transformation (e.g., “text → image” or “image → story”). Hands-on exercises teach how these models function, their architecture, and how to combine them for creative applications.
    Key learning outcomes:
    - Understand multimodal architectures
    - Generate and edit AI images, videos, and audio
    - Convert one modality into another (e.g., text-to-video)
    - Use multimodal AI tools for design, advertising, media creation
  • This module focuses on modifying generative AI models to suit domain-specific tasks. Students learn techniques like parameter-efficient fine-tuning (LoRA, adapters), dataset engineering, evaluation of fine-tuned models, and deployment of custom models. A major emphasis is placed on AI safety, ethical guardrails, bias detection, and responsible development to ensure that generative models behave safely and comply with regulations.
    Key learning outcomes:
    - Fine-tune and customize pre-trained LLMs
    - Perform dataset curation for fine-tuning
    - Implement safety layers and content filters
    - Understand ethical risks, bias, and fairness in AI
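The low-rank idea behind LoRA can be sketched in plain Python: rather than retraining a full weight matrix W, one learns two small matrices A and B and applies W + B·A. The matrix sizes and values below are toy numbers for illustration, not a real fine-tuning setup (in practice, libraries handle this for actual model layers).

```python
# Conceptual LoRA-style low-rank update: W stays frozen; only A and B train.
# With m = n = 4 and rank r = 1, the full matrix has 16 parameters,
# while the LoRA factors add only 4 + 4 = 8.
def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

m, n, r = 4, 4, 1
W = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(m)]  # frozen weights
B = [[0.1] for _ in range(m)]        # m x r, trainable
A = [[0.2, 0.0, 0.0, 0.0]]          # r x n, trainable
delta = matmul(B, A)                 # low-rank update B @ A
W_adapted = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
```

Because only A and B are updated, the number of trainable parameters stays small even when W is enormous, which is the "parameter-efficient" part of parameter-efficient fine-tuning.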
  • In this module, students learn how to transform generative AI outputs into full-fledged, user-facing applications. It covers backend integration with AI APIs, frontend interfaces, cloud deployment, database management, and authentication. This module also teaches best practices in versioning, cost optimization, API load management, and building scalable AI workflows.
    Key learning outcomes:
    - Design full-stack AI applications
    - Integrate LLMs and multimodal models into products
    - Deploy applications to cloud platforms
    - Optimize performance, cost, and user experience
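One small piece of backend integration, assembling an authenticated request for a generative AI API, might look like the following sketch. The endpoint URL, model name, and environment variable here are hypothetical placeholders, not any real provider's API.

```python
# Minimal backend sketch: wrapping a generative AI API call behind a helper.
# API_URL, the model name, and AI_API_KEY are placeholders for illustration.
import json
import os

API_URL = "https://api.example.com/v1/generate"  # hypothetical endpoint

def build_request(prompt, max_tokens=256):
    """Construct the JSON payload and headers a backend would send to an AI API."""
    payload = {"model": "example-model", "prompt": prompt, "max_tokens": max_tokens}
    headers = {
        # Keys belong in environment variables or a secrets manager, never in code.
        "Authorization": f"Bearer {os.environ.get('AI_API_KEY', 'missing-key')}",
        "Content-Type": "application/json",
    }
    return json.dumps(payload), headers
```

Separating payload construction from the network call keeps the helper testable and makes it easy to add retries, caching, or cost limits around the actual HTTP request.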
  • This final module trains learners to evaluate, test, and improve the performance of generative AI systems. Students learn evaluation metrics for text generation, image quality, accuracy, safety, hallucination control, and user satisfaction. Additionally, the module covers techniques for scaling AI products—handling higher traffic, improving reliability, and maintaining quality across diverse use cases.
    Key learning outcomes:
    - Evaluate model output using human and automated metrics
    - Perform testing, debugging, and monitoring of AI systems
    - Implement QA frameworks for generative AI
    - Scale applications to support large user bases
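A simple instance of the automated metrics covered here is exact-match accuracy over (prediction, reference) pairs. This is a minimal sketch; production evaluation combines many automated metrics with human ratings and safety checks.

```python
# Toy automated evaluation: exact-match accuracy, case- and whitespace-insensitive.
def exact_match_accuracy(predictions, references):
    """Fraction of model outputs that exactly match the reference answer."""
    matches = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return matches / len(references)

preds = ["Paris", "berlin", "Madrid"]
refs = ["Paris", "Berlin", "Rome"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 match
```

Exact match works for short factual answers; open-ended generation needs softer metrics (overlap scores, model-based grading, or human review), which is why the module teaches a mix of both.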

Dr. Manpreet Singh Sehgal
Associate Professor

Dr. Manpreet Singh Sehgal works as an Associate Professor in the School of Computer Science and Engineering at Lovely Professional University and holds a Professional Certification in Generative AI from the EICT Academy, IIT Kanpur. He has served as a faculty research fellow at IIT Delhi and has completed a Data Science specialization from IIT Roorkee. He has over two decades of teaching experience in the field of Computer Science and Engineering.