Foundations of Deep Learning: Concepts and Applications

By Prof. Sriram Ganapathy, Prof. Ashwini Kodipalli, Prof. Baishali Garai   |   Indian Institute of Science, Bangalore, RV University
Learners enrolled: 1059
ABOUT THE COURSE:

1. Deep Learning is a core pillar of modern Artificial Intelligence, powering applications in computer vision, natural language processing, healthcare, and robotics.

2. With the rapid expansion of AI-related programs in AICTE-affiliated institutions, it is essential for students to build a strong foundation in this field.

3. This course is designed to guide learners step by step, from the basics of neural networks to advanced architectures such as CNNs, RNNs, and Autoencoders.

4. Emphasis is placed on both conceptual clarity and practical implementation using Python and Google Colab.

5. By the end of the course, students will be able to apply deep learning algorithms to real-world data to develop intelligent solutions.

INTENDED AUDIENCE: UG and PG students of all AICTE-affiliated institutions

PREREQUISITES: This course is designed to be self-contained and suitable for learners with no prior background in the subject. However, a basic familiarity with Python programming is recommended to facilitate better understanding of the concepts and hands-on components of the course.

INDUSTRY SUPPORT: Deep Learning is a critical component in the AI and Data Science ecosystem, and this course aligns well with the skill requirements of leading technology and research-driven companies. The following industries and companies are likely to recognize and value this course due to their active engagement in AI and deep learning applications:
• Technology Companies: Google, Microsoft, Amazon, Meta, Apple, IBM
• AI and Data Science Firms: NVIDIA, OpenAI, DeepMind, DataRobot
• IT Services and Consulting: TCS, Infosys, Wipro, Accenture, Cognizant, Capgemini
• Startups in AI/ML: Fractal Analytics, SigTuple, Mad Street Den, InData Labs
• Healthcare and Bioinformatics: Siemens Healthineers, Philips, GE Healthcare, Tata Elxsi
• Automotive and Robotics: Tesla, Bosch, Continental, Qualcomm
• Finance and Banking: JPMorgan Chase, Goldman Sachs, PayPal, Razorpay, HDFC Bank (AI-driven risk modeling and fraud detection)
This course will help learners build strong foundational knowledge and practical skills that are highly sought after in roles such as Machine Learning Engineer, AI Researcher, Data Scientist, Computer Vision Engineer, and NLP Specialist.
Summary
Course Status : Upcoming
Course Type : Core
Language for course content : English
Duration : 12 weeks
Category :
  • Computer Science and Engineering
Credit Points : 3
Level : Undergraduate/Postgraduate
Start Date : 19 Jan 2026
End Date : 10 Apr 2026
Enrollment Ends : 26 Jan 2026
Exam Registration Ends : 13 Feb 2026
Exam Date : 25 Apr 2026 IST
NCrF Level : 4.5 to 8.0

Note: This exam date is subject to change based on seat availability. You can check final exam date on your hall ticket.





Course layout

Week 1:  Overview and motivation for the course: why deep learning matters, its importance and applications, companies working in the field, and future directions; the intended audience, week-wise contents, and how this course differs from others (more hands-on work and live classes).

Overview of machine learning and deep learning; the difference between ML and DL with an example; history and evolution of deep learning and gains in computational efficiency.

Introduction to neural networks: the perceptron and logistic regression, the single-layer perceptron with a worked numerical problem, and limitations of the single-layer perceptron.

Hands-on: building a simple perceptron model in a Colab notebook.
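As a preview of this kind of Colab exercise, here is a minimal sketch of the classic perceptron learning rule; the AND-gate data, learning rate, and epoch count are illustrative assumptions, not course material:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron rule on labels in {0, 1}: update only on mistakes."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # No change when pred == yi; otherwise nudge toward the correct side
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

# AND gate: linearly separable, so the perceptron is guaranteed to converge
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if x @ w + b > 0 else 0 for x in X]
print(preds)  # [0, 0, 0, 1]
```

Note that this same rule fails on XOR, which is exactly the limitation of the single-layer perceptron covered in this week.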

Introduction to the multilayer perceptron; the difference between shallow and deep neural networks; worked examples (5 to 7) of designing a network; activation functions; loss functions.

Week 2: Gradient Descent (GD) and backpropagation (with MSE loss)
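To give a feel for the idea before the lectures, here is a minimal sketch of gradient descent on an MSE loss for a one-variable linear model; the data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

# Fit y = w*x + b by gradient descent on MSE.
# Gradients: dMSE/dw = -2*mean(x*(y - pred)), dMSE/db = -2*mean(y - pred)
x = np.linspace(0, 1, 50)
y = 3.0 * x + 1.0  # noiseless target, so the exact fit is recoverable

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    err = y - (w * x + b)          # residuals under current parameters
    w += lr * 2 * np.mean(err * x)  # step opposite the gradient in w
    b += lr * 2 * np.mean(err)      # step opposite the gradient in b

print(round(w, 2), round(b, 2))  # 3.0 1.0
```

Backpropagation generalizes this same idea: it computes the loss gradient with respect to every weight in a multilayer network via the chain rule, and each weight is then updated with exactly this kind of step.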

Optimizers: Momentum-Based GD, Nesterov Accelerated GD, Stochastic GD, AdaDelta, AdaGrad, RMSProp, Adam. Regularization Techniques: L1/L2 regularization, dropout, Early stopping

Hands-on: building artificial neural networks for classification and regression problems, with exposure to hyperparameter tuning; interpreting the results using simple XAI techniques (LIME and SHAP).

Week 3: CNNs: fundamentals of image representation, image preprocessing, and data augmentation

Introduction to Convolutional Neural Networks, Inspiration behind CNN, Key Components of CNN, Types of convolutions

CNN architecture

Hands-on: building a simple CNN model for binary and multiclass classification.

Week 4: A typical CNN structure; standard CNN models: AlexNet, VGGNet-16 and VGGNet-19

Standard CNN models: GoogLeNet, ResNet-18 and ResNet-34

Standard CNN Models: Inception, Transfer Learning

Hands-on: transfer learning and building an ensemble model.

Week 5: Introduction to XAI: algorithms and their working mechanisms

Hands-on: interpreting the results of a CNN model using simple XAI techniques (Grad-CAM and SmoothGrad)

Week 6: Evaluation metrics for segmentation; CNN-based segmentation algorithms: UNet,
attention-based UNet; introduction to CNN-based object detection models.
Object detection algorithms: YOLO, RCNN, and Faster RCNN.
Hands-on: object detection using YOLO.
Hands-on: UNet and attention-based UNet.

Week 7: Sequence-to-sequence models: introduction to recurrent neural networks and their structure; challenges in RNNs (vanishing and exploding gradients).
A numerical problem on RNNs.
Hands-on: building RNNs on structured and unstructured data.
Variants of RNNs, with hands-on.

Week 8: Introduction to the Long Short-Term Memory (LSTM) architecture and why it is needed; bidirectional LSTMs; stacked LSTMs

The GRU architecture; comparing LSTM and GRU on speed, accuracy, and complexity; when to prefer GRU over LSTM

Why attention mechanisms are used in RNNs, how they work, and their benefits

Hands-on: building LSTM models for structured and unstructured data.

Week 9: Introduction to NLP tasks; classical NLP vs. deep learning NLP; text preprocessing: tokenization, stopword removal, lemmatization
Word representations: one-hot encoding; word embeddings: Word2Vec, GloVe, FastText
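As a small illustration of the first of these representations, here is a sketch of one-hot encoding over a toy vocabulary (the vocabulary itself is a made-up example):

```python
import numpy as np

# Toy vocabulary; each word gets a vector with a single 1 at its index.
vocab = ["deep", "learning", "is", "fun"]
index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    v = np.zeros(len(vocab), dtype=int)
    v[index[word]] = 1
    return v

print(one_hot("learning"))  # [0 1 0 0]
```

One-hot vectors grow with vocabulary size and treat all words as equally dissimilar, which is exactly what dense embeddings like Word2Vec, GloVe, and FastText improve on.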

Sequence modeling in NLP, Recurrent Neural Networks (RNN) basics, Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), Word embeddings + RNN for sequence tasks

Hands-on: RNNs for an NLP task

Week 10: Unsupervised learning: introduction to autoencoders, their architecture, and the math behind them.
Types of autoencoders (simple, deep, CNN-based); training autoencoders.
Hands-on: building autoencoders of different types.

Week 11: Transformer architectures: self-attention, encoder-decoder attention, in-context learning, low-rank adaptation. Self-supervised learning: objectives and loss functions, masked language modeling.

Week 12: Large Language Models: tokenizers, pre-training and post-training, multimodal alignment, model compression, reinforcement learning for fine-tuning, Proximal Policy Optimization, benchmarking and evaluation of LLMs.

Diffusion models: deep generative models, VAEs and GANs, forward and reverse diffusion, denoising score matching, variational lower bounds, Stable Diffusion.

Books and references

1. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. ISBN 9780262035613.
2. Zhang, Aston, et al. Dive into Deep Learning. Cambridge University Press, 2023. ISBN 9781009389433.
3. CS231n: Deep Learning for Computer Vision, Stanford University.
4. Practical Deep Learning, Fast.ai (https://course.fast.ai/)

Instructor bio

Prof. Sriram Ganapathy

Indian Institute of Science, Bangalore
Dr. Sriram Ganapathy is an Associate Professor in the Department of Electrical Engineering at the Indian Institute of Science (IISc), Bengaluru, and the principal investigator of the LEAP Lab (Learning & Extraction of Acoustic Patterns). He holds a B.Tech. in Electronics and Communication Engineering from the College of Engineering, Trivandrum (2004), an M.E. in Signal Processing from IISc (2006), and a Ph.D. in Electrical and Computer Engineering from Johns Hopkins University (2012). Before joining IISc in 2016, he worked as a Research Staff Member at the IBM Watson Research Center (2011–2015) and served as a Visiting Research Scientist at Google Research India/DeepMind (2022–2024). His research focuses on speech and audio signal processing, deep learning, representation learning, auditory neuroscience, and explainable AI, with applications in speech recognition, speaker recognition, emotion analysis, and acoustic sensing. He has led impactful projects such as modulation filter learning for robust speech models and the Coswara COVID-19 vocal screening initiative. Dr. Ganapathy has authored over 120 peer-reviewed publications, guided award-winning student projects including the Qualcomm Innovation Fellowship, and teaches courses like Speech Information Processing and Machine Learning for Signal Processing. His contributions have been recognized with several honors, including the Verisk AI Faculty Research Award (2021, 2022), the DAE Young Scientist Award (2018), the IEEE SigPort Chief Editorship, and the title of IBM Master Inventor. He is a Senior Member of IEEE and an active member of the International Speech Communication Association (ISCA).


Prof. Ashwini Kodipalli

RV University
Dr. Ashwini Kodipalli holds a Ph.D. from the Indian Institute of Science (IISc), Bangalore, and is currently a faculty member at RV University. With over 15 years of teaching experience, she is deeply passionate about simplifying complex concepts and making them accessible to students. Her research expertise lies in Biomedical Image Analysis, particularly in the application of advanced Deep Learning algorithms for healthcare. She has published over 15 research articles in top-ranked journals and more than 75 papers in reputed IEEE international conferences. Dr. Ashwini has served as a resource person for multiple National-level Faculty Development Programs (FDPs), delivering expert sessions on Deep Learning and its applications. She has also successfully completed two funded research projects under VGST, focused on the early detection of PCOD and associated mental health issues, in collaboration with NIMHANS, Bengaluru. Her academic and research contributions, combined with hands-on experience in practical AI applications, make her well-suited to offer this course. For a full list of publications, please refer to [https://scholar.google.com/citations?user=vy9VcokAAAAJ ] and [https://www.scopus.com/authid/detail.uri?authorId=57203964287].


Prof. Baishali Garai

RV University
Dr. Baishali Garai completed her Ph.D. at the Indian Institute of Science, Bangalore, in 2014. She completed the PG-Level Advanced Certification in Deep Learning Foundations and Applications, conducted by IISc in association with TalentSprint, with an A grade in 2023. She has 11 years of experience spanning academia and industry and is currently working as an Associate Professor in the School of Computer Science and Engineering, RV University, Bangalore. She is a recipient of the Early Career Research Award from SERB-DST (2017). She has completed three government-funded research projects as Principal Investigator, including projects from ISRO and SERB, and is currently the Principal Investigator of a DST project. Her research interests encompass applications of deep learning in materials science. She has 10 publications in reputed journals and more than 30 international conference publications. Dr. Garai has served as a resource person in various national and international seminars and workshops; her talks focus on applying deep learning concepts to solve science-related problems.

Course certificate

The course is free to enroll in and learn from. But if you want a certificate, you have to register and write the proctored exam conducted by us in person at any of the designated exam centres.
The exam is optional for a fee of Rs 1000/- (Rupees one thousand only).
Date and Time of Exams: April 25, 2026 Morning session 9am to 12 noon; Afternoon Session 2pm to 5pm.
Registration URL: announcements will be made when the registration form opens for registrations.
The online registration form has to be filled and the certification exam fee needs to be paid. More details will be made available when the exam registration form is published. If there are any changes, it will be mentioned then.
Please check the form for more details on the cities where the exams will be held, the conditions you agree to when you fill the form etc.

CRITERIA TO GET A CERTIFICATE

Average assignment score = 25% of average of best 8 assignments out of the total 12 assignments given in the course.
Exam score = 75% of the proctored certification exam score out of 100

Final score = Average assignment score + Exam score

Please note that assignments encompass all types (including quizzes, programming tasks, and essay submissions) available in the specific week.

YOU WILL BE ELIGIBLE FOR A CERTIFICATE ONLY IF AVERAGE ASSIGNMENT SCORE >=10/25 AND EXAM SCORE >= 30/75. If one of the 2 criteria is not met, you will not get the certificate even if the Final score >= 40/100.
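The scoring rule above can be checked with a few lines of arithmetic; the assignment and exam marks below are hypothetical, chosen only to show the computation:

```python
# Hypothetical marks: best 8 of 12 assignments, each scored out of 100
best8 = [90, 85, 80, 95, 100, 70, 75, 88]

assignment_score = 0.25 * (sum(best8) / len(best8))  # scaled to a max of 25
exam_score = 0.75 * 60                               # exam mark of 60/100, scaled to 75

final = assignment_score + exam_score
# Both thresholds must be met independently of the final score
eligible = assignment_score >= 10 and exam_score >= 30

print(round(assignment_score, 2), exam_score, round(final, 2), eligible)
# 21.34 45.0 66.34 True
```

Note that a learner with, say, 25/25 on assignments but 25/75 on the exam would have a final score above 40 yet still fail the exam-score threshold and receive no certificate.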

The certificate will have your name, photograph, and the score in the final exam with the breakup. It will have the logos of NPTEL and IISc Bangalore. It will be e-verifiable at nptel.ac.in/noc.

Only the e-certificate will be made available. Hard copies will not be dispatched.

Once again, thanks for your interest in our online courses and certification. Happy learning.

- NPTEL team