Week 1: Introduction to Supervised/Unsupervised/Generative Learning, Learning via Empirical Risk Minimization
Week 2: Bayes Optimality and Density Estimation via Divergence Minimization
Week 3: Maximum Likelihood and MAP Estimates, Non-Parametric Estimates (Nearest Neighbours and Parzen Window)
Week 4: Linear Models: Linear regression, least squares, Fisher discriminant, logistic regression
Week 5: Regularization & Generalization: Bias–variance decomposition, ridge regression, lasso, probabilistic interpretation of regularization
Week 6: Kernel Machines & SVMs: Maximum margin classifiers, Dual form, KKT conditions, Kernel trick & RKHS intuition.
Week 7: Perceptron, Neural Networks, Gradient-based Optimization, Error Backpropagation
Week 8: Convolutional Neural Networks: Convolution, pooling, receptive fields, CNN architectures, transfer learning.
Week 9: Sequence Models: RNNs, backpropagation through time, vanishing/exploding gradients, GRUs, LSTMs
Week 10: Attention & Transformers: Attention mechanism, Self-attention vs recurrence, Encoder–decoder Transformers.
Week 11: Ensembles and Decision trees: Bagging & Random Forests, Boosting (AdaBoost, XGBoost).
Week 12: Unsupervised Learning & EM: Clustering (k-means, Gaussian mixtures), the EM algorithm, dimensionality reduction and PCA.
A preview of Generative Models: GANs, VAEs, and diffusion models (high-level only).
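To make one syllabus item concrete, here is a minimal sketch of the closed-form ridge regression estimator from Week 5, w = (XᵀX + λI)⁻¹Xᵀy. This is an illustrative example assuming NumPy and synthetic data, not course-provided code; the function name and parameters are our own.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge regression weights via the regularized normal equations.

    Solves (X^T X + lam * I) w = X^T y, which minimizes
    ||X w - y||^2 + lam * ||w||^2 (the Week 5 ridge objective).
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Tiny usage example with synthetic data (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)
w = ridge_fit(X, y, lam=0.1)  # recovers weights close to true_w
```

Setting `lam=0` reduces this to ordinary least squares; larger values shrink the weights toward zero, trading variance for bias as discussed in Week 5.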