S.No.
|
Topics
|
Lectures
|
Instructor
|
References/Notes
|
0
|
Introduction to Machine Learning
|
01-01
|
SDR
|
|
|
Flavours of Machine Learning: Unsupervised, Supervised,
Reinforcement, Hybrid models. Decision Boundaries: crisp and
non-crisp; optimisation problems. Examples of unsupervised learning.
|
04 Jan (Tue) {lecture#01}
|
SDR
|
MS-Teams folder:
video_k_means_em1_04jan22.mp4,
slides_k_means_em1_04jan22.pdf,
lecture_notes_k_means_em1_04jan22.pdf
|
1
|
Unsupervised Learning:
K-Means, Gaussian Mixture Models, EM
|
01-06
|
SDR
|
[Bishop Chap.9],
[Do: Gaussians],
[Do: More on Gaussians],
[Ng: K-Means],
[Ng: GMM],
[Ng: EM],
[Smyth:
EM]
|
|
The K-Means algorithm: Introduction. Algorithms: history,
flavours. A mathematical formulation of the K-Means algorithm.
The Objective function to minimise.
|
04 Jan (Tue) {lecture#01}
|
SDR
|
MS-Teams folder:
video_k_means_em1_04jan22.mp4,
slides_k_means_em1_04jan22.pdf,
lecture_notes_k_means_em1_04jan22.pdf
|
|
The basic K-Means algorithm, computational complexity issues: each
step, overall. Limitations of K-Means. K-Means: Alternate
formulation with a distance threshold.
|
05 Jan (Wed) {lecture#02}
|
SDR
|
MS-Teams folder:
video_k_means_em2_05jan22.mp4,
slides_k_means_em2_05jan22.pdf,
lecture_notes_k_means_em2_05jan22.pdf
|
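The two-step iteration above can be sketched in a few lines of NumPy (an illustrative sketch, not course material; the blob data and the one-point-per-cluster initialisation below are made up):

```python
import numpy as np

def k_means(X, K, init, n_iter=100):
    """Minimal K-Means: alternate the assignment and update steps.
    The assignment step costs O(N*K*D) per iteration."""
    centres = init.astype(float).copy()
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centre.
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centre moves to the mean of its points.
        new_centres = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return centres, labels

# Two tight, well-separated blobs; one initial centre from each blob.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
centres, labels = k_means(X, K=2, init=X[[0, 50]])
```

The distance-threshold variant mentioned for lecture#02 would replace the fixed K with a rule that spawns a new centre when a point is farther than the threshold from every existing centre.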
|
An introduction to
Gaussian Mixture Models.
The Bayes rule, and Responsibilities.
Maximum Likelihood Estimation. Parameter estimation for a mixture
of Gaussians, starting with a simple 1-D single Gaussian case.
ML-Estimation: the simple case of one 1-D Gaussian, to
the general case of K D-dimensional Gaussians.
|
07 Jan (Fri) {lecture#03}
|
SDR
|
MS-Teams folder:
video_k_means_em3_07jan22.mp4,
slides_k_means_em3_07jan22.pdf,
lecture_notes_k_means_em3_07jan22.pdf
|
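For the simple case of one 1-D Gaussian, the ML estimates have the closed forms derived in this lecture; a quick numerical check (illustrative only; the true parameters below are made up):

```python
import numpy as np

# ML estimation for a single 1-D Gaussian: setting the gradient of the
# log-likelihood to zero gives closed forms -- the sample mean, and the
# biased (divide-by-N) sample variance.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=100_000)   # true mu=2, sigma=1.5

mu_ml = x.mean()
var_ml = ((x - mu_ml) ** 2).mean()   # divides by N, not N-1
```

The divide-by-N estimator is biased but is what maximum likelihood gives; for K D-dimensional Gaussians the same derivation yields responsibility-weighted versions of these formulas.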
|
The general case of K D-dimensional Gaussians.
Getting stuck, using Lagrange Multipliers.
The EM Algorithm for Gaussian Mixtures.
Application: Assignment 1:
The Stauffer and Grimson Adaptive Background Subtraction
Algorithm.
An introduction to the basic set of interesting heuristics!
|
11 Jan (Tue) {lecture#04}
|
SDR
|
MS-Teams folder:
video_k_means_em4_11jan21.mp4, slides_k_means_em4_11jan21.pdf,
lecture_notes_k_means_em4_11jan21.pdf
|
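The E- and M-steps for a mixture of Gaussians can be sketched in the 1-D case as follows (an illustrative sketch, not course code; the two-mode data and the min/max initialisation are made up):

```python
import numpy as np

def em_gmm_1d(x, K=2, n_iter=200):
    """EM for a 1-D mixture of Gaussians.  E-step: responsibilities via
    Bayes' rule.  M-step: responsibility-weighted ML updates."""
    pi = np.full(K, 1.0 / K)                  # mixing weights
    mu = np.linspace(x.min(), x.max(), K)     # spread the initial means out
    var = np.full(K, x.var())
    for _ in range(n_iter):
        # E-step: r[n, k] = responsibility of component k for point n.
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates for weights, means, variances.
        Nk = r.sum(axis=0)
        pi = Nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
pi, mu, var = em_gmm_1d(x)
```

The Stauffer and Grimson algorithm of Assignment 1 runs a per-pixel online variant of exactly this mixture update.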
|
The Stauffer and Grimson algorithm (contd)
|
12 Jan (Wed) {lecture#05}
|
SDR
|
MS-Teams folder:
video_k_means_em5_12jan21.mp4,
slides_k_means_em5_12jan21.pdf,
lecture_notes_k_means_em5_12jan21.pdf
|
|
The Stauffer and Grimson algorithm (contd)
|
15 Jan (Sat) {lecture#06}
|
SDR
|
MS-Teams folder:
video_k_means_em6_eigen1_15jan22.mp4,
slides_k_means_em6_15jan22.pdf
|
2
|
Unsupervised Learning:
EigenAnalysis:
PCA, LDA and Subspaces
|
06-10
|
SDR
|
[Ng: PCA],
[Ng: ICA],
[Burges: Dimension Reduction],
[Bishop Chap.12]
|
|
Introduction to Eigenvalues and Eigenvectors.
Properties of Eigenvalues and Eigenvectors.
|
15 Jan (Sat) {lecture#06}
|
SDR
|
MS-Teams folder:
video_k_means_em6_eigen1_15jan22.mp4,
slides_eigen1_15jan22.pdf
|
|
Properties of Eigenvalues and Eigenvectors (contd).
|
18 Jan (Tue) {lecture#07}
|
SDR
|
MS-Teams folder:
video_eigen2_18jan22.mp4,
slides_eigen2_18jan22.pdf
|
|
Properties of Eigenvalues and Eigenvectors (contd).
Gram-Schmidt Orthogonalisation, other properties.
|
19 Jan (Wed) {lecture#08}
|
SDR
|
MS-Teams folder:
video_eigen3_19jan22.mp4,
slides_eigen3_19jan22.pdf
|
|
The KL Transform.
The SVD and its properties.
|
21 Jan (Fri) {lecture#09}
|
SDR
|
MS-Teams folder:
video_eigen4_21jan22.mp4,
slides_eigen4_21jan22.pdf
|
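The key SVD properties from this lecture check out numerically (an illustrative sketch; the random matrix and rank r below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))

# The SVD: A = U diag(s) V^T, orthonormal U and V, s sorted descending.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Orthonormal columns, and exact reconstruction.
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(U @ np.diag(s) @ Vt, A)

# Eckart-Young: keeping the top-r singular values gives the best rank-r
# approximation in Frobenius norm; the squared error equals the sum of
# the discarded squared singular values.
r = 2
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
err2 = np.linalg.norm(A - A_r, 'fro') ** 2
assert np.isclose(err2, (s[r:] ** 2).sum())
```

This truncation property is what makes the SVD the workhorse behind PCA and the Eigenfaces assignment.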
|
The SVD and its properties (contd).
Application: Assignment 2: Eigenfaces and Fisherfaces
|
25 Jan (Tue) {lecture#10}
|
SDR
|
MS-Teams folder:
video_eigen5_linear1_25jan22.mp4,
slides_eigen5_25jan22.pdf
|
3
|
Linear Models for Regression, Classification
|
10-14
|
SDR
|
[Bishop Chap.3],
[Bishop Chap.4],
[Ng: Supervised, Discriminant Analysis],
[Ng: Generative]
|
|
General introduction to Regression and Classification.
|
25 Jan (Tue) {lecture#10}
|
SDR
|
MS-Teams folder:
video_eigen5_linear1_25jan22.mp4,
slides_linear1_25jan22.pdf
|
|
Linearity and restricted non-linearity.
Maximum Likelihood and Least Squares.
The Moore-Penrose Pseudo-inverse.
|
28 Jan (Fri) {lecture#11}
|
SDR
|
MS-Teams folder:
video_linear2_28jan22.mp4,
slides_linear2_28jan22.pdf,
lecture_notes_linear2_28jan22.pdf
|
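The Maximum Likelihood / Least Squares connection and the pseudo-inverse can be checked in a few lines (illustrative; the line parameters, noise level, and regularisation constant are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# Design matrix with a bias column; targets from a known line plus noise.
x = rng.uniform(-1, 1, 50)
Phi = np.column_stack([np.ones_like(x), x])
t = 2.0 + 3.0 * x + rng.normal(0, 0.05, 50)

# ML / least-squares solution via the normal equations and, equivalently,
# the Moore-Penrose pseudo-inverse: w = (Phi^T Phi)^{-1} Phi^T t.
w_normal = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)
w_pinv = np.linalg.pinv(Phi) @ t

# Regularised (ridge) least squares just adds lambda*I before inverting,
# shrinking every component of w.
lam = 0.1
w_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2), Phi.T @ t)
```

Both routes recover weights close to the true (2, 3), and the ridge solution has strictly smaller norm.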
|
The Moore-Penrose Pseudo-inverse (contd).
Regularised Least Squares.
Three approaches to classification: restricted non-linear models
(linear combination of possible non-linear feature transformations).
Introduction to linear models: equation of a line in terms of the
physical significance of the space, and the weights
w.
|
29 Jan (Sat) {lecture#12}
|
SDR
|
MS-Teams folder:
video_linear3_29jan22.mp4,
slides_linear3_29jan22.pdf
|
|
Introduction to linear models: equation of a line in terms of the
physical significance of the space, and the weights
w (contd).
Linear Discriminant Functions: 2 classes, and K classes. Fisher's
Linear Discriminant (basic build-up).
Fisher's Linear Discriminant.
Application: Assignment 2: Eigenfaces and Fisherfaces
|
01 Feb (Tue) {lecture#13}
|
SDR
|
MS-Teams folder:
video_linear4_01feb22.mp4,
slides_linear4_01feb22.pdf
|
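The 2-class Fisher discriminant built up in these lectures reduces to one linear solve, w proportional to S_W^{-1}(m2 - m1); a sketch (illustrative only; the class means and covariances below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two 2-D classes with a shared, anisotropic within-class covariance.
X1 = rng.normal([0.0, 0.0], [1.0, 0.2], size=(200, 2))
X2 = rng.normal([2.0, 1.0], [1.0, 0.2], size=(200, 2))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# Within-class scatter S_W (sum of per-class scatter matrices).
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
# Fisher direction w ~ S_W^{-1}(m2 - m1): maximises between-class
# separation relative to within-class scatter.
w = np.linalg.solve(Sw, m2 - m1)
w /= np.linalg.norm(w)

p1, p2 = X1 @ w, X2 @ w
threshold = 0.5 * (p1.mean() + p2.mean())
```

Projecting onto w separates the classes far better than projecting onto (m2 - m1) alone, because S_W^{-1} discounts the high-variance direction; this is the idea behind the Fisherfaces part of Assignment 2.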
|
Fisher's Linear Discriminant (contd).
Application: Assignment 2: Eigenfaces and Fisherfaces
|
02 Feb (Wed) {lecture#14}
|
SDR
|
MS-Teams folder:
video_linear5_svm1_02feb22.mp4, slides_linear5_02feb22.pdf
|
4
|
SVMs
|
14-20
|
SDR
|
[Bishop Chap.7],
[Alex: SVMs],
[Ng: SVMs],
[Burges: SVMs],
[Bishop Chap.6]
|
|
SVMs: the concept of the margin.
|
02 Feb (Wed) {lecture#14}
|
SDR
|
MS-Teams folder:
video_linear5_svm1_02feb22.mp4, slides_svm1_02feb22.pdf
|
|
SVMs: the optimisation problem, getting
the physical significance of the y = +1 and y = -1 lines.
The two `golden' regions for the 2-class perfectly separable case.
The generalised canonical representation in terms of one inequation.
|
04 Feb (Fri) {lecture#15}
|
SDR
|
MS-Teams folder:
video_svm2_04feb22.mp4, slides_svm2_04feb22.pdf
|
|
The basic SVM optimisation: the primal and the dual problems.
An illustration of the kernel trick. Lagrange Multipliers and the
KKT Conditions.
|
08 Feb (Tue) {lecture#16}
|
SDR
|
MS-Teams folder:
video_svm3_08feb22.mp4, slides_svm3_08feb22.pdf
|
|
Lagrange Multipliers and the KKT Conditions (contd).
The Soft-Margin SVM. Abstracting the basic concepts of the
hard-margin SVM, to use in a similar formulation. The function to
optimise, the inequality constraints, the KKT conditions from
Lagrange's theory. The Primal and Dual formulations.
Introduction to Kernels.
|
09 Feb (Wed) {lecture#17}
|
SDR
|
MS-Teams folder:
video_svm4_09feb22.mp4, slides_svm4_09feb22.pdf
|
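The lectures solve the soft-margin problem through the dual and the KKT conditions; as a complementary sketch only, the primal objective can also be minimised directly by subgradient descent on the hinge loss (not the method derived in class; data, C, and learning rate below are made up):

```python
import numpy as np

def soft_margin_svm(X, y, C=1.0, lr=0.01, epochs=300):
    """Minimise the primal soft-margin objective
        0.5*||w||^2 + C * sum_n max(0, 1 - y_n (w.x_n + b))
    by full-batch subgradient descent.  Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0                  # margin-violating points
        # Subgradient: the ||w||^2 term plus one hinge term per violator.
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (40, 2)), rng.normal(2.0, 0.5, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
w, b = soft_margin_svm(X, y)
```

At the optimum the KKT conditions say only margin-touching and margin-violating points (the support vectors) contribute to w, which is visible here: non-violators drop out of the gradient entirely.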
---
|
Minor
|
Minor: 15 February 2022 (Tuesday)
|
---
|
01:30 pm - 02:30 pm.
Scanned scripts to be uploaded by 03:00 pm.
|
|
Introduction to Kernels.
Kernels in Regression.
|
18 Feb (Fri) {lecture#18}
|
SDR
|
MS-Teams folder:
video_kernel2_18feb22.mp4, lecture_notes_kernel2_18feb22.pdf
|
|
Kernels in Regression (contd).
|
22 Feb (Tue) {lecture#19}
|
SDR
|
MS-Teams folder:
video_kernel3_22feb22.mp4, lecture_notes_kernel3_22feb22.pdf
|
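Kernels in regression reduce to one linear solve in the dual variables; a kernel ridge regression sketch (illustrative only; the RBF kernel, its width gamma, and the regulariser lam are made up for the demo):

```python
import numpy as np

def rbf_kernel(a, b, gamma=10.0):
    """k(x, x') = exp(-gamma * (x - x')^2) for 1-D inputs."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
t = np.sin(2.0 * np.pi * x) + rng.normal(0, 0.05, 40)

# Kernel ridge regression: dual weights a = (K + lam*I)^{-1} t, and
# predictions y(x*) = sum_n a_n k(x*, x_n) -- everything is expressed
# through the kernel, never through explicit feature vectors.
lam = 1e-3
K = rbf_kernel(x, x)
a = np.linalg.solve(K + lam * np.eye(len(x)), t)

x_test = np.linspace(0.0, 1.0, 101)
y_test = rbf_kernel(x_test, x) @ a
```

The same "replace inner products by k(x, x')" move is the kernel trick used in the SVM dual.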
|
Properties of kernels, Constructing kernels.
The Soft-Margin SVM (contd). The physical significance of the
slack parameter in the formulation. Fitting the new soft-margin
formulation into the hard-margin SVM `template'.
|
23 Feb (Wed) {lecture#20}
|
SDR
|
MS-Teams folder:
video_kernel4_svm5_23feb22.mp4,
lecture_notes_kernel4_23feb22.pdf,
lecture_notes_svm5_23feb22.pdf
|
5
|
Neural Networks
|
21-xx
|
SDR
|
[Bishop Chap.5], [DL Chap.6], [DL Chap.9]
|
|
Introduction to Neural Networks: the Multi-Layer Perceptron:
Conventions, restricted non-linearity.
|
25 Feb (Fri) {lecture#21}
|
SDR
|
MS-Teams folder:
video_nn1_25feb22.mp4,
lecture_notes_nn1_25feb22.pdf
|
|
The Basic Perceptron.
|
02 Mar (Wed) {lecture#22}
|
SDR
|
MS-Teams folder:
video_nn2_02mar22.mp4,
lecture_notes_nn2_02mar22.pdf
|
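The basic perceptron learning rule fits in a few lines (illustrative sketch; the four toy points below are made up):

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Rosenblatt's perceptron rule: for each misclassified point,
    w <- w + y_n * x_n.  Converges in finitely many updates when the
    data are linearly separable.  Labels y in {-1, +1}; the bias is
    absorbed by appending a constant-1 feature."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xn, yn in zip(Xb, y):
            if yn * (w @ xn) <= 0:       # misclassified (or on the boundary)
                w += yn * xn
                mistakes += 1
        if mistakes == 0:                # a full clean pass: converged
            break
    return w

X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = perceptron(X, y)
```

On non-separable data the loop simply exhausts max_epochs, which is one motivation for the margin-based and multi-layer models that follow.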
|
The Basic Perceptron (contd).
Mathematical Basics: `factorisation'
|
04 Mar (Fri) {lecture#23}
|
SDR
|
MS-Teams folder:
video_nn3_04mar22.mp4,
lecture_notes_nn3_04mar22.pdf
|
|
Mathematical Basics: `factorisation' (contd). Second order Taylor
series expansion.
|
05 Mar (Sat) {lecture#24}
|
SDR
|
MS-Teams folder:
video_nn4_05mar22.mp4,
lecture_notes_nn4_05mar22.pdf
|
|
Mathematical Interlude (contd): The need for a second order
Taylor series expansion. Eigenanalysis of a Hessian.
The Backpropagation Algorithm: some initial points
|
08 Mar (Tue) {lecture#25}
|
SDR
|
MS-Teams folder:
video_nn5_08mar22.mp4,
lecture_notes_nn5_08mar22.pdf
|
|
The Backpropagation Algorithm (contd).
|
09 Mar (Wed) {lecture#26}
|
SDR
|
MS-Teams folder:
video_nn6_09mar22.mp4,
lecture_notes_nn6_09mar22.pdf
|
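The backpropagation pass can be written out by hand for a tiny one-hidden-layer network (an illustrative sketch, not course code; the architecture, target function, and learning rate are made up):

```python
import numpy as np

# Tanh hidden layer, linear output, mean-squared-error loss.  The
# backward pass is just the chain rule applied layer by layer.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
T = np.sin(np.pi * X)                        # target function to fit

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

loss_initial = ((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2).mean()

lr = 0.1
for _ in range(5000):
    # Forward pass.
    A = np.tanh(X @ W1 + b1)                 # hidden activations
    Y = A @ W2 + b2                          # linear output
    # Backward pass.
    dY = 2.0 * (Y - T) / len(X)              # d(loss)/dY for the MSE
    dW2 = A.T @ dY;        db2 = dY.sum(axis=0)
    dA = dY @ W2.T
    dZ = dA * (1.0 - A ** 2)                 # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dZ;        db1 = dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

loss_final = ((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2).mean()
```

Note the pattern: each layer's weight gradient is (input activations)^T times the error signal arriving from above, and the error signal is pushed one layer down by multiplying by the transposed weights and the activation derivative.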
|
Deep Learning structures and concepts: the hidden layer as a
kernel function. The XOR example: a linear model does not
suffice.
|
11 Mar (Fri) {lecture#27}
|
SDR
|
MS-Teams folder:
video_nn7_11mar22.mp4,
lecture_notes_nn7_11mar22.pdf
|
|
The XOR example (contd). Other hand-crafted examples, with
another activation function. An introduction to the ReLU.
|
15 Mar (Tue) {lecture#28}
|
SDR
|
MS-Teams folder:
video_nn8_15mar22.mp4,
lecture_notes_nn8_15mar22.pdf
|
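The hand-crafted ReLU solution to XOR from these lectures (the standard construction from [DL Chap.6]) can be verified directly:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Hand-crafted weights: one ReLU hidden layer of two units.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
w2 = np.array([1.0, -2.0])
b2 = 0.0

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = relu(X @ W1 + b1) @ w2 + b2
# y == [0, 1, 1, 0]: exactly XOR, which no purely linear model can produce.
```

Both hidden units compute x1 + x2 (the second offset by -1); the ReLU bends the (1,1) case back down, which is precisely the non-linearity a linear model lacks.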
|
Vector-Matrix representations. Activation Functions: some further
discussion. The sigmoid and tanh revisited. ReLU, Leaky ReLU, ELU.
|
16 Mar (Wed) {lecture#29}
|
SDR
|
MS-Teams folder:
video_nn9_16mar22.mp4,
lecture_notes_nn9_16mar22.pdf
|
|
Some small hand-crafted neural network examples.
Some insight into the expressive power of feed-forward neural
networks. Some insight into 2-D inputs, and interpreting first
layer weights as images, and its importance for CNNs.
|
19 Mar (Sat) {lecture#30}
|
SDR
|
(Make-up lecture for the missed 22 Mar (Tue) lecture)
MS-Teams folder:
video_nn10_19mar22.mp4,
lecture_notes_nn10_19mar22.pdf
|
|
---
|
22 Mar (Tue) {no lecture}
|
SDR
|
(no class)
|
|
Some insight into 2-D inputs, and interpreting first
layer weights as images, and its importance for CNNs (contd): an
empirical observation, and evidence for the idea of local
receptive fields, low-level differentiation/edge features, with
some biological motivation as well, from the visual cortex.
Some insight into the expressive power of feed-forward neural
networks: the insight from shallow networks with a large width.
An example with asymmetric values for inputs and outputs for a
digital circuit, and estimating neural network parameters for
D input neurons, 2^D neurons in one hidden layer, and one
output neuron.
|
23 March (Wed) {lecture#31}
|
SDR
|
MS-Teams folder:
video_nn11_23mar22.mp4,
lecture_notes_nn11_23mar22.pdf
|
|
Some insight into the expressive power of feed-forward neural
networks (contd).
A domain-independent introduction to Convolution
|
25 Mar (Fri) {lecture#32}
|
SDR
|
MS-Teams folder:
video_nn12_25mar22.mp4,
lecture_notes_nn12_25mar22.pdf
|
|
A domain-independent introduction to Convolution (contd).
|
26 Mar (Sat) {lecture#33}
|
SDR
|
MS-Teams folder:
video_nn13_26mar22.mp4,
lecture_notes_nn13_26mar22.pdf
|
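The domain-independent (discrete) convolution of these lectures, in 'valid' mode, is a short sketch (illustrative only; the signal and the difference kernel below are made up):

```python
import numpy as np

def conv1d_valid(x, w):
    """Discrete convolution, 'valid' mode: flip the kernel, slide it
    across x, keeping only positions of full overlap."""
    K = len(w)
    w_flipped = w[::-1]
    return np.array([np.dot(x[n:n + K], w_flipped)
                     for n in range(len(x) - K + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])       # a simple difference (edge) kernel
out = conv1d_valid(x, w)             # -> [2., 2., 2.]: x has constant slope
```

One point worth keeping in mind for the CNN lectures that follow: most deep-learning libraries implement the un-flipped version (cross-correlation) and still call it convolution.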
|
Introducing CNNs: some basic features.
|
29 Mar (Tue) {lecture#34}
|
SDR
|
MS-Teams folder:
video_nn14_29mar22.mp4,
lecture_notes_nn14_29mar22.pdf
|
|
CNN basics.
The basic LeNet architecture.
|
30 Mar (Wed) {lecture#35}
|
SDR
|
MS-Teams folder:
video_nn15_30mar22.mp4,
lecture_notes_nn15_30mar22.pdf
|
---
|
Major
|
05 Apr (Tue) - 12 Apr (Tue)
|
---
|
---
|
xx
|
Mathematical Basics for Machine Learning
|
xx-xx
|
xx
|
[Burges: Math for ML],
[Do,
Kolter: Linear Algebra Notes]
|