This page contains the schedule, slides from the lectures, lecture notes, reading lists,
assignments, and web links.
I urge you to download the DjVu viewer
and view the DjVu versions of the documents below. They display faster,
are higher quality, and generally have smaller file sizes than the PS and PDF versions.
WARNING: The schedule below is almost certainly going to change.
Introduction and basic concepts |
Subjects treated: Intro, types of learning, nearest neighbor, how biology does it,
linear classifiers, the perceptron learning procedure, linear regression.
Slides: [DjVu | PDF]
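As a small illustration of the perceptron learning procedure covered in this lecture, here is a minimal NumPy sketch (not from the course materials) that learns the logical AND function with labels in {-1, +1}:

```python
import numpy as np

# Toy linearly separable problem: logical AND with labels in {-1, +1}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])

w = np.zeros(2)
b = 0.0
for epoch in range(20):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:   # misclassified (or on the boundary)
            w += yi * xi             # perceptron update: nudge toward yi
            b += yi
            errors += 1
    if errors == 0:                  # converged: every point classified
        break

predictions = np.sign(X @ w + b)
print(predictions)  # [-1. -1. -1.  1.]
```

For linearly separable data like this, the perceptron convergence theorem guarantees the loop terminates with zero errors.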
Recommended Reading:
- Bishop, Chapter 1.
- Refresher on random variables and probabilities by
Andrew Moore (slides 1-27): [DjVu | PDF]
- Refresher on joint probabilities and Bayes' theorem by
Chris Williams: [DjVu | PDF]
- Refresher on statistics and probabilities by
Sam Roweis: [DjVu | PS]
- If you are interested in the early history of self-organizing
systems and cybernetics, have a look at this book available from the
Internet Archive's Million Book Project: Self-Organizing
Systems, proceedings of a 1959 conference edited by Yovits and
Cameron (DjVu viewer required for full text).
Linear Classifiers, Basis Functions, Kernel Trick, Regularization, Generalization |
Subjects treated: Linear machines: perceptron, logistic
regression. Linearly parameterized classifiers: Polynomial
classifiers, basis function expansion, RBFs, Kernel-based expansion.
Slides Basis Functions, Kernel Trick: [DjVu | PDF]
Slides Regularization, Generalization: [DjVu | PDF]
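To make the basis-function idea concrete, here is a minimal NumPy sketch (my own illustration, not from the slides): XOR is not linearly separable in the raw inputs, but a linear classifier in a polynomial basis expansion phi(x) = (x1, x2, x1*x2) solves it exactly.

```python
import numpy as np

# XOR: not linearly separable in input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])            # XOR labels in {-1, +1}

def phi(X):
    # Degree-2 basis expansion: append the cross term x1*x2.
    return np.column_stack([X, X[:, 0] * X[:, 1]])

# Least-squares fit of a linear classifier in the expanded space
# (with a bias column appended).
Phi = np.column_stack([phi(X), np.ones(len(X))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
predictions = np.sign(Phi @ w)
print(predictions)  # [-1.  1.  1. -1.]
```

The classifier is still linear in its parameters; only the features are nonlinear, which is the key property that the kernel trick later exploits implicitly.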
Kernel Methods and Support Vector Machines |
- Slides Support Vector Machines, part 1: [DjVu | PDF]
- Slides Support Vector Machines, part 2: [DjVu | PDF]
An older set of slides on SVMs, in case you find them useful:
- Slides Support Vector Machines, part 1: [DjVu | PDF]
- Slides Support Vector Machines, part 2: [DjVu | PDF]
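As a rough companion to the slides, here is a minimal NumPy sketch (an illustration of mine, not from the course materials) of a linear SVM trained by subgradient descent on the regularized hinge loss lam/2 * ||w||^2 + mean(max(0, 1 - y*(w.x + b))):

```python
import numpy as np

# Two well-separated Gaussian clusters with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
lam, lr = 0.01, 0.1
for _ in range(500):
    mask = y * (X @ w + b) < 1                 # margin violators
    # Subgradient of the regularized hinge loss: only violators contribute.
    grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(X)
    grad_b = -y[mask].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = (np.sign(X @ w + b) == y).mean()
print(accuracy)
```

Only the points inside the margin (the eventual support vectors) drive the update, which is the defining property of the hinge loss.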
Energy-Based Models, Loss Functions, Linear Machines |
Subjects treated: Energy-based learning, minimum-energy
inference, loss functions.
Linear machines: least square, perceptron, logistic regression.
Slides: [DjVu | PDF]
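Of the linear machines listed above, here is a minimal NumPy sketch (my own illustration, not from the slides) of logistic regression trained by gradient descent on the negative log-likelihood, with labels in {0, 1}:

```python
import numpy as np

# Two well-separated Gaussian clusters with labels in {0, 1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 0.5, (50, 2)), rng.normal(1.5, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)              # predicted P(y = 1 | x)
    grad_w = X.T @ (p - y) / len(X)     # gradient of the mean NLL
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = ((sigmoid(X @ w + b) > 0.5) == (y == 1)).mean()
print(accuracy)
```

In energy-based terms, the linear score w.x + b is a (negative) energy and the log-loss is one particular choice of loss functional on it.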
Recommended Reading:
Gradient-Based Learning I, Multi-Module Architectures and Back-Propagation, Regularization |
Subjects treated: Multi-module learning machines. Vector
modules and switches. Multilayer neural nets. Backpropagation
learning.
Slides Multi-Module Learning machines, backprop: [DjVu | PDF]
Slides Special modules: [DjVu | PDF]
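To illustrate module-by-module backpropagation, here is a minimal NumPy sketch (my own, not from the slides) of a two-layer net with a tanh hidden layer and squared-error loss, with the backprop gradient checked against a finite-difference estimate:

```python
import numpy as np

# Tiny two-layer net: h = tanh(W1 x + b1), yhat = W2 h + b2, squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
t = 1.0                                   # scalar target
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(1, 4)); b2 = rng.normal(size=1)

def loss_and_grads(W1, b1, W2, b2):
    # Forward pass, module by module.
    a = W1 @ x + b1
    h = np.tanh(a)
    yhat = W2 @ h + b2
    L = 0.5 * (yhat[0] - t) ** 2
    # Backward pass: chain rule, in reverse module order.
    dyhat = yhat - t                      # dL/dyhat
    dW2 = np.outer(dyhat, h); db2 = dyhat
    dh = W2.T @ dyhat                     # back through the linear module
    da = dh * (1 - h ** 2)                # back through tanh
    dW1 = np.outer(da, x); db1 = da
    return L, dW1, db1, dW2, db2

L, dW1, db1, dW2, db2 = loss_and_grads(W1, b1, W2, b2)

# Gradient check: compare one backprop gradient to finite differences.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (loss_and_grads(W1p, b1, W2, b2)[0] - L) / eps
print(abs(num - dW1[0, 0]))   # tiny: backprop matches the numerical gradient
```

This finite-difference check is the standard way to debug a hand-written backward pass.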
Recommended Reading: Bishop, Chapter 5.
Gradient-Based Learning II: Special Modules and Architectures |
Convolutional Nets, Image Recognition |
Required Reading:
If you haven't read it already: Gradient-based Learning Applied to
Document Recognition by LeCun, Bottou, Bengio, and Haffner; pages 1 to
the first column of page 18:
[ DjVu | .pdf ]
Optional Reading: Fu-Jie Huang, Yann LeCun, Leon Bottou: "Learning Methods for Generic Object
Recognition with Invariance to Pose and Lighting.", Proc. CVPR 2004.
[DjVu | PDF | PS.GZ]
Required Reading:
Efficient Backprop, by LeCun, Bottou, Orr, and Muller:
[ DjVu | .pdf ]
Probabilistic Learning, MLE, MAP, Bayesian Learning |
Slides Review of Probability and Statistics: [DjVu | PDF]
Slides Bayesian Learning: [DjVu | PDF]
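To contrast MLE with MAP concretely, here is a minimal sketch (my own illustration, not from the slides) for estimating the bias of a coin: with k heads in n tosses the MLE is k/n, while under a Beta(a, b) prior the MAP estimate is (k + a - 1) / (n + a + b - 2).

```python
# MLE vs. MAP for the bias of a coin (Bernoulli parameter theta).
k, n = 7, 10                 # observed: 7 heads in 10 tosses
a, b = 2.0, 2.0              # Beta(2, 2) prior, peaked at theta = 0.5

theta_mle = k / n                              # maximum likelihood
theta_map = (k + a - 1) / (n + a + b - 2)      # posterior mode under the prior
print(theta_mle, theta_map)  # 0.7 vs 0.666...: the prior pulls toward 0.5
```

As n grows, the prior's pull vanishes and the MAP estimate converges to the MLE.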
Intro to Unsupervised Learning |
Slides Unsupervised Learning, PCA, K-Means: [DjVu | PDF]
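As a concrete sketch of K-means (Lloyd's algorithm), here is a minimal NumPy illustration of mine (not from the slides) on two well-separated 1-D clusters:

```python
import numpy as np

# Two well-separated 1-D clusters around 0 and 5.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(5.0, 0.1, 50)])

k = 2
centers = np.array([X.min(), X.max()])        # simple initialization
for _ in range(10):
    # Assignment step: each point goes to its nearest center.
    labels = np.abs(X[:, None] - centers[None, :]).argmin(axis=1)
    # Update step: each center moves to the mean of its assigned points.
    centers = np.array([X[labels == j].mean() for j in range(k)])

print(np.sort(centers))   # close to the true cluster means 0 and 5
```

Each iteration can only decrease the within-cluster squared distance, so the loop converges (to a local optimum that depends on initialization).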
More on Unsupervised Learning, Latent Variables, EM |
Slides Latent variables: [DjVu | PDF]
Slides EM, Mixture of Gaussians: [DjVu | PDF]
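To make the E-step/M-step alternation concrete, here is a minimal NumPy sketch (my own illustration, not from the slides) of EM for a two-component 1-D Gaussian mixture with known, equal variances, so only the means and mixing weights are learned:

```python
import numpy as np

# Two 1-D Gaussian clusters around -2 and +2, known sigma = 0.5.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])

mu = np.array([-1.0, 1.0])      # initial component means
pi = np.array([0.5, 0.5])       # initial mixing weights
sigma = 0.5                     # known standard deviation

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = np.exp(-0.5 * ((X[:, None] - mu[None, :]) / sigma) ** 2)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate means and mixing weights from responsibilities.
    nk = r.sum(axis=0)
    mu = (r * X[:, None]).sum(axis=0) / nk
    pi = nk / len(X)

print(np.sort(mu))   # close to the true means -2 and 2
```

The responsibilities r are exactly the latent-variable posteriors from the lecture; the M-step is a responsibility-weighted version of the usual Gaussian mean estimate.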
Learning Theory, Bagging, Boosting, VC-Dim |
Slides Ensemble Methods: [DjVu | PDF]
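As a small numerical aside on bagging, here is a sketch of mine (not from the slides) of the bootstrap resampling at its heart: each base learner trains on a sample drawn with replacement, which on average contains about 63.2% of the distinct training points.

```python
import numpy as np

# Fraction of distinct points in a bootstrap sample, averaged over draws.
rng = np.random.default_rng(0)
n = 10_000
fractions = []
for _ in range(20):
    sample = rng.integers(0, n, size=n)          # one bootstrap sample (indices)
    fractions.append(len(np.unique(sample)) / n)

print(np.mean(fractions))   # close to 1 - 1/e, about 0.632
```

The left-out ~36.8% of points per sample are what make out-of-bag error estimates possible.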
Structured Prediction: HMMs, CRF, Graph Transformer Networks |
with introduction to factor graphs.
Slides:
Required Reading:
- Read Gradient-based Learning Applied to
Document Recognition by LeCun, Bottou, Bengio, and Haffner,
from page 18 (part IV) to the end.
[ DjVu | .pdf ]
- Read "A Tutorial on Energy-Based Learning" by LeCun, Chopra, Hadsell, Ranzato, and Huang.
[ DjVu | .pdf ]
Suggested Reading:
Sparse Coding and Deep Learning |
Slides Sparse Coding, Deep Learning: [DjVu | PDF]
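As a minimal sketch of sparse coding inference (my own illustration, not from the slides): given a fixed dictionary D, find a sparse code z minimizing ||x - D z||^2 / 2 + alpha ||z||_1 by ISTA, i.e. a gradient step on the quadratic term followed by soft-thresholding.

```python
import numpy as np

# Random unit-norm dictionary and a 2-sparse signal to encode.
rng = np.random.default_rng(0)
D = rng.normal(size=(8, 16))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
z_true = np.zeros(16); z_true[[2, 9]] = [1.0, -1.5]
x = D @ z_true

alpha = 0.01
L = np.linalg.norm(D.T @ D, 2)                    # Lipschitz const. of gradient
z = np.zeros(16)
for _ in range(500):
    z = z - (D.T @ (D @ z - x)) / L               # gradient step on ||x - Dz||^2/2
    z = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)   # soft-threshold

print(np.linalg.norm(D @ z - x))   # small residual, with a sparse code z
```

The soft-threshold step is what zeroes out most coefficients; this inference loop is also the starting point for learned approximations covered under deep learning.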