MATH60629A
1- Week 1 (January 10): Class introduction and math review
- Lecture: Slides
- Reading:
- Prologue to The Master Algorithm
- Chapter 1 of ESL
- For Math review: Check here
2- Week 2 (January 17): Machine learning fundamentals
- Lecture: Slides
- Capsules:
- Learning Problem [14:40]
- Types of Experiences [13:15]
- A first Supervised Model [8:03]
- Model Evaluation [15:26]
- Regularization [4:09]
- Model Validation [3:08]
- Bias / Variance tradeoff [11:50]
- Reading:
- Chapter 5 of Deep Learning. You can skip 5.4 (except 5.4.4) to 5.10.
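The capsules on evaluation, regularization, and validation fit together in a few lines of NumPy. The sketch below is illustrative only (the toy data, polynomial degree, and λ grid are invented, not from the course materials): it fits ridge regression on a training set and uses a held-out validation set to pick the regularization strength.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise, split into train / validation.
x = rng.uniform(0, 3, 30)
y = np.sin(x) + rng.normal(scale=0.1, size=30)
x_tr, y_tr, x_va, y_va = x[:20], y[:20], x[20:], y[20:]

def features(x, degree=5):
    # Polynomial features turn a 1-d input into a flexible linear model.
    return np.vander(x, degree + 1)

def fit_ridge(x, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam*I)^{-1} X^T y.
    X = features(x)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(x, y, w):
    return np.mean((features(x) @ w - y) ** 2)

# Model validation: choose the regularization strength that minimizes
# error on held-out data, never on the training set itself.
lams = [1e-6, 1e-3, 1e0]
best = min(lams, key=lambda lam: mse(x_va, y_va, fit_ridge(x_tr, y_tr, lam)))
```

Larger λ trades variance for bias: training error can only grow as λ grows, while validation error is typically U-shaped.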
3- Week 3 (January 24): Supervised learning algorithms
- Lecture: Slides
- Capsules:
- Nearest Neighbor [19:05]
- Linear Classification [15:26]
- Introduction to Probabilistic Models (for Classification) [11:55]
- The Naive Bayes Model [24:28]
- Naive Bayes Example [9:26]
- Reading: Sections 4.1-4.3, 4.5 of The Elements of Statistical Learning (available online), Sections 3.5 and 4.2 of Machine Learning (K. Murphy)
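As a companion to the naive Bayes capsules, here is a minimal Bernoulli naive Bayes classifier in NumPy. The toy data and function names are mine, not the lecture's; it assumes binary features and uses Laplace smoothing.

```python
import numpy as np

def fit_bernoulli_nb(X, y, alpha=1.0):
    """Estimate class priors and per-feature Bernoulli parameters,
    with Laplace smoothing controlled by alpha."""
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    # theta[c, j] = P(x_j = 1 | y = c)
    theta = np.array([(X[y == c].sum(axis=0) + alpha) /
                      ((y == c).sum() + 2 * alpha) for c in classes])
    return classes, priors, theta

def predict(X, classes, priors, theta):
    # Naive Bayes assumption: features are independent given the class,
    # so the log-likelihood is a sum of per-feature terms.
    log_lik = X @ np.log(theta.T) + (1 - X) @ np.log(1 - theta.T)
    return classes[np.argmax(np.log(priors) + log_lik, axis=1)]

X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])
y = np.array([0, 0, 1, 1])
classes, priors, theta = fit_bernoulli_nb(X, y)
print(predict(X, classes, priors, theta))  # recovers the training labels here
```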
4- Week 4 (January 31): Python for scientific computing and machine learning
- ML Lab location: Salle Groupe Cholette
- Lecture: Tutorial
- Solution: solution
- I encourage you to start the tutorial ahead of time and to finish it during our 180 minutes together.
- It is mandatory to bring your laptop to class for this session.
5- Week 5 (February 7): Neural networks and deep learning
- Lecture: Slides
- Capsules:
- From linear classification to neural networks [19:28]
- Training neural networks [20:14]
- Learning representations [13:40]
- Neural networks hyperparameters [25:20]
- Neural networks takeaways [7:00]
- Reading:
- Sections 6.1-6.3 and 6.5 (stop at 6.5.4) of Deep Learning (the book).
- Optional: Chapter 11 of The Elements of Statistical Learning.
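To make the training capsules concrete, here is a bare-bones sketch of backpropagation for a one-hidden-layer network on XOR. The hyperparameters (8 hidden units, learning rate 0.5, tanh/sigmoid units, squared-error loss) are arbitrary illustration choices, not the course's recommended settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with tanh units, a sigmoid output, squared-error loss.
W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return h, out

lr = 0.5
for step in range(2000):
    h, out = forward(X)
    # Backpropagation: apply the chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)   # dL/d(output pre-activation)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # dL/d(hidden pre-activation)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(forward(X)[1].ravel())
```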
6- Week 6 (February 14): Recurrent neural networks and convolutional neural networks
- Lecture: Slides
- Capsules:
- Modelling Sequential Data [8:42]
- Practical Overview of RNNs [29:32]
- RNNs for language modelling [15:13]
- Overview of CNNs [13:30]
- Convolutions and Pooling [26:00]
- Conclusions and Practical remarks [9:17]
- Reading: Sections 10, 10.1, 10.2 (skim 10.2.2, skip 10.2.3), and 10.7; Sections 9, 9.1, 9.2, and 9.3 (9.11 for fun). Both from the Deep Learning book.
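The convolutions-and-pooling capsule can be boiled down to two small functions. This is a toy 1-d sketch with an invented input and kernel; note that deep-learning "convolutions" are really cross-correlations (no kernel flipping), which is what the code does.

```python
import numpy as np

def conv1d(x, kernel):
    # Valid cross-correlation: slide the kernel and take dot products.
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def max_pool(x, width=2):
    # Non-overlapping max pooling: keep the strongest response per window.
    return np.array([x[i:i + width].max()
                     for i in range(0, len(x) - width + 1, width)])

x = np.array([0., 0., 1., 1., 0., 0., 1., 0.])
edge = np.array([1., -1.])  # responds to a downward step in the signal
h = conv1d(x, edge)
print(max_pool(h))  # the pooled feature map flags where the signal drops
```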
7- Week 7 (February 21): Unsupervised learning
- Lecture: Slides
- Capsules:
- Introduction to unsupervised learning [8:17]
- K-means clustering [41:58] (there's a natural break at 22:28)
- GMMs for clustering [17:52]
- Beyond Clustering [14:42]
- Reading: Section 14.3 (skip 14.3.5 and 14.3.12) of The Elements of Statistical Learning. Optional: Chapter 9 of Pattern Recognition and Machine Learning.
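The k-means capsule's two alternating steps can be sketched directly in NumPy. The blob data and parameter choices below are invented for illustration; this is Lloyd's algorithm, not the course's code.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: alternate assignments and centroid updates."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its points.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated toy blobs of 20 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(3, 0.2, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

Like the capsule notes, k-means only finds a local optimum, so in practice it is restarted from several random initializations.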
8- Week 8 (February 28): Reading week
- No lectures.
9- Week 9 (March 7): Project meetings
10- Week 10 (March 14): Parallel computational paradigms for large-scale data processing & Project meetings
- Lecture: Slides
- Capsules:
- Introduction to Distributed Computing for ML [19:35]
- MapReduce [17:41]
- Spark [17:37]
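The MapReduce capsule's map / shuffle / reduce phases can be imitated in plain Python. This word-count sketch is purely illustrative (the function names and input lines are mine, and a real framework would distribute each phase across machines).

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    return [(word, 1) for word in line.lower().split()]

def shuffle(pairs):
    # Shuffle phase: group all values by key, as the framework would
    # do between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: combine the values for one key (here, sum the counts).
    return key, sum(values)

lines = ["spark builds on mapreduce", "mapreduce maps then reduces"]
mapped = chain.from_iterable(mapper(line) for line in lines)
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts["mapreduce"])  # → 2
```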
11- Week 11 (March 21): Recommender systems
- Case Study: Case Presentation and class execution
- Required preparation for the case study: Your answer to Question 1 must be submitted by March 21st, 11:00am, via Remise de Travaux on ZoneCours. This is an individual submission; every student must submit.
- Lecture: slides
- Reading: Chapters 1 through 4 of Aggarwal, Charu C. Recommender Systems: The Textbook. Cham: Springer, 2016.
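One workhorse behind collaborative filtering, covered in Aggarwal's textbook, is latent-factor matrix factorization. Below is a hedged sketch with an invented toy ratings matrix and arbitrary hyperparameters: SGD on the regularized squared error of the observed ratings, factorizing R ≈ U·Vᵀ.

```python
import numpy as np

# Toy user-item ratings (0 = unobserved); factorize R ≈ U @ V.T.
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
n_users, n_items = R.shape
k = 2  # number of latent factors
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

lr, reg = 0.05, 0.02
observed = [(u, i) for u in range(n_users)
            for i in range(n_items) if R[u, i] > 0]
for epoch in range(2000):
    for u, i in observed:
        err = R[u, i] - U[u] @ V[i]
        # SGD step on the regularized squared error for this one rating.
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

# Unobserved cells of U @ V.T now serve as rating predictions.
```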
12- Week 12 (March 28): Sequential decision making I
- Lecture: Slides
- Capsules:
- Motivating RL [8:22]
- Planning with MDPs [12:16]
- MDP objective [14:16]
- Algorithms for solving MDPs [17:51]. Note: in this capsule, there is a mistake in the second equation of the policy iteration algorithm (the transition should be conditioned on a, not on π(s)); the slides have been corrected (see slides 47 and 48).
- Reading: Optional: Demo of the policy iteration algorithm (from Andrej Karpathy)
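Policy iteration on a tiny MDP makes the capsule's two steps concrete. The two-state MDP below is invented for illustration; note that in the improvement step the transition model is conditioned on the action a being scored, not on π(s), which is exactly the correction mentioned for the capsule.

```python
import numpy as np

# Tiny MDP: P[s, a, s'] transition probabilities, R[s, a] expected rewards.
n_states, n_actions, gamma = 2, 2, 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve V = R_pi + gamma * P_pi V exactly.
    P_pi = P[np.arange(n_states), policy]   # row s: P(s' | s, pi(s))
    R_pi = R[np.arange(n_states), policy]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: Q(s, a) uses P(s' | s, a), i.e. the transition
    # is conditioned on the candidate action a, not on pi(s).
    Q = R + gamma * P @ V
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break  # policy is greedy w.r.t. its own values: optimal
    policy = new_policy
```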
13- Week 13 (April 4): Sequential decision making II
- Lecture: Slides
- Capsules:
- Introduction to RL [13:31]
- A first RL algorithm [17:13]
- RL Algorithms for Control [21:10]
- Reading: Sections 1 through 4 of this Survey; Chapters 1, 3, 4, and 6 of Reinforcement Learning: An Introduction. Optional: Demo of the TD algorithm (from Andrej Karpathy)
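The TD(0) update from the capsules fits in one line: V(s) ← V(s) + α(r + γV(s') − V(s)). The deterministic two-state chain below is a made-up toy (in real RL the transitions are sampled from the environment), but it shows the estimates converging to the true values.

```python
# Toy chain: state 0 steps to state 1 with reward 0; state 1 terminates
# with reward 1.  True values: V(1) = 1 and V(0) = gamma * V(1) = 0.9.
gamma, alpha = 0.9, 0.1
V = [0.0, 0.0]

for episode in range(2000):
    # TD(0) update along one episode, one transition at a time:
    # V(s) += alpha * (r + gamma * V(s') - V(s))
    V[0] += alpha * (0 + gamma * V[1] - V[0])
    V[1] += alpha * (1 + gamma * 0.0 - V[1])  # terminal successor has value 0

print(V)  # approaches [0.9, 1.0]
```

Unlike the dynamic-programming methods of Week 12, TD learns from experienced transitions and never needs the transition model P.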
14- Week 14 (April 11): Class Project presentation
Final exam: April 29
The final exam covers all topics taught in the course (capsules, required readings, and new concepts introduced in the hands-on exercises). The exam is conceptual: you will not be asked to write out an algorithm, but you will need a solid understanding of the machine learning methods and their implementation concepts to answer the questions (see the sample exams below).