CPSC/AMTH 663 - Deep Learning Theory and Applications - Spring 2018 Yale
Deep Learning Theory and Applications
Instructors: Kevin Moon (email@example.com) & Guy Wolf (firstname.lastname@example.org)
TA: Emily Guo (email@example.com)
ULAs: Tyler Dohrn (firstname.lastname@example.org), Scott Stankey (email@example.com), & Alex Atanasov (firstname.lastname@example.org)
Deep neural networks have gained immense popularity within the last decade due to their outstanding success in many important machine learning tasks
such as image recognition, speech recognition, and natural language processing.
This course will provide a principled and hands-on approach to deep learning with neural networks.
By the end of the course, students will have mastered the principles and practices underlying neural networks
including modern methods of deep learning, and will have applied deep learning methods to real-world problems
including image recognition, natural language processing, and biomedical applications.
The course will be based on homework and a final group project. The project will include both a written and an oral (i.e., presentation) component.
Grades in this course will be based on homework scores and the quality of the written and oral components of the final project.
The course assumes basic prior knowledge in linear algebra and probability.
Lectures: Tuesdays & Thursdays 4:00-5:15, DL 220
First lecture on Tuesday, January 23rd
Python for Machine Learning: Monday, Feb. 12th, 6:00-8:00 PM, AKW 400
TensorFlow: Monday, Feb. 26th, 7:00-8:00 PM, AKW 400
ULAs: Mondays 6:00-8:00 PM, AKW 400 (or by appointment)
TA: Tuesdays 5:30-7:30 PM, AKW 104 (or by appointment)
Kevin Moon: Wednesdays 3:00-5:00 PM, AKW 103
Guy Wolf: Thursdays 5:30-7:30 PM in AKW 103
This is a tentative list of topics we intend to cover, which may change as we progress through the course:
- Deep learning overview & relevant machine learning background
- Gradient descent & stochastic gradient descent
- Backpropagation & convergence in neural networks
- Deep neural network concepts:
  - Cost functions & activation functions
  - Regularization & weight initialization
- Johnson-Lindenstrauss lemma
- Convolutional neural networks
- Word embeddings (e.g., word2vec)
- Recurrent neural networks
- Autoencoders for unsupervised learning
- Ultra deep learning with ResNet
- Generative models (e.g., Generative Adversarial Networks)
- Deep reinforcement learning
- Boltzmann machines
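To give a flavor of the hands-on side of the topics above, here is a minimal, illustrative sketch of gradient descent (the second topic on the list). This is not course material; the function and learning rate are arbitrary choices for demonstration.

```python
# Minimal gradient descent sketch: minimize f(w) = (w - 3)^2
# using the update rule w <- w - lr * f'(w).

def grad(w):
    # Derivative of f(w) = (w - 3)^2 is f'(w) = 2 * (w - 3).
    return 2.0 * (w - 3.0)

def gradient_descent(w0=0.0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # step in the direction of steepest descent
    return w

print(gradient_descent())  # converges toward the minimizer w = 3
```

In the stochastic variant covered in the course, the gradient at each step is estimated from a random subset (mini-batch) of the training data rather than computed exactly.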
Next topics to be uploaded after they are presented in class (subject to change):
- Topic 13 - ConvNets
- Topic 14 - Autoencoders
- Exercise 1 - due by Thursday, Feb. 15, 4:00 PM.
- Exercise 2 - due by Thursday, Mar. 1, 4:00 PM.
- Exercise 3 - due by Thursday, Mar. 29, 4:00 PM.
- Exercise 4 will be published on TBD, and due Thursday, TBD.
- Exercise 5 will be published on TBD, and due Thursday, TBD.