Feedforward deep neural networks
- Basic architectures and activation functions
- Optimization algorithms for training deep models
Regularization of deep models
- Classic regularization using parameter norm penalty
- Dropout
- Label smoothing
- Batch normalization
- Multi-task learning
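Of the regularization techniques above, dropout is perhaps the easiest to illustrate; a minimal NumPy sketch of the common "inverted dropout" variant (the function name and interface are illustrative, not from the course materials):

```python
import numpy as np

def dropout(x, rate, training=True, rng=None):
    """Inverted dropout: zero each unit with probability `rate` and
    rescale the survivors so the expected activation is unchanged.
    At inference time (training=False) the input passes through untouched."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate  # keep each unit with prob. 1 - rate
    return x * mask / (1.0 - rate)
```

The division by `1 - rate` during training is what makes the technique "inverted": no rescaling is needed at test time.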
Convolutional neural networks
- Convolutional and pooling layers
- Architectures suitable for very deep convolutional networks
- State-of-the-art models for image recognition, object localization and image segmentation
- Pre-training and fine-tuning of deep neural networks
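To make the convolutional-layer topic concrete, here is a minimal sketch of the underlying operation: a naive "valid" 2D cross-correlation (no padding, stride 1), which is what a convolutional layer applies per channel. The function is illustrative only; real frameworks use heavily optimized implementations.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D cross-correlation: slide the kernel over the image
    and take an elementwise product-sum at each position."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out
```

Note the output shrinks by `kernel_size - 1` in each dimension, which is why deep convolutional architectures typically pad their inputs.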
Recurrent neural networks
- Basic recurrent network, specifics of training
- Long short-term memory
- Gated recurrent units
- Bidirectional and deep recurrent networks
- Encoder-decoder sequence-to-sequence architectures
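The basic recurrent network in the list above can be sketched in a few lines; this illustrative NumPy version runs an Elman-style cell, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b), over a sequence (names and shapes are assumptions, not from the course materials):

```python
import numpy as np

def rnn_forward(x_seq, h0, W_xh, W_hh, b):
    """Basic recurrent cell applied step by step:
    h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b).
    Returns the stacked hidden states, one per input step."""
    h = h0
    hs = []
    for x_t in x_seq:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b)
        hs.append(h)
    return np.stack(hs)
```

The repeated multiplication by `W_hh` is the source of the vanishing/exploding-gradient problems that motivate LSTM and GRU cells.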
Practical methodology
- Choosing suitable architecture
- Hyperparameter selection
Natural language processing
- Distributed word representations
- Character-level word embeddings
- Transformer architecture
- State-of-the-art POS tagging, named entity recognition, machine translation, image labeling
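The Transformer architecture listed above is built around scaled dot-product attention, softmax(QKᵀ/√d_k)V; a minimal single-head NumPy sketch (illustrative only, without masking or batching):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: softmax(Q @ K.T / sqrt(d_k)) @ V.
    Returns the attended values and the attention weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights
```

Dividing the scores by √d_k keeps their variance roughly constant as the key dimension grows, preventing the softmax from saturating.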
Deep generative models
- Variational autoencoders
- Generative adversarial networks
- Speech generation
Structured prediction
- CRF layer
- CTC loss and its application in state-of-the-art speech recognition
Introduction to deep reinforcement learning
Neural networks with external memory
In recent years, deep neural networks have been used to solve complex machine learning problems and have achieved state-of-the-art results in many areas.
The goal of this course is to introduce deep neural networks, from the basics to the latest advances. The course covers both theory and practical aspects: students will implement and train several deep neural networks capable of achieving state-of-the-art results, for example in image recognition, 3D object recognition, speech recognition, image generation, or playing video games.