Course description - BEV033DLE

BEV033DLE Deep Learning
Role:
Extent of teaching: 2P+2C
Department: 13133
Language of instruction: EN
Guarantors: Flach B.
Completion: Z,ZK (assessment, exam)
Lecturers: Flach B., Shekhovtsov O.
Instructors: Flach B., Shekhovtsov O., Šochman J.
Credits: 6
Semester: L (summer)

Web page:

https://cw.fel.cvut.cz/wiki/courses/bev033dle/start

Annotation:

The course introduces deep neural networks and deep learning, a branch of machine learning and artificial intelligence. Starting from a recap of generic machine learning concepts (empirical risk minimisation, linear classifiers and regression, generalisation bounds), it introduces deep networks as model classes for prediction (classification) and regression and discusses their model complexity and generalisation bounds. The course aims at a solid understanding of all concepts and algorithms needed to successfully design, implement and learn deep networks in machine learning applications. This includes error backpropagation and stochastic gradient methods, weight initialisation and normalisation, deterministic and stochastic regularisation methods, data augmentation, as well as adversarially robust learning approaches. The course concludes with an introductory discussion of generative neural networks (VAEs and GANs) as well as recurrent neural networks (GRU and LSTM) for structured output classification. Students will gain solid knowledge of all related methods and concepts as well as the practical skills needed to successfully design, implement and learn deep networks for machine learning applications. At the same time, the course provides a solid foundation for subsequent courses (e.g. computer vision), which consider specialised and often more complex variants of neural networks, loss functions and learning approaches for solving machine learning tasks in their respective areas.
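
The core training loop named above (empirical risk minimisation, error backpropagation, SGD with Nesterov momentum) can be illustrated by the following minimal sketch. It assumes PyTorch; the synthetic data, architecture and hyperparameters are purely illustrative and are not taken from the course materials.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic two-class data: 256 points in R^2 with an "XOR-like" labelling.
x = torch.randn(256, 2)
y = (x[:, 0] * x[:, 1] > 0).long()

# A small feed-forward classifier.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()  # empirical risk = mean cross-entropy loss
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, nesterov=True)

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass
    loss.backward()               # error backpropagation
    opt.step()                    # SGD update with Nesterov momentum

print(f"final training loss: {loss.item():.3f}")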

Study objectives:

The course aims to provide the algorithmic and theoretical concepts needed for successfully designing and training neural networks. At the same time, it strives to provide the technical and practical skills needed in this domain.

Lecture syllabus:

1. Recap: linear classifiers, linear regression, logistic regression, loss function, empirical risk minimisation, regularisation
2. Artificial neurons, activation functions, network architectures; sidestep: stochastic neurons; sidestep: biological neurons
3. Neural networks as classifiers, empirical risk minimisation, loss functions, model complexity and generalisation bounds; neural networks as nonlinear regression models, loss functions
4. Backpropagation for feed-forward networks with arbitrary DAG structure, simplification/modularisation for layered networks
5. NN loss landscape, stochastic gradient descent for convex functions, SGD for nonlinear functions, (Nesterov) momentum
6. Convolutional neural networks, architectures, application examples; sidestep: visual cortex
7. Training neural networks 0: project pipeline, data collection, training/validation/test set, model selection (architecture), overfitting, early stopping
8. Training neural networks 1: data preprocessing, weight initialisation, batch normalisation
9. Training neural networks 2: Adaptive SGD methods
10. Training neural networks 3: regularisation, L1/L2 weight regularisation, randomised predictors, dropout, data augmentation
11. Training neural networks 4: adversarial patterns, robust learning approaches
12. Generative models: VAE, GANs (introductory level)
13. Recurrent neural networks: recurrent back-propagation, RNN, GRU, LSTM
14. Reserve, other topics not covered, e.g. graph neural networks, convolutions on graphs

Tutorial syllabus:

The course has two alternating types of labs (tutorials):
• practical labs discussing homework assignments, in which students implement selected methods presented in the course and experiment with them;
• theoretical labs, in which students discuss solutions of theoretical assignments (made available before the class).

Literature:

I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016

Requirements:

Fundamentals of mathematics comparable to the following courses: Linear Algebra (B0B01LAG), Calculus (B0B01MA2), Optimization (B0B33OPT), and Probability, Statistics, and Theory of Information (B0B01PST). Besides proficient knowledge of mathematics as given above, students are expected to have solid knowledge in the following areas of computer science and artificial intelligence: basics of graph theory and related algorithms; basics of pattern recognition, empirical risk minimisation, linear classifiers and support vector machines, as covered in Pattern Recognition and Machine Learning (B4B33RPZ or BE4B33RPZ).

Keywords:

artificial neural networks, deep learning

The course is included in the following study plans:

Plan    Branch    Role    Recommended semester

