Course description - BECM33DPL

BECM33DPL Deep Learning Essentials
Role: P
Extent of teaching: 2P+2C
Department: 13133
Language of instruction: EN
Guarantors: Zimmermann K.
Completion: Z,ZK
Lecturers: Neumann L.
Credits: 6
Lab instructors: Neumann L., Škvrna J.
Semester: Z

Web page:

https://cw.fel.cvut.cz/wiki/courses/becm33dpl/start

Annotation:

The course teaches deep learning methods on well-known robotic problems, such as semantic segmentation or reactive motion control. The overall goal is timeless, universal knowledge rather than an exhaustive list of known deep learning architectures. Students are assumed to have working prior knowledge of mathematics (gradient, Jacobian, Hessian, gradient descent, Taylor polynomial) and machine learning (Bayes risk minimization, linear classifiers). The labs are divided into two parts: in the first, students solve elementary deep learning tasks from scratch (including a reimplementation of autograd backpropagation); in the second, they build on existing templates to solve complex tasks including reinforcement learning, vision transformers and generative networks.
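
As an illustration of what the first lab block involves, below is a minimal sketch of a scalar reverse-mode autograd in Python; the Value class, its operator set and the example expression are illustrative choices for this page, not the course's actual lab API.

    # A minimal scalar reverse-mode autograd (illustrative; not the course API).
    import math

    class Value:
        """A scalar node in a computational graph supporting backpropagation."""
        def __init__(self, data, parents=(), local_grads=()):
            self.data = data
            self.grad = 0.0
            self._parents = parents          # nodes this value was computed from
            self._local_grads = local_grads  # d(self)/d(parent) for each parent

        def __add__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            return Value(self.data + other.data, (self, other), (1.0, 1.0))

        def __mul__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            return Value(self.data * other.data, (self, other), (other.data, self.data))

        def tanh(self):
            t = math.tanh(self.data)
            return Value(t, (self,), (1.0 - t * t,))

        def backward(self):
            # Visit nodes in reverse topological order and apply the chain rule.
            order, seen = [], set()
            def build(v):
                if v not in seen:
                    seen.add(v)
                    for p in v._parents:
                        build(p)
                    order.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(order):
                for p, g in zip(v._parents, v._local_grads):
                    p.grad += g * v.grad

    # Gradient of y = tanh(w*x + b) with respect to w and b.
    w, x, b = Value(0.5), Value(2.0), Value(-1.0)
    y = (w * x + b).tanh()
    y.backward()
    print(y.data, w.grad, b.grad)   # 0.0 2.0 1.0

Calling backward() on the output visits the recorded graph in reverse topological order and accumulates each parent's gradient from the stored local derivatives, which is the mechanism a reimplementation of autograd backpropagation has to provide.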

Study objectives:

See the annotation above.

Content:

https://cw.fel.cvut.cz/wiki/courses/becm33dpl/start

Lecture syllabus:

1. Machine learning 101: model, loss, learning, issues, regression, classification
2. Under the hood of a linear classifier: two-class and multiclass linear classifier on RGB images
3. Under the hood of auto-differentiation: Computational graph of a fully connected NN, vector-Jacobian product (VJP) vs. the chain rule with explicit multiplication of Jacobians (see the VJP sketch after this list).
4. The story of the cat's brain surgery: cortex + convolutional layer and its Vector-Jacobian-Product (VJP)
5. The loss: MAP and ML estimate, KL divergence and losses.
6. Why is learning prone to fail? - Structural issues: layers + issues, batch-norm, drop-out
7. Why is learning prone to fail? - Optimization issues: optimization vs. learning, KL divergence, SGD, momentum, convergence rate, Adagrad, RMSProp, Adam, vanishing/exploding gradients, oscillation, double descent (see the optimizer sketch after this list)
8. What can('t) we do with a deep net? Classification, Segmentation, Detection, Regression
9. Reinforcement learning: Approximated Q-learning, DQN, DDPG, derivation of the policy gradient, reward shaping, inverse RL, applications (see the REINFORCE sketch after this list)
10. Memory and attention: recurrent nets, image transformers with an attention module (see the attention sketch after this list)
11. Generative models: GANs and diffusion models
12. Implicit layers: Backpropagation through unconstrained and constrained optimization problems, ODE solvers, root finders and fixed points, plus existing end-to-end differentiable modules
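
For lecture 3, the sketch below contrasts the explicit chain rule (multiplying full Jacobians) with a vector-Jacobian product on a tiny fully connected net; the layer sizes and the use of NumPy are assumptions made for the example, not course material.

    # Reverse mode as a vector-Jacobian product vs. explicit Jacobians (toy sizes).
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((256, 784))
    W2 = rng.standard_normal((10, 256))
    x = rng.standard_normal(784)

    # Forward pass of a tiny fully connected net: y = W2 @ relu(W1 @ x)
    h = W1 @ x
    a = np.maximum(h, 0.0)
    y = W2 @ a

    v = rng.standard_normal(10)    # upstream gradient, e.g. dLoss/dy

    # (a) Explicit chain rule: build the full 10 x 784 Jacobian and multiply.
    J = W2 @ np.diag((h > 0).astype(float)) @ W1
    grad_explicit = v @ J

    # (b) VJP: pull v backwards layer by layer, never forming a Jacobian.
    v1 = W2.T @ v                  # linear layer W2
    v2 = v1 * (h > 0)              # ReLU is an elementwise mask
    grad_vjp = W1.T @ v2           # linear layer W1

    print(np.allclose(grad_explicit, grad_vjp))   # True

Both paths return the same gradient, but the VJP only ever handles vectors of the layer widths, while the explicit chain materializes a 10 x 784 Jacobian.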
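
For lecture 7, the SGD-with-momentum and Adam update rules written out as plain functions over a single parameter vector; the hyperparameter defaults are common choices, not values prescribed by the course.

    # SGD with momentum and Adam as pure update functions over NumPy arrays.
    import numpy as np

    def sgd_momentum_step(w, grad, state, lr=0.01, beta=0.9):
        """v <- beta*v + grad;  w <- w - lr*v"""
        v = beta * state.get("v", np.zeros_like(w)) + grad
        state["v"] = v
        return w - lr * v

    def adam_step(w, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        """Bias-corrected first and second moments, then a per-coordinate step."""
        t = state.get("t", 0) + 1
        m = beta1 * state.get("m", np.zeros_like(w)) + (1 - beta1) * grad
        s = beta2 * state.get("s", np.zeros_like(w)) + (1 - beta2) * grad ** 2
        state.update(t=t, m=m, s=s)
        m_hat = m / (1 - beta1 ** t)
        s_hat = s / (1 - beta2 ** t)
        return w - lr * m_hat / (np.sqrt(s_hat) + eps)

    # Toy usage on L(w) = 0.5 * ||w||^2, whose gradient is simply w.
    w, state = np.ones(3), {}
    for _ in range(200):
        w = adam_step(w, w, state)
    print(w)   # decreases towards the minimum at zero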
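
For lecture 9, a toy REINFORCE example of the policy-gradient identity (the gradient of the expected reward is the expectation of the reward times the gradient of the log-policy); the three-armed bandit and the softmax policy are a made-up setup, not course material.

    # REINFORCE on a 3-armed bandit with a softmax policy (toy setup).
    import numpy as np

    rng = np.random.default_rng(0)
    true_rewards = np.array([1.0, 2.0, 3.0])   # arm 2 pays the most on average
    theta = np.zeros(3)                        # logits of the softmax policy

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    lr = 0.1
    for _ in range(2000):
        p = softmax(theta)
        a = rng.choice(3, p=p)                      # sample an action
        r = true_rewards[a] + 0.1 * rng.standard_normal()
        grad_log_pi = -p
        grad_log_pi[a] += 1.0                       # gradient of log softmax(theta)[a]
        theta += lr * r * grad_log_pi               # stochastic gradient ascent on E[r]

    print(softmax(theta))   # the policy typically concentrates on the best arm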
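
For lecture 10, single-head scaled dot-product attention in NumPy; masking, multiple heads and the learned query/key/value projections of a full transformer block are deliberately left out of this sketch.

    # Single-head scaled dot-product attention (no masking, no projections).
    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """softmax(Q K^T / sqrt(d)) V  for Q: (n, d), K and V: (m, d)."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)       # (n, m): query-key similarities
        weights = softmax(scores, axis=-1)  # each row sums to one
        return weights @ V                  # outputs are convex mixtures of the values

    rng = np.random.default_rng(0)
    Q = rng.standard_normal((4, 8))
    K = rng.standard_normal((6, 8))
    V = rng.standard_normal((6, 8))
    print(attention(Q, K, V).shape)   # (4, 8)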

Exercise syllabus:

Literature:

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org
F. Fleuret. The Little Book of Deep Learning. Lulu.com, 2023.

Requirements:

Keywords:

Deep learning, neural networks

The course is included in the following study plans:

Plan            Branch                           Role   Recommended semester
MPPRGAI_2025    Before assignment to a branch    P      1

